
Structural design optimization considering uncertainties
Structures and Infrastructures Series

ISSN 1747-7735

Book Series Editor:

Dan M. Frangopol
Professor of Civil Engineering and
Fazlur R. Khan Endowed Chair of Structural Engineering and Architecture
Department of Civil and Environmental Engineering
Center for Advanced Technology for Large Structural Systems (ATLSS Center)
Lehigh University
Bethlehem, PA, USA

Volume 1
Structural design optimization
considering uncertainties

Edited by

Yiannis Tsompanakis1,
Nikos D. Lagaros2 &
Manolis Papadrakakis3
1 Department of Applied Sciences, Technical University of Crete,
University Campus, Chania, Crete, Greece
2,3 Institute of Structural Analysis & Seismic Research,
Faculty of Civil Engineering,
National Technical University of Athens,
Zografou Campus, Athens, Greece

LONDON / LEIDEN / NEW YORK / PHILADELPHIA / SINGAPORE


Colophon

Book Series Editor:
Dan M. Frangopol

Volume Editors:
Yiannis Tsompanakis, Nikos D. Lagaros and Manolis Papadrakakis

Cover illustration:
Objective space of the M-3OU multi-criteria optimization problem
Nikos D. Lagaros
September 2007
This edition published in the Taylor & Francis e-Library, 2008.
“To purchase your own copy of this or any of Taylor & Francis or Routledge’s
collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk.”

Taylor & Francis is an imprint of the Taylor & Francis Group,


an informa business

©2008 Taylor & Francis Group, London, UK


All rights reserved. No part of this publication or the information
contained herein may be reproduced, stored in a retrieval system,
or transmitted in any form or by any means, electronic, mechanical,
by photocopying, recording or otherwise, without prior written
permission from the publishers.

Although all care is taken to ensure the integrity and quality of this
publication and the information herein, no responsibility is
assumed by the publishers or the author for any damage to
property or persons as a result of the operation or use of this
publication and/or the information contained herein.

British Library Cataloguing in Publication Data


A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data

Structural design optimization considering uncertainties / Edited by
Yiannis Tsompanakis, Nikos D. Lagaros & Manolis Papadrakakis.
p. cm. – (Structures and infrastructures series ; 1747-7735)
Includes bibliographical references and index.
ISBN 978-0-415-45260-1 (hardcover : alk. paper)
ISBN 978-0-203-93852-2 (e-book)
1. Structural optimization. I. Tsompanakis, Yiannis. 1969–
II. Lagaros, Nikos D. 1970– III. Papadrakakis, Manolis. 1948–

TA658.8.S73 2007
624.1 7713–dc22
2007040343

Published by: Taylor & Francis/Balkema
P.O. Box 447, 2300 AK Leiden, The Netherlands
e-mail: Pub.NL@tandf.co.uk
www.balkema.nl, www.taylorandfrancis.co.uk,
www.crcpress.com

ISBN 0-203-93852-6 Master e-book ISBN

ISBN13: 978-0-415-45260-1 (Hbk)
ISBN13: 978-0-203-93852-2 (eBook)
Structures and Infrastructures Series: ISSN 1747-7735
Volume 1
Table of Contents

Editorial IX
About the Book Series Editor XI
Foreword XIII
Preface XV
Brief Curriculum Vitae of the Editors XXI
List of Contributors XXIII
Author Data XXV

PART 1
Reliability-Based Design Optimization (RBDO)

1 Principles of reliability-based design optimization 3
Alaa Chateauneuf, University Blaise Pascal, France

2 Reliability-based optimization of engineering structures 31
John D. Sørensen, Aalborg University, Aalborg, Denmark

3 Reliability analysis and reliability-based design optimization using moment methods 57
Sang Hoon Lee, Northwestern University, Evanston, IL, USA
Byung Man Kwak, Korea Advanced Institute of Science
and Technology, Daejeon, Korea
Jae Sung Huh, Korea Aerospace Research Institute,
Daejeon, Korea

4 Efficient approaches for system reliability-based design optimization 87
Efstratios Nikolaidis, University of Toledo, Toledo, OH, USA
Zissimos P. Mourelatos, Oakland University, Rochester, MI, USA
Jinghong Liang, Oakland University, Rochester, MI, USA

5 Nondeterministic formulations of analytical target cascading for decomposition-based design optimization under uncertainty 115
Michael Kokkolaras, University of Michigan, Ann Arbor, MI, USA
Panos Y. Papalambros, University of Michigan, Ann Arbor, MI, USA

6 Design optimization of stochastic dynamic systems by algebraic reduced order models 135
Gary Weickum, University of Colorado at Boulder, Boulder, CO, USA
Matt Allen, University of Colorado at Boulder, Boulder, CO, USA
Kurt Maute, University of Colorado at Boulder, Boulder, CO, USA
Dan M. Frangopol, Lehigh University, Bethlehem, PA, USA

7 Stochastic system design optimization using stochastic simulation 155
Alexandros A. Taflanidis, California Institute of Technology, CA, USA
James L. Beck, California Institute of Technology, CA, USA

8 Numerical and semi-numerical methods for reliability-based design optimization 189
Ghias Kharmanda, Aleppo University, Aleppo, Syria

9 Advances in solution methods for reliability-based design optimization 217
Alaa Chateauneuf, University Blaise Pascal, France
Younes Aoues, University Blaise Pascal, France

10 Non-probabilistic design optimization with insufficient data using possibility and evidence theories 247
Zissimos P. Mourelatos, Oakland University, Rochester, MI, USA
Jun Zhou, Oakland University, Rochester, MI, USA

11 A decoupled approach to reliability-based topology optimization for structural synthesis 281
Neal M. Patel, University of Notre Dame, Notre Dame, IN, USA
John E. Renaud, University of Notre Dame, Notre Dame, IN, USA
Donald Tillotson, University of Notre Dame, Notre Dame, IN, USA
Harish Agarwal, General Electric Global Research, Niskayuna, NY, USA
Andrés Tovar, National University of Colombia, Bogota, Colombia

12 Sample average approximations in reliability-based structural optimization: Theory and applications 307
Johannes O. Royset, Naval Postgraduate School, Monterey, CA, USA
Elijah Polak, University of California, Berkeley, CA, USA

13 Cost-benefit optimization for maintained structures 335
Rüdiger Rackwitz, Technical University of Munich, Munich, Germany
Andreas E. Joanni, Technical University of Munich, Munich, Germany

14 A reliability-based maintenance optimization methodology 369
Wu Y.-T., Applied Research Associates Inc., Raleigh, NC, USA

15 Overview of reliability analysis and design capabilities in DAKOTA with application to shape optimization of MEMS 401
Michael S. Eldred, Sandia National Laboratories,
Albuquerque, NM, USA
Barron J. Bichon, Vanderbilt University, Nashville, TN, USA
Brian M. Adams, Sandia National Laboratories,
Albuquerque, NM, USA
Sankaran Mahadevan, Vanderbilt University, Nashville, TN, USA

PART 2
Robust Design Optimization (RDO)

16 Structural robustness and its relationship to reliability 435
Jorge E. Hurtado, National University of Colombia,
Manizales, Colombia

17 Maximum robustness design of trusses via semidefinite programming 471
Yoshihiro Kanno, University of Tokyo, Tokyo, Japan
Izuru Takewaki, Kyoto University, Kyoto, Japan

18 Design optimization and robustness of structures against uncertainties based on Taylor series expansion 499
Ioannis Doltsinis, University of Stuttgart, Stuttgart, Germany

19 Info-gap robust design of passively controlled structures with load and model uncertainties 531
Izuru Takewaki, Kyoto University, Kyoto, Japan
Yakov Ben-Haim, Technion, Haifa, Israel

20 Genetic algorithms in structural optimum design using convex models of uncertainty 549
Sara Ganzerli, Gonzaga University, Spokane, WA, USA
Paul De Palma, Gonzaga University, Spokane, WA, USA

21 Metamodel-based computational techniques for solving structural optimization problems considering uncertainties 567
Nikos D. Lagaros, National Technical University of Athens,
Athens, Greece
Yiannis Tsompanakis, Technical University of Crete, Chania, Greece
Michalis Fragiadakis, University of Thessaly, Volos, Greece
Vagelis Plevris, National Technical University of Athens, Athens, Greece
Manolis Papadrakakis, National Technical University of Athens,
Athens, Greece

References 599
Author index 631
Subject index 633
Editorial

Welcome to the New Book Series Structures and Infrastructures.

Our ability to model, analyze, design, maintain, manage and predict the life-
cycle performance of structures and infrastructures is continually growing. However,
the complexity of these systems continues to increase and an integrated approach
is necessary to understand the effect of technological, environmental, economic,
social and political interactions on the life-cycle performance of engineering structures
and infrastructures. In order to accomplish this, methods have to be developed to
systematically analyze structure and infrastructure systems, and models have to be
formulated for evaluating and comparing the risks and benefits associated with various
alternatives. We must maximize the life-cycle benefits of these systems to serve the needs
of our society by selecting the best balance of the safety, economy and sustainability
requirements despite imperfect information and knowledge.
In recognition of the need for such methods and models, the aim of this Book Series
is to present research, developments, and applications written by experts on the most
advanced technologies for analyzing, predicting and optimizing the performance of
structures and infrastructures such as buildings, bridges, dams, underground con-
struction, offshore platforms, pipelines, naval vessels, ocean structures, nuclear power
plants, and also airplanes, aerospace and automotive structures.
The scope of this Book Series covers the entire spectrum of structures and infrastruc-
tures. Thus it includes, but is not restricted to, mathematical modeling, computer and
experimental methods, practical applications in the areas of assessment and evalua-
tion, construction and design for durability, decision making, deterioration modeling
and aging, failure analysis, field testing, structural health monitoring, financial plan-
ning, inspection and diagnostics, life-cycle analysis and prediction, loads, maintenance
strategies, management systems, nondestructive testing, optimization of maintenance
and management, specifications and codes, structural safety and reliability, system
analysis, time-dependent performance, rehabilitation, repair, replacement, reliability
and risk management, service life prediction, strengthening and whole life costing.
This Book Series is intended for an audience of researchers, practitioners, and
students world-wide with a background in civil, aerospace, mechanical, marine and
automotive engineering, as well as people working in infrastructure maintenance,
monitoring, management and cost analysis of structures and infrastructures. Some vol-
umes are monographs defining the current state of the art and/or practice in the field,
and some are textbooks to be used in undergraduate (mostly seniors), graduate and
postgraduate courses. This Book Series is affiliated to Structure and Infrastructure
Engineering (http://www.informaworld.com/sie), an international peer-reviewed
journal which is included in the Science Citation Index.
If you would like to contribute to this Book Series as an author or editor, please contact the
Book Series Editor (dan.frangopol@lehigh.edu) or the Publisher (pub.nl@tandf.co.uk).
A book proposal form can be downloaded at www.balkema.nl.
It is now up to you, authors, editors, and readers, to make Structures and
Infrastructures a success.

Dan M. Frangopol
Book Series Editor
About the Book Series Editor

Dr. Dan M. Frangopol is the first holder of the Fazlur
R. Khan Endowed Chair of Structural Engineering and
Architecture at Lehigh University, Bethlehem, Pennsylvania,
USA, and a Professor in the Department of Civil and
Environmental Engineering at Lehigh University. He is also
an Emeritus Professor of Civil Engineering at the Univer-
sity of Colorado at Boulder, USA, where he taught for more
than two decades (1983–2006). Before joining the Univer-
sity of Colorado, he worked for four years (1979–1983)
in structural design with A. Lipski Consulting Engineers in
Brussels, Belgium. In 1976, he received his doctorate in
Applied Sciences from the University of Liège, Belgium, and holds an honorary doc-
torate degree (Doctor Honoris Causa) and a B.S. degree from the Technical University
of Civil Engineering in Bucharest, Romania. He is a Fellow of the American Society of
Civil Engineers (ASCE), American Concrete Institute (ACI), and International Associ-
ation for Bridge and Structural Engineering (IABSE). He is also an Honorary Member
of both the Romanian Academy of Technical Sciences and the Portuguese Association
for Bridge Maintenance and Safety. He is the initiator and organizer of the Fazlur
R. Khan Lecture Series (www.lehigh.edu/frkseries) at Lehigh University.
Dan Frangopol is an experienced researcher and consultant to industry and govern-
ment agencies, both nationally and abroad. His main areas of expertise are structural
reliability, structural optimization, bridge engineering, and life-cycle analysis, design,
maintenance, monitoring, and management of structures and infrastructures. He is
the Founding President of the International Association for Bridge Maintenance and
Safety (IABMAS, www.iabmas.org) and of the International Association for Life-Cycle
Civil Engineering (IALCCE, www.ialcce.org), and Past Director of the Consortium on
Advanced Life-Cycle Engineering for Sustainable Civil Environments (COALESCE).
He is also the Chair of the Executive Board of the International Association for
Structural Safety and Reliability (IASSAR, www.columbia.edu/cu/civileng/iassar) and
the Vice-President of the International Society for Health Monitoring of Intelligent
Infrastructures (ISHMII, www.ishmii.org). Dan Frangopol is the recipient of several
prestigious awards including the 2007 ASCE Ernest Howard Award, the 2006 IABSE
OPAC Award, the 2006 Elsevier Munro Prize, the 2006 T. Y. Lin Medal, the 2005
ASCE Nathan M. Newmark Medal, the 2004 Kajima Research Award, the 2003
ASCE Moisseiff Award, the 2002 JSPS Fellowship Award for Research in Japan, the
2001 ASCE J. James R. Croes Medal, the 2001 IASSAR Research Prize, the 1998 and
2004 ASCE State-of-the-Art of Civil Engineering Award, and the 1996 Distinguished
Probabilistic Methods Educator Award of the Society of Automotive Engineers (SAE).
Dan Frangopol is the Founding Editor-in-Chief of Structure and Infrastructure
Engineering (Taylor & Francis, www.informaworld.com/sie) an international peer-
reviewed journal, which is included in the Science Citation Index. This journal is
dedicated to recent advances in maintenance, management, and life-cycle performance
of a wide range of structures and infrastructures. He is the author or co-author of over
400 refereed publications, and co-author, editor or co-editor of more than 20 books
published by ASCE, Balkema, CIMNE, CRC Press, Elsevier, McGraw-Hill, Taylor &
Francis, and Thomas Telford and an editorial board member of several international
journals. Additionally, he has chaired and organized several national and international
structural engineering conferences and workshops. Dan Frangopol has supervised over
70 Ph.D. and M.Sc. students. Many of his former students are professors at major
universities in the United States, Asia, Europe, and South America, and several are
prominent in professional practice and research laboratories.
For additional information on Dan M. Frangopol’s activities, please visit
www.lehigh.edu/~dmf206/
Foreword

The aim of structural optimization is to achieve the best possible design by maximi-
zing benefits under conflicting criteria. Uncertainties are unavoidable in the structural
optimization process. Therefore, a realistic optimal design process should definitely
consider uncertainties. Two broad types of uncertainty have to be considered: (a)
uncertainty associated with randomness, the so-called aleatory uncertainty, and (b)
uncertainty associated with imperfect modeling, the so-called epistemic uncertainty. It
has been clearly demonstrated that both aleatory and epistemic uncertainties can be
treated, separately or combined, and analyzed using the principles of probability and
statistics. Structural reliability theory has been developed during the past decades to
handle problems considering such uncertainties. This continuous development has had
considerable impact in recent years on structural optimization.
The purpose of this book is to present the latest research findings in the field of
structural optimization considering uncertainties. A wide variety of topics are covered
by leading researchers. The first part (Chapters 1 to 15) is devoted to reliability-based
design optimization, and the second part (Chapters 16 to 21) deals with robust design
optimization. To provide the reader with a good overview of pertinent literature,
all cited papers and additional references on the topics discussed are collected in a
comprehensive list of references.
The Book Series Editor would like to express his appreciation to the Editors and
all Authors who contributed to this book. It is his hope that this first volume in
the Structures and Infrastructures Book Series will generate a lot of interest and help
engineers to design the best structural systems under uncertainty.

Dan M. Frangopol
Book Series Editor
Bethlehem, Pennsylvania
November 2, 2007
Preface

Uncertainties are inherent in engineering problems and the scatter of structural param-
eters from their nominal ideal values is unavoidable. The response of structural systems
can sometimes be very sensitive to uncertainties encountered in the material properties,
manufacturing conditions, external loading conditions and analytical and/or numerical
modelling. In recent years, probabilistic-based formulations of optimization problems
have been developed to account for uncertainties through stochastic simulation and
probabilistic analysis. Stochastic analysis methods have advanced significantly
over the last two decades and have stimulated interest in the probabilistic opti-
mum design of structures. There are mainly two design formulations that account for
probabilistic system response: Reliability-Based Design Optimization (RBDO) and
Robust Design Optimization (RDO). The main goal of RBDO methods is to design
for safety with respect to extreme events. RDO methods primarily seek to minimize
the influence of stochastic variations on the mean design.
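In generic terms, and with notation assumed here rather than drawn from any
particular chapter, the two formulations may be sketched as

\[
\text{RBDO:} \quad \min_{d} \; C(d) \quad \text{subject to} \quad
P_f(d) = P[g(d, X) \le 0] \le P_f^t,
\]
\[
\text{RDO:} \quad \min_{d} \; (1 - w)\,\mu_f(d) + w\,\sigma_f(d),
\qquad 0 \le w \le 1,
\]

where d are the design variables, X the random parameters, g a limit-state
function with target failure probability P_f^t, and μ_f, σ_f the mean and
standard deviation of a performance measure f: RBDO constrains the probability
of extreme (failure) events, while RDO trades off the mean performance against
its scatter.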
The selected contributions of this book deal with the use of probabilistic methods
for optimal design of different types of structures and various considerations of uncer-
tainties. This volume is a collection of twenty-one self-contained chapters, which
present state-of-the-art theoretical advances and applications in various fields of prob-
abilistic computational mechanics. The first fifteen chapters of the book are focused
on RBDO theory and applications, while the rest of the chapters deal with advances in
RDO and combined RBDO-RDO theory and applications. Apart from the reference
list that is given separately for each chapter, a complete list of references is also pro-
vided for the reader. In order to obtain contributions that cover a wide spectrum of
engineering problems, the problem of optimum design is considered in a broad sense.
The probabilistic framework allows for a consistent treatment of both cost and safety.
In what follows a short description of the book content is presented.
In the introductory chapter by Chateauneuf, the fundamental theoretical and compu-
tational issues related to RBDO are described and the advantages of RBDO compared
to conventional deterministic optimization approaches are outlined. This chapter
emphasizes the role of uncertainties in deriving a “true’’ optimal solution, defined
as the best compromise between cost minimization and safety assurance. The pre-
sented RBDO formulations cover various important probabilistic issues (theoretical,
computational and practical), such as multi-component reliability analysis, safety fac-
tor calibration, multi-objective applications, as well as a great variety of engineering
applications, such as topology, maintenance and time-variant problems.
The theoretical basis for reliability-based structural optimization is described by
Sørensen within the framework of Bayesian statistical decision theory. This contri-
bution presents the latest findings in RBDO with respect to three major types of
decision problems with increased degree of complexity and uncertainty: a) decisions
with given information (e.g. planning of new structures), b) decisions when new infor-
mation is provided (e.g. for re-assessment and retrofitting of existing structures), c)
decisions involving planning of experiments/inspections to obtain new information
(e.g. for inspection planning). Furthermore, RBDO issues related to decisions with
systematic reconstruction are also discussed. Reliability-based, cost-benefit problems
are formulated and exemplified with structural optimization. Illustrative examples
are presented including a simple introductory example, a decision problem related
to bridge re-assessment and a reliability-based decision problem for offshore wind
turbines.
Lee, Kwak and Huh deal with reliability analysis and reliability-based design opti-
mization using moment methods. By using this approach, a finite number of statistical
moments of a system response function are calculated and the probability density
function (PDF) of the system response is identified by empirical distribution sys-
tems, such as the Pearson or the Johnson system. In this chapter, a full factorial
moment method (FFMM) procedure is introduced for reliability analysis calculations.
A response surface augmented moment method (RSMM) is developed to construct a
series of approximate response surfaces for enhancing the efficiency of FFMM. The
probability of failure is calculated using an empirical distribution system, where the first
four statistical moments of the system’s performance function are calculated from appro-
priate design simulations. The design sensitivity of the probability of failure, required
during the RBDO process, is calculated in a semi-analytic way using moment methods.
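As a rough illustration of the moment-method idea (a minimal sketch, not the
FFMM/RSMM implementation of the chapter: the performance function, the input
statistics and the three-moment Pearson type III fit below are all assumptions
made for this example), the first moments of a response of independent normal
variables can be computed on a full factorial Gauss–Hermite grid and then
matched to an empirical distribution:

```python
# Minimal sketch of a moment-based reliability estimate (illustrative only;
# the chapter's FFMM/RSMM and Pearson/Johnson machinery is far more general).
import itertools
import numpy as np
from scipy import stats

def g(x1, x2):
    # Hypothetical performance function: failure when g <= 0.
    return 3.0 * x1 - x2**2

def moments_full_factorial(mu, sigma, n_pts=5):
    """First four moments of g(X) for independent normal X via a full
    factorial Gauss-Hermite (probabilists') quadrature grid."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_pts)
    weights = weights / np.sqrt(2.0 * np.pi)   # normalize to probability weights
    vals, wts = [], []
    for idx in itertools.product(range(n_pts), repeat=len(mu)):
        x = [mu[k] + sigma[k] * nodes[i] for k, i in enumerate(idx)]
        vals.append(g(*x))
        wts.append(np.prod([weights[i] for i in idx]))
    vals, wts = np.array(vals), np.array(wts)
    m = np.sum(wts * vals)                     # mean
    c = vals - m
    var = np.sum(wts * c**2)
    skew = np.sum(wts * c**3) / var**1.5
    kurt = np.sum(wts * c**4) / var**2
    return m, np.sqrt(var), skew, kurt

mean, std, skew, kurt = moments_full_factorial(mu=[2.0, 1.0], sigma=[0.3, 0.2])
# Three-moment Pearson type III fit: a simplified stand-in for the full
# Pearson/Johnson empirical distribution systems discussed in the chapter.
pf = stats.pearson3.cdf(0.0, skew, loc=mean, scale=std)
print(f"moments: {mean:.3f}, {std:.3f}, {skew:.3f}, {kurt:.3f}; Pf ~ {pf:.2e}")
```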
As stated in the chapter by Nikolaidis, Mourelatos and Liang, a designer faces many
challenges when applying RBDO to engineering systems. The high computational cost
required for RBDO and the efficient computation of the system failure probability
are the two principal challenges. As a result, most RBDO studies are restricted to the
safety levels of the individual failure modes. In order to overcome this deficiency, two
efficient approaches for RBDO are presented in this chapter. Both approaches optimally
apportion the system reliability among the failure modes by considering the target
values of the failure probabilities of the modes as design variables. The first approach
uses a sequential optimization and reliability assessment (SORA) approach, while the
second system RBDO approach uses a single-loop method where the searches for
the optimum design and for the most probable failure points proceed simultaneously.
The two approaches are illustrated and compared on characteristic design examples.
Moreover, it is shown that the single-loop approach, enhanced with an active set
strategy, is considerably more efficient than the SORA approach.
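Schematically, and with generic notation not taken from the chapter, the
apportionment idea amounts to

\[
\min_{d,\; P_{f,1}^t, \ldots, P_{f,m}^t} \; C(d) \quad \text{subject to} \quad
P[g_j(d, X) \le 0] \le P_{f,j}^t \;\; (j = 1, \ldots, m), \qquad
P_{f,\mathrm{sys}} \le P_{f,\mathrm{sys}}^t,
\]

so that the admissible system failure probability P_{f,sys}^t is distributed
optimally among the m failure modes, instead of fixing each mode target a
priori.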
According to the work of Kokkolaras and Papalambros, design subproblems are
formulated and solved so that their solutions can be integrated to represent the optimal
design of the decomposed system. This approach requires appropriate problem for-
mulation and coordination of the distributed, multilevel system design problem. The
presented analytical target cascading (ATC) is a methodology suitable for multilevel
optimal design problems. Design targets are cascaded to lower levels using the model-
based, hierarchical decomposition of the original design problem. An optimization
problem is posed and solved for each design subproblem to minimize deviations from
propagated targets. By solving the subproblems and using an appropriate coordination
strategy, the overall system compatibility is preserved.
The required computational effort motivated Weickum, Allen, Maute and Frangopol
to develop efficient numerical probabilistic techniques for
the reliability analysis and design optimization of stochastic dynamic systems. This
work seeks to alleviate the computational costs for optimizing dynamic systems by
employing reduced order models. The key to utilize reduced order models in stochastic
analysis and optimization lies in making them adaptable to design changes and varia-
tions of the random parameters. For this purpose, an extended reduced order model
(EROM) method, which is a reduced order model accounting for parameter changes,
is integrated into stochastic analysis and design optimization. The application of the
proposed EROM is tested both for deterministic and probabilistic optimization of the
characteristic connecting rod example.
Taflanidis and Beck consider a two stage framework for efficient implementation
of RBDO of dynamical systems under stochastic excitation (e.g. earthquake, wind or
wave loading), where uncertainties are assumed for both the excitation characteristics
and the structural model adopted. In the first stage a novel approach, the so called
stochastic subset optimization (SSO), is implemented for iteratively identifying a sub-
set of the original design space that has high probability of containing the optimal
design variables. The second stage adopts a stochastic optimization algorithm to pin-
point, if needed, the optimal design variables within that subset. Topics related to the
combination of the two different stages, in order to enhance the overall efficiency of
the presented methodology, are also discussed. An illustrative example of seismic
retrofitting via viscous dampers is presented. The minimization of the expected life-
cycle cost is adopted as the design objective, in which the cost associated with damage
caused by future earthquakes is calculated by stochastic simulation via a realistic prob-
abilistic model for the structure and the ground motion that involves the formulation
of an effective loss function model.
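In schematic form (the notation here is generic rather than the chapter's), the
design objective is the expected life-cycle cost over the uncertain parameters
θ, estimated by stochastic simulation as

\[
E[C(d)] = \int C(d, \theta)\, p(\theta)\, d\theta
\;\approx\; \frac{1}{N} \sum_{i=1}^{N} C(d, \theta_i),
\qquad \theta_i \sim p(\theta),
\]

where d collects the design variables, p(θ) is the probabilistic model for the
structural system and the ground motion, and C(d, θ) comprises the initial cost
plus the earthquake losses evaluated through the loss function model.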
Kharmanda discusses in his contribution issues related to RBDO formulation and
solution procedures. The RBDO formulation is defined as a nonlinear mathematical
programming problem in which the mean values of uncertain system parameters are
used as design variables, while the structural weight or cost is minimized subject to
prescribed probabilistic constraints. In this chapter, recent developments for efficient RBDO
problem solving using semi-numerical and numerical techniques are presented. Follow-
ing a detailed description of the proposed methods, their efficiency is demonstrated in
computationally demanding dynamic applications. The obtained results as well as the
computational implications of the methods are compared and their advantages and
disadvantages are highlighted in a comprehensive manner.
In the contribution by Chateauneuf and Aoues, the main objective is to apply
appropriate numerical methods in order to solve RBDO problems more efficiently.
A comprehensive description of the most commonly used RBDO formulations and the
corresponding numerical methods is provided. A good RBDO algorithm should satisfy
the conditions of efficiency (computation time), precision (accuracy of finding the opti-
mum), generality (capability to deal with different kinds of problems) and robustness
(stability of the convergence for any admissible initial point, local or global conver-
gence criteria, etc). All these aspects are discussed in detail, and effective solutions are
proposed via characteristic test examples.
In the chapter by Mourelatos and Zhou, possibility and evidence theories are used to account for
uncertainty in structural design with incomplete and/or fuzzy information. A sequen-
tial possibility-based design optimization (SPDO) method is presented which decouples
the design loop and the reliability assessment of each constraint and is also capable
of handling both random and possibilistic design variables. Furthermore, a compu-
tationally efficient optimum design formulation using evidence theory is presented,
which can handle a mixture of epistemic and aleatory uncertainties. Numerical exam-
ples demonstrate the application of possibility and evidence theories in probabilistic
optimum design and highlight the trade-offs among reliability-based, possibility-based
and evidence-based design approaches.
In the chapter by Patel, Renaud, Tillotson, Agarwal, and Tovar, the mode of failure
is considered to be the maximum deflection of the structure in reliability-based topol-
ogy optimization (RBTO). A decoupled approach is employed in which the topology
optimization stage is separate from the reliability analysis. The proposed decoupled
reliability-based design optimization methodology is an approximate technique to
obtain consistent reliable designs at lower computational expense. An efficient non-
gradient Hybrid Cellular Automaton (HCA) method has been implemented in the
proposed decoupled approach for evaluating density changes, while the strain energy
for every new design is evaluated via finite element structural analyses.
The chapter by Royset and Polak presents recent advances in combining Monte
Carlo sampling and nonlinear programming algorithms for RBDO problems utilizing
effective approximation techniques that can lead to the reduction of the excessive
computational cost. More specifically, they present an approach where the reliability
term in the problem formulation is replaced by a statistical estimate of the reliability
obtained by means of Monte Carlo sampling. The authors emphasize the calculation
of an “adaptive optimal’’ sample size, which is achieved using sample-adjustment rules by
solving auxiliary optimization tasks during the evolution of the RBDO process. The efficiency of
the methods is verified in a number of numerical examples arising in design of various
types of structures having a single or multiple limit-state functions, in which reliability
terms are included in both objective and constraint functions.
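A minimal sketch of the sample-average idea is given below, under several
simplifying assumptions (fixed sample size with common random numbers, an
invented limit state and cost, and a derivative-free solver because the Monte
Carlo estimate is piecewise constant in the design variables); the chapter's
adaptive sample-size rules are not reproduced:

```python
# Sample-average approximation of an RBDO problem (illustrative sketch only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
U = rng.standard_normal((20000, 2))  # common random numbers, reused at every design

def pf_estimate(d):
    """Monte Carlo estimate of P[g <= 0] for a hypothetical limit state
    g = d1*X1 + d2*X2 - 5 with X1, X2 ~ N(1, 0.1^2)."""
    x = 1.0 + 0.1 * U
    g = d[0] * x[:, 0] + d[1] * x[:, 1] - 5.0
    return float(np.mean(g <= 0.0))

pf_target = 1e-2
cost = lambda d: d[0] + 2.0 * d[1]   # hypothetical weight-like objective

# The reliability term P_f(d) is replaced by its sampling estimate, turning
# the RBDO problem into a deterministic one solvable by a standard algorithm.
res = minimize(cost, x0=[3.0, 2.5], method="COBYLA",
               constraints=[{"type": "ineq",
                             "fun": lambda d: pf_target - pf_estimate(d)}])
print(res.x, pf_estimate(res.x))
```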
Rackwitz and Joanni describe theoretical and practical issues leading to cost-efficient
optimization formulations for existing aging structures. In order to establish an effi-
cient methodology for optimizing maintenance, an elaborate model, based on renewal
theory that uses systematic reconstruction or repair schemes after suitable inspection,
is formulated in which life-cycle cost perspective is used. The presented implementation
shows the impact of the choice of the objective function, the risk acceptability and the
transient behaviour of the failure rate. The emphasis is given on concrete structures,
but the described methodology can be applied to any material and any type of engineer-
ing structures. In particular, minimal age-dependent block repairs and maintenance by
inspection and repair have been studied via an illustrative example.
Wu describes in his contribution a reliability-based damage tolerance (RBDT)
methodology that provides a systematic approach to probabilistic fracture-mechanics
damage tolerance analysis with maintenance planning under various uncertainties.
Moreover, he presents the successful integration of RBDT in the proposed reliability-
based maintenance optimization (RBMO) methodology, focusing on efficient sampling
and other computational strategies for handling the uncertainties related to structural
maintenance issues (fatigue, failure, inspection, repair, etc). A comparison of different
versions of the proposed RBMO for analytical benchmark examples as well as for
realistic test cases is presented.
Eldred, Bichon, Adams and Mahadevan present an overview of recent research
related to first and second-order reliability methods. They outline both the forward
reliability analysis of computing probabilities for specified response levels (using the
so-called RIA, i.e. the reliability index approach) and the inverse reliability analy-
sis of computing response levels for specified probabilities (the performance measure
approach or PMA). A number of algorithmic variations are described and the effect
of different limit state approximations, probability integrations, warm starting, most
probable point search algorithms, and Hessian approximations is discussed. Relative
performance of these reliability analysis and design algorithms is presented for several
benchmark test problems as well as for real-world applications related to the prob-
abilistic analysis and design of micro-electro-mechanical systems (MEMS) using the
DAKOTA software.
Hurtado aims at exploiting the complementary nature of RDO and RBDO prob-
abilistic optimization approaches, using effective expansion techniques. Under this
viewpoint, an efficient approximate methodology that integrates RDO and RBDO is
proposed, in an effort to allow the designer to foresee the implications of adopting
RDO or RBDO in the optimization process of probabilistic applications and to com-
bine them in an optimum manner. On this basis, the concept of “robustness assurance’’
in structural design is introduced, in a similar manner to the “quality assurance’’ in
the construction phase. For this purpose, a practical method for robust optimal design
interpreted as entropy minimization is presented. Illustrative examples are presented
to elucidate the advantages of the proposed approach.
The robustness function is a measure of the performance of structural systems and
expresses the greatest level of non-probabilistic uncertainty at which no constraint on
structural performance can be violated. Kanno and Takewaki propose an efficient
scheme for robust design optimization of trusses under various uncertainties. The
structural optimization problem is formulated in the framework of an info-gap decision
theory, aiming at maximizing the robustness function and is solved using semi-definite
programming methods. Characteristic truss examples are used to demonstrate the
efficiency of the proposed methodology.
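In info-gap notation (assumed here for illustration), the robustness function of
a design d can be written as

\[
\hat{\alpha}(d) = \max \{\, \alpha \ge 0 : g_j(d, u) \le 0
\;\; \text{for all } u \in \mathcal{U}(\alpha),\; j = 1, \ldots, m \,\},
\]

where U(α) is a nested family of uncertainty sets of size α around the nominal
data and the g_j are the performance constraints; the design that maximizes
α̂ tolerates the greatest data uncertainty without violating any constraint.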
In his chapter, Doltsinis advocates the importance of an elaborate consideration of
random scatter in industrial engineering with regard to reliability, and for securing
standards of operation performance (robustness). For this purpose, synthetic Monte
Carlo sampling and analytic Taylor series expansion, which offer alternative routes to
stochastic analysis and design improvement, are described. The robust optimum design problem
is formulated as a two-criteria task that involves minimization of the mean value and
standard deviation of the objective function, while randomness of the constraints is
also considered. Numerical applications with linear and nonlinear structural response
are presented to justify the efficiency of the proposed approach.
Takewaki and Ben-Haim present a robust design concept, capable of incorporating
uncertainties for both demand (loads) and capacity (various structural design param-
eters) of a dynamically loaded structure. Since uncertainties are prevalent in many
cases, it is necessary to satisfy critical performance requirements, rather than to opti-
mize performance, and to maximize the robustness to uncertainty. In the proposed
implementation, the so called, “info-gap models of uncertainty’’ are used to represent
uncertainties in the Fourier amplitude spectrum of the dynamic loading and the basic
structural parameters related to the vibration model of the structure. Furthermore, earth-
quake input energy is introduced as a new measure of structural performance for
passively controlled structures and uncertainties of damping coefficients of control
devices are also considered.
Ganzerli and De Palma focus on the use of convex models of uncertainty with genetic
algorithms for optimal structural design. Together with probability and fuzzy sets,
convex models can be considered part of the so-called “uncertainty
triangle’’. Following a literature review on convex models and their applications, a
description of convex model theory as an efficient alternative way to deal with prob-
lems having severe structural uncertainties is presented. Subsequently, applications
including the use of convex models of uncertainty combined with genetic algorithms
for optimal structural design of trusses are demonstrated, and directions for further
research in this area are given.
In the last chapter, Lagaros, Tsompanakis, Fragiadakis, Plevris and Papadrakakis
present efficient methodologies for performing standard RBDO and combined
reliability-based and robust design optimization (RRDO) of stochastic structural
systems in a multi-objective optimization framework. The proposed methodologies
incorporate computationally efficient structural optimization and probabilistic analy-
sis procedures. The optimization part is performed with evolutionary methods while
the probabilistic analysis is carried out with the Monte Carlo Simulation (MCS) method
with the Latin Hypercube Sampling (LHS) technique for the reduction of the sample
size. In order to reduce the excessive computational cost and make the whole procedure
feasible for real-world engineering problems, the use of Neural Network (NN) based
metamodels is incorporated in the proposed methodology. The use of NNs is motivated
by the time-consuming repeated FE solutions required in the reliability analysis phase
and by the evolutionary optimization algorithm during the optimization process.
The editors of this book would like to express their deep gratitude to all the contri-
butors for their most valuable support during the preparation of this volume and for
their time and effort devoted to the completion of their contributions. In addition, we
are most grateful to the Book Series Editor, Professor Dan M. Frangopol, for his
kind invitation to edit this volume, for preparing the foreword of this book, and for his
constructive comments and suggestions offered during the publication process. Finally,
the editors would like to thank all the personnel of Taylor and Francis Publishers,
especially Germaine Seijger, Richard Gundel, Lukas Goosen, Tessa Halm, Maartje
Kuipers and Janjaap Blom, for their most valuable support for the publication of this
book.

Yiannis Tsompanakis
Nikos D. Lagaros
Manolis Papadrakakis
September 2007
Brief Curriculum Vitae of the Editors

Yiannis Tsompanakis is Assistant Professor in the Depart-
ment of Applied Sciences of the Technical University of Crete,
Greece, where he teaches structural and computational
mechanics as well as earthquake engineering courses. His
scientific interests include computational methods in struc-
tural and geotechnical earthquake engineering, structural
optimization, probabilistic mechanics, structural assessment
and the application of artificial intelligence methods in engi-
neering. Dr. Tsompanakis has published many scientific
papers and is the co-editor of several books in computational
mechanics. He is involved in the organization of minisym-
posia and special sessions in international conferences as well as special issues of
scientific journals as guest editor. He serves as a board member of various confer-
ences, organized the COMPDYN-2007 conference together with the other editors of
this book and acts as a co-editor of the resulting selected papers volume.

Nikos D. Lagaros is Lecturer of structural dynamics and
computational mechanics in the School of Civil Engineer-
ing of the National Technical University of Athens, Greece.
His research activity is focused on the development and the
application of novel computational methods and informa-
tion technology to structural and earthquake engineering
analysis and design. In addition, Dr. Lagaros has provided
consulting and expert-witness services to private companies
and federal government agencies in Greece. He also serves
as a member of the editorial board and reviewer of various
international scientific journals. He has published numer-
ous scientific papers, and is the co-editor of a number of forthcoming books, one of
which deals with innovative soft computing applications in earthquake engineer-
ing. Nikos Lagaros is co-organizer of COMPDYN 2007 and co-editor of its selected
papers volume.
Manolis Papadrakakis is Professor of Computational Struc-
tural Mechanics in the School of Civil Engineering at the
National Technical University of Athens, Greece. His main
fields of interest are: large-scale, stochastic and adap-
tive finite element applications, nonlinear dynamics, struc-
tural optimization, soil-fluid-structure interaction and soft
computing applications in structural engineering. He is
co-Editor-in-chief of the Computer Methods in Applied
Mechanics and Engineering Journal, an Honorary Editor of
the International Journal of Computational Methods, and an
Editorial Board member of a number of international scien-
tific journals. He is also a member of both the Executive and the General Council of the
International Association for Computational Mechanics, Chairman of the European
Committee on Computational Solid and Structural Mechanics and Vice President of
the John Argyris Foundation. Professor Papadrakakis has chaired many international
conferences and presented numerous invited lectures. He has written and edited var-
ious books and published a large variety of scientific articles in refereed journals and
book chapters.
List of Contributors

Adams, B.M., Sandia National Laboratories, Albuquerque, NM, USA
Agarwal, H., General Electric Global Research, Niskayuna, NY, USA
Allen, M., University of Colorado at Boulder, Boulder, CO, USA
Aoues, Y., University Blaise Pascal, France
Beck, J.L., California Institute of Technology, CA, USA
Ben-Haim, Y., Technion, Haifa, Israel
Bichon, B.J., Vanderbilt University, Nashville, TN, USA
Chateauneuf, A., University Blaise Pascal, France
De Palma, P., Gonzaga University, Spokane, WA, USA
Doltsinis, I., University of Stuttgart, Stuttgart, Germany
Eldred, M.S., Sandia National Laboratories, Albuquerque, NM, USA
Fragiadakis, M., University of Thessaly, Volos, Greece
Frangopol, D.M., Lehigh University, Bethlehem, PA, USA
Ganzerli, S., Gonzaga University, Spokane, WA, USA
Huh, J.S., Korea Aerospace Research Institute, Daejeon, Korea
Hurtado, J.E., National University of Colombia, Manizales, Colombia
Joanni, A.E., Technical University of Munich, Munich, Germany
Kanno, Y., University of Tokyo, Tokyo, Japan
Kharmanda, G., Aleppo University, Aleppo, Syria
Kokkolaras, M., University of Michigan, Ann Arbor, MI, USA
Kwak, B.M., Korea Advanced Institute of Science and Technology, Daejeon, Korea
Lagaros, N.D., National Technical University of Athens, Athens, Greece
Lee, S.H., Northwestern University, Evanston, IL, USA
Liang, J., Oakland University, Rochester, MI, USA
Mahadevan, S., Vanderbilt University, Nashville, TN, USA
Maute, K., University of Colorado at Boulder, Boulder, CO, USA
Mourelatos, Z.P., Oakland University, Rochester, MI, USA
Nikolaidis, E., University of Toledo, Toledo, OH, USA
Papadrakakis, M., National Technical University of Athens, Athens, Greece
Papalambros, P.Y., University of Michigan, Ann Arbor, MI, USA
Patel, N.M., University of Notre Dame, Notre Dame, IN, USA
Plevris, V., National Technical University of Athens, Athens, Greece
Polak, E., University of California, Berkeley, CA, USA
Rackwitz, R., Technical University of Munich, Munich, Germany
Renaud, J.E., University of Notre Dame, Notre Dame, IN, USA
Royset, J.O., Naval Postgraduate School, Monterey, CA, USA
Sørensen, J.D., Aalborg University, Aalborg, Denmark
Taflanidis, A.A., California Institute of Technology, CA, USA
Takewaki, I., Kyoto University, Kyoto, Japan
Tillotson, D., University of Notre Dame, Notre Dame, IN, USA
Tovar, A., National University of Colombia, Bogota, Colombia
Tsompanakis, Y., Technical University of Crete, Chania, Greece
Weickum, G., University of Colorado at Boulder, Boulder, CO, USA
Wu, Y.-T., Applied Research Associates Inc., Raleigh, NC, USA
Zhou, J., Oakland University, Rochester, MI, USA
Author Data

Adams, B.M.
Sandia National Laboratories
PO Box 5800, MS 1318
Albuquerque, NM 87185-1318
USA
Phone: (505)284-8845
Fax: (505)284-2518
Email: briadam@sandia.gov

Agarwal, H.
General Electric Global Research
Niskayuna, New York, 12309
USA
Phone: (574) 631-9052
Fax: (574) 631-8341
Email: Harish.Agarwal.6@nd.edu

Allen, M.
Research Assistant
Center for Aerospace Structures
Department of Aerospace Engineering Sciences
University of Colorado at Boulder
Boulder, CO 80309-0429, USA
Phone: (303) 492 0619
Fax: (303) 492 4990
Email: matthewallen1@gmail.com

Aoues, Y.
Laboratory of Civil Engineering
University Blaise Pascal
Complexe Universitaire des Cézeaux, BP 206
63174 Aubière Cedex, France
Phone: +33(0)473407532
Fax: +33(0)473407494
Email: younes.aoues@polytech.univ-bpclermont.fr
Beck, J.L.
Professor
Engineering and Applied Science Division
California Institute of Technology
Pasadena, CA 91125
USA
Phone: (626) 395-4139
Fax: (626) 568-2719
Email: jimbeck@caltech.edu

Ben-Haim, Y.
Professor
Faculty of Mechanical Engineering
Technion – Israel Institute of Technology
Haifa 32000, Israel
Phone: 972-4-829-3262
Fax: 972-4-829-5711
Email: yakov@technion.ac.il

Bichon, B.J.
PhD Student
Civil and Environmental Engineering
Vanderbilt University
VU Station B 351831
Nashville, TN 37235
USA
Phone: 615-322-3040
Fax: 615-322-3365
Email: barron.j.bichon@vanderbilt.edu

Chateauneuf, A.
Professor
Polytech’Clermont-Ferrand
Department of Civil Engineering
University Blaise Pascal
Complexe Universitaire des Cézeaux, BP 206
63174 Aubière Cedex, France
Phone: +33(0)473407526
Fax: +33(0)473407494
Email: alaa.chateauneuf@polytech.univ-bpclermont.fr

De Palma, P.
Professor
Department of Computer Science
School of Engineering and Applied Science
Gonzaga University
Spokane, WA 99258-0026
USA
Phone: 509-323-3908
Email: depalma@gonzaga.edu

Doltsinis, I.
Professor
Institute for Statics and Dynamics of Aerospace Structures
Faculty of Aerospace Engineering and Geodesy
University of Stuttgart
Pfaffenwaldring 27
D-70569 Stuttgart, Germany
Phone: 0711-685-67788
Fax: 0711-685-63644
Email: doltsinis@ica.uni-stuttgart.de

Eldred, M.S.
Sandia National Laboratories
P.O. Box 5800, Mail Stop 1318
Albuquerque, NM 87185-1318
USA
Phone: (505)844-6479
Fax: (505)284-2518
Email: mseldre@sandia.gov

Fragiadakis, M.
Lecturer
Faculty of Civil Engineering
University of Thessaly
Pedion Areos, Volos 383 34, Greece
Phone: +30-210-748 9191
Fax: +30-210-772 1693
Email: mfrag@mail.ntua.gr

Frangopol, D.M.
Professor of Civil Engineering and
Fazlur R. Khan Endowed Chair of Structural Engineering and Architecture
Department of Civil and Environmental Engineering
Center for Advanced Technology for Large Structural Systems (ATLSS Center)
Lehigh University
117 ATLSS Drive, Imbt Labs
Bethlehem, PA 18015-4729, USA
Phone: 610-758-6103 or 610-758-6123
Fax: 610-758-4115 or 610-758-5553
Email: dan.frangopol@lehigh.edu

Ganzerli, S.
Associate Professor
Department of Civil Engineering
School of Engineering
Gonzaga University
Spokane, WA 99258-0026
USA
Phone: 509-323-3533
Fax: 509-323-5871
Email: ganzerli@gonzaga.edu

Huh, J.S.
Senior Researcher
Engine Department/KHP Development Division
Korea Aerospace Research Institute
45 Eoeun-Dong, Yuseong-Gu
Daejeon 305-330, Republic of Korea
Phone: +82-42-860-2334
Fax: +82-42-860-2626
Email: caesarhjs@nate.com

Hurtado, J.E.
Professor
Universidad Nacional de Colombia
Apartado 127
Manizales
Colombia
Phone: +57-68863990
Fax: +57-68863220
Email: jhurtado14@une.net.co

Joanni, A.E.
Research Engineer
Institute for Materials and Design
Technical University of Munich
D-80290 München, Germany
Phone: +49 89 289-25038
Fax: +49 89 289-23096
Email: joanni@mb.bv.tum.de

Kanno, Y.
Assistant Professor
Department of Mathematical Informatics
Graduate School of Information Science and Technology
University of Tokyo, Tokyo 113-8656, Japan
Phone & Fax: +81-3-5841-6906
Email: kanno@mist.i.u-tokyo.ac.jp

Kharmanda, G.
Dr Eng
Faculty of Mechanical Engineering
University of Aleppo
Aleppo – Syria
Phone: +963-21-5112 319
Fax: +963-21-3313 910
Email: mgk@scs-net.org

Kokkolaras, M.
Associate Research Scientist, Research Fellow
Optimal Design (ODE) Laboratory
Mechanical Engineering Department
University of Michigan
2250 G.G. Brown Bldg.
2350 Hayward
Ann Arbor, MI 48109-2125, USA
Phone: (734) 615-8991
Fax: (734) 647-8403
Email: mk@umich.edu

Kwak, B.M.
Samsung Chair Professor
Center for Concurrent Engineering Design
Department of Mechanical Engineering
Korea Advanced Institute of Science and Technology
373-1 Guseong-dong, Yuseong-gu
Daejeon 305-701 Republic of Korea
Phone: +82-42-869-3011
Fax: +82-42-869-8270
Email: bmkwak@khp.kaist.ac.kr

Lagaros, N.D.
Lecturer
Institute of Structural Analysis & Seismic Research
Faculty of Civil Engineering
National Technical University of Athens
Zografou Campus
Athens 157 80, Greece
Phone: +30-210-772 2625
Fax: +30-210-772 1693
Email: nlagaros@central.ntua.gr

Lee, S.H.
Postdoctoral Research Fellow
Department of Mechanical Engineering
Northwestern University
2145 Sheridan Road Tech B224
Evanston IL 60201, USA
Phone: +1-847-491-5066
Fax: +1-847-491-3915
Email: sanghoon-lee@northwestern.edu

Liang, J.
Graduate Research Assistant
Department of Mechanical Engineering
Oakland University
Rochester, MI 48309-4478
USA
Phone: (248) 370-4185
Fax: (248) 370-4416
Email: jliang@oakland.edu

Mahadevan, S.
Professor
Civil and Environmental Engineering
Vanderbilt University
VU Station B 351831
Nashville, TN 37235, USA
Phone: 615-322-3040
Fax: 615-322-3365
Email: sankaran.mahadevan@vanderbilt.edu

Maute, K.
Associate Professor
Center for Aerospace Structures
Department of Aerospace Engineering Sciences
University of Colorado at Boulder
Room ECAE 183, Campus Box 429
Boulder, Colorado 80309-0429, USA
Phone: (303) 735 2103
Fax: (303) 492 4990
Email: maute@colorado.edu

Mourelatos, Z.P.
Professor
Department of Mechanical Engineering
Oakland University
Rochester, MI 48309-4478
USA
Phone: (248) 370-2686
Fax: (248) 370-4416
Email: mourelat@oakland.edu

Nikolaidis, E.
Professor
Mechanical Industrial and Manufacturing Engineering Department
4035 Nitschke Hall
The University of Toledo
Toledo, OH 43606
USA
Phone: (419) 530-8216
Fax: (419) 530-8206
Email: enikolai@eng.utoledo.edu

Papadrakakis, M.
Professor
Institute of Structural Analysis & Seismic Research
Faculty of Civil Engineering
National Technical University of Athens
Zografou Campus
Athens 157 80, Greece
Phone: +30-210-772 1692 & 4
Fax: +30-210-772 1693
Email: mpapadra@central.ntua.gr

Papalambros, P.Y.
Professor
Director, Optimal Design (ODE) Laboratory
University of Michigan
2250 GG Brown Building
Ann Arbor, Michigan 48104-2125
USA
Phone: (734) 647-8401
Fax: (734) 647-8403
Email: pyp@umich.edu

Patel, N.M.
Graduate Research Assistant
Design Automation Laboratory
Aerospace and Mechanical Engineering
365 Fitzpatrick Hall of Engineering
University of Notre Dame
Notre Dame, Indiana 46556-5637
USA
Phone: (574) 631-9052
Fax: (574) 631-8341
Email: npatel@nd.edu

Plevris, V.
PhD Candidate
Institute of Structural Analysis & Seismic Research
Faculty of Civil Engineering
National Technical University of Athens
Zografou Campus
Athens 157 80, Greece
Phone: +30-210-772-2625
Fax: +30-210-772-1693
Email: vplevris@central.ntua.gr

Polak, E.
Professor Emeritus, Professor in the Graduate School
Department of Electrical Engineering and Computer Sciences
University of California at Berkeley
255M Cory Hall
94720-1770 Berkeley, CA
USA
Phone: 510-642-2644
Fax: 510-841-4546
Email: polak@eecs.berkeley.edu

Rackwitz, R.
Professor
Institute for Materials and Design
Technical University of Munich
D-80290 München, Germany
Phone: +49 89 289-23050
Fax: +49 89 289-23096
Email: rackwitz@mb.bv.tum.de

Renaud, J.E.
Professor
Design Automation Laboratory
Aerospace and Mechanical Engineering
365 Fitzpatrick Hall of Engineering
University of Notre Dame
Notre Dame, Indiana 46556-5637
USA
Phone: (574) 631-8616
Fax: (574) 631-8341
Email: renaud.2@nd.edu

Royset, J.O.
Assistant Professor
Operations Research Department
Naval Postgraduate School
Monterey, California 93943
USA
Phone: 1-831-656-2578
Fax: 1-831-656-2595
Email: joroyset@nps.edu
Sørensen, J.D.
Professor
Department of Civil Engineering
Aalborg University
Sohngardsholmsvej 57
9000 Aalborg, Denmark
Phone: +45 9635 8581
Fax: +45 9814 8243
Email: jds@civil.aau.dk

Taflanidis, A.A.
Ph.D Candidate
Engineering and Applied Science Division
California Institute of Technology
Pasadena, CA 91125
USA
Phone: (626) 379-3570
Fax: (626) 568-2719
Email: taflanid@caltech.edu

Takewaki, I.
Professor
Department of Urban and Environmental Engineering
Graduate School of Engineering
Kyoto University
Kyotodaigaku-Katsura, Nishikyo-ku, Kyoto 615-8540
Japan
Phone: +81-75-383-3294
Fax: +81-75-383-3297
Email: takewaki@archi.kyoto-u.ac.jp

Tillotson, D.
Research Assistant
Design Automation Laboratory
Aerospace and Mechanical Engineering
365 Fitzpatrick Hall of Engineering
University of Notre Dame
Notre Dame, Indiana 46556-5637
USA
Phone: (574) 631-8616
Fax: (574) 631-8341
Email: dtillots@nd.edu

Tovar, A.
Assistant Professor
Department of Mechanical and Mechatronic Engineering
Universidad Nacional de Colombia
Cr. 30 45-03, Of. 453-401
Bogota, Colombia
Phone: +57-3165320 - 3165000 ext. 14062
Fax: +57-3165333 - 3165000 ext. 14065
Email: atovarp@unal.edu.co

Tsompanakis, Y.
Assistant Professor
Department of Applied Sciences
Technical University of Crete
University Campus
Chania 73100, Crete, Greece
Phone: +30 28210 37 634
Fax: +30 28210 37 843
Email: jt@science.tuc.gr

Weickum, G.
Graduate Research Assistant
Center for Aerospace Structures
Department of Aerospace Engineering Sciences
University of Colorado at Boulder
Room ECAE 188, Campus Box 429
Boulder, Colorado 80309-0429
USA
Phone: (303) 492 0619
Fax: (303) 492 4990
Email: gary.weickum@colorado.edu

Wu, Y.-T.
Fellow, Applied Research Associates, Inc.
8540 Colonnade Center Dr., Ste 301
Raleigh, NC 27615
USA
Phone: 919-582-3335 or 919-810-1788
Email: jwu@ara.com

Zhou, J.
Graduate Research Assistant
Department of Mechanical Engineering
Oakland University
Rochester, MI 48309-4478
USA
Phone: (248) 370-4185
Fax: (248) 370-4416
Email: jzhou@oakland.edu
Part 1

Reliability-Based Design
Optimization (RBDO)
Chapter 1

Principles of reliability-based design


optimization
Alaa Chateauneuf
University Blaise Pascal, France

ABSTRACT: Reliability-Based Design Optimization (RBDO) aims at searching for the best
compromise between cost reduction and safety assurance by controlling the structural uncer-
tainties throughout the design process, which cannot be achieved by deterministic optimization. This
chapter describes the fundamental concepts in RBDO. It aims to explain the role of uncertainties
in deriving the optimal solution, where emphasis is put on the comparison with conventional
deterministic optimization. The RBDO formulation can also be extended to cover
different design aspects, such as multi-component reliability analysis, safety factor calibration,
multi-objective applications and time-variant problems.

1 Introduction
The design of structures must fulfill a number of different criteria, such as cost, safety,
performance and durability, leading to conflicting requirements to be simultaneously
considered by the engineer. Therefore, the challenge in the design process is how to
define the best compromise between contradictory design requirements. Moreover, the
complexity of the design process does not allow for simultaneous optimization of all
the design criteria with respect to all the parameters. Traditionally, this complexity is
reduced by dividing the process into simpler sub-processes where each requirement can
be handled separately. The designer can hence concentrate his effort on only one goal,
generally the cost, and then check whether the other requirements are, more or less, ful-
filled. If necessary, further adjustments are introduced in order to improve the obtained
solution. However, this procedure cannot assure performance-based optimal design.
In structural engineering, the deterministic optimization procedures have been
successfully applied to systematically reduce the structure cost and to improve the
performance. However, uncertainties related to design, construction and loading, lead
to structural behavior which does not correspond to the expected optimal performance.
The gap between expected and obtained performances is even larger when the struc-
ture is optimized, as the remaining margins are reduced to their lower bounds; in other
terms, the optimal structure is usually sensitive to uncertainties. In deterministic design,
the propagation of uncertainties is usually hidden by the use of the well-known “safety
factors’’, without direct connection with reliability specifications. Traditionally, the
optimal cost is looked for by iterative search procedures, while the required reliability
level is assumed to be ensured by the applied safety factors, as described by the design
codes of practice. As a matter of fact, these safety factors are calibrated for average
design situations and cannot ensure consistent reliability levels for specific design con-
ditions. They may even lead to poor design, as the optimization procedure will search
for the weakest region in the domain covered by the code of practice. This weakest
region often presents not only the lowest cost but also the lowest safety. The determin-
istic optimal design is pushed to the admissible domain boundaries, leaving very little
space for safety margins in design, manufacturing and operating processes. Moreover,
the optimization process leads to a redistribution of the roles of uncertainties which
can only be controlled by reliability assessment on the basis of the sensitivity mea-
sures. For these reasons, the Deterministic Design Optimization (DDO) cannot ensure
appropriate reliability levels. If the DDO solution is more reliable than required,
avoidable losses are incurred in construction and manufacturing costs; however, if the
reliability is lower than required, the economical solution is not really achieved, because
the increased failure rate leads to failure losses higher than the expected money
saving. In this sense, the Reliability-Based Design Optimization (RBDO) becomes a
very powerful tool for robust and cost-effective designs (Frangopol 1995).
The RBDO aims to find a balanced design by reducing the expected total cost, which
is defined in terms of the initial cost (i.e. including design, manufacturing, transport
and construction costs), the failure cost, the operation cost and the maintenance costs.
In addition, the RBDO benefits from driving the search procedure by the well-controlled
variables having a great impact on the total cost. On the other hand, the
variables with high uncertainties are penalized independently of their mechanical role.
In this sense, the system robustness is achieved as the role of highly uncertain and
fluctuating variables is diminished during the optimization process. Contrary to the
DDO, the solution does not lie in the weakest domain of the design code of practice,
but a better compromise is defined by satisfying the target reliability levels.
The RBDO can also be applied for robust design purposes, where the mean values
of random variables are used as nominal design parameters, and the cost is minimized
under a prescribed probability. Therefore, the solution of RBDO provides not only an
improved design but also a higher level of confidence in the design.
From the practical point of view, solving the RBDO problems is a heavy task because
of the nested nonlinear procedures: optimization procedure, reliability analysis and
numerical simulation of structural systems. Several methods have been developed for
solving efficiently this problem, in order to allow for complex industrial applications;
this topic will be discussed in a subsequent chapter by Chateauneuf and Aoues.
This chapter aims at describing the RBDO principles, in order to give a clear vision
of the links between classical deterministic approach and the reliability-based one. It
emphasizes the fact that the deterministic optimization, based on safety factor consider-
ations, is not anymore sufficient for safety control and assurance. The Reliability-Based
Design Optimization has the advantage of ensuring a minimum cost without affecting
the target safety level. At the end of the present chapter, the use of the RBDO in different
kinds of engineering problems is briefly discussed in order to show how broad the
application spectrum can be.

2 Historical background
Since the beginning of the twentieth century, the need for a rational way to consider
structural safety has motivated a number of researchers, such as Forsell (1924), Wierzbicki
(1936) and Lévi (1948). At the conference on structural safety held in Liège in 1948
by the Association Internationale des Ponts et Charpentes, Torroja stated, probably
for the first time, that the reduction of the total cost has to include not only the
construction cost, but also the expected failure cost.

\[ C_T = C_I + C_F \tag{1} \]

where CT is the expected total cost, CI is the initial cost (i.e. design and construction
cost) and CF is the expected failure cost. This expression was readily accepted, as
an increase of the construction cost should lead to a higher safety margin and thus a
decreasing failure probability. Even though the formulation of the RBDO has been known
since 1948 (and even earlier), its direct application was impossible because of the
difficulties related to the failure probability computation for realistic structures. With
the development of the reliability theory starting in the 1950s, solution procedures became
available in the 1970s and were improved in the 1980s, in order to allow for the analysis of
practical engineering structures. However, even now, the difficulty of estimating the
failure cost remains a major problem, especially when dealing with human lives and
environmental deterioration. On the basis of the target reliability index, the RBDO
was really born in the second half of the 1980s and developed throughout the 1990s.
Nowadays, the industrial applications of RBDO still face many difficulties due to the very
high computational effort required to solve large-scale systems.
Most practical applications of structural optimization involve at least three
conflicting goals (Kuschel and Rackwitz 1997):

– Low structural cost, including or not the expected failure cost.
– High reliability levels for components and systems.
– Good structural performance under various operating conditions.

Actually, the new trend is to include the inspection, maintenance, repair and
operating costs in the definition of the expected total cost CT , in order to reach a
performance-based design on the basis of multi-criteria considerations (Frangopol
2000). A comprehensive overview of these approaches is given by Frangopol and
Maute (2003).

3 Reliability analysis
The design of structures requires the verification of a certain number of rules resulting
from the knowledge of physics and mechanics, combined with the experience of design-
ers and constructors. These rules come from the necessity to limit the loading effects
such as stresses and displacements. Each rule represents an elementary event and the
occurrence of several events leads to a failure scenario. In addition to the deterministic
variables d to be used in the system control and optimization, the uncertainties are
modeled by stochastic variables affecting the failure scenario. The knowledge of these
variables is not, at best, more than statistical information and we admit a representa-
tion in the form of random variables X, whose realizations are noted x. For a given
design rule, the basic random variables are defined by their probability distribution
with some statistical parameters (generally, the mean and the standard deviation).
Figure 1.1 Joint distribution and failure probability.

Safety is defined as the state in which the structure is able to fulfil, during its whole
lifetime, all the operating requirements, both mechanical and serviceability, for which it is
designed. To evaluate the failure probability with respect to a given failure scenario,
a performance function G(x, d) is defined by the condition of good operation of the
structure. The limit between the state of failure G(x, d) ≤ 0 and the state of safety
G(x, d) > 0 is known as the limit state surface G(x, d) = 0.
Having the performance function G(x, d), known also as the limit state function or
the safety margin, it is possible to evaluate the probability of failure by integrating the
joint probability density over the failure domain (Figure 1.1):

\[ P_f(d) = \int_{G(x,d) \le 0} f_X(x, d)\, dx \tag{2} \]

It is to be noted that the joint density function fX (x, d) depends on the design parameters
d only when the distribution parameters belong to the design variables; this is especially
the case when the mean value is considered as a design variable in the optimization
process.
There is a special case when the performance function is simply written by the margin
between the resistance R and the load effect S, where both variables are independent
normal random variables.
The performance function and the failure probability are simply given by:

\[ G(X, d) = R - S, \qquad P_f(d) = \Phi(-\beta(d)) \quad \text{with} \quad \beta(d) = \frac{m_R - m_S}{\sqrt{\sigma_R^2 + \sigma_S^2}} \tag{3} \]

where Φ(·) is the standard Gaussian cumulative distribution function, β(d) is the reliability
index, and m_R, m_S, σ_R and σ_S are respectively the means and standard deviations of
the resistance and the load effect. For this simple configuration, the optimization variable
could be the mean design strength and, in some cases, the mean load effect.
Note that the standard deviations can also be taken as optimization variables if the
relationship between quality control and structural cost can be established.
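As a quick numerical illustration of equation 3, the following minimal Python sketch computes β and Pf for a normal R − S margin; the means and standard deviations are assumed values introduced only for this example.

```python
from math import sqrt
from scipy.stats import norm

# Reliability index and failure probability for G = R - S (equation 3),
# with R and S independent normal variables. Data below are assumed.
m_R, s_R = 300.0, 30.0    # mean and standard deviation of the resistance
m_S, s_S = 150.0, 30.0    # mean and standard deviation of the load effect

beta = (m_R - m_S) / sqrt(s_R**2 + s_S**2)   # reliability index
pf = norm.cdf(-beta)                          # Pf = Phi(-beta)
print(f"beta = {beta:.3f}, Pf = {pf:.3e}")    # beta = 3.536, Pf = 2.0e-4
```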
In practice, the performance function cannot be written in a simple linear form
of normal variables, and equation 3 can rarely be applied. It is thus necessary to
evaluate, more or less precisely, the failure probability given by equation 2. Direct
integration is impossible even for small structures due to: 1) the high precision required,
2) the computational cost of the mechanical response, and 3) the multidimensional
space. Numerical methods have to be applied to give an approximation of the failure
probability. Three methods are commonly used for this purpose:

– Monte Carlo simulation, which allows the failure probability to be estimated for any
general problem (a minimal sketch follows this list). It has two main advantages: 1) the
ability to deal with practically any mechanical or physical model (linear, nonlinear,
continuous, discrete, . . .) and 2) a simple implementation without any modification of the
mechanical model (e.g. finite element software), which is treated as a black box. However,
the two main drawbacks are: 1) the very high computational time, especially for realistic
structures with low failure probability, and 2) the numerical noise due to random sampling,
leading to non-monotonic estimates during the simulations; consequently, it becomes
impossible to get an accurate and stable evaluation of the response gradient. Although the
computation time can be reduced by using importance sampling and other variance reduction
techniques, the numerical noise remains a serious difficulty for practical applications
in RBDO.
– First- and Second-Order Reliability Methods, known as FORM/SORM, which are based on the
approximation of the performance function in the standard Gaussian space by using
polynomial series. An optimization algorithm is applied to search for the design point,
also called the most probable failure point or β-point, which is the failure point nearest
to the origin of the normal space. Then, linear (FORM) or quadratic (SORM) approximations
are adopted for the performance function in order to get an asymptotic approximation of
the failure probability. It is generally accepted that FORM is sufficient for the majority
of practical engineering systems. In the RBDO context, FORM/SORM techniques have the
advantages of: 1) high numerical efficiency; and 2) direct computation of the gradients of
the reliability index, and consequently of the failure probability. The main drawbacks
are: 1) the limited precision and convergence problems in some cases, especially for highly
nonlinear limit states; and 2) the computation time for a large number of random variables.
– Response Surface Methods (RSM), which are commonly used to approximate the mechanical
response of the structure by building what is called a meta-model. Quadratic polynomials
are shown to be suitable for localized approximations of structural systems. A large part
of the computational cost lies in the evaluation of the polynomial coefficients. The
failure probability can then be evaluated simply by using the response surface, which is an
analytical expression, instead of the mechanical model itself (generally, a complex finite
element model). The advantages are mainly: 1) the reduction of the computation time for
a moderate number of random variables; and 2) the possibility of coupling reliability and
optimization algorithms to achieve high efficiency. The most common drawback lies in the
large number of mechanical calls for moderate and high numbers of variables.
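To make the Monte Carlo approach concrete, a minimal sketch is given below for the simple margin G = R − S; the sample size, seed and distribution parameters are illustrative assumptions (not data from this chapter), and the closing comment recalls why low failure probabilities make crude sampling expensive.

```python
import numpy as np

# Crude Monte Carlo sketch for Pf = Pr[G <= 0] with G = R - S.
# All numerical values below are assumed for illustration.
rng = np.random.default_rng(seed=1)
n = 1_000_000                      # sample size (assumed)
R = rng.normal(300.0, 30.0, n)     # resistance ~ N(m_R, sigma_R)
S = rng.normal(150.0, 30.0, n)     # load effect ~ N(m_S, sigma_S)

G = R - S                          # realizations of the performance function
pf_hat = np.mean(G <= 0.0)         # failure probability estimate

# The estimator's coefficient of variation, ~ sqrt((1 - Pf)/(n Pf)),
# shows why small Pf requires a huge number of simulations.
cov = np.sqrt((1.0 - pf_hat) / (n * max(pf_hat, 1e-12)))
print(f"Pf = {pf_hat:.3e}, estimator c.o.v. = {cov:.2f}")
```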
Figure 1.2 Reliability index and the Most Probable Failure Point (MPP) in the physical and normalized spaces.

In the First Order Reliability Method, the failure probability P_f is approximated in terms
of the reliability index β according to the expression:

\[ P_f(d) = \Pr[G(X, d) \le 0] \approx \Phi(-\beta(d)) \tag{4} \]

where Pr[·] is the probability operator and Φ(·) is the standard Gaussian cumulative
distribution function. The invariant reliability index β, introduced by Hasofer and Lind
(1974), is evaluated by solving the constrained optimization problem (Figure 1.2):

\[ \beta = \min \| u \| = \min \sqrt{\textstyle\sum_i \big(T_i(x)\big)^2} \quad \text{under the constraint } G(T(x), d) \le 0 \tag{5} \]

where ‖u‖ is the distance between the median point (corresponding to the origin of the
space) and the failure subspace in the normalized space, and T(x) is an appropriate
probabilistic transformation, i.e. u_i = T_i(x). The image of the performance function G(x)
in the normalized space is noted G_u(u, d) = G(T(x), d). The solution of this problem is
called the Most Probable Failure Point, the design point or the β-point; it is noted P∗,
x∗ or u∗, depending on whether the physical or the normalized space is considered.
At this point, the relationship β = ‖u∗‖ holds.
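Problem 5 is classically solved by iterative algorithms such as the Hasofer-Lind / Rackwitz-Fiessler (HL-RF) scheme; the following sketch is one possible implementation for independent normal variables, with finite-difference gradients, and all numerical settings are illustrative assumptions.

```python
import numpy as np

def hlrf(g, u0, tol=1e-6, max_iter=100, h=1e-6):
    """Hasofer-Lind / Rackwitz-Fiessler iteration in the standard normal
    space: returns (beta, u*) for the limit state g(u) = 0. A minimal
    sketch; gradients are approximated by finite differences."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gu = g(u)
        grad = np.array([(g(u + h * e) - gu) / h for e in np.eye(len(u))])
        # HL-RF update: project onto the linearized limit state
        u_new = (grad @ u - gu) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u

# Example: the linear margin G = R - S of equation 3 mapped to the
# standard space (distribution values assumed for illustration).
m_R, s_R, m_S, s_S = 300.0, 30.0, 150.0, 30.0
g = lambda u: (m_R + s_R * u[0]) - (m_S + s_S * u[1])
beta, u_star = hlrf(g, np.zeros(2))
print(beta, u_star)   # beta = 3.536 at the MPP (the beta-point)
```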
For the case of two random variables, Figure 1.3 illustrates the important points
involved in structural design: the mean point represents the average stress and strength
at operation; the characteristic values are loading and resistance values that can be
guaranteed in the design process (they correspond to a small probability of finding a
higher loading level or a lower strength; percentiles of 95% or 5% are commonly
adopted); and finally the Most Probable Failure Point (MPP), where the failure
configuration has the highest joint probability density. While the reliability analysis aims
at finding the Most Probable Failure Point, the design procedure aims at setting the
characteristic and mean values of strength and dimensions according to economical
considerations.
Figure 1.3 Mean, characteristic and design points.

Alternatively to equation 2, the reliability level of a structure can also be characterized
by the performance function P_p defined as:

\[ P_p(d, p) = \int_{G(x,d) \le p} f_X(x, d)\, dx \tag{6} \]

where the subscript p is the performance measure (in standard reliability, p is set to
zero). This formulation can be useful for specific RBDO formulations (see chapter by
Chateauneuf and Aoues).

3.1 System reliability analysis


Due to optimization, the structural components are pushed close to the limit state, and
their contribution to the overall safety becomes significant. That is why the structural
reliability cannot be correctly computed unless the complete system is considered, taking
into account the contributions of all the failure modes through appropriate modeling of
system configurations, material behavior, load variability, strength uncertainty and
statistical correlation. As structures are made of the assembly of several members, the
overall ultimate capacity is highly conditioned by the degree of redundancy. For many
structures, several components can reach their ultimate capacity well before the overall
structural failure load is reached. On the other hand, the structure could contain a number
of critical members, leading to overall failure if any one of them fails. In this context,
the system reliability can be quite different from the reliability of its components.
In the last decades, many research works have been dedicated to computing the system
reliability, especially for series and parallel systems. A series system, representing a
“weakest-link’’ chain, fails if any link fails; superstructures and building foundations
are generally good examples of series systems. A parallel system implies that each
component contributes more or less to the structural good-standing; the system failure
takes place only if all components fail.
Practical expressions for system reliability include lower and upper bounds for both
series and parallel systems; some of these bounds consider the correlation between
pairs of potential failure modes. Also, more complex system models involving mixed
series-parallel systems can be used (Ditlevsen and Madsen 1996). For series and parallel
systems, the first order approximation of the failure probabilities can be computed as
follows:

\[ P_f = \Pr\Big[ \bigcup_j \{ G_j(X, d) \le 0 \} \Big] \approx 1 - \Phi_m\big(\beta(d), [\rho]\big) \quad \text{for a series system} \]
\[ P_f = \Pr\Big[ \bigcap_j \{ G_j(X, d) \le 0 \} \Big] \approx \Phi_m\big(-\beta(d), [\rho]\big) \quad \text{for a parallel system} \tag{7} \]

where Φ_m(·, [ρ]) is the m-dimensional standard normal cumulative distribution, β(d) is the
vector of the reliability indices for the different modes and [ρ] is the matrix of
correlations between the failure modes. For practical RBDO analysis, the failure probability
can be estimated by the Ditlevsen bounds (Ditlevsen 1979), which are written for a series
system as:

\[ P_{f1} + \sum_{j=2}^{m} \max\Big( P_{fj} - \sum_{k=1}^{j-1} P_{fjk},\, 0 \Big) \;\le\; P_{fs} \;\le\; \sum_{j=1}^{m} P_{fj} - \sum_{j=2}^{m} \max_{k<j} P_{fjk} \tag{8} \]

where m is the number of dominant failure modes, Pfj is the failure probability of mode
j and Pfjk is the probability of the intersection of modes j and k; in this expression, the
failure probabilities follow a decreasing order.
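Once the mode probabilities Pfj and the pairwise intersection probabilities Pfjk are available (e.g. from FORM results and a bivariate normal model), the bounds of equation 8 are straightforward to evaluate; the sketch below, with assumed numbers for three failure modes, is one possible implementation.

```python
import numpy as np

def ditlevsen_bounds(pf, pf_pair):
    """Ditlevsen bounds (equation 8) for a series system.
    pf      : mode failure probabilities sorted in decreasing order
    pf_pair : matrix of pairwise intersection probabilities Pf_jk
    Returns (lower, upper). A minimal sketch."""
    m = len(pf)
    lower = pf[0]
    upper = pf[0]
    for j in range(1, m):
        lower += max(pf[j] - sum(pf_pair[j][k] for k in range(j)), 0.0)
        upper += pf[j] - max(pf_pair[j][k] for k in range(j))
    return lower, upper

# Illustrative values (assumed), e.g. from a FORM analysis of 3 modes:
pf = np.array([1e-3, 5e-4, 2e-4])
pf_pair = np.array([[0.0,  5e-5, 1e-5],
                    [5e-5, 0.0,  2e-5],
                    [1e-5, 2e-5, 0.0]])
print(ditlevsen_bounds(pf, pf_pair))   # (lower, upper) bounds on Pf_s
```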

4 Formulation of Reliability-Based Design Optimization


The Reliability-Based Design Optimization (RBDO) aims at finding the optimal solu-
tion that fulfills the prescribed reliability requirements. The fluctuation of loads, the
variability of material properties and the uncertainties regarding the analysis models,
contribute to make the performance of the optimal design different from the expected
one. In this sense, the optimization process has a large effect on the structural reliability.
It is today well recognized that the safety factor approach cannot ensure the required
safety levels, as it does not explicitly consider the probability of failure regarding some
performance criteria. In other words, the optimal design resulting from deterministic
optimization procedures does not necessarily meet the reliability targets.
The RBDO allows us to consider the evolution of the safety margins, leading to the best
compromise between the life-cycle cost and the required reliability. This task is rather
complicated due to the inherently non-deterministic nature of the input information. For
this reason, many analysis methods have been developed to deal with the statistical nature
of the data. Computational efficiency is mandatory for dealing with realistic engineering
problems (Kharmanda et al. 2002). The solution based on reliability concepts is rather
robust, as the uncertain parameters are penalized during the design process in favor of a
greater commitment of the well-controlled parameters.
4.1 Insufficiency of Deterministic Design Optimization


In Deterministic Design Optimization (DDO), the aim is to reduce the initial structural
cost CI(d) under a number of constraints gj(d, γ), j = 1, 2, . . . , ng, where d is the
vector of design parameters and γ is the vector of partial safety factors. The optimization
problem is thus written:

\[ \min_{d}\; C_I(d) \quad \text{subject to } g_j(d, \gamma) \le 0 \;\; \text{for } j = 1, 2, \dots, n_g \tag{9} \]

In this problem, the structural safety is assumed to be ensured by the introduction
of the safety factors within the constraint equations. A typical constraint for stress can
be written as: g = σ − fy /γ where σ is the applied stress, fy is the yield stress and γ is
the safety factor. Usually the strength fy is defined either by the mean value or by the
characteristic value; the former is common in mechanical engineering and the latter is
common in civil engineering.
In the DDO, it is assumed that the safety factors are appropriate whatever the chosen
optimal configuration. For most systems, it can be shown that the safety level is not
independent of the selected optimal design parameters. Figure 1.4a illustrates how the
deterministic optimal design is defined on a constraint obtained by simply shifting the
limit state: the failure limit state g(d) is transformed into a safe limit state g(d, γ) by
introducing the safety factor to account for uncertainties. Starting from the initial point
x0, the DDO is based on the use of classical optimization algorithms to find the optimal
design d∗, which is generally located on the boundary of the design space reduced by the
safety factor.
The Reliability-Based Design Optimization aims at finding the optimal solution,
such that the failure limit state is kept sufficiently far from the operating point. In
other words, the failure surface must lie on the iso-reliability level corresponding to
the prescribed safety target (Figure 1.4b). It is clear that even for simple cases, the
solution can be quite different from that of the deterministic optimization, where a
homothetic reduction of the design space is applied. In this sense, the RBDO can really
ensure an optimal cost without compromising the structural safety.

Figure 1.4 Comparison of optimal points corresponding to DDO and RBDO.

Figure 1.5 Distribution of the global safety factor.
As a matter of fact, the uncertainties related to structural geometry, material prop-
erties and loading lead to stochastic cost, strength and stress in the structure, which
automatically leads to random safety factors. Once the optimal design is defined, it
is possible to see the effect of uncertainties on the global safety factor. This can
be done by plotting the distribution of the strength-stress ratio, as illustrated
in figure 1.5. In this case, the structural failure is observed when the safety factor
realizations become less than unity. The failure probability can thus be computed by
evaluating either Pr [γ ≤ 1] or Pr [G(d∗ , X) ≤ 0].
It should also be emphasized that the system uncertainties may lead to a random total cost,
which can be considered in one of the two following ways:

– If the optimal configuration is specified, the structure realization involves random
variations in material properties, geometrical parameters, material unit cost,
construction costs and failure costs. These variabilities and fluctuations produce a
random total cost, whose probabilistic distribution depends on the inherent
random variables. The goal of RBDO is usually to minimize the expected total cost.
– If the structural realizations are considered for cost optimization, a scatter of the
optimal solutions is obtained, as a different optimum is found for each structural
realization. In other words, even if random variables are not involved in the cost,
the optimal deterministic solution changes according to structure and loading
fluctuations. Therefore, the total cost becomes random as solutions differ in terms of
input uncertainties. This leads to what can be seen as a lack of robustness.

Deterministic optimization is even worse when multiple constraints or multiple
components are considered. The difficulty lies in how to set the safety factors
in order to ensure a simultaneously safe and optimal design. As illustrated in Figure 1.6,
while the deterministic optimum leads to uncontrolled safety levels with respect to the
various limit states (due to the application of either the same safety factor or an
inconsistent set of safety factors), the reliability-based design optimization looks for the
situation where the safety levels can be simultaneously controlled for all the limit states.
In this case, the optimum design is oriented such that the safety requirements are fulfilled
with respect to the degrees of uncertainty; practically, greater margins are taken for
largely scattered variables, while small margins are considered for well-controlled
variables.
Figure 1.6 Comparison between DDO and RBDO solutions.

Figure 1.7 Two-link structure under vertical force (spans of 800 mm and 450 mm, height of 600 mm).

In other words, the design is driven by the variables with small uncertainties. That is
why the Reliability-Based Design Optimization (RBDO) aims at searching for the best
compromise between cost reduction and reliability assurance, by taking the system
uncertainties into consideration; therefore, the RBDO ensures economical and safe
design. It offers a good alternative to the safety factor approach, which is based on
deterministic considerations and cannot account for the reduction of safety margins
during the optimization procedure.
In order to illustrate this idea, let us consider the two-bar system shown in figure 1.7.
The system is supported at the end nodes and a vertical load P is applied at the internal
node. The bar cross-sections are noted S1 and S2 for bars 1 and 2, respectively. The
design criteria are related to member strengths and nodal displacements; buckling is
assumed to be neglected.
Under stress and deflection constraints, the deterministic optimization problem is
written:

\[
\begin{aligned}
\min_{S_1, S_2}\; & V = 1000\, S_1 + 750\, S_2 \\
\text{subject to}\quad & g_1 = F_1 - \frac{f_Y}{\gamma_\sigma}\, S_1 \le 0 \\
& g_2 = F_2 - \frac{f_Y}{\gamma_\sigma}\, S_2 \le 0 \\
\text{and}\quad & g_3 = v - \frac{v_L}{\gamma_v} \le 0
\end{aligned} \tag{10}
\]

with:

\[ F_1 = 0.6P; \qquad F_2 = 0.8P; \qquad v = \frac{P}{E}\left( \frac{480}{S_1} + \frac{360}{S_2} \right) \tag{11} \]

where E is the Young's modulus, f_Y is the yield stress, v_L is the limit displacement,
γ_σ is the load safety factor and γ_v is the displacement safety factor. They are
calculated by:

\[ \gamma_\sigma = \min\left( \frac{f_Y S_1}{0.6P};\; \frac{f_Y S_2}{0.8P} \right); \qquad \gamma_v = \frac{E\, S_1 S_2\, v_L}{P\,(480\, S_1 + 360\, S_2)} \tag{12} \]

For deterministic optimization, these safety factors are taken as γσ = 1.5 and
γv = 1.2.

Figure 1.8 Deterministic design and inconsistent reliability levels.

Considering only the two resistance limit states, Figure 1.8 shows the
deterministic optimum solution in the space of the design variables S1 and S2. This opti-
mum is obtained at the intersection of the shifted limit states, due to the application of
the safety factor of 1.5. If the uncertainties on the cross-sections are considered by using
independent normal variables with the same standard deviation, we get directly the
iso-reliability contours in this superposed design/random space. It is clearly observed
that the Most Probable Failure Points P1∗ and P2∗ do not lie at the same reliability level.
As the system represents a series combination, bar 2 reduces the overall structural
reliability and the target safety is not met. A more rational approach implies that the
two MPPs should lie on the same reliability contour; for more complex systems, this
should take into account the failure costs and system reliability models for effective
setting of the MPP locations.
It has to be stressed that even for this simple problem the task is not easy; one can
imagine how complex it is to set appropriate safety factors for nonlinear limit
states (e.g. the displacement constraint) with non-Gaussian variables, including
statistical correlations. For this reason, the DDO approach clearly cannot give a convenient
solution with a consistent treatment of uncertainty.
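For completeness, a possible SciPy transcription of the deterministic problem of equations 10-12 is sketched below; the data for E, fY, vL and P are assumptions chosen only to make the script runnable, not values from the text.

```python
import numpy as np
from scipy.optimize import minimize

# DDO of the two-bar system (equations 10-12), in N, mm and MPa units.
# Material and loading data below are illustrative assumptions.
E, fY, vL, P = 210e3, 235.0, 2.0, 100e3   # MPa, MPa, mm, N (assumed)
g_sigma, g_v = 1.5, 1.2                   # safety factors from the text

volume = lambda S: 1000.0 * S[0] + 750.0 * S[1]   # objective of eq. 10

cons = [  # SLSQP convention: fun(x) >= 0 means feasible
    {"type": "ineq", "fun": lambda S: fY / g_sigma * S[0] - 0.6 * P},
    {"type": "ineq", "fun": lambda S: fY / g_sigma * S[1] - 0.8 * P},
    {"type": "ineq",
     "fun": lambda S: vL / g_v - P / E * (480.0 / S[0] + 360.0 / S[1])},
]

res = minimize(volume, x0=[500.0, 600.0], constraints=cons,
               bounds=[(1.0, None), (1.0, None)], method="SLSQP")
print(res.x)   # optimal cross-sections S1, S2 (mm^2)
```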

4.2 Reliability-based design optimization


Although the total cost includes all costs from construction till destruction and recy-
cling, including the in-service costs, the high complexity of engineering systems leads
to difficulties in dealing with both aspects: design and maintenance uncertainties.
A common procedure consists in separating the design into two steps. In the first step,
the structure is designed in order to avoid “failure’’ configurations, with respect to the
limit states (ultimate, serviceability, fatigue, . . .). At this design stage, the optimization
is applied to assure the structural survival (or good-standing) with the lowest cost.
In the second step, the maintenance planning is optimized for the structure designed
in the first step. In this way, the design optimization is first carried out to define the
optimal structural configuration for which Reliability-Based Maintenance Optimiza-
tion (RBMO) is performed to define the maintenance-inspection-replacement policy.
In this sense, the total cost minimization is carried out in two stages: the
initial and failure costs are minimized first, and the maintenance cost second.
Of course, this implies an approximation, as some design variables
can play different roles in the cost components. For some engineering systems, the
decoupling of design and maintenance costs may not lead to globally optimal costs,
due to interaction between design decisions and deterioration rates and time-dependent
failure probability. In other words, the variation of some variables may increase failure
cost and decrease maintenance costs, and vice-versa. However, this approach is widely
accepted in engineering practice. It has also a practical advantage, as the optimal design
becomes independent of the maintenance policy, for which the operating conditions
(loading, environment, deterioration rates, costs, . . .) may vary strongly over the
structure lifespan.
The total cost CT depends on two kinds of variables (Kharmanda et al. 2002b):

– Design variables, noted d, which are the deterministic control parameters. They
should be optimized for cost reduction. They can be mechanical parameters
(e.g. geometrical dimensions, material properties) or probabilistic parameters (e.g.
means of random distributions).
– Random variables, noted X, whose realizations are x, representing the uncertainties
and the fluctuations in the system configuration. Each of the random variables
is defined by a probabilistic distribution. They usually represent geometrical,
material or loading uncertainties.

Figure 1.9 Evolution of the costs as a function of the failure probability.

Basically, the RBDO aims at minimizing the total expected cost CT (Figure 1.9)
which is given in terms of initial cost CI (including design, manufacturing, transport
and construction costs) and direct failure cost Cf (Torroja 1948, Ditlevsen and Madsen
1996).

min CT (d) = CI (d) + Cf Pf (d)


d (13)
subject to gj (d) ≤ 0

A more rigorous mathematical notation consists in writing E[CT (d, X)] instead of
CT (d), as what is optimized is the expectation, not the cost itself (which is a random
function); however, for simplicity, the notation CT (d) is maintained to indicate the
expectation.
This problem can also be written in terms of the utility function U(d) as follows
(Frangopol 1995):

\[ \max_{d}\; U(d) = B(d) - C_I(d) - L(d) \quad \text{subject to } g_j(d) \le 0 \tag{14} \]

where B is the benefit derived from the system operation, C_I is the initial construction
cost and L is the expected loss due to inspection, maintenance and failure.
This total cost expression indicates that a possible increase of the initial cost should
be balanced by a decrease in the risk C_F (i.e. the product C_f P_f). The minimization is
carried out for the design parameters such as member sizes, structural configuration
and material parameters. These design parameters may correspond to probabilistic
distribution parameters: the cost is related to the mean value when it represents the
nominal design value, and to the standard deviation when it represents the quality control
and dispersion reduction aspects.
Usually, the cost of consequences is taken as fixed, but in fact it should be a function
of the failure probability. Taking it as fixed means that the failure cost Cf (e.g.
reconstruction cost, direct damage cost and pollution) is independent of the failure
probability Pf, and consequently, the expected failure cost can be written: CF = Cf × Pf.
This expression holds as long as the failure rate remains below a commonly accepted level.
However, with abnormal failure rates, the failure cost Cf becomes a function of the failure
probability through indirect damage (bad publicity, market losses, public opinion on the
company/authority, accelerated effects, . . .). For example, in the automotive industry, if
a defect is observed in a few cars, the failure cost for each car can be assumed equal
to the repair of that car (plus some indemnity for the car owner). But if the
defect is observed in a large number of cars, the company has to repair all the
produced cars, besides the social damage to the company itself, which can translate
into significant sales losses. In other domains, such as nuclear energy, the failure cost
can be a jump function, as a single accident in a nuclear power plant leads to very
high economic, social and political consequences.
To account for this failure-rate dependence, it could be appropriate to estimate
the failure cost by nonlinear functions, such as:

\[ C_F = E[C_f] \approx C_f(P_f) \times P_f \tag{15} \]

where \( C_f(P_f) \) may take one of the following forms:

Polynomial: \( C_f(P_f) = C_{f0}\,(1 + P_f^{\alpha}) \)

Exponential: \( C_f(P_f) = C_{f0} \) for \( P_f \le P_{f0} \); \( C_f(P_f) = C_{f0}\exp\big(\mu\,(P_f - P_{f0})^{\alpha}\big) \) for \( P_f > P_{f0} \)

Sigmoidal: \( C_f(P_f) = C_{f0} + \dfrac{C_{f1}}{1 + \exp\big(-\mu\,(P_f - P_{f0})\big)} \)

where \( C_{f0} \) and \( C_{f1} \) are respectively the basic and the extra failure costs, \( P_{f0} \) is the
probability threshold, and \( \alpha, \mu \) are parameters to be estimated in terms of the failure
consequences.
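The three proposed forms of Cf(Pf) are easy to encode; the sketch below implements them with assumed parameter values so that their behavior can be compared (none of the numbers come from the text).

```python
import numpy as np

# Nonlinear failure-cost models Cf(Pf) of equation 15.
# All parameter values are illustrative assumptions.
Cf0, Cf1 = 1.0e6, 5.0e6          # basic and extra failure costs
Pf0, alpha, mu = 1e-4, 0.5, 50.0

def cf_polynomial(pf):
    return Cf0 * (1.0 + pf**alpha)

def cf_exponential(pf):
    # Cf0 below the threshold Pf0, exponential growth above it
    return np.where(pf <= Pf0, Cf0,
                    Cf0 * np.exp(mu * np.maximum(pf - Pf0, 0.0)**alpha))

def cf_sigmoidal(pf):
    return Cf0 + Cf1 / (1.0 + np.exp(-mu * (pf - Pf0)))

pf = np.logspace(-7, -2, 6)
for f in (cf_polynomial, cf_exponential, cf_sigmoidal):
    print(f.__name__, f(pf))   # the expected failure cost is CF = Cf(Pf) * Pf
```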
More generally, the expected total cost CT can be expressed in terms of all the costs
involved in the structural system, from birth to death. It thus includes inspection,
maintenance, repair and operating costs (Frangopol 2003), leading to:

\[ C_T = C_I + C_F + C_M + C_S + C_R + C_U + C_D \tag{16} \]

where C_I is the initial construction cost, C_F is the expected failure cost (usually
defined as C_F = C_f × P_f), C_M is the expected preventive maintenance cost, C_S is the
expected inspection cost, C_R is the expected repair cost, C_U is the expected use cost and
C_D is the expected recycling and destruction cost, which is particularly important for
sensitive structures, such as nuclear power plants.
In practice, the design objective of only minimizing the expected total cost is not
yet applicable, and is somewhat dangerous from a human point of view. For example,
if the designer underestimates the failure consequences with respect to the initial cost,
the optimal solution will allow for high failure rates, leading to the acceptance of
low-reliability structures. The extrapolation to rich and poor countries or cities also
leads to low reliability levels in poor countries (or cities) because of the lower failure
costs, as human lives and constructions have statistically lower monetary values. One
can imagine the political consequences of such a strategy. At least theoretically, the
correct estimation of the failure cost should lead to coherent results. The problem
of cost estimation is even more complicated when talking about whole-lifetime
management, because the failure cost may change over the structure lifetime due to
socio-economical considerations (e.g. the life quality of the society). In all cases,
special care is strongly required when minimizing the expected total cost, even when other
reliability constraints are considered.
Due to the difficulties in estimating the failure cost C_f (especially when dealing with
human lives, environmental deterioration, political consequences, . . .), the direct
use of the above equation is not that easy. For design purposes, an alternative to the
expected total cost formulation is usually to minimize the initial cost under a prescribed
reliability constraint (Moses 1977):

\[ \min_{d}\; C_I(d) \quad \text{subject to } P_f(d) \le P_f^t, \quad d^L \le d \le d^U \tag{17} \]

where d^L and d^U are respectively the lower and upper bounds of the design variables
and P_f^t is the admissible failure probability, which is set on the basis of the
engineering state of knowledge and experience. An equivalent formulation is defined in
terms of the target reliability index β_t:

\[ \min_{d}\; C(d) \quad \text{subject to } \beta(d) \ge \beta_t, \quad d^L \le d \le d^U \tag{18} \]

This formulation has the advantage of avoiding the failure cost computation. Nevertheless,
the failure consequences can be indirectly included by selecting suitable target
safety levels. In civil engineering, it is common to use an admissible failure probability
of 10^-4 for the ultimate limit state and of 10^-2 for the serviceability limit state. More
refined target values are given in the Eurocodes, in terms of the gravity of the economic
consequences and the number of exposed persons.
In principle, the target system reliability should be determined by social and eco-
nomical considerations. There is no general rule, so far, to select the target value of
the system-reliability index. Furthermore, the designer’s experience and preferences
still play an important role in the process. A reasonable choice consists in taking the
reliability of old design codes as a target for the new codes. Nevertheless, the choice of
the target value is still very important in system reliability-based optimization, because
it is the regulator of the reliability indexes of the failure modes.
The above formulation represents two embedded optimization problems (Enevoldsen and
Sørensen 1994; Enevoldsen 1994). The outer one concerns the search for the
optimal design variables to minimize the cost, and the inner one concerns the evaluation
of the reliability index in the space of random variables. The coupling between
the optimization and reliability problems is a complex task and leads to a very high
calculation cost. The major difficulty lies in the evaluation of the structural reliability,
which is carried out by a particular optimization procedure. In the random variable
space, the reliability analysis implies a large number of mechanical calls, while in the
design variable space, the search procedure modifies the structural configuration and
hence requires the re-evaluation of the reliability level at each iteration. For this
reason, the solution of these two problems (optimization and reliability) requires very
large computational resources, which seriously reduces the applicability of this approach.
This topic will be discussed at length later on, in the chapter by Chateauneuf and Aoues.
In general, the RBDO can be formulated according to one of the following forms:

– RBDO1: Minimize the design cost under reliability and structural constraints:

\[ \min_{d}\; C_I(d) \;\; \text{subject to } \beta(d) \ge \beta_t \text{ and } g_j(d) \le 0 \qquad \text{or} \qquad \min_{d}\; C_I(d) \;\; \text{subject to } P_f(d) \le P_f^t \text{ and } g_j(d) \le 0 \]

where β_t is the target reliability index and P_f^t is the maximum allowable failure
probability. When a first order approximation is applied, the relationship between
these two forms is given by \( P_f = \Phi(-\beta) \) or \( \beta = -\Phi^{-1}(P_f) \).

– RBDO2: Maximize the reliability under cost and structural constraints:

\[ \max_{d}\; \beta(d) \;\; \text{subject to } C_I(d) \le C_I^t \text{ and } g_j(d) \le 0 \qquad \text{or} \qquad \min_{d}\; P_f(d) \;\; \text{subject to } C_I(d) \le C_I^t \text{ and } g_j(d) \le 0 \]

– RBDO3: Maximize the reliability per unit cost under structural constraints:

\[ \max_{d}\; \beta(d)/C_I(d) \;\; \text{subject to } g_j(d) \le 0 \qquad \text{or} \qquad \max_{d}\; \frac{1}{P_f(d)\, C_I(d)} \;\; \text{subject to } g_j(d) \le 0 \]

which is equivalent to minimizing the ratio cost/reliability:

\[ \min_{d}\; C_I(d)/\beta(d) \;\; \text{subject to } g_j(d) \le 0 \qquad \text{or} \qquad \min_{d}\; C_I(d) \cdot P_f(d) \;\; \text{subject to } g_j(d) \le 0 \]

This kind of formulation is particularly useful when there is no limitation on the
total cost in RBDO2.
Figure 1.10 Perforated beam subjected to uniform load.

– RBDO4: Minimize the total expected cost under reliability and structural constraints:

\[ \min_{d}\; C_I(d) + C_f\, P_f(d) \;\; \text{subject to } \beta(d) \ge \beta_t \text{ and } g_j(d) \le 0 \qquad \text{or} \qquad \min_{d}\; C_I(d) + C_f\, P_f(d) \;\; \text{subject to } P_f(d) \le P_f^t \text{ and } g_j(d) \le 0 \]

These formulations are considered as the basic forms of reliability-based design
optimization, where the goal is to better redistribute the material within the structure
by taking into account the effects of uncertainties and fluctuations.
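As a concrete illustration of the RBDO1 form above, the following double-loop sketch nests a reliability evaluation (here the closed form of equation 3, standing in for a full FORM analysis) inside a cost minimization; the linear cost model and all numerical data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Double-loop RBDO1 sketch: min cost(d) subject to beta(d) >= beta_t.
# The design variable d is the mean resistance; all data are assumed.
m_S, s_S = 150.0, 30.0    # load effect statistics (assumed)
s_R = 30.0                # resistance standard deviation (assumed)
beta_t = 3.5              # target reliability index (assumed)

cost = lambda d: 10.0 * d[0]                        # initial cost model
beta = lambda d: (d[0] - m_S) / np.hypot(s_R, s_S)  # "inner loop" (eq. 3)

res = minimize(cost, x0=[200.0],
               constraints=[{"type": "ineq",
                             "fun": lambda d: beta(d) - beta_t}],
               bounds=[(m_S, 1000.0)], method="SLSQP")
print(res.x, beta(res.x))   # the optimum sits on beta(d) = beta_t
```

At the optimum the reliability constraint is active, which is the typical situation discussed above: the cost decrease is stopped exactly at the target safety level.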

4.3 Illustration on a perforated simple beam


A simply supported beam, with length L = 2 m and height h = 0.3 m, is perforated by
5 holes of mean radius m_R. The beam is subjected to a uniformly distributed load P
with mean value 1 MN/m and a coefficient of variation of 15%. The maximum stress
is located at point A in Figure 1.10. Under the effect of geometrical uncertainties, the
nominal hole radius m_R has to be designed on an RBDO basis.
In Figure 1.11, the initial, failure and total costs are plotted as functions of the
mean hole radius. The minimum cost corresponds to m_R = 7.5 cm and to a failure
probability of 1.07 × 10^-4. Figure 1.12 shows the expected total cost for different
values of consequence severity. It is observed that the hole radius should be decreased
for higher consequence costs, in order to reduce the probability of failure and therefore
the risk. The optimal solutions found for each failure cost case are:
Low: m_R = 7.9 cm (P_f = 3.4 × 10^-3), Moderate: m_R = 7.5 cm (P_f = 1.1 × 10^-4), High:
m_R = 7.1 cm (P_f = 4.3 × 10^-6) and Very High: m_R = 6.7 cm (P_f = 3.7 × 10^-7). It can be
observed that the failure probability levels are very sensitive to the failure consequences,
showing that special care should be taken in estimating these consequences, as
they drastically change the optimal solution.
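The trade-off shown in Figures 1.11 and 1.12 can be reproduced qualitatively by a one-dimensional sweep of the design variable; the cost and reliability models in the sketch below are purely hypothetical stand-ins for the perforated-beam model, intended only to show the mechanics of minimizing CT = CI + Cf Pf.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical stand-in models (assumed, not the chapter's beam model):
# the initial cost decreases and the failure probability increases
# as the mean hole radius grows.
def initial_cost(r_cm):
    return 3000.0 - 250.0 * (r_cm - 6.5)     # material saved by the holes

def failure_probability(r_cm):
    beta = 12.0 - 1.1 * r_cm                 # beta drops as the radius grows
    return norm.cdf(-beta)

Cf = 1.0e6                                   # failure cost (assumed)
radii = np.linspace(6.5, 8.0, 151)           # mean hole radius, cm
total = initial_cost(radii) + Cf * failure_probability(radii)

r_opt = radii[np.argmin(total)]
print(f"optimal mean radius = {r_opt:.2f} cm, "
      f"Pf = {failure_probability(r_opt):.2e}")
```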

5 Multi-component RBDO
In practical structural systems, the overall failure is generally dependent on a certain
number of components where each one may have several failure modes, arranged in
Figure 1.11 Initial, failure and total costs of the perforated beam (expected costs in Euros versus mean hole radius in cm).

Figure 1.12 Expected total costs as a function of the failure consequence costs (expected costs in Euros versus mean hole radius in cm).

series and/or parallel systems. During the optimization of redundant structures, the
contribution of various members is highly redistributed and the prediction of the most
important components is not easy. Some insignificant components at the beginning
of the RBDO procedure can become very important in the neighborhood of the opti-
mal point. That is why the structural reliability cannot be correctly computed without
considering the whole system, accounting for all the failure modes. In this case, the
constraint on the system reliability becomes a computational challenge because of the
different levels of embedded optimization loops. Thus, the system RBDO has well-known
limitations due to the system reliability computation and the necessity of making some
approximations in practical cases (e.g. bounds, reduction of failure paths, . . .). This is
probably the main reason why the system approach is less popular than the component
approach. Another difficulty arises from the fact that the component assembly is a logical
combination (i.e. union and intersection of events) rather than just an algebraic operation,
which is hard to deal with in system optimization, as sensitivity computation is not easy for
logical operators. For example, the derivation of the union of two events is not simple
to handle when one of them is totally or partially included inside the other, as the
derivative operator can only capture the dominant event sensitivity.
This difficulty is emphasized by the fact that the failure mode combination is strongly
related to the few significant failure modes at a given instant of the computing
process. However, as the design variable values change in each iteration, the significant
failure modes are not always the same, which greatly influences the convergence of the
optimization procedure. Fortunately, in practice the significant failure modes identified
in the system reliability analysis tend to stabilize after a few iterations.

5.1 System RBDO formulation


The system RBDO can be formulated either at the component level or at the system level
(Enevoldsen 1994). At the component level, the RBDO can be written by specifying
the target reliability for each one of the structural components, leading to:

\[ \min_{d}\; C_I(d) \quad \text{subject to } \beta_i(d) \ge \beta_{ti} \text{ and } g_j(d) \le 0 \tag{19} \]

where βi (d) and βti are respectively the reliability index and the target index for the
ith component. Each one of the component reliability constraints includes a minimum
reliability requirement for a specific failure mode at a specific location in the structure.
For example, a member has several critical cross-sections which may fail according to
several modes, such as yielding, cracking and excessive deformation, in addition to
member buckling failure and structural instability.
At the system level, the RBDO is formulated by only specifying the target system
reliability for the whole structure:

\[ \min_{d}\; C_I(d) \quad \text{subject to } \beta_{sys}(d) \ge \beta_t \text{ and } g_j(d) \le 0 \tag{20} \]

where βsys (d) and βt are respectively the reliability index and the target index for the
whole system. The system reliability is generally evaluated by the use of upper and
lower bounds.
Some authors combined the constraints on component and system reliabilities, but
this approach could lead to either redundant or inconsistent constraints. Aoues and
Chateauneuf (2007) proposed a scheme for consistent RBDO of structural systems.
The basic idea consists in updating the component target safety levels in order to fulfill
the overall system target. In the main optimization loop, the cost function is minimized
under the constraints that component reliability indexes must satisfy the updated target
values.
\[ \min_{d}\; C(d) \quad \text{subject to } \beta_j(d) \ge \beta_{tj}^{\text{Updated}}, \quad d^L \le d \le d^U \tag{21} \]

where \( \beta_{tj}^{\text{Updated}} \) is the updated target reliability index for the j-th failure mode and
β_j(d) is the reliability index for the considered design configuration. In the updating
procedure, the target indexes are adjusted to meet the system reliability requirement.

Figure 1.13 Overhanged beam with variable cantilever depth (bending moment diagram with M2 = M1/8).

This can be performed by solving the problem:
\[ \min_{\beta_{tj}^{\text{Updated}}}\; \sum_{j=1}^{m_p} \big(\beta_{tj}^{\text{Updated}} - \beta_j\big)^2 \quad \text{subject to } \beta_{sys}\big(\beta_{tj}^{\text{Updated}}, \rho_{jk}\big) \ge \beta_t \tag{22} \]

which is solved for the updated target indexes \( \beta_{tj}^{\text{Updated}} \).
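A possible numerical sketch of the updating problem 22 for two failure modes is given below, using a bivariate normal model for the series-system probability; the correlation, target and starting indexes echo the example of section 5.2, but the implementation details are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal, norm

rho = 0.233                    # correlation between the two limit states
beta_t = norm.ppf(1 - 0.05)    # system target: Pf_sys = 0.05 -> 1.645

def beta_sys(betas):
    """Series-system index from FORM: Pf_sys = 1 - Phi2(b1, b2; rho)."""
    cov = [[1.0, rho], [rho, 1.0]]
    p_safe = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(np.asarray(betas))
    return -norm.ppf(1.0 - p_safe)

beta_now = np.array([1.65, 3.30])   # current component indexes (assumed)

# Problem 22: updated targets closest to the current indexes, subject
# to the system-level constraint beta_sys >= beta_t.
res = minimize(lambda b: np.sum((b - beta_now) ** 2), x0=beta_now,
               constraints=[{"type": "ineq",
                             "fun": lambda b: beta_sys(b) - beta_t}],
               method="SLSQP")
print(res.x, beta_sys(res.x))
```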

5.2 Overhanged reinforced concrete beam


In order to show the interest of system analysis, an overhanged beam with variable
cantilever depth is considered, as shown in Figure 1.13 (Aoues and Chateauneuf 2007).
With a constant breadth of 20 cm, the beam is defined by the middle-span depth d1
and the cantilever end depth d2 . The span is L = 8 m and the cantilever length is
Lc = 3 m. The beam is subjected to uniformly distributed loads q and q/8 as illustrated
in Figure 1.13. In order to reduce the negative moments, two tension rods are acting
at the cantilever ends, modeled by the tensile force P. The concrete strength is taken
as fcu = 25 MPa and the steel yield strength is fY = 200 MPa. An extreme loading case
is considered where q = 40 kN/m and P = 30 kN; leading to the maximum moments
M(x = 0.75) = 11.25 kNm and M(x = 3) = −90 kNm.
The considered random variables are the applied loads and the effective depth of RC
cross-sections, which are considered as normally distributed to allow for easy graphical
illustrations. For a given cross-section, the design equation is written by:

\[ G_i = f_Y A_{si} \left( d_i - \frac{f_Y A_{si}}{2\,(0.85\, f_{cu}\, b)} \right) - M_i \tag{23} \]
Table 1.1 Statistical data for random variables.

Random variable                 Mean             St-deviation
Middle span depth d1            m_d1             σ_d1 = 5 cm
Cantilever end depth d2         m_d2             σ_d2 = 2.5 cm
Reference moment M              m_M = 90 kNm     σ_M = 18 kNm

The reinforcement is chosen as A_s1 = 12 cm² and A_s2 = 6 cm², leading to the limit
states:

\[
\begin{aligned}
G_1 &= 0.24\,(d_1 - 0.02824) - M_1 \\
G_2 &= 0.12\,(d_2 - 0.01412) - M_2
\end{aligned} \tag{24}
\]

which can be written in the normalized space by probabilistic transformation:

\[
\begin{aligned}
H_1 &= 0.24\,(m_{d1} + \sigma_{d1} u_{d1} - 0.02824) - (m_M + \sigma_M u_M) \\
H_2 &= 0.12\,\Big(m_{d2} + \rho\, \sigma_{d2} u_{d1} + \sqrt{1 - \rho^2}\, \sigma_{d2} u_{d2} - 0.01412\Big) - 0.125\,(m_M + \sigma_M u_M)
\end{aligned} \tag{25}
\]

where M is a reference moment (equal to M_1), u_i are the normalized variables and ρ is
the correlation between d_1 and d_2. The distribution parameters are given in Table 1.1.
The correlation between d_1 and d_2 is taken as ρ = −0.6. As this situation is considered
an extreme one, the allowable failure probability of the system is set to
P_f,system = 0.05 (naturally, this is a conditional probability, as it assumes that the
extreme situation occurs). The reliability solution leads to the direction cosines:
α_d1 = 0.55 and α_M = −0.83 for the limit state H_1, and α_d1 = −0.48, α_d2 = 0.64 and
α_M = −0.60 for the limit state H_2. Thus, the correlation between these two limit states
is equal to 0.233.
The overall RC volume of this beam is computed by V = 0.2 (11 d_1 + 3 d_2). To account
for workmanship in the cost calculation, the depths are raised to the power 3. The
final cost of RC is estimated at 150 €/m³. The system RBDO is applied to the
structure by adopting two considerations: 1) the target reliability index is the same
for all the limit states, and 2) the target reliability indexes are adapted to find a better
solution, under the satisfaction of the system target. In the first case, the target system
failure probability of 0.05 is reached when both components have reliability indexes
of 1.943, given the correlation of 0.233. In the second case, the target of 0.05 is
sought while the cost is made as low as possible. The interest of the adaptive
target strategy is shown by comparing these two RBDO formulations, as indicated in
Table 1.2. For the same system reliability level, the adaptive target methodology allows
the structural cost to be significantly decreased, by better distributing the material within
the structure.
Figure 1.14 compares the failure domains for both solutions (the 2D graph is given
for the limit states projected on the plane uM = 0). As the system failure probability is
the same for both formulations, the decrease of the margin for H1 implies the increase
of the margin for H2 . For the same system reliability, the adaptive approach allows us
to reach a cost reduction of 12.4%. Figure 1.14 shows also the beam profile obtained
Table 1.2 RBDO formulations and solutions.

Considered aspect      Component-based formulation                   System-based formulation
Formulation            Minimize: 300 (11 m_d1³ + 3 m_d2³)            Minimize: 300 (11 m_d1³ + 3 m_d2³)
                       under: β1 ≥ 1.9434 and β2 ≥ 1.9434            under: β_sys ≥ 1.64485
Failure points U*      u*_d1 = −1.07; u*_d2 = 0; u*_M = 1.61         u*_d1 = −0.91; u*_d2 = 0; u*_M = 1.37
                       u*_d1 = 0.93; u*_d2 = −1.24; u*_M = 1.17      u*_d1 = −1.58; u*_d2 = −2.11; u*_M = 1.98
Reliability levels     β1 = 1.9434                                   β1 = 1.6487
at optimum             β2 = 1.9434                                   β2 = 3.2959
                       Pf,sys = 0.05                                 Pf,sys = 0.05
Optimum design         m*_d1 = 57.8 cm                               m*_d1 = 55.2 cm
                       m*_d2 = 16.9 cm                               m*_d2 = 21.1 cm
                       CT = 64.3 €                                   CT = 56.3 €

Figure 1.14 Failure domains and optimum design for identical and adaptive formulations.

by the two approaches. It is clear that the adaptive target approach tends to decrease
the depth where the cost contribution is largest, without decreasing the overall system
safety.

6 RBDO issues
The interest of RBDO is not limited to the design of new structures, but it also offers
a powerful tool to solve a large class of structural problems. The RBDO is applied to
various levels of reliability assessment, design, maintenance and codification. Some of
these issues are briefly presented in this section.

6.1 Multicriteria approach for RBDO


As a matter of fact, the RBDO is a multicriteria optimization problem where the objec-
tive is to minimize the costs and to maximize the safety (Kuschel and Rackwitz 1997).
It is generally accepted that reliability and economy have conflicting requirements
which must be considered simultaneously in the optimization process. The usual
formulations aim either to combine these two objectives into a single weighted objective
or to deal with one of them as an optimization constraint.
A more rational formulation can be stated as a true multicriteria problem, where the
designer can get the Pareto-optimal configurations in order to make consistent choices
in the design process. As an example, Frangopol (2003) proposed a four-objective
vector for bridge structures:

\[ f(d, x) = \big[ V(d),\; P_{f,COL}(d, x),\; P_{f,YLD}(d, x),\; P_{f,DFM}(d, x) \big] \tag{26} \]

where V(d) is the material volume, P_f,COL(d, x) is the collapse probability, P_f,YLD(d, x)
is the first-yield probability and P_f,DFM(d, x) is the excessive deformation probability.
This problem can be solved by any general multi-criteria technique.

6.2 Code calibration by RBDO
The design codes of practice must fit a certain objective for the whole applicable
domain. Many current design codes derive from a reliability-based calibration pro-
cedure to determine the partial safety factors to be applied in design (Sørensen et al.
1994). The objective of these codes is generally to keep the structural reliability above
the specified target level (Ang and De Leon 1997). The problem of defining the safety
factors is solved by the minimization of a penalization function for all the design sit-
uations covered by the design code (Gayton et al. 2004); the optimization problem is
thus (Ditlevsen & Madsen 1996):


L
min f (γi ) = W(ωj , βj (γi ), βt ) (27)
γi
j=1

where W( · ) is a penalty function, γi are the partial safety factors, βj (γi ) is the safety
index for the j-th situation and βt is the target reliability. Several kinds of penalty
function have been proposed in the literature. The simplest one is defined by the
weighted least square function:

W1(γi) = Σ_j ωj (βj(γi) − βt)²        (28)

This function has the advantage of being very simple and the solution of the opti-
mization problem (equation 27) can be greatly simplified if βj (γi ) has a simple explicit
expression. Nevertheless, this function is symmetrical with respect to βt , i.e. it only
depends on the difference βj − βt , and structures with a reliability index smaller than
the target are not penalized more than structures with a higher reliability index. Another
function can take the following form (Lind 1977):

W2(γi) = Σ_j ωj (k(βj(γi) − βt) + exp(−k(βj(γi) − βt)) − 1)        (29)

where k > 0 is the curvature parameter. This function penalizes reliability indices
smaller than the target more heavily than those higher than the target. When the
parameter k increases, this function becomes more penalizing for βj < βt than the least
square function. For large values of k, the penalty goes to infinity for βj < βt , and so

reliability indexes lower than the target become forbidden. Other penalty functions
can be proposed on the basis of socio-economic measures of the gap between the code
and its objective. In such a case, the relationship between cost and target reliability
index must be known.
Classically, the goal of the design codes is to minimize the expression (27) for the
whole spectrum of the design situations. Nevertheless, new evolutions of some codes
of practice tend to homogenize the risk (i.e. the product of the failure probability and
the consequences) instead of the reliability level (or failure probability). As an example,
the RBDO calibration could take the form:

min_{d,γ} CT(d, γ) = Σ_{i=1}^{L} CTi(d, γ)

subject to: Σ_{j=1}^{L} W(ωj, βj(d, γ), βt) ≤ ε        (30)
            gj(d) ≤ 0
            dL ≤ d ≤ dU

where ε is an acceptable tolerance for target fitting.

6.3 Topology based RBDO


While RBDO has mostly been concerned with sizing and shape optimization, its
application to topology optimization is a new research field (Kharmanda et al. 2004).
The basic idea is to use the uncertainties as a control parameter for topology
selection. In fact, the reliability constraint allows us to obtain a robust structural
topology. Figure 1.15 illustrates the fact that different topologies can be suitable for
the same ground structure. Usually, the comparison in deterministic topology
optimization is related only to the minimized mean compliance, without observing the
dispersion of the solution. The principle of reliability-based topology robustness
consists in selecting the topology which is least sensitive to system uncertainties.
The main difficulty in dealing with topology lies in the fact that topology opti-
mization is a qualitative approach, while the reliability-based design is a quantitative

[Figure: compliance versus optimization procedure iterations, contrasting a robust topology with one showing large dispersion.]

Figure 1.15 RBTO and principle of reliability-based topology robustness.



approach. The coupling of the two methods requires special developments to overcome
formulation and efficiency problems.

6.4 Time-variant RBDO


Every designer knows well that system information is not perfect and that its validity
is limited by system aging. In fact, most of the phenomena involved in the total cost
function are time-variant: for example, the fluctuation of loading over the structural
lifetime, the deterioration of material properties with time, the variation of operating
and maintenance costs, and the monetary fluctuation of failure costs. All these
time-variant phenomena lead to a time-variant optimal solution. However, the designer
must take decisions at a given stage of the project (largely before the construction or
manufacturing of the system), on the basis of the data available at that stage. The
resulting solution is optimal only in the first part of the structure's lifetime, as it does
not account for aging and long-term exposure.
In time-variant RBDO (Kuschel and Rackwitz 1998), the ideal scheme consists in
designing the system for the best solution over its whole lifetime. In this case, the
utility function takes the form:

max_d U(p, d, T) = B(d, T) − CI(d) − L(p, d, T)
subject to: gj(d) ≤ 0        (31)

with

B(d, T) = ∫_0^T b(t) d(t) (1 − Pf(p, d, t)) dt
                                                        (32)
L(p, d, T) = ∫_0^T Cf(p, d) f(p, d, t) d(t) dt

where b(t) is the benefit derived from the existence of the system, Cf is the failure cost,
f (p, d, t) is the probability density of the time to failure, d(t) is the discount function
(or capitalization function) and T is the system age.
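A rough numerical sketch of the two integrals in (32) is given below; the benefit rate, discount function, Poisson failure model and failure cost are all assumed here for illustration only.

import numpy as np

def trap(y, t):
    # Explicit trapezoidal rule, kept simple for portability
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

T, r, lam = 50.0, 0.05, 1e-3       # lifetime, discount rate, failure rate
t = np.linspace(0.0, T, 2001)
d_t = np.exp(-r * t)               # continuous discount function d(t)
b_t = np.full_like(t, 10.0)        # benefit rate b(t), assumed constant
Pf_t = 1.0 - np.exp(-lam * t)      # failure probability in [0, t]
f_t = lam * np.exp(-lam * t)       # density of the time to failure
Cf = 1.0e4                         # failure cost (assumed)

B = trap(b_t * d_t * (1.0 - Pf_t), t)   # benefit term B(d, T) in (32)
L = trap(Cf * f_t * d_t, t)             # loss term L(p, d, T) in (32)
print(f"B = {B:.1f}   L = {L:.1f}")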

6.5 Coupled reliability-based design and maintenance planning


Although in design practice, due to system complexity, the maintenance planning is
often considered as an independent step, reliability-based optimization can also be
applied to a coupled set of design and maintenance parameters. In this case, the
problem is formulated as:

min_d CT(d) = CI(d) + CF(d) + CM(d)
subject to: gj(d) ≤ 0        (33)

At the design stage, the maintenance cost is minimized by selecting the best set
of parameters. At this stage, there is no available site information (as the system is

not constructed yet) and a priori hypotheses have to be formulated. Generally, regu-
lar maintenance intervals are chosen at this stage. The maintenance cost is usually a
function of the type of inspection method mS , the number of inspections in the remain-
ing lifetime nS , and the time for different inspections t. The maintenance cost can be
described by (Enevoldsen and Sørensen 1994):

CM (d, p) = CPM (d, p) + CINS (d, p) + CREP (d, p) (34)

where CM is the expected maintenance cost, CPM is the preventive maintenance cost,
CINS is the expected inspection cost, CREP is the expected cost of repairs, and p is the
vector of maintenance parameters. Enevoldsen and Sørensen (1994) suggested to use
the following expressions to evaluate inspection and repair costs:

CINS(d, mS, nS, t) = Σ_{i=1}^{nS} CSi(mS) (1 − Pf(d, ti)) / (1 + r)^{ti}
                                                                          (35)
CREP(d, nS, t) = Σ_{i=1}^{nS} CRi(x) PRi(d, ti) / (1 + r)^{ti}

where CSi is the ith inspection cost, Pf is the failure probability in the time interval
[0, ti ], r is the discount rate, CRi is the cost of a repair at the ith inspection and PRi is the
probability of performing a repair after the ith inspection for surviving components.
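The following minimal Python sketch evaluates the two sums in (35); the inspection times, costs, failure rate and repair probabilities are all assumptions made here for illustration.

import numpy as np

r = 0.04                                # discount rate (assumed)
t_i = np.array([5.0, 10.0, 15.0])       # inspection times (assumed)
C_S = np.array([100.0, 100.0, 100.0])   # inspection costs C_Si (assumed)
C_R = np.array([800.0, 800.0, 800.0])   # repair costs C_Ri (assumed)
lam = 2e-3                              # assumed failure rate
Pf = 1.0 - np.exp(-lam * t_i)           # failure probability in [0, t_i]
P_R = np.array([0.05, 0.10, 0.15])      # assumed repair probabilities P_Ri

disc = (1.0 + r) ** (-t_i)              # discount factors 1/(1+r)^{t_i}
C_INS = float(np.sum(C_S * (1.0 - Pf) * disc))  # inspection paid only if surviving
C_REP = float(np.sum(C_R * P_R * disc))         # expected discounted repair cost
print(f"C_INS = {C_INS:.1f}   C_REP = {C_REP:.1f}")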

7 Conclusions
RBDO is a powerful tool for the robust design of structural systems. The explicit
consideration of the safety level allows the total cost to be optimized while making the
solution less sensitive to system uncertainties. Contrary to traditional deterministic
design optimization, RBDO allows the safety margins to be modulated as a function of
the uncertainty effects of each variable, in order to reach an economic, safe, efficient
and robust design. In this sense, the safety factors are optimally defined within the
system, in contrast to deterministic design where the safety factors are set before the
optimization process is undertaken.
RBDO is still an active research field, with ongoing work extending its possibilities
toward new applications. Design, topology and time-variant reliability-based
optimization are very promising fields for achieving performance-based design and
cost-effective durability and lifetime management of structural systems.

References

Ang, A.H.-S. & De Leon, D. 1997. Determination of optimal target reliabilities for design and
upgrading of structures. Structural Safety 19:91–103.
Aoues, Y. & Chateauneuf, A. 2007. Reliability-based optimization of structural systems by
adaptive target safety: application to RC frames. Structural Safety. Article in Press.
Ditlevsen, O. 1979. Narrow reliability bounds of structural systems. Journal of Structural
Mechanics 7:435–451.
Ditlevsen, O. & Madsen, H. 1996. Structural Reliability Methods. John Wiley & Sons.

Enevoldsen, I. 1994. Reliability-based optimization as an information tool. Mech. Struct. &
Mach. 22:117–135.
Enevoldsen, I. & Sørensen, J.D. 1994. Reliability-based optimization in structural engineering.
Structural Safety 15:169–196.
Frangopol, D.M. 1995. Reliability-based optimum structural design. In: Probabilistic structural
mechanics handbook, edited by C. Raj Sundararajan, Chapman Hall, USA, 352–387.
Frangopol, D.M. 1999. Life-cycle cost analysis for bridges. In: Bridge safety and reliability.
ASCE, Reston, Virginia, 210–236.
Frangopol, D.M. 2000. Advances in life-cycle reliability-based technology for design and main-
tenance of structural systems. In: Computational mechanics for the twenty-first century.
Edinburgh: Saxe-Coburg Publishers, 257–270.
Frangopol, D.M. & Maute K. 2003. Life-cycle reliability-based optimization of civil and
aerospace structures. Computers & Structures 81:397–410.
Gayton, N., Mohamed-Chateauneuf, A., Sørensen, J.D., Pendola, M. & Lemaire, M. 2004.
Calibration methods for reliability-based design codes. Structural Safety 26(1):91–121.
Hasofer, A.M. & Lind, N.C. 1974. Exact and invariant second-moment code format.
J. Eng. Mech. Div., ASCE, 100(EM1):111–121.
Kharmanda, G., Mohamed-Chateauneuf, A. & Lemaire, M. 2002. Efficient reliability-based
design optimization using a hybrid space with application to finite element analysis. Journal
of Structural and Multidisciplinary Optimization 24(3):233–245.
Kharmanda, G., Mohamed-Chateauneuf, A. & Lemaire, M. 2002. CAROD: Computer-Aided
Reliable and Optimal Design as a concurrent system for real structures. Journal of Computer
Aided Design and Computer Aided Manufacturing CAD/CAM 1(1):1–12.
Kharmanda, G., Olhoff, N., Mohamed-Chateauneuf, A. & Lemaire, M. 2004. Reliability-based
topology optimization. Struct. Multidisc. Optim. 26:295–307.
Kuschel, N. & Rackwitz, R. 1997. Two basic problems in reliability-based structural
optimization. Mathematical Methods of Operations Research 46:309–333.
Kuschel, N. & Rackwitz, R. 1998. Structural optimization under time-variant reliability con-
straints. Proceeding of the eighth IFIP WG 7.5 Working conference on Reliability and
Optimization of Structural Systems, edited by Nowak, University of Michigan, Ann Arbor,
Michigan, USA, 27–38.
Kuschel, N. & Rackwitz, R. 2000. A new approach for structural optimization of series system.
In: R.E. Melchers & M.G. Stewart (eds). Proceedings of the 8th International conference
on applications of statistics and probability (ICASP) in Civil engineering reliability and risk
analysis, Sydney, Australia, December 1999, Vol. 2. pp. 987–994.
Lemaire, M., in collaboration with Chateauneuf, A. & Mitteau, J.C. 2006. Structural reliability.
ISTE, UK.
Lind, N.C. 1977. Reliability based structural codes, practical calibration. Safety of structures
under dynamic loading, Trondheim, Norway, 149–160.
Madsen, H.O. & Friis Hansen, P. 1991. Comparison of some algorithms for reliability-
based structural optimization and sensitivity analysis. In: C.A. Brebbia & S.A. Orszag (eds):
Reliability and Optimization of Structural Systems, Springer-Verlag, Germany, 443–451.
Moses, F. 1977. Structural System Reliability and Optimization. Comput. Struct. 7:283–290.
Moses, F. 1997. Problems and prospects of reliability based optimization. Engineering Structures
19(4):293–301.
Rackwitz, R. 2001. Reliability analysis, overview and some perspectives. Structural Safety
23:366–395.
Sørensen, J.D., Kroon, I.B. & Faber, M.H. 1994. Optimal reliability-based code calibration.
Structural Safety 15:197–208.
Chapter 2

Reliability-based optimization of
engineering structures
John D. Sørensen
Aalborg University, Aalborg, Denmark

ABSTRACT: The theoretical basis for reliability-based structural optimization within the
framework of Bayesian statistical decision theory is briefly described. Reliability-based cost
benefit problems are formulated and exemplified with structural optimization. The basic
reliability-based optimization problems are generalized to the following extensions: interactive
optimization, inspection and repair costs, systematic reconstruction, re-assessment of exist-
ing structures. Illustrative examples are presented including a simple introductory example, a
decision problem related to bridge re-assessment and a reliability-based decision problem for
offshore wind turbines.

1 Introduction
The theoretical basis for reliability-based structural optimization can be formu-
lated within the framework of Bayesian statistical decision theory mainly developed
and described in the period 1940–60, see for example (Raiffa & Schlaifer 1961),
(Aitchison & Dunsmore 1975), (Benjamin & Cornell 1970) and (Ang & Tang 1975).
By statistical decision theory it is possible to solve a large number of decision problems
where some of the parameters are modeled as uncertain. The uncertain parameters are
modeled by stochastic variables or stochastic processes. Uncertain costs and benefits
can thus be accounted for in a rational way. A large number of “simple’’ examples
for application of statistical decision theory within structural and civil engineering are
given in e.g. (Benjamin & Cornell 1070), (Rosenbleuth & Mendoza 1971) and (Ang &
Tang 1975).
During the last decades significant achievements have been obtained in development
of efficient numerical techniques which can be used in solving problems formulated by
statistical decision theory. Especially the development of FORM (First Order Reliabil-
ity Methods), SORM (Second Order Reliability Methods) and simulation techniques to
evaluate the reliability of components and systems has been important, see e.g. (Madsen
et al. 1986). In the same period efficient methods to solve non-linear optimization prob-
lems have also been developed, e.g. the sequential quadratic optimization algorithms
(Schittkowski 1986) and (Powell 1982). These developments have made it possible to
solve problems formulated in a decision theoretical framework. Examples are:

• Reliability-based inspection and repair planning for offshore structures and con-
crete structures, formulated as a preposterior decision problem, see e.g. (Kroon
1994), (Engelund 1997), (Skjong 1985), (Thoft-Christensen & Sørensen 1987),

(Fujita et al. 1989), (Madsen et al. 1989), (Madsen & Sørensen 1990), (Fujimoto
et al. 1989), (Sørensen & Thoft-Christensen 1988) and (Faber et al. 2000).
• Reliability-based structural optimization problems and associated techniques for
sensitivity analysis and numerical solution. Basic formulations of reliability-based
structural optimization are given in e.g. (Murotsu et al. 1984), (Frangopol 1985),
(Sørensen & Thoft-Christensen 1985) and (Enevoldsen & Sørensen 1994). System
aspects are considered in e.g. (Enevoldsen & Sørensen 1993), interactive reliability-
based optimization in (Sørensen et al. 1995) and optimization with time-variant
reliability in e.g. (Kuschel & Rackwitz 1998). Further it is noted that a one-level
approach for reliability-based optimization is described in (Streicher & Rackwitz
2002) based on an idea in (Madsen & Hansen 1992).

In section 2 a short description of Bayesian decision theory for engineering decisions
is given and in section 3 reliability-based structural optimization problems are
formulated. Only time-invariant reliability problems are considered.
Three levels of decision problems with increasing degree of complexity can be iden-
tified: (1) decisions with given information (e.g. for new structures), (2) decisions with
given new information (e.g. for existing structures), (3) decisions involving planning
of experiments/inspections to obtain new information (e.g. for inspection planning).
Further, interactive optimization aspects are discussed.
In order to solve reliability-based optimization problems it is important to have
accurate and numerically effective methods to evaluate probabilities of different events
and of expectations. In section 4 some probabilistic methods, such as FORM/SORM,
are briefly mentioned. Also techniques are described for sensitivity analyses to be
used in numerical solution of the optimization problems using general optimization
algorithms. In section 5 illustrative examples are presented, including applications
with re-assessment of a concrete bridge and with reliability-based design of a support
structure for wind turbines.

2 Decision theory for engineering decisions


Engineers are often in the situation of having to take decisions on the design of a new
structure, or on the repair/maintenance of existing structures, where statistical information is available.
In the following it is shown how Bayesian statistical decision theory can be used for
making such decisions in a rational way, see (Raiffa & Schlaifer 1961) and (Benjamin &
Cornell 1970) for a detailed description.
An important difficulty in Bayesian statistical decision theory when applied in civil
and structural engineering is that it can be difficult to assign values to the cost of failure
or of unacceptable behavior, especially when loss of human lives is involved. One solution
is to calibrate the cost models to existing structures or to base the decisions on com-
parisons with alternative solutions. Further, organizational factors can have a rather
significant influence in the decision process. These factors often have an influence,
which is not rational from a cost-benefit point of view. Examples are the influence of
the organizational structure, personal preferences and organizational culture.
The first problem to consider is that of making rational decisions when some of
the parameters defining the model are uncertain, but a statistical description of the

[Figure: decision sequence z → X → C(z, X): the design decision z is taken before the state of nature X is realized.]

Figure 2.1 Decisions with given information.

parameters is available, i.e. the statistical information is given. The uncertain parame-
ters are modeled by n stochastic variables X = (X1 , X2 , . . . , Xn ). The density function
of the stochastic variables is fX (x, θ) where θ are statistical parameters, for example
mean values, standard deviations and correlation coefficients.
Further, it is assumed that a decision has to be taken between a number of alternatives
which can be modeled by design/decision variables z = (z1 , z2 , . . . , zN ). In figure 2.1 a
decision model with one discretized variable z is shown. The decision is taken before
the realization by nature of the stochastic variables is known. Besides the decision
variables z and the uncertain variables X also a cost function C(z, X) is introduced in
the decision model in figure 2.1. When a decision z has been taken and a realization
x of the stochastic variables appears then the value obtained is denoted C(z, x) and
represents a numerical measure of the consequences of the decision and the realization
obtained. C(z, X) is assumed to be related to money and represents in general costs
minus benefits, if relevant.
As an example the design parameters z could be the geometrical parameters of a
structural system (cross-sectional dimensions and topology), the stochastic variables
X could be loads and material strengths and objective function C could be the cost of
the structure.
In some decision problems it can be difficult to specify the cost function, especially
if the consequences not directly measurable in money are involved, for example per-
sonal preferences. However, as described in (von Neumann & Morgenstern 1943)
rational decisions can be taken if the cost function is made such that the expected value
of the cost function is consistent with the personal preferences. Thus, if the decision-
maker wants to act rationally the strategy z, which minimizes the expected cost, has
to be chosen as

C* = min_z EX[C(z, X)] = min_z ∫ C(z, x) fX(x) dx        (1)

EX[·] is the expectation with respect to the joint density function of the stochastic
variables X, and C* is the minimum cost corresponding to the optimal decision z*.
The optimization problem can be generalized to include benefits B(z) such that the
total expected benefits minus costs, Z, are maximized. (1) is then written

Z* = max_z Z(z) = max_z {B(z) − EX[C(z, X)]} = max_z {B(z) − ∫ C(z, x) fX(x) dx}        (2)

where it is assumed that the benefits are not dependent on the stochastic variables X.
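As a minimal numerical sketch of (2), consider a hypothetical design problem in which z scales a Normal strength against a Normal load, so that the expected failure cost is available in closed form; all statistics and costs are assumed here for illustration.

import numpy as np
from scipy.stats import norm

# Assumed models: strength R ~ N(50, 5), load S ~ N(20, 5)
mu_R, s_R, mu_S, s_S = 50.0, 5.0, 20.0, 5.0

def expected_Z(z, B=10.0, C_I=1.0, C_f=1.0e5):
    # B(z) - E_X[C(z, X)]: construction cost plus expected failure cost;
    # the limit state z*R - S is linear in Normal variables, so the
    # expectation of the failure cost is exactly C_f * Phi(-beta(z))
    beta = (z * mu_R - mu_S) / np.hypot(z * s_R, s_S)
    return B - C_I * z - C_f * norm.cdf(-beta)

zs = np.linspace(0.6, 1.6, 201)
Zs = np.array([expected_Z(z) for z in zs])
print(f"z* ~ {zs[Zs.argmax()]:.2f}   Z* ~ {Zs.max():.3f}")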

3 Reliability-based structural optimization


The formulations given above can be used in a number of cases related to design of
structures. As mentioned in section 2 they can e.g. be used in a design situation where z
models the design variables (size and shape variables in a structural system), X models
uncertain loads and material parameters, B models the benefits and C models the
total expected costs to design and possible failure. As mentioned only time-invariant
reliability problems are considered.

3.1 Basic reliability-based optimization formulations


First, it is assumed that

• There is no systematic reconstruction of the structure in case of failure


• Discounting can be ignored

The total expected cost-benefits can then be written

Z(z) = B(z) − C(z) = B(z) − CI (z) − Cf PF (z) (3)

where CI (z) and Cf model the costs due to construction and failure, B(z) models
the benefits and PF (z) is the probability of failure. Failure/no failure should here be
considered in a general sense as satisfactory/not satisfactory behavior.
The optimal design z∗ is obtained from the optimization problem:

max_z Z(z) = max_z {B(z) − CI(z) − Cf PF(z)}        (4)

(4) can equivalently be formulated as a reliability-constrained optimization problem

max_z  B(z) − CI(z)
subject to: β(z) ≥ βmin        (5)

where the generalized reliability index is defined by

β(z) = −Φ⁻¹(PF(z))        (6)

Φ is the standard normal distribution function. βmin can be a code specified minimum
acceptable reliability level related to annual or lifetime reference time intervals. Other
design constraints can be added to (5) if needed. (4) and (5) give the same optimal
decision if βmin is chosen as the reliability level corresponding to the optimal solution
z∗ of (4): βmin = β(z∗ ), i.e. there is a close connection between βmin and Cf /CI . This
can easily be seen considering the Kuhn-Tucker optimality conditions for (4) and (5).
(5) is a two-level optimization problem, since the calculation of the reliability index β
by FORM requires an optimization problem to be solved, see section 4.

The optimization problem in (5) can be generalised to the following element
reliability-based structural optimization problem:

max_z Z(z) = B(z) − C(z) = B(z) − [ Σ_{i=1}^{mD} CIi Vi(z) + Σ_{i=1}^{mP} Cfi Φ(−βi(z)) ]

subject to: βi(z) ≥ βi^min,  i = 1, . . . , M
            BI,i(z) ≥ 0,  i = 1, . . . , mI        (7)
            BE,i(z) = 0,  i = 1, . . . , mE
            zi^l ≤ zi ≤ zi^u,  i = 1, . . . , N
where z = (z1 , . . . , zN ) are the design (or optimization) variables. The optimization
variables are assumed to be related to parameters defining the geometry of the structure
(for example diameter and thickness of tubular elements) and coordinates (or related
parameters) defining the geometry (shape) of the structural system.
The objective function C consists of a deterministic and a probabilistic part with mD
and mP terms, respectively. Vi is e.g. a volume in the ith deterministic term and CIi is
the cost per volume of the ith term modelling the construction costs. Vi is assumed to
be deterministic. If stochastic variables influence Vi then design values, see below, are
assumed to be used to calculate Vi . In the probabilistic part Cfi is the cost due to failure
of failure mode i. βi , i = 1, . . . , mP are reliability indices for the mP failure modes. The
general formulation of (7) allows the objective function to model both the structural
weight and the total expected costs of construction and failure.
The constraints in (7) are based on the reliability indices βi , i = 1, . . . , M for M
failure modes. βimin , i = 1, . . . , M are the corresponding lower limits on the reliabili-
ties. BI,i , i = 1, . . . , mI and BE,i , i = 1, . . . , mE define the deterministic inequality and
equality constraints in (7) which can ensure that response characteristics such as dis-
placements and stresses do not exceed codified critical values. Determination of the
inequality constraints usually includes finite element analyses of the structural system.
The inequality constraints can also include general design requirements for the design
variables. Finally also simple bounds are included as constraints.
The variables (parameters) used to model the structure are characterized as stochastic
or deterministic, according to whether they can be modelled as stochastic or
deterministic, and as design or fixed, according to whether they are design
(optimization) variables or fixed constants.
The optimization problem in (5) can further be generalised to the following system
reliability-based structural optimization problem:

max_z Z(z) = B(z) − C(z) = B(z) − [ Σ_{i=1}^{mD} CIi Vi(z) + Σ_{i=1}^{mP} Cfi Φ(−βi(z)) ]

subject to: βS(z) ≥ βmin,
            BI,i(z) ≥ 0,  i = 1, . . . , mI        (8)
            BE,i(z) = 0,  i = 1, . . . , mE
            zi^l ≤ zi ≤ zi^u,  i = 1, . . . , N
where βS is the system reliability index. If failure of the structure can be modelled
by a series/parallel system then βS can be obtained from:

βS(z) = −Φ⁻¹(Pf(z))        (9)



where Pf (z) is the probability of failure of the system, e.g. obtained by FORM/SORM
techniques.

3.2 Interactive optimization


In practical solution of an optimization problem it will often be very relevant to be able
to make different types of interaction between the user and the numerical formulation/
solution of the design problem. The basic types of interactive optimization which
influence the formulation of the optimization problems are, see (Haftka & Kamat
1985) and (Sørensen et al. 1995):
• include (delete) a design (optimization) variable
• include (delete) a constraint
• modify a constraint or
• modify (change) the objective function.
In order to investigate the effect of interactive optimization on the optimality criteria,
(9) is restated as the following general optimization problem:
min_z C(z)
subject to: ci(z) = 0,  i = 1, . . . , mE        (10)
            ci(z) ≥ 0,  i = mE + 1, . . . , m

First order necessary conditions that have to be satisfied at a (local) optimum point
z∗ are given by the Kuhn-Tucker conditions. If the optimization process has almost
converged, a good guess on the optimal design is available. A modification of the
optimization problem is then specified by the user. In (Sørensen et al. 1995) the details
are described.
Figure 2.2 illustrates the data flow in interactive structural optimization. The
modules used are:

• User interface
• OPT: general optimization algorithm
• REL: module for reliability analysis, e.g. FORM, incl. optimization
• FEA: finite element program module
• DSA: module for calculating design sensitivity coefficients.

3.3 Generalization: include inspection and repair costs


The basic decision problems considered in section 2 can as mentioned be generalized
to be used in reliability-based experiment and inspection planning, see figure 2.3.
If e model the inspection times and qualities, and d models the repair decision given
uncertain inspection result S, the optimization problem can be written:

max_{e,d} Z(e, d) = B0 − {CIN0(e) + CR0(e, d) PR(e, d) + Cf0 PF(e, d)}        (11)

where B0 models the benefits, CIN0 models the inspection costs, CR0 models the repair
costs, PR is the probability of repair and PF is the probability of failure, both obtained
using stochastic models for S and X.

[Figure: data flow in the CARBOS system: the user interface drives the OPT module, which calls REL (reliability analysis), DSA (design sensitivity analysis) and FEA (finite element analysis); an interactive loop allows variables, constraints and the objective function to be modified before the final design is reached.]

Figure 2.2 Data flow in interactive optimization, from (Sørensen et al. 1995).

[Figure: decision sequence e → S → d → X → Z(e, S, d, X), i.e. inspection plan, inspection result, repair decision, state of nature, cost-benefit.]

Figure 2.3 Decisions with given information.

(11) can be further generalised if the total expected costs are divided into construction,
inspection, repair and failure costs, and a constraint related to a maximum annual
(or accumulated) failure probability ΔPF^max is added. If the inspections performed at
times T1 , T2 , . . . , TN are part of e the optimization problem can be written

max_{e,d} Z(e, d) = B(e, d) − {CI(e, d) + CIN(e, d) + CR(e, d) + CF(e, d)}

subject to: ei^l ≤ ei ≤ ei^u,  i = 1, . . . , N        (12)
            ΔPF,t(e, d) ≤ ΔPF^max,  t = 1, 2, . . . , TL

where B is the expected benefits, CI is the initial costs, CIN is the expected inspection
costs, CR is the expected costs of repair and CF is the expected failure costs. The annual
probability of failure in year t is ΔPF,t. The N inspections are assumed performed at
times 0 ≤ T1 ≤ T2 ≤ · · · ≤ TN ≤ TL .

The total capitalised benefits are written

B(e, d) = Σ_{i=1}^{N} Bi (1 − PF(Ti)) / (1 + r)^{Ti}        (13)

The ith term represents the capitalized benefits in year i given that failure has not
occurred earlier, Bi is the benefits in year i, PF (Ti ) is the probability of failure in the
time interval [0, Ti ] and r is the real rate of interest.
The total capitalised expected inspection costs are written

CIN(e, d) = Σ_{i=1}^{N} CIN,i(e) (1 − PF(Ti)) / (1 + r)^{Ti}        (14)

The ith term represents the capitalized inspection costs at the ith inspection when
failure has not occurred earlier, CIN,i is the inspection cost of the ith inspection, PF (Ti )
is the probability of failure in the time interval [0, Ti ] and r is the real rate of interest.
The total capitalised expected repair costs are

CR(e, d) = Σ_{i=1}^{N} CR,i PRi(e, d) / (1 + r)^{Ti}        (15)

CR,i is the cost of a repair at the ith inspection and PRi is the probability of performing
a repair after the ith inspection when failure has not occurred earlier and no earlier
repair has been performed.
The total capitalised expected costs due to failure are estimated from

CF(e, d) = Σ_{t=1}^{TL} CF(t) ΔPF,t PCOL|FAT / (1 + r)^t        (16)

where CF (t) is the cost of failure at the time t. PCOL|FAT is the conditional probability
of collapse of the structure given failure of the considered component.

3.4 Generalization: include systematic reconstruction


The following assumptions are made: (1) the structure is assumed to be systematically
rebuild in case of failure, (2) only initial costs, CI (z) and direct failure costs, CF are
included, (3) the benefits per year are b and (4) failure events are assumed to be modeled
by a Poisson process with rate λ. The probability of failure is PF (z).
The optimal design is determined from the following optimization problem, see e.g.
(Rackwitz 2001):

max_z Z(z) = b/(r C0) − CI(z)/C0 − (CI(z)/C0 + CF/C0) · λPF(z)/(r + λPF(z))

subject to: zi^l ≤ zi ≤ zi^u,  i = 1, . . . , N        (17)
            PF(z) ≤ PF^max

where zl and zu are lower and upper bounds on the design variables. PFmax is the maxi-
mum acceptable probability of failure e.g. with a reference time of one year. This type
of constraint is typically required by regulators. The optimal design z∗ is determined
by solution of (17). If the constraint on the maximum acceptable probability of fail-
ure is omitted, then the corresponding value PF (z∗ ) can be considered as the optimal
probability of failure related to the failure event and the actual cost-benefit ratios used.
The failure rate λ and probability of failure can be estimated for the considered
failure event, if a limit state equation, g(X1 , . . . , Xn , z) and a stochastic model for the
stochastic variables, (X1 , . . . , Xn ) are established. If more than one failure event is
critical, then a series-parallel system model of the relevant failure modes can be used.
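A rough Python sketch of (17) for the simple component with limit state g = zR − S (Normal R and S) is given below; the statistics, cost ratios, benefit rate and failure rate are assumed, and the constraint on PF is omitted so that the unconstrained optimum can be read from the scan.

import numpy as np
from scipy.stats import norm

mu_R, s_R, mu_S, s_S = 50.0, 5.0, 20.0, 5.0   # assumed statistics
b, r, lam = 2.0, 0.05, 1.0     # benefit rate, interest rate, failure rate
C0, C_F = 1.0, 100.0           # reference initial cost and failure cost

def C_I(z):
    # Assumed initial-cost model, increasing with the design variable
    return 1.0 + 2.0 * (z - 1.0)

def Pf(z):
    return norm.cdf(-(z * mu_R - mu_S) / np.hypot(z * s_R, s_S))

def Z(z):
    # Objective of eq. (17), without the constraint on PF
    lp = lam * Pf(z)
    return b / (r * C0) - C_I(z) / C0 - (C_I(z) / C0 + C_F / C0) * lp / (r + lp)

zs = np.linspace(0.8, 1.5, 141)
z_star = zs[int(np.argmax([Z(z) for z in zs]))]
print(f"z* ~ {z_star:.3f}   optimal Pf ~ {Pf(z_star):.2e}")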

3.5 Generalisation: optimal re-assessment of existing structures


In re-assessment of structures and engineering systems, engineers are often in the
situation to be involved in decisions on repair and/or strengthening of an existing
system/structure where some statistical information is available. In the following it is
shown how Bayesian statistical decision theory can be used for making such decisions
in a rational way. The theoretical basis is detailed described in e.g. (Raiffa & Schlaifer
1961) and (Benjamin & Cornell 1970). It is assumed that the decision is taken on behalf
of the owner of the structure, and that a cost-benefit approach is used with constraints
related to minimum safety requirements specified by national/international codes of
practice and/or the society. The same principles can be applied in case of other decision
makers. It is noted that the optimal solution from the cost-benefit problem should be
used as one input to the decision process.
The decision problem on possible repair and/or strengthening in a re-assessment sit-
uation is illustrated in figure 2.4. It is assumed that the design variables in the initial
design situation are denoted z. After the initial design, information about the
uncertain variables influencing the behaviour of the structure is collected; this information is denoted S.
Often this information will be collected in connection with the re-assessment. The deci-
sion variables at the time TR of re-assessment are denoted d. The uncertain variables
describing the state of nature are denoted X.

[Figure: decision sequence z → S → (time TR) → d → X → Z(z, S, d, X), i.e. design decision, information, repair/re-design decision, state of nature, cost-benefit.]

Figure 2.4 Decisions in re-assessment with given information. The vertical line illustrates the time
of re-assessment.

The decision is taken before the realization by nature of the stochastic variables is
known. Besides the decision variables d and the uncertain variables X also a cost-
benefit function Z(z, S, d, X) is introduced in the decision model. When a decision d in
the re-assessment problem has been taken and a realisation x of the stochastic variables
appears then the value obtained is denoted Z(z, S, d, x) and represents a numerical
measure of the consequences of the re-assessment decision and the realisation obtained.
Z(z, S, d, x) is assumed to be measured in monetary units and represents in general costs
minus benefits, if relevant.
Illustrative examples of the decision variables z and d, and the stochastic variables
S and X are:

• z: design parameters, e.g. geometrical parameters of a structural system (cross-


sectional dimensions and topology). The design parameters are already chosen at
the initial design, and are therefore fixed at the time of re-assessment.
• S: information collected, e.g. concrete compression strengths obtained from sam-
ples taken from the structure, measured wave heights, non-failure of the structure,
no-find of defects by an inspection.
• d: design parameters in the re-assessment, e.g. geometrical parameters of a repair
(cross-sectional dimensions and topology).
• X : stochastic variables, representing e.g. loads and material strengths.

In some decision problems it can be difficult to specify the cost function, especially
if the consequences not directly measurable in money are involved, for example per-
sonal preferences. However, as described in (von Neumann & Morgenstern 1943)
rational decisions can be taken if the cost function is made such that the expected value
of the cost function is consistent with the personal preferences.
If the information S is related to the stochastic variables X then a predictive density
function (updated density function) fX (x|s) of the stochastic variables X taking into
account a realization s can be obtained using Bayesian statistical theory, see (Lindley
1976) and (Aitchison & Dunsmore 1975).
If the decision-maker wants to act rationally, taking into account the information s
the strategy d, which maximizes the expected cost-benefits, has to be chosen from

Z* = max_d EX|s[Z(z, s, d, X)]        (18)

EX|s [−] is the expectation with respect to the predictive (updated) density function
fX (x|s). In the following the initial design variables z are not written explicitly. Z∗ is
the maximum cost-benefit corresponding to the optimal decision. If the benefits are not
dependent on the stochastic variables then the optimization problem can be written:

Z* = max_d Z(d) = max_d {B(d) − EX|s[C(s, d, X)]}        (19)

where the future benefits are denoted B and the future costs are denoted C. Both bene-
fits and costs should be discounted to the time of the re-assessment. The optimization
formulation can also be generalised to include decision variables related to experiment
planning.

In the following time-invariant reliability problems are considered. It is assumed that


there is no systematic reconstruction of the structure in case of failure and discounting
can be ignored. The total expected cost-benefits can then be written

Z(d) = B(d) − C(d) = B(d) − CS (d) − Cf Pf (d) (20)

where CS(d) and Cf model the costs due to repair/strengthening after the re-assessment
and due to failure, B(d) models the benefits and Pf (d) is the probability of failure
updated with the information s. Failure/no failure should here be considered in a
general sense as satisfactory/not satisfactory behaviour.
In the case the information S models (one or more) events modelled by an event
margin {h(d, X) ≤ 0}, and failure is modelled by a limit state function g(d, X), the
updated probability of failure is obtained from:

Pf (d) = P(g(d, X) ≤ 0|h(d, X) ≤ 0) (21)

In the case the information S is related to the measurements of the stochastic variables
X then the (updated) density function fX (x|s) is used.
The optimal design d∗ is obtained from the optimization problem

max_d Z(d) = max_d {B(d) − CS(d) − Cf Pf(d)}        (22)

(22) can equivalently be formulated as a reliability-constrained optimization problem

max_d  B(d) − CS(d)
subject to: β(d) ≥ βmin        (23)

where the generalised reliability index is defined by β(d) = −Φ⁻¹(Pf(d)). βmin is a code
specified minimum acceptable reliability level related to annual or lifetime reference
time intervals. Other design constraints can be added to (23) if needed. (22) and (23)
give the same optimal decision if βmin is chosen as the reliability level corresponding to
the optimal solution d∗ of (22): βmin = β (d∗ ), i.e. there is a close connection between
βmin and Cf /CS . This can easily be seen considering the Kuhn-Tucker optimality
conditions for (22) and (23).
The basic decision problems considered above can be generalized to be used in
reliability-based experiment and inspection planning as described in section 3.3.

3.6 Numerical solution of decision problems


Numerical solution of the decision problems requires solution of one or more optimiza-
tion problems. Since the optimization problems formulated are generally continuous
with continuous derivatives sequential quadratic optimization algorithms such as
(Schittkowski 1986) and (Powell 1982) can be expected to be the most effective, see
(Gill et al. 1981). These algorithms require that values of the objective function and the
constraints be evaluated together with gradients with respect to the decision variables.
The probabilities in the optimization problems can be solved using FORM tech-
niques, see (Madsen et al. 1986). Associated with the FORM estimates of the

probabilities also sensitivities with respect to parameters are obtained. If the decision
problem includes analysis of a structural system the finite element method in combi-
nation with sensitivity analyses can be used. The sensitivity analyses can be based on
the direct or adjoint load method in combination with the discrete quasi-analytical
method or with the continuum method.

4 Reliability analysis and sensitivity analysis


As mentioned in the previous section the evaluation of the probability of failure events
is an integral part of decision analysis and reliability-based structural optimization
problems. Further, the decision analysis involves the evaluation of expected values of
the costs. Both the relevant failure probabilities and expected values can be determined
using modern reliability analysis techniques.
If all variables in the reliability problem can be modelled as time-invariant random
variables, the failure probability, PF (z), for a given limit state equation, g(x, z) can be
evaluated as

PF(z) = P(g(X, z) ≤ 0) = ∫_{g(x,z)≤0} fX(x, z) dx        (24)

where fX (x, z) is the joint density function of the stochastic variables X. The integral in
(24) plays a central role in the reliability analysis and has therefore been devoted special
attention over the last decades. As the integral in general has no analytical solution
it is easily realised that its solution or numerical approximation becomes a major
task for integral dimension larger than say 6 and for small probabilities. Sufficiently
accurate approximations have been developed which are based on asymptotic integral
expansions. These FORM/SORM methods are standard in reliability analysis and
commercial software, see e.g. (Madsen et al. 1986). Also simulation methods can in
many cases be very effective alternatives to FORM/SORM methods.
By FORM analysis the failure surface is approximated by its tangent at the design
point. On the basis of the linearised failure surface the probability of failure can be
approximated by, see (9):

PF(z) ≈ Φ(−β(z))        (25)

Most optimization algorithms for solution of the reliability-based optimization prob-


lems formulated in section 3 require that the sensitivities of the objective
functions and reliability estimates can be determined efficiently. By a FORM analysis
these derivatives can be computed numerically by the finite difference method. How-
ever, it is more efficient to use a semi-analytical expression. For an element analysis the
derivative of the first order reliability index, β, with respect to a parameter p, which
may be a design variable z, is
∂β/∂p = (1/‖∇u g(u*; p)‖) · ∂g(u*; p)/∂p        (26)

If a gradient-based algorithm is used in order to locate the design point the gradient
vector ∇u g(u∗ ; p) is already available and it is only necessary to determine the derivative

of the failure function with respect to the parameter p. The derivative of the first order
estimate of the probability of failure with respect to p is
∂Pf/∂p = −ϕ(−β) ∂β/∂p        (27)
where ϕ denotes the density function of a standard normally distributed variable. Also
for series and parallel systems semi-analytical expressions for the derivatives of the
first order reliability index can be derived.
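The semi-analytical sensitivities (26) and (27) can be verified on a linear limit state, for which the design point is known in closed form; the following Python sketch does so for g = zR − S with assumed Normal statistics and compares the result with a central finite difference.

import numpy as np
from scipy.stats import norm

mu_R, s_R, mu_S, s_S = 50.0, 5.0, 20.0, 5.0   # assumed statistics

def beta(z):
    return (z * mu_R - mu_S) / np.hypot(z * s_R, s_S)

z = 1.0
B = beta(z)
grad_u = np.array([z * s_R, -s_S])        # gradient of g in u-space
norm_g = np.linalg.norm(grad_u)
alpha = -grad_u / norm_g                  # FORM importance direction
u_star = B * alpha                        # design point u* = beta * alpha
dg_dz = mu_R + s_R * u_star[0]            # dg/dp at u*, with p = z
dbeta_dz = dg_dz / norm_g                 # eq. (26)
dPf_dz = -norm.pdf(-B) * dbeta_dz         # eq. (27)

eps = 1e-6                                # central finite-difference check
fd = (beta(z + eps) - beta(z - eps)) / (2.0 * eps)
print(f"dbeta/dz = {dbeta_dz:.4f} (semi-analytic)  {fd:.4f} (FD)")
print(f"dPf/dz   = {dPf_dz:.3e}")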
The following optimization problem, corresponding to the general optimization
problems defined in section 3, is considered:

min_z C(z, p) = C0(z, p) + Σ_j Cj(z, p) Pj(z, p)
subject to: Pf(z, p) ≤ Pf^max        (28)

where z are decision/design variables, p are quantities defining the costs and/or the
stochastic model. Pj denotes a probability (failure or repair), Pf denotes a failure
probability and Pfmax is the maximum accepted failure probability. The sensitivity of
the total expected costs C with respect to the elements in p is obtained from, see
(Haftka & Kamat 1985) and (Enevoldsen 1994)
dC/dpi = Σ_j Cj dPj/dpi + λ dPf/dpi        (29)

where λ is the Lagrangian multiplier associated with the constraint in (28).


The sensitivity of the decision variables z with respect to pi can be calculated using the
formulas given below which are obtained from a sensitivity analysis of the Kuhn-Tucker
conditions related to the optimization problem defined in (28). dz/dpi is obtained from
[ A    B ] [ dz/dpi ]   [ C ]
[ B^T  0 ] [ dλ/dpi ] = [ 0 ]        (30)

The elements in the matrix A and the vectors B and C are

Ars = ∂²C0/(∂zr ∂zs) + Σ_j [ (∂²Cj/(∂zr ∂zs)) Pj + 2 (∂Pj/∂zr)(∂Cj/∂zs) + Cj ∂²Pj/(∂zr ∂zs) ] + λ ∂²Pf/(∂zr ∂zs)        (31)

Br = ∂Pf/∂zr        (32)

Cr = −∂²C0/(∂zr ∂pi) − Σ_j [ (∂²Cj/(∂zr ∂pi)) Pj + (∂Pj/∂zr)(∂Cj/∂pi) ]        (33)
j

It is seen that the sensitivity of the objective function (the total expected cost) with
respect to some parameters can be determined on the basis of the first order sensitivity

coefficients of the probabilities and of the cost functions, see (29). However, calculation
of the sensitivities of the decision parameters is much more complicated because it
involves estimation of the second order sensitivity coefficients of the probabilities, see
e.g. (Enevoldsen 1994).

5 Examples

5.1 Example 1 – Simple cost-benefit analysis


In this section a simple, introductory example is presented. A structural component is
considered. It is assumed to have strength R and load S, which for simplicity both are
Normal distributed:

Load S: expected value µS = 20 kN and Coefficient of Variation = 25%


Strength R: expected value µR = 50 kN/m2 and Coefficient of Variation = 10%

The design variable z represents the cross-sectional area. The limit state equation is
written:

g = zR − S (34)

In the initial design situation z = z0 = 1 m² is chosen. The corresponding reliability
index is β = (1 · 50 − 20)/√((1 · 5)² + 5²) = 4.24 and the probability of failure
Pf = Φ(−4.24) = 1.1 · 10⁻⁵. The benefits and cost of failure are B0 = 10 and CF = 10⁷.
New information has been collected. It consists of n = 5 tests with samples of similar
components with the following results: 51, 53, 56, 57 and 58 kN/m2 . The mean
value of the test results is X = 55 kN/m2 . For updating Bayesian statistics is used. It is
assumed that the strength has a known standard deviation σR = 4 kN/m2 . The expected
value is assumed to have a prior which is Normal distributed with expected value
µ0 = 50 kN/m2 and standard deviation σ0 = 3 kN/m2 . It is noted that these assumptions
are consistent with the initial model for the strength (µR = 50 kN/m2 and COV = 10%).
The (updated) posterior for the expected value becomes Normal distributed with
expected value of µR equal to µ′′ = (nX̄σ0² + µ0σR²)/(nσ0² + σR²) = 53.7 kN/m² and
standard deviation of µR equal to σ′′ = √((σ0²σR²)/(nσ0² + σR²)) = 1.5 kN/m².
The predictive (updated) distribution for the strength becomes Normal distributed
with expected value of R equal to µ′ = µ′′ = 53.7 kN/m² and standard deviation of R
equal to σ′ = √(σR² + (σ0²σR²)/(nσ0² + σR²)) = 4.3 kN/m².
The updated reliability index and probability of failure become
β = (1 · 53.7 − 20)/√((1 · 4.3)² + 5²) = 5.12 and Pf = 1.56 · 10⁻⁷.
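The conjugate Normal update used above is easily reproduced numerically; the following Python sketch recomputes the posterior, the predictive distribution and the updated reliability index from the five test results.

import numpy as np
from scipy.stats import norm

x = np.array([51.0, 53.0, 56.0, 57.0, 58.0])  # test results [kN/m2]
n, xbar = len(x), float(x.mean())             # n = 5, mean = 55
s_R, mu0, s0 = 4.0, 50.0, 3.0                 # known sigma_R and Normal prior

mu_post = (n * xbar * s0**2 + mu0 * s_R**2) / (n * s0**2 + s_R**2)  # 53.7
s_post = np.sqrt((s0**2 * s_R**2) / (n * s0**2 + s_R**2))           # 1.5
s_pred = np.sqrt(s_R**2 + s_post**2)          # predictive std of R: 4.3

mu_S, s_S, z = 20.0, 5.0, 1.0
beta_u = (z * mu_post - mu_S) / np.hypot(z * s_pred, s_S)           # 5.12
print(f"mu'' = {mu_post:.1f}  sigma' = {s_pred:.2f}  "
      f"beta = {beta_u:.2f}  Pf = {norm.cdf(-beta_u):.2e}")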
At time TR the following two alternative re-design situations are considered:

1) continue with existing design


The cost-benefits become:

Z = B0 − CF Pf = 10 − 10⁷ · 1.56 · 10⁻⁷ = 8.44



[Figure: plot of the cost-benefit Z(z) against z over the range 1.00 to 1.20, with a maximum near z = 1.12.]

Figure 2.5 Cost-benefit as function of design variable z.

2) use a modified design with increased benefits


The design variable is chosen to be z = 1.1 m2 . The benefits are assumed to be
changed to: B (z) = B0 + (z − z0 ) · 0.5. The cost of the design change is assumed to
be: CI (z) = 1 + (z − z0 ) · 2. The updated reliability index and probability of failure
become:

β = (1.1 · 53.7 − 20)/√((1.1 · 4.3)² + 5²) = 5.68

and the probability of failure Pf = 6.60 · 10⁻⁹.


The cost-benefits become:

Z = B0 + (z − z0) · 0.5 − (1 + (z − z0) · 2) − CF Pf
  = 10 + (1.1 − 1) · 0.5 − (1 + (1.1 − 1) · 2) − 10⁷ · 6.60 · 10⁻⁹
  = 8.78

Since the cost-benefits are larger for the modified design than for continuing with the
existing design, the modified design should be chosen.
In figure 2.5 the cost-benefits are shown as function of z. It is seen that the optimal
decision is to choose a modified design with z = 1.12. It is noted that the known infor-
mation also could be in the form of an event, e.g. an inspection, and that there could
be many more decision alternatives.

5.2 Example 2 – Repair decision for concrete bridge


A road bridge with concrete columns is considered. The total expected lifetime is
assumed to be TL . The concrete columns are exposed to chloride ingress due to
spread of de-icing salts on and below the bridge. There are some indications that

chloride has penetrated the concrete and that corrosion of the reinforcement could be
expected within the next few years. Therefore a re-assessment is performed at time TR
as illustrated in figure 2.4.
Chloride ingress is one of the most common destructive mechanisms for this type
of structures. The most typical type of chloride initiated corrosion is pitting corro-
sion which may locally cause a substantial reduction of the cross-sectional area and
cause maintenance and repair actions which can be very costly. Further, the corro-
sion may make the reinforcement brittle, implying that failure of the structure might
occur without warning. The probabilistic analysis of the time to initiation of corrosion
in concrete structures is in this example based on models described in (Engelund &
Sørensen 1998).
At the time of re-assessment it is assumed that chloride profiles are taken from
representative parts of the concrete columns. The estimation of the time to initia-
tion of corrosion is based on these chloride profiles combined with prior knowledge.
A chloride profile consists of a number of measurements of the chloride concentration
as a function of the distance to the surface, y. Using the chloride profiles, the surface
concentration and the diffusion coefficient can be estimated.
It is assumed that diffusion (transportation) of chlorides into the concrete can be
described by a one-dimensional diffusion model where C(y, t) is the content of chlo-
ride at time t in the depth y, D(y, t) is the coefficient of diffusion (transportation) at
time t in the depth y, CS is the surface concentration and Cinit is the initial chloride
concentration.
It is assumed that the diffusion coefficients can be written:
D(y, t) = D0(y) (t0/t)^a        (35)

where D0 (y) is the reference diffusion coefficient at the reference time t0 and a is an
age coefficient (0 < a < 1). Models for the diffusion coefficient can include different
diffusion coefficients in different depths.
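For the special case of a constant diffusion coefficient (age coefficient a = 0), Fick's second law admits the standard error-function solution, and the time to initiation of corrosion at the reinforcement depth can be solved for directly; the following Python sketch illustrates this with assumed numbers.

import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

D = 30.0      # diffusion coefficient [mm^2/year] (assumed)
cover = 40.0  # concrete cover [mm] (assumed)
C_s = 0.50    # surface concentration (assumed)
C_i = 0.0     # initial chloride concentration
C_cr = 0.10   # critical concentration for corrosion initiation (assumed)

def C(y, t):
    # Error-function solution of Fick's second law for constant D
    return C_i + (C_s - C_i) * (1.0 - erf(y / (2.0 * np.sqrt(D * t))))

# Time to initiation: first t at which C(cover, t) reaches C_cr
t_init = brentq(lambda t: C(cover, t) - C_cr, 0.1, 200.0)
print(f"time to corrosion initiation ~ {t_init:.1f} years")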
Based on n measurements in one chloride profile the surface concentration cS , the
coefficient of diffusion D0 and the age coefficient a can be estimated using the Max-
imum Likelihood method, see (Engelund & Sørensen 1998). Next using Bayesian
statistics a predictive (updated) distribution for the stochastic variables X can be
obtained.
On the basis of the available information described above the decision maker has
to decide which repair/maintenance strategy should be applied. As an example, three
different strategies are described below based on the models in (Engelund & Sørensen
1998). All the costs given below are in some monetary unit. It is assumed that the
repair is carried out before the probability of any critical event such as total collapse
of the bridge. Therefore, in the following the optimization problem is solved without
any restriction on the probability of some critical event.
Strategy 1: consists of a cathodic protection. This strategy is implemented when
corrosion has been initiated at some point. In order to determine when corrosion is
initiated, inspections are carried out each year, beginning five years before the expected
time of initiation of corrosion. The cost of these inspections is 25 each year except for
the last year before expected initiation of corrosion where the cost is 100. The cost of

the cathodic protection is 1000 and the cost of running the cathodic protection is 20
each year.
Strategy 2: is implemented when 5% of the surface of the bridge columns shows
minor signs of corrosion, e.g. small cracks and discolouring of the surface. The repair
consists of repairing the minor damages and applying a cathodic protection. As for
strategy 1 the costs related to this strategy are the costs of the repair and the costs
of an extended inspection programme which starts three years before the expected
time of repair. However, by this strategy, also the costs related to running the cathodic
protection must be taken into account. The cost of repair is 2000, the cost of inspection
for three years before the repair is 100 each year and the cost of running the cathodic
protection is 30 each year.
Strategy 3: repair is performed as a complete exchange of concrete and reinforcement
in the corroded areas. The strategy is implemented when 30% of the surface at the
bridge columns shows distinct signs of corrosion, such as cracking and spalling of the
cover. The cost related to this strategy are the cost of the repair and the cost of an
extended inspection programme which starts three years before the expected time of
repair. The cost of repair is 3000 and the cost of inspection in the three years before
repair is 200 each year. Traffic restrictions in the year of repair of the bridge decrease the
benefits by 1000.
The total expected cost of maintenance/repair is determined from

CS(z1, z2, z3) = Σ_{i=TR}^{TL} Pi(z) Ci(z)        (36)

where z = (z1, z2, z3) denotes the three repair/maintenance options, Pi(z) is the probability


that repair/maintenance is performed in year i and Ci (z) is the total costs of the repair
strategy if the repair is performed in year i:


TL
1
Ci (z) = Ci,j (z) (37)
(1 + r)j−TR
j=TR

Ci,j (z) is the repair/maintenance cost in year j if the repair is performed in year i. These
costs can be found in the descriptions of the repair strategies. The costs are discounted
to the time of re-assessment TR using the real rate of interest r.
The expected benefits in the remaining lifetime are determined from

B(z) = Σ_{i=TR}^{TL} B0 / (1 + r)^{i−TR} − Σ_{i=TR}^{TL} Pi(z) Bi(z)        (38)

where

Bi(z) = Σ_{j=TR}^{TL} ΔBi,j(z) / (1 + r)^{j−TR}        (39)

B0 is the basic annual benefit from use of the bridge and ΔBi,j(z) is the loss of benefits
in year j due to repair in year i, e.g. due to traffic restrictions.

The optimal repair strategy is obtained by solving the optimization problem

max_z  B(z) − CS(z)        (40)

The expected costs are determined using the predictive stochastic model for the sur-
face concentration cS , the coefficient of diffusion D0 and the age coefficient a obtained
using the available information.
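The following schematic Python sketch evaluates (36)-(37) for a strategy-1-like cash flow; the probability mass over the repair year is assumed, and for simplicity the inspection window is tied to the (uncertain) repair year.

import numpy as np

T_R, T_L, r = 0, 30, 0.04             # re-assessment time, lifetime, rate
years = np.arange(T_R, T_L + 1)
# Assumed probability mass P_i for "repair performed in year i"
P_rep = np.exp(-0.5 * ((years - 12) / 3.0) ** 2)
P_rep /= P_rep.sum()

def strategy_cost(i):
    # C_i of eq. (37): discounted cash flow if repair happens in year i,
    # with the strategy-1 figures from the text (inspections 25/yr, 100 the
    # last year before repair, cathodic protection 1000, then 20/yr running)
    total = 0.0
    for j in years:
        if i - 5 <= j <= i - 2:
            cash = 25.0       # routine inspections
        elif j == i - 1:
            cash = 100.0      # detailed inspection just before repair
        elif j == i:
            cash = 1000.0     # installation of cathodic protection
        elif j > i:
            cash = 20.0       # running the cathodic protection
        else:
            cash = 0.0
        total += cash / (1.0 + r) ** (j - T_R)
    return total

C_S = sum(p * strategy_cost(i) for p, i in zip(P_rep, years))
print(f"expected discounted cost of strategy 1 ~ {C_S:.0f}")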

5.3 Example 3 – Optimal design of offshore wind turbines


Wind turbines for electricity production are currently increasing drastically both in
production capability and in size. Offshore wind turbines with an electricity produc-
tion of 2–5 MW are now being produced. The main failure modes are fatigue failure of
wings, hub, shaft and main tower, local buckling of main tower, and failure of the foun-
dation. This example considers reliability-based optimization of the tower and founda-
tion, see (Sørensen & Tarp-Johansen 2005a) and (Sørensen & Tarp-Johansen 2005b).

5.3.1 Formulation of reliability-based optimization problems for wind turbines
Reliability-based optimization problems can be formulated in different ways, e.g. with
or without systematic reconstruction. In this example it is assumed that the control
system performs as expected, a single wind turbine is considered and the wind
turbine is systematically reconstructed in case of failure. It is further assumed
that the probability of loss of human lives is negligible.
The main design variables are denoted z = (z1, . . . , zN), e.g. diameter and thickness
of the tower and the main dimensions of the wings. The initial (building) costs are CI(z), the
direct failure costs are CF, the benefits per year are b and the real rate of interest is
γ. Failure events are assumed to be modelled by a Poisson process with rate λ. The
probability of failure is PF(z).
The optimal design can thus be determined from the following optimization problem,
see section 3.4:

    \max_{z} \; W(z) = \frac{b}{\gamma C_0} - \frac{C_I(z)}{C_0} - \left( \frac{C_I(z)}{C_0} + \frac{C_F}{C_0} \right) \frac{\lambda P_F(z)}{\gamma}    (41)

    subject to  z_i^l \le z_i \le z_i^u, \quad i = 1, \ldots, N
                P_F(z) \le P_F^{max}

where zl and zu are lower and upper bounds on the design variables. C0 is the reference
initial cost corresponding to a reference design z0. PFmax is the maximum acceptable
probability of failure, e.g. with a reference time of one year. This type of constraint
is typically required by regulators. The optimal design z∗ is determined by solution
of (41). If the constraint on the maximum acceptable probability of failure is omitted,
then the corresponding value PF(z∗) can be considered as the optimal probability of
failure related to the failure event and the actual cost-benefit ratios used.
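A minimal sketch of how the objective in Equation 41 might be evaluated and maximized is given below; the cost function, the failure-probability routine and the reference design are placeholders (assumptions, not the chapter's actual models), with scipy.optimize.minimize used as a generic bounded optimizer.

    import numpy as np
    from scipy.optimize import minimize

    # Placeholder problem data (assumed, for illustration only)
    C0, CF, b = 1.0, 1.0 / 36.0, 1.0 / 8.0   # normalised costs/benefits
    gam, lam, PF_max = 0.05, 1.0, 1e-3

    z0 = np.array([3.0, 0.03])   # assumed reference design (diameter, thickness)

    def CI(z):
        """Assumed linear initial-cost model in the design variables."""
        return 0.5 + 0.5 * np.sum(z) / np.sum(z0)

    def PF(z):
        """Placeholder annual failure probability, decreasing with size."""
        return 1e-2 * np.exp(-2.0 * (np.sum(z) / np.sum(z0) - 1.0))

    def neg_W(z):   # minimize -W  <=>  maximize W, Eq. (41)
        return -(b / (gam * C0) - CI(z) / C0
                 - (CI(z) / C0 + CF / C0) * lam * PF(z) / gam)

    res = minimize(neg_W, z0, bounds=[(2.0, 4.0), (0.02, 0.05)],
                   constraints=[{"type": "ineq", "fun": lambda z: PF_max - PF(z)}])
    print(res.x, -res.fun)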
The failure rate λ and probability of failure can be estimated for the considered
failure event, if a limit state equation, g(X1 , . . . , Xn , z) and a stochastic model for the

Figure 2.6 Design variables in the wind turbine example (not to scale): tower top
diameter DT, tower section thicknesses t1, t2, t3 (each over a height H/3), tower
bottom diameter D, distance to the water surface hw, water depth d, and monopile
diameter DP, thickness tP and length HP.

stochastic variables, (X1 , . . . , Xn ) are established. If more than one failure event is
critical, then a series-parallel system model of the relevant failure modes can be used.
An offshore 2 MW wind turbine with monopile foundation is considered, see fig-
ure 2.6. The wind turbine tower has height h = 63 m and a diameter which increases
linearly from D at the bottom to DT at the top. The tower is divided into three sections,
each with height h/3 and each with a constant thickness: t1 in the top section, t2 in the
middle and t3 in the bottom section. The diameter and thickness of the monopile are
constant: DP and tP. Tower and monopile are made of structural steel. The distance
from the bottom of the tower to the water surface is hw = 7 m and the distance from
the water surface to the sea bed (the water depth) is d = 9 m. Wind and wave loads
on the tower itself are neglected.
The following failure modes are included: (a) yielding in cross sections in tower just
above and below changes in thickness, (b) local stability in cross sections in tower
just above and below changes in thickness, (c) fatigue in cross sections just above
and below changes in thickness, and (d) yielding in monopile in cross-section with
maximum bending moment.
The stochastic model for the extreme loading at the top of the tower is described in
(Sørensen & Tarp-Johansen 2005a) and (Sørensen & Tarp-Johansen 2005b).
For the failure mode yielding of a cross-section the limit state function is written:

    \sigma = \frac{N}{A} + \frac{M}{W} \ge F_y    (42)

where the cross-sectional forces are the normal force N, the shear force Q and the
bending moment M. Further, A is the cross-sectional area (= πt(D − t)),
W is the cross-sectional section modulus and Fy is the yield stress.

The cross-sectional forces are calculated from the stochastic variables HT, MT and
NT. The yield stress, Fy, is modelled as a LogNormal variable with coefficient of
variation (COV) = 0.05 and characteristic values (5% percentile) equal to 235 MPa and
340 MPa for the tower and the monopile, respectively.
For the failure mode local buckling of a cross-section the limit state function is written:

    \sigma = \frac{N}{A} + \frac{M}{W} \ge F_{yc}    (43)

where the local buckling strength Fyc is estimated by the model in (ISO 19902 2001).
The cross-sectional forces are calculated from the stochastic variables HT and MT.
The yield stress, Fy, is modelled as for yielding failure. Model uncertainty is introduced
through a factor XB multiplied to Fyc. XB is assumed LogNormal distributed with
expected value 1 and COV = 0.10.
For the failure mode fatigue failure, SN-curves and linear damage accumulation
by the Miner rule are used. It is assumed that the SN-curve is bilinear and can be
described by:

    N = K_1 (\Delta s)^{-m_1}  for  N \le N_C    (44)
    N = K_2 (\Delta s)^{-m_2}  for  N > N_C    (45)

where Δs is the stress range, N is the number of cycles to failure, K1, m1 are the
material parameters for N ≤ NC, K2, m2 are the material parameters for N > NC,
and ΔsC is the stress range corresponding to NC.
Further it is assumed that the total number of stress ranges for a given fatigue critical
detail can be grouped in nσ groups/bins such that the number of stress ranges in group
i is ni per year.
In a deterministic design check the design equation can be written:

    \sum_{\Delta s_i \ge \Delta s_C} \frac{n_i T_F}{K_{1C} \Delta s_i^{-m_1}} + \sum_{\Delta s_i < \Delta s_C} \frac{n_i T_F}{K_{2C} \Delta s_i^{-m_2}} \le 1    (46)

where Δsi = ΔMi/z is the stress range in group i, ΔMi is the bending moment range, z is a
design parameter, KiC is the characteristic value of Ki (log KiC equal to the mean of log Ki
minus two standard deviations of log Ki), TF = FDF · TL is the fatigue life time, TL is the
service life and FDF is the Fatigue Design Factor, which can be considered as a fatigue
safety factor.
In a reliability analysis the reliability index (or the probability of failure) is calculated
using the limit state function associated with (46). This limit state equation can be
written:

    g = 1 - \sum_{\Delta s_i \ge \Delta s_C} \frac{n_i T_L}{K_1 \Delta s_i^{-m_1}} - \sum_{\Delta s_i < \Delta s_C} \frac{n_i T_L}{K_2 \Delta s_i^{-m_2}}    (47)

where Δsi = XS ΔMi/p is the stress range in group i, and XS is a stochastic variable
modelling the model uncertainty related to the fatigue wind load and to the calculation
of the relevant fatigue stresses for a given wind load. XS is assumed LogNormal
distributed with mean value 1 and COV = \sqrt{COV_{wind}^2 + COV_{stress}^2}.
log Ki is modelled by a Normally distributed stochastic variable according to a specific
SN-curve.
Representative statistical parameters are shown in Table 2.1. The basic SN curve
used corresponds to the SN 90 curve in (EC 3 2003).

Table 2.1 Stochastic model. D: Deterministic; N: Normal; LN: LogNormal.

Variable        Distribution   Expected value        Standard deviation
Xstress         LN             1                     COVstress = 0.05
Xwind           LN             1                     COVwind = 0.15
TL = TF/FDF     D              20 years              –
m1              D              3                     –
log K1          N              12.151 + 2 · 0.20     0.20
m2              D              5                     –
log K2          N              15.786 + 2 · 0.25     0.25

log K1 and log K2 are fully correlated.
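To make the bilinear SN/Miner formulation concrete, the sketch below evaluates the limit state of Equation 47 for assumed stress-range bins; the bin counts and moment ranges are invented placeholders, log K is taken as a base-10 logarithm (an assumption), and only the structure of the damage sum follows the text.

    import numpy as np

    def fatigue_limit_state(dM, n_per_year, p, XS, logK1, logK2, m1, m2, ds_C, TL):
        """Limit state g of Eq. (47) for a bilinear SN-curve with Miner damage.

        dM        : bending moment ranges per bin (placeholder values)
        n_per_year: cycles per year in each bin
        p         : design parameter scaling stresses (ds_i = XS*dM_i/p)
        XS        : realisation of the stress model-uncertainty factor
        """
        ds = XS * dM / p
        K1, K2 = 10.0 ** logK1, 10.0 ** logK2   # assumed base-10 SN constants
        hi = ds >= ds_C                          # SN-curve branch per bin
        damage = np.sum(n_per_year[hi] * TL / (K1 * ds[hi] ** (-m1)))
        damage += np.sum(n_per_year[~hi] * TL / (K2 * ds[~hi] ** (-m2)))
        return 1.0 - damage                      # failure when g < 0

    # Assumed bins and parameters (illustrative only, cf. Table 2.1)
    dM = np.array([40.0, 25.0, 12.0])            # moment ranges
    n = np.array([1e4, 1e5, 1e6])                # cycles/year per bin
    g = fatigue_limit_state(dM, n, p=1.0, XS=1.0,
                            logK1=12.151 + 2 * 0.20, logK2=15.786 + 2 * 0.25,
                            m1=3, m2=5, ds_C=20.0, TL=20.0)
    print(g)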
The optimal design is determined from the following optimization problem:

    \max_{z} \; W(z) = \frac{b}{r C_0} - \frac{C_I(z)}{C_0} - \left( \frac{C_I(z)}{C_0} + \frac{C_F}{C_0} \right) \frac{\lambda P_F(z)}{\gamma}

    subject to  p_i^l \le p_i \le p_i^u, \quad i = 1, \ldots, N    (48)
                P_F(z) \le P_F^{max}
                \omega_1(z) \ge \omega_L

where PFmax is the maximum acceptable annual probability of failure, ω1 is the lowest
natural frequency of the wind turbine structure and ωL is a minimum acceptable
eigenfrequency.
The probability of failure is estimated by the simple upper bound
P_F \approx \sum_{i=1}^{N} \Phi(-\beta_i), where βi is the annual reliability index in
failure element i of the N failure elements/failure modes.
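A one-line check of this series-system upper bound, assuming the component reliability indices are available (the β values below are taken from the last column of Table 2.3 for illustration):

    from scipy.stats import norm

    betas = [12.7, 4.52, 5.28, 4.01, 7.39, 5.20, 4.83,
             3.54, 5.95, 4.01, 4.86, 3.49, 7.09, 3.86]
    PF = sum(norm.cdf(-b) for b in betas)   # sum of component probabilities
    print(PF, -norm.ppf(PF))                # bound and equivalent system beta (~3.26)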
The following design/optimization variables related to the tower and pile model are
used: DT is the diameter at the tower top, D is the diameter of the tower at the bottom,
t1, t2 and t3 are the thicknesses of the tower sections, DP is the diameter of the monopile,
tP is the thickness of the monopile and HP is the length of the monopile.
The initial costs are modelled by:

    C_I = C_{0,foundation} \left( \frac{1}{2} + \frac{1}{2}\frac{V_{mono}}{V_{mono,0}} \right) + C_{0,tower} \left( \frac{1}{4} + \frac{3}{4}\frac{V_{tower}}{V_{tower,0}} \right) + \underbrace{C_{I,blades} + C_{I,powertrain}}_{turbine} + C_{I,others}    (49)

where Vmono,0 and Vtower,0 are reference cross-sectional areas for the mono-pile founda-
tion and the tower, respectively. Thus, the model is a linear model that gives the initial
costs for designs that deviate from a given reference. The term CI,others accounts for
initial costs connected to external and internal grid connections that are of course inde-
pendent of the extreme load. Because, in current practice, the design of the blades and
the power train are driven by fatigue and operation loads respectively, the dependence
of the initial costs of these main parts of the turbine on the extreme load is assumed
negligible in this model. The following model is used for the normalised initial costs
at the considered site:

    \frac{C_I}{C_0} = \frac{1}{6}\left( \frac{1}{2} + \frac{1}{2}\frac{V_{mono}}{V_{mono,0}} \right) + \underbrace{\frac{1}{2}\left( \frac{1}{3}\left( \frac{1}{4} + \frac{3}{4}\frac{V_{tower}}{V_{tower,0}} \right) + \frac{1}{3} + \frac{1}{3} \right)}_{turbine} + \frac{1}{3}    (50)
The ratios appearing in this formula will be site specific. For a far offshore
site the grid connection will become a larger part of the total costs. Likewise the
foundation costs will depend on the water depth. For other sites the cost ratios may
e.g. be 1/4, 5/12 and 1/3 for the foundation, the turbine and the other costs, respectively.
For the reference turbine Vmono,0 = 25.5 m³ and Vtower,0 = 14.0 m³, which have been derived
from the following reference values: h = 63 m, hw = 7 m, DT = 2.43 m, tT = 17 mm,
DB = 3.90 m, tB = 29 mm, hP = 41 m, tP = 49.5 mm, and DP = 4.1 m. Thus
    \frac{C_I}{C_0} = \frac{1}{12} + \frac{1}{12}\frac{V_{mono}}{14.0\ m^3} + \underbrace{\frac{1}{24} + \frac{1}{8}\frac{V_{tower}}{23.2\ m^3} + \frac{1}{3}}_{turbine} + \frac{1}{3}    (51)
It is noted that out of the total initial costs only a minor part depends on the loads
because the study is restricted to the support structure.
For a gravity foundation the normalised failure costs are estimated to be:

    \frac{C_{F,foundation}}{C_{0,foundation}} = \frac{1}{6}    (52)

Compared to this, the failure costs for the turbine are negligible. The turbine failure
costs could be virtually zero if one just left the turbine at the bottom of the sea like
a shipwreck, a solution that would hardly be accepted by environmentalists. It is noted
that, at least in Denmark, it is for aesthetic reasons not accepted to rebuild the turbine
a little away from the collapsed turbine, whereby the failure costs could otherwise
practically vanish. Indeed, Danish building licences demand that a new turbine which
replaces a collapsed turbine must be situated at the exact same spot. That is, the space
cannot even be left unused. Assuming that the damage to the grid is small, the failure
costs become:
    \frac{C_F}{C_0} = \frac{1}{36}    (53)

For the considered site and turbine the following assumptions are made: given site,
i.e. climate (A = 10.8, k = 2.4), specified rated power (2 MW) and given turbine
height and rotor diameter.

Table 2.2 Optimal values of design variables, objective function and natural frequency.

γ        0.03     0.05     0.10     0.05     0.05
b/C0     1/8      1/8      1/8      1/10     1/8
CF/C0    1/36     1/36     1/36     1/36     1/360
DT       2.92 m   2.89 m   2.81 m   2.89 m   2.77 m
D        4.00 m   4.00 m   4.00 m   4.00 m   4.00 m
t1       20 mm    20 mm    20 mm    20 mm    20 mm
t2       28 mm    29 mm    25 mm    29 mm    28 mm
t3       35 mm    33 mm    32 mm    33 mm    33 mm
DP       5.41 m   5.40 m   4.93 m   5.40 m   5.31 m
tP       21 mm    20 mm    20 mm    20 mm    20 mm
HP       34.7 m   34.7 m   34.7 m   34.7 m   34.7 m
W        3.264    1.602    0.359    1.102    1.603
ω1       2.71     2.67     2.43     2.67     2.63

The average power is 1095 kW; with an assumption of 2% down time, the annual
average production may be computed. In the Danish community, subsidising currently
ensures that the market price for 1 kWh of wind turbine generated electric power is
0.43 DKK/kWh. From this one should subtract, as a lifetime average, 0.1 DKK/kWh
for operation and maintenance expenses. The normalised average benefits per year
become approximately

    \frac{b}{C_0} = \frac{1}{8}    (54)
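As a rough, hedged back-calculation of this ratio (the production and price figures are those quoted above; the resulting C0 is only implied, not stated in the text): the net annual value of production is approximately

    1095 kW × 8760 h × 0.98 × (0.43 − 0.10) DKK/kWh ≈ 3.1 × 10⁶ DKK

so b/C0 = 1/8 corresponds to a reference initial cost C0 of roughly 25 × 10⁶ DKK.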

The real rate of interest r is assumed to be 5% because, as argued, a purely monetary
reliability optimization is considered.
Assuming a lowest tower frequency of 0.33 Hz, the frequency constraint becomes
ω1 ≥ 2π · 0.33 Hz = 2.07 s⁻¹.
The optimal design is determined from the optimization problem (48). The
following bounds on the design variables are used:
Thicknesses: 20 mm and 50 mm
Tower diameter: 2 m and 4 m
Monopile diameter: 2 m and 6 m
The optimal values of the design variables are shown in Table 2.2, including cases
where the real rate of interest is 3%, 5% and 10%, b/C0 is 1/8 and 1/10, and CF/C0
is 1/36 and 1/360. In Table 2.3 reliability indices for the different failure modes and
for the system are shown. It is seen that

• For increasing rate γ the dimensions and the value of the objective function
decrease, as expected. Further, the corresponding system reliability indices and
eigenfrequencies also decrease slightly.
• The optimal dimensions are not influenced by a change in the benefits – only the
value of the objective function decreases with decreasing benefits per year.

Table 2.3 Optimal values of reliability indices for failure modes and system – first value is for
local buckling/yielding and second value is for fatigue.

γ                        0.03        0.05        0.10        0.05        0.05
b/C0                     1/8         1/8         1/8         1/10        1/8
CF/C0                    1/36        1/36        1/36        1/36        1/360
Top section, top         13.6/4.90   13.4/4.79   12.9/4.52   13.4/4.79   12.7/4.52
Top section, bottom      5.63/4.25   5.55/4.20   5.37/4.08   5.55/4.20   5.28/4.01
Middle section, top      7.96/5.62   8.02/5.64   6.91/5.01   8.02/5.64   7.39/5.20
Middle section, bottom   5.12/3.67   5.22/3.72   4.35/3.50   5.22/3.72   4.83/3.54
Bottom section, top      6.37/4.32   6.08/4.09   5.79/3.95   6.08/4.09   5.95/4.01
Bottom section, bottom   5.08/3.60   4.85/3.49   4.66/3.26   4.85/3.49   4.86/3.49
Pile                     7.09/4.09   7.09/3.98   7.09/3.47   7.09/3.98   7.09/3.86
System                   3.41        3.34        3.06        3.34        3.26

• For decreasing failure costs the optimal dimensions, the objective function, the
system reliability level and the eigenfrequency decrease slightly.
• The system reliability index β is 3.1–3.4.
• In this example the fatigue failure mode has the smallest reliability indices (largest
probabilities of failure).
• The frequency constraint is not active.

The example shows that the optimal reliability level related to structural failure of
offshore wind turbines is of the order of a probability per year equal to 2 × 10⁻⁴–10⁻³,
corresponding to an annual reliability index equal to 3.1–3.4. This reliability level is
significantly lower than for civil engineering structures in general.

6 Conclusions
The theoretical basis for reliability-based structural optimization within the frame-
work of Bayesian statistical decision theory is briefly described. Reliability-based cost
benefit problems are formulated and exemplified with structural optimization. The
basic reliability-based optimization problems are generalized to the following exten-
sions: interactive optimization, inspection and repair costs, systematic reconstruction,
re-assessment of existing structures.
Illustrative examples are presented including a simple introductory example, a deci-
sion problem related to bridge re-assessment and a reliability-based decision problem
for offshore wind turbines.

References

Aitchison, J. & Dunsmore, I.R. 1975. Statistical Prediction Analysis. Cambridge University
Press, Cambridge.
Ang, H.-S.A. & Tang, W.H. 1975. Probabilistic concepts in engineering planning and design,
Vol. I and II, Wiley.

Benjamin, J.R. & Cornell, C.A. 1970. Probability, Statistics and Decision for Civil Engineers.
Mc-Graw-Hill.
EN 1993-1-9 2003. Eurocode 3: Design of steel structures – Part 1–9: Fatigue.
Enevoldsen, I. & Sørensen, J.D. 1993. Reliability-Based Optimization of Series Systems of
Parallel Systems. ASCE Journal of Structural Engineering, Vol. 119, No. 4, pp. 1069–1084.
Enevoldsen, I. 1994. Sensitivity Analysis of a Reliability-Based Optimal Solution. ASCE, Journal
of Engineering Mechanics.
Enevoldsen, I. & Sørensen, J.D. 1994. Reliability-based optimization in structural engineering.
Structural Safety, Vol. 15, pp. 169–196.
Engelund, S. & Sørensen, J.D. 1998. A Probabilistic Model for Chloride-Ingress and Initiation
of Corrosion in Reinforced Concrete Structures. Structural Safety, Vol. 20, pp. 69–89.
Engelund, S. 1997. Probabilistic models and computational methods for chloride ingress in con-
crete. Ph.D. thesis, Department of Building Technology and Structural Engineering, Aalborg
University.
Faber, M.H., Engelund, S., Sørensen, J.D. & Bloch, A. 2000. Simplified and generic risk based
inspection planning. Proc. OMAE2000, New Orleans.
Frangopol, D.M. 1985. Sensitivity of reliability-based optimum design. ASCE, Journal of
Structural Engineering, Vol. 111, No. 8, pp. 1703–1721.
Fujimoto, Y., Itagaki, H., Itoh, S., Asada, H. & Shinozuka, M. 1989. Bayesian Reliability
Analysis of Structures with Multiple Components. Proceedings ICOSSAR 89, pp. 2143–2146.
Fujita, M., Schall, G. & Rackwitz, R. 1989. Adaptive Reliability Based Inspection Strategies for
Structures Subject to Fatigue. Proceedings ICOSSAR 89, pp. 1619–1626.
Gill, P.E., Murray, E.W. & Wright, M.H. 1981. Practical Optimization. Academic Press.
Haftka, R.T. & Kamat, M.P. 1985. Elements of Structural Optimization. Martinus Nijhoff,
The Hague.
ISO 19902 2001. Petroleum and natural gas industries – Fixed steel offshore structures.
Kroon, I.B. 1994. Decision Theory Applied to Structural Engineering Problems. Ph.D. thesis,
Department of Building Technology and Structural Engineering, Aalborg University.
Kuschel, N. & Rackwitz, R. 1998. Structural optimization under time-variant reliability con-
straints. Proc. 8th IFIP WG 7.5 Conf. On Reliability and optimization of structural systems,
University of Ann Arbor, pp. 27–38.
Lindley, D.V. 1976. Introduction to Probability and Statistics from a Bayesian Viewpoint, Vol.
1 + 2. Cambridge University Press, Cambridge.
Madsen, H.O. & Friis-Hansen, P. 1992. A comparison of some algorithms for reliability-
based structural optimization and sensitivity analysis. Proc. IFIP WG7.5 Workshop, Munich,
Springer-Verlag, pp. 443–451.
Madsen, H.O. & Sørensen, J.D. 1990. Probability-Based Optimization of Fatigue Design
Inspection and Maintenance. Presented at Int. Symp. On Offshore Structures, University
of Glasgow.
Madsen, H.O., Krenk, S. & Lind, N.C. 1986. Methods of Structural Safety. Prentice-Hall.
Murotsu, Y., Kishi, M., Okada, H., Yonezawa, M. & Taguchi, K. 1984. Probabilistically opti-
mum design of frame structures. Proc. 11th IFIP Conf. On System modeling and optimization,
Springer-Verlag, pp. 545–554.
Madsen, H.O., Sørensen, J.D. & Olesen, R. 1989. Optimal Inspection Planning for Fatigue
Damage of Offshore Structures. Proceedings ICOSSAR 89, pp. 2099–2106.
Powell, M.J.D. 1982. VMCWD: A FORTRAN Subroutine for Constrained Optimization.
Report DAMTP 1982/NA4, Cambridge University, England.
Rackwitz, R. 2001. Risk control and optimization for structural facilities. Proc. 20th IFIP TC7
Conf. On System modeling and optimization, Trier, Germany.
Raiffa, H. & Schlaifer, R. 1961. Applied Statistical Decision Theory. Harvard University Press,
Cambridge, Mass.

Rosenblueth, E. & Mendoza, E. 1971. Reliability optimization in isostatic structures. J. Eng.


Mech. Div. ASCE, pp. 1625–1642.
Schittkowski, K. 1986. NLPQL: A FORTRAN Subroutine Solving Non-Linear Programming
Problems. Annals of Operations Research.
Skjong, R. 1985. Reliability-Based Optimization of Inspection Strategies. Proc. ICOSSAR’85
Vol. III. pp. 614–618.
Streicher, H. & Rackwitz, R. 2002. Structural optimization – a one level approach. Proc.
Workshop on Reliability-based Design and Optimization – rbo02, IPPT, Warsaw.
Sørensen, J.D. & Thoft-Christensen, P. 1985. Structural optimization with reliability constraints.
Proc. 12th IFIP Conf. On System modeling and optimization, Springer-Verlag, pp. 876–885.
Sørensen, J.D. & Thoft-Christensen, P. 1988. Inspection Strategies for Concrete Bridges. Proc.
IFIP WG 7.5, Springer Verlag, Vol. 48, pp. 325–335.
Sørensen, J.D., Thoft-Christensen, P., Siemaszko, A., Cardoso, J.M.B. & Santos, J.L.T. 1995.
Interactive reliability-based optimal design. Proc. 6th IFIP WG 7.5 Conf. On Reliability and
Optimization of Structural Systems, Chapman & Hall, pp. 249–256.
Sørensen, J.D. & Tarp-Johansen, N.J. 2005a. Reliability-based optimization and optimal relia-
bility level of offshore wind turbines. International Journal of Offshore and Polar Engineering
(IJOPE), Vol. 15, No. 2, pp. 1–6.
Sørensen, J.D. & Tarp-Johansen, N.J. 2005b. Optimal Structural Reliability of Offshore Wind
Turbines. CD-rom Proc. ICOSSAR’2005, Rome.
Thoft-Christensen, P. & Sørensen, J.D. 1987. Optimal Strategies for Inspection and Repair of
Structural Systems. Civil Engineering Systems, Vol. 4, pp. 94–100.
von Neumann, J. & Morgenstern, O. 1943. Theory of Games and Economic Behavior.
Princeton University Press.
Chapter 3

Reliability analysis and reliability-based design optimization using moment methods

Sang Hoon Lee
Northwestern University, Evanston, IL, USA

Byung Man Kwak
Korea Advanced Institute of Science and Technology, Daejeon, Korea

Jae Sung Huh
Korea Aerospace Research Institute, Daejeon, Korea

ABSTRACT: Reliability analysis methods using the design of experiments (DOE) are intro-
duced and integrated into a reliability-based design optimization (RBDO) framework with a
semi-analytic design sensitivity analysis (DSA) for the reliability measure. A procedure using
the full factorial DOE with optimal levels and weights is introduced and named the full factorial
moment method (FFMM) for reliability analysis. The probability of failure is calculated using an
empirical distribution system and the first four statistical moments of the system performance func-
tion calculated from the DOE. To enhance the efficiency of FFMM, a response surface augmented
moment method (RSMM) is developed to construct a series of approximate response surfaces
approaching that of FFMM. A semi-analytic design sensitivity analysis for the probability
of failure is proposed in combination with FFMM and RSMM. It is shown that the proposed
methods are accurate and effective, especially when the inputs are non-normal.

1 Introduction
One of the fundamental problems in structural reliability theory is the calculation
of the probability of failure, which is defined as a multifold probability integral of the
joint probability density function of the random variables over the domain of structural
failure. Because the analytic calculation of this integral is practically impossible, many
approximate and simulation methods have been developed (Madsen et al.
1986, Kiureghian 1996, Bjerager 1991). Among these methods, the first order reliability
method (FORM) (Hasofer & Lind 1974, Rackwitz & Fiessler 1978) is considered to
be one of the most efficient computational methods, and over the past three decades
contributions from numerous studies have made FORM the most popular reliability
method. Reliability-based design approaches (Lee & Kwak 1987–1988, Enevoldsen
& Sørensen 1994, Frangopol & Corotis 1996, Tu et al. 1999, Youn et al. 2003) have
adopted FORM as their main reliability analysis tool due to its efficiency.
The difficulties in FORM, such as the numerical difficulty of finding the most probable
failure point (MPFP), errors involved in the nonlinear failure surface including the pos-
sibility of multiple design points (Kiureghian & Dakessian 1998), and errors caused

by the non-normality of variables (Hohenbichler & Rackwitz 1981), are well recognized,
and efforts to overcome them have been made. They include the second order reliability
method (SORM) (Fiessler et al. 1979, Breitung 1984, Koyluoglu & Nielsen 1994,
Kiureghian et al. 1987), advanced Monte Carlo simulation (MCS) such as impor-
tance sampling (Bucher 1988, Mori & Ellingwood 1993, Melchers 1989) and directional
sampling (Bjerager 1988, Nie & Ellingwood 2005), and the response surface based
approaches (Faravelli 1989, Bucher & Bourgund 1990, Rajashekhar & Ellingwood
1993). However, finding the MPFP is still a numerically difficult task in FORM and often
the error involved degrades the accuracy of the final results.
In this chapter, we investigate another route for structural reliability, the moment
method. The moment method calculates the probability of failure by computing the sta-
tistical moments of the performance function and fitting the moments with some empir-
ical distribution system such as the Pearson system, Johnson system, Gram-Charlier
series, and so on (Johnson et al. 1995). For this purpose, the performance function
must be computed for a set of well-designed calculation points, often called quadrature
points or designed experimental points. Compared with FORM, the moment method
has the advantages that it does not involve the difficulties of searching for the MPFP and
that the information of the cumulative distribution function (CDF) is readily available.
Not many attempts at reliability analysis using the moment method have been
reported. For statistical moment estimation, Evans (1972) proposed a quadrature for-
mula which uses 2n² + 1 nodes and weights for a system with n random variables and
applied it to tolerance analysis problems. Li & Lumb (1985) adopted Evans' quadra-
ture formula in structural reliability analysis in combination with the Pearson system.
Rosenblueth (1981) devised a 2n point estimate method and Hong (1996) proposed a
nonlinear system of equations for point estimates of probability in combination with the
Johnson distribution system and Gram-Charlier series. Zhao & Ono (2001) proposed
a point estimate method using the Rosenblatt transformation and kⁿ point concentration,
where k is the number of quadrature points for each random variable. Taguchi (1978)
proposed a design of experiments (DOE) technique which uses three level experiments
for each random variable to calculate the mean and standard deviation of the performance
function for tolerance design. Taguchi's method was improved by D'Errico & Zaino
(1988). These methods can treat only normally distributed random variables and the
DOE becomes a 3ⁿ full factorial design when n random variables are under consider-
ation. Actually, the levels and weights proposed by D'Errico & Zaino are equivalent
to the nodes and weights in the Gauss-Hermite quadrature formula (Abramowitz &
Stegun 1972, Engels 1980). Seo & Kwak (2002) extended D'Errico & Zaino's method
to treat non-normal distributions by deriving an explicit formula of three levels and
weights for general distributions. In addition to the strong points of the moment method
mentioned above, the moment method using DOE has several good aspects. It is very
easy and simple to use and does not involve any deterioration of accuracy or additional
effort for treating non-normal random variables. However, the common problem of
moment based methods is numerical efficiency. The methods often tend to
become very expensive as the number of random variables increases. To overcome this
shortcoming, Lee & Kwak (2006) developed a new moment method integrating the
response surface method with the 3ⁿ full factorial DOE. Huh et al. (2006) developed
a response surface approximation scheme based on the moment method and applied
it to the design study of a precision nano-positioning system.

In this chapter, we present our previous developments of moment methods which
utilize DOE for statistical moment estimation and propose an RBDO framework with a
semi-analytic design sensitivity analysis in combination with the moment methods. In
section 2, the full factorial moment method (FFMM) is introduced with an explanation
of the selection of the optimal DOE. In section 3, the response surface augmented moment
method (RSMM) is introduced and the accuracy and efficiency of RSMM are compared
with other methods via several examples. In section 4, an RBDO procedure is proposed
using FFMM and RSMM with a semi-analytic design sensitivity analysis. Section 5
provides some discussion of moment methods and the proposed RBDO procedure
and concluding remarks.

2 Reliability analysis using full factorial moment method


The probability of failure of a system is defined by a multifold probability integral as

    P_f = \Pr[g(X) < 0] = \int_{g(x)<0} f_X(x) \, dx    (1)

where X is the vector of input random variables, g(x) is the system performance func-
tion whose negative value indicates the failure state, and fX(x) is the joint probability
density function (PDF) of X. In the moment method, the probability of failure is calculated
from the PDF of g(X), which is found by fitting the first four statistical moments of
the system performance function with empirical distribution systems as in Equation 2:

    P_f = \int_{g(x)<0} f_X(x) \, dx = \int_{-\infty}^{0} f_{g(X)}(g) \, dg    (2)

where fg means the PDF of g(X).


One essential procedure for this calculation is the accurate estimation of the statis-
tical moments of g(X). Since the empirical distribution systems require high order
moments usually up to fourth order, approximate methods using perturbation or
Taylor series expansion are not adequate in most cases. In this section, we introduce a
moment estimation scheme based on the design of experiments (DOE) which is appli-
cable to general non-normal random variables. And a reliability analysis procedure
using the Pearson system is introduced with examples.

2.1 Design of experiments for statistical moment estimation


For a random variable X, the k-th order statistical moments of a one-dimensional
function g(X) can be calculated using a quadrature formula with m nodes as follows:

    E\{g^k\} = \int_{-\infty}^{\infty} [g(x)]^k f_X(x) \, dx \approx w_1 [g(\mu + \alpha_1 \sigma)]^k + w_2 [g(\mu + \alpha_2 \sigma)]^k + \cdots + w_m [g(\mu + \alpha_m \sigma)]^k    (3)



where fX(x) is the probability density function of X, and µ and σ denote the mean and
standard deviation of X, respectively. To estimate accurately up to the fourth moment,
which is often required by the empirical distribution systems such as the Pearson sys-
tem, at least a three node quadrature rule is necessary and the parameters αi and wi can
ideally be found by solving the following moment matching equations (Engels 1980):

    \mu_k = \int_{-\infty}^{\infty} (x - \mu)^k f_X(x) \, dx = w_1 (\alpha_1 \sigma)^k + w_2 (\alpha_2 \sigma)^k + \cdots + w_m (\alpha_m \sigma)^k, \quad (k = 0, 1, \ldots, 2m-1)    (4)

where µk is the k-th statistical moment of the random variable X, which can be calculated
from the PDF of X, and 2m − 1 is the polynomial order of the quadrature rule. By
introducing the levels li = µ + αiσ, Equation 4 can be rewritten in terms of li and wi from
the point of view of an experimental design. In the equations there are 2m unknowns
and the number of equations is also 2m. Thus, if we provide the values of µk, the li and
wi are uniquely determined. It is not a simple task to find the solution of Equation 4
algebraically. For a 3 level DOE, Seo & Kwak (2002) derived a simple explicit formula
for li and wi, which will be discussed in section 2.2. For general cases, Equation 4 can
be solved with a numerical method, but from the experience of the authors it is found
that solving Equation 4 for cases m ≥ 7 is very difficult.
When there are n random variables in the system and if we use the same number of
levels m for each random variable, the DOE becomes an mⁿ full factorial design from
the product quadrature rule and the first four statistical moments of the system response
function g(X) can be calculated as follows:


    \mu_g = \sum_{i_1=1}^{m} \cdots \sum_{i_n=1}^{m} w_{1 \cdot i_1} \cdots w_{n \cdot i_n} \, g(l_{1 \cdot i_1}, \ldots, l_{n \cdot i_n})    (5)

    \sigma_g = \left[ \sum_{i_1=1}^{m} \cdots \sum_{i_n=1}^{m} w_{1 \cdot i_1} \cdots w_{n \cdot i_n} (g(l_{1 \cdot i_1}, \ldots, l_{n \cdot i_n}) - \mu_g)^2 \right]^{1/2}    (6)

    \sqrt{\beta_{1g}} = \left[ \sum_{i_1=1}^{m} \cdots \sum_{i_n=1}^{m} w_{1 \cdot i_1} \cdots w_{n \cdot i_n} (g(l_{1 \cdot i_1}, \ldots, l_{n \cdot i_n}) - \mu_g)^3 \right] \Big/ \sigma_g^3    (7)

    \beta_{2g} = \left[ \sum_{i_1=1}^{m} \cdots \sum_{i_n=1}^{m} w_{1 \cdot i_1} \cdots w_{n \cdot i_n} (g(l_{1 \cdot i_1}, \ldots, l_{n \cdot i_n}) - \mu_g)^4 \right] \Big/ \sigma_g^4    (8)


where wi·j and li·j denote the j-th weight and level of the i-th variable, and µg, σg, √β1g and
β2g denote the mean, standard deviation, skewness and kurtosis of g(X), respectively. If
we know a priori that g(X) shows a more nonlinear dependence on some of the random
variables, we can use more levels for those variables. In general, we can use a different
number of levels for each random variable, e.g. m1, . . . , mn instead of m in Equations

5–8, and the total number of experiments then becomes m1 · m2 · · · mn . The extension
to the general case is straightforward.
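A compact sketch of the full factorial moment estimation of Equations 5–8 is given below, assuming the per-variable levels and weights have already been obtained (here the three-point Gauss–Hermite values for standard normal inputs, cf. section 2.2; the performance function is an arbitrary placeholder).

    import itertools
    import numpy as np

    def ffmm_moments(g, levels, weights):
        """First four moments of g(X) from a full factorial DOE, Eqs. (5)-(8).

        levels, weights: one array of levels and one of weights per variable.
        """
        mu = s2 = s3 = s4 = 0.0
        for idx in itertools.product(*[range(len(l)) for l in levels]):
            w = np.prod([weights[j][i] for j, i in enumerate(idx)])
            mu += w * g([levels[j][i] for j, i in enumerate(idx)])
        for idx in itertools.product(*[range(len(l)) for l in levels]):
            w = np.prod([weights[j][i] for j, i in enumerate(idx)])
            d = g([levels[j][i] for j, i in enumerate(idx)]) - mu
            s2, s3, s4 = s2 + w * d**2, s3 + w * d**3, s4 + w * d**4
        sig = np.sqrt(s2)
        return mu, sig, s3 / sig**3, s4 / sig**4   # mean, std, skewness, kurtosis

    # Three-level DOE for two standard normal variables (Gauss-Hermite points)
    l = np.array([-np.sqrt(3.0), 0.0, np.sqrt(3.0)])
    w = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])
    g = lambda x: 3 * x[0] - 2 * x[1] + 0.5 * x[0] * x[1]   # placeholder function
    print(ffmm_moments(g, [l, l], [w, w]))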

2.2 Determination of optimal levels and weights


In this section, we look further into how the optimal levels and weights for statistical
moment estimation can be found. As mentioned in section 2.1, they can be found by
solving the moment matching equations, Equation 4. When we use three levels, that is,
m is chosen as 3, the system of equations can be written as follows:

w1 + w2 + w3 = 1 (9)

w1 l1 + w2 l2 + w3 l3 = µ (10)

w1 (l1 − µ)2 + w2 (l2 − µ)2 + w3 (l3 − µ)2 = σ 2 (11)



w1 (l1 − µ)3 + w2 (l2 − µ)3 + w3 (l3 − µ)3 = β1 σ 3 (12)

w1 (l1 − µ)4 + w2 (l2 − µ)4 + w3 (l3 − µ)4 = β2 σ 4 (13)

w1 (l1 − µ)5 + w2 (l2 − µ)5 + w3 (l3 − µ)5 = µ5 (14)

Since it is difficult to solve these equations algebraically, Seo & Kwak (2002) replaced
the condition on the fifth moment (Eq. 14) with l2 = µ and obtained an explicit formula
for li and wi in terms of the first four statistical moments of the input random variable
as follows:

    \{l_1, l_2, l_3\} = \left[ \mu + \frac{\sqrt{\beta_1}\,\sigma}{2} - \frac{\sigma}{2}\sqrt{4\beta_2 - 3\beta_1}, \;\; \mu, \;\; \mu + \frac{\sqrt{\beta_1}\,\sigma}{2} + \frac{\sigma}{2}\sqrt{4\beta_2 - 3\beta_1} \right]^T    (15)

    \{w_1, w_2, w_3\} = \left[ \frac{(4\beta_2 - 3\beta_1) + \sqrt{\beta_1}\sqrt{4\beta_2 - 3\beta_1}}{2(4\beta_2 - 3\beta_1)(\beta_2 - \beta_1)}, \;\; \frac{\beta_2 - \beta_1 - 1}{\beta_2 - \beta_1}, \;\; \frac{(4\beta_2 - 3\beta_1) - \sqrt{\beta_1}\sqrt{4\beta_2 - 3\beta_1}}{2(4\beta_2 - 3\beta_1)(\beta_2 - \beta_1)} \right]^T    (16)

The levels and weights in Equations 15 and 16 can be calculated very easily, but they
are exact only for symmetric distributions. With asymmetric distributions
such as the lognormal and exponential distributions, the first level l1 can be located
outside the domain where the distribution is defined.

Figure 3.1 Levels and weights from the moment matching equation ("proposed 3 level")
and by Seo & Kwak's formula for an exponential distribution (λ = 1); one of Seo & Kwak's
levels lies outside the support of the PDF.

For example, for an exponential distribution defined on x ≥ 0 with distribution
parameter λ (Hahn & Shapiro 1967), the four statistical moments are calculated as

    \mu = \frac{1}{\lambda}, \quad \sigma = \frac{1}{\lambda}, \quad \sqrt{\beta_1} = 2, \quad \beta_2 = 9    (17)

and the levels by Equation 15 are given as follows:

    l_1 = \frac{1}{\lambda}(2 - \sqrt{6}) < 0, \quad l_2 = \frac{1}{\lambda}, \quad l_3 = \frac{1}{\lambda}(2 + \sqrt{6})    (18)

It is seen that the first level is located outside the domain of the distribution, and this
may cause severe numerical problems in applying the DOE for moment estimation.
The other problem of the levels and weights given by Equations 15 and 16 is that, since the
requirement on the fifth moment is replaced with the requirement that the mid-level
should be the mean value of the random variable, the accuracy of high moment esti-
mation degrades for a nonlinear performance function. This will be illustrated in the
examples; the levels and weights by Equations 15 and 16 may not be optimal in
terms of accuracy unless all the distributions are symmetric.
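A direct transcription of Equations 15–16 (a sketch; the symbols follow the text) reproduces both the Gauss–Hermite values for a normal input and the negative first level for this exponential case:

    import numpy as np

    def seo_kwak_levels_weights(mu, sig, sqrt_b1, b2):
        """Three levels and weights from Eqs. (15)-(16)."""
        b1 = sqrt_b1**2
        root = np.sqrt(4.0 * b2 - 3.0 * b1)
        l = np.array([mu + sqrt_b1 * sig / 2.0 - sig * root / 2.0,
                      mu,
                      mu + sqrt_b1 * sig / 2.0 + sig * root / 2.0])
        denom = 2.0 * (4.0 * b2 - 3.0 * b1) * (b2 - b1)
        w = np.array([((4.0 * b2 - 3.0 * b1) + sqrt_b1 * root) / denom,
                      (b2 - b1 - 1.0) / (b2 - b1),
                      ((4.0 * b2 - 3.0 * b1) - sqrt_b1 * root) / denom])
        return l, w

    # normal: l = (-sqrt(3), 0, sqrt(3)), w = (1/6, 2/3, 1/6)
    print(seo_kwak_levels_weights(0.0, 1.0, 0.0, 3.0))
    # exponential (lambda = 1): l1 = 2 - sqrt(6) < 0, cf. Eq. (18)
    print(seo_kwak_levels_weights(1.0, 1.0, 2.0, 9.0))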
For this reason, it is preferable to solve Equation 4 directly with numerical methods;
numerical equation solving algorithms such as the modified Powell hybrid method
(More et al. 1980) can be used for this purpose. Figure 3.1 compares the levels and
weights obtained by solving Equation 4 numerically with those obtained
by Equations 15 and 16 for the exponential distribution example mentioned above. It
is found that the levels and weights obtained directly from Equation 4 are free from the

Figure 3.2 Three level DOE for different distributions: (a) normal distribution
(µ = 5, σ = 1); (b) uniform distribution (0 ≤ x ≤ 10); (c) Rayleigh distribution;
(d) beta distribution (η = 0.3, γ = 0.6). Notations of the distribution parameters
are from (Hahn & Shapiro 1967).

problems found in Seo & Kwak's levels and weights. In Figure 3.2, levels and weights
for different distributions are depicted for the three level case.
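As a sketch of this direct numerical route, the snippet below solves the moment matching equations (Eq. 4) for three levels of an exponential variable with scipy.optimize.fsolve (a MINPACK implementation of the modified Powell hybrid method); the raw moments of the exponential distribution, k!/λ^k, are used to form the central moments, and the starting point is an assumption chosen near the known Gauss–Laguerre solution.

    import numpy as np
    from math import factorial
    from scipy.optimize import fsolve

    lam = 1.0
    mu = 1.0 / lam
    # Central moments mu_k of an exponential variable, k = 0..5
    raw = [factorial(k) / lam**k for k in range(6)]
    central = [sum((-1)**(k - j) * factorial(k) // (factorial(j) * factorial(k - j))
                   * raw[j] * mu**(k - j) for j in range(k + 1)) for k in range(6)]

    def residual(p):
        """Moment matching equations, Eq. (4), for 3 levels and 3 weights."""
        l, w = p[:3], p[3:]
        return [np.sum(w * (l - mu)**k) - central[k] for k in range(6)]

    guess = np.array([0.5, 2.0, 6.0, 0.6, 0.3, 0.1])   # assumed starting point
    sol = fsolve(residual, guess)
    print("levels :", sol[:3])    # all inside x >= 0, unlike Eq. (18)
    print("weights:", sol[3:])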
The extension of the procedure for finding levels and weights to cases with more
than three levels is straightforward, but there are two difficulties in the calculation. Firstly,
the calculation of the high order moments of the input random variable can be complicated
and tedious, depending on the PDF of the input variable. For example, when calculating
the levels and weights for a five level DOE (m = 5), the moments of the input X must be
provided up to ninth order, which might be very complicated for some distributions.
Secondly, the solution of Equation 4 becomes very difficult or sometimes impossible.
From our experience, the calculation up to a five level DOE turns out to be manageable,
but the calculation for more than five levels was not successful in many cases.
When the PDF of a distribution has the same form as the weight function in a
Gaussian quadrature formula (Abramowitz & Stegun 1972), the levels and weights can
be derived directly from the nodes and weights of the Gaussian quadrature rule instead
of solving Equation 4. Among the well known distributions, the normal distribution
matches the Gauss-Hermite quadrature, the exponential distribution matches
the Gauss-Laguerre quadrature, and the uniform distribution matches the
Gauss-Legendre quadrature formula.
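For instance, for a normal variable the Gauss–Hermite rule (weight e^{−t²}) gives levels µ + √2 σ t_i and weights w_i/√π after the standard change of variables; a quick check with numpy (a sketch, not from the chapter):

    import numpy as np

    m = 3
    t, w = np.polynomial.hermite.hermgauss(m)   # nodes/weights for weight exp(-t^2)
    mu, sig = 5.0, 1.0
    levels = mu + np.sqrt(2.0) * sig * t         # transform to N(mu, sig^2)
    weights = w / np.sqrt(np.pi)
    print(levels)    # [5 - sqrt(3), 5, 5 + sqrt(3)]
    print(weights)   # [1/6, 2/3, 1/6]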

2.3 Empirical distribution systems
Once the four statistical moments of g(X) are obtained, the PDF of g(X) can be approx-
imated by an empirical distribution system, such as the Johnson system or the Pearson
system (Johnson et al. 1995, Hahn & Shapiro 1967). In our approach, the Pearson
system of distributions is adopted, which approximates the PDF of a random variable
X as the solution of the following differential equation:

    \frac{1}{f(\bar{x})} \frac{df(\bar{x})}{d\bar{x}} = - \frac{\bar{x} + a}{c_0 + c_1 \bar{x} + c_2 \bar{x}^2}    (19)

where f(x̄) is the PDF to be found, x̄ = x − µ, and c0, c1, c2 and a are coefficients
determined from the four statistical moments of X. The relation is given as

    c_0 = (4\beta_2 - 3\beta_1)(10\beta_2 - 12\beta_1 - 18)^{-1} \sigma^2
    c_1 = a = \sqrt{\beta_1} \, (\beta_2 + 3)(10\beta_2 - 12\beta_1 - 18)^{-1} \sigma
    c_2 = (2\beta_2 - 3\beta_1 - 6)(10\beta_2 - 12\beta_1 - 18)^{-1}    (20)

The shape of f(x̄) changes considerably with the characteristics of the roots of the
following equation:

    c_0 + c_1 \bar{x} + c_2 \bar{x}^2 = 0    (21)

Pearson classified the types of distribution into seven groups according to the types
of roots of Equation 21, as summarized in Table 3.1.
It is notable that the type of a distribution is determined solely by the skewness
and kurtosis. As a special case, β1 = 0, β2 = 3 corresponds to the normal distribution.
The Pearson system is a convenient tool that enables finding a PDF from the first four
statistical moments of a random variable; however, it should be noted that the PDF
found by the Pearson system is not the unique solution corresponding to the moments.
For this reason it is important to understand the mathematical background and
assumptions in the Pearson system: for example, it can represent
Table 3.1 Pearson system of distributions (classifications and corresponding distributions),
with κ = β1(β2 + 3)² / [4(2β2 − 3β1 − 6)(4β2 − 3β1)].

Type       Case                   Distribution
Type I     κ < 0                  Beta
Type II    β1 = 0, β2 < 3         Beta (symmetric)
Type III   2β2 − 3β1 − 6 = 0      Gamma
Type IV    0 < κ < 1              No match
Type V     κ = 1                  Inverse Gaussian
Type VI    κ > 1                  Beta prime
Type VII   β1 = 0, β2 > 3         Student's t

only PDFs which have a single mode. More detailed explanations about the Pearson system
can be found in (Johnson et al. 1995).
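A small helper illustrating this classification (a sketch based on Table 3.1; the κ criterion is as given there, and edge cases such as the normal point β1 = 0, β2 = 3 are checked first):

    def pearson_type(b1, b2, tol=1e-9):
        """Classify a (beta1, beta2) pair per Table 3.1 (sketch)."""
        if abs(b1) < tol and abs(b2 - 3.0) < tol:
            return "Normal (limiting case)"
        if abs(b1) < tol:
            return "Type II (symmetric Beta)" if b2 < 3.0 else "Type VII (Student's t)"
        if abs(2.0 * b2 - 3.0 * b1 - 6.0) < tol:
            return "Type III (Gamma)"
        kappa = (b1 * (b2 + 3.0) ** 2
                 / (4.0 * (2.0 * b2 - 3.0 * b1 - 6.0) * (4.0 * b2 - 3.0 * b1)))
        if kappa < 0.0:
            return "Type I (Beta)"
        if abs(kappa - 1.0) < tol:
            return "Type V (Inverse Gaussian)"
        return "Type IV (no match)" if kappa < 1.0 else "Type VI (Beta prime)"

    print(pearson_type(0.0, 3.0))   # Normal
    print(pearson_type(4.0, 9.0))   # exponential moments -> Type III (Gamma)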

2.4 Procedure of reliability analysis using full factorial moment method

The procedure for reliability analysis using the full factorial DOE and the Pearson
system is summarized in Figure 3.3. This procedure for calculating the probability of
failure using the full factorial experimental set is named in our work the full factorial
moment method (FFMM).
2.5 Examples
Two examples are provided to demonstrate the accuracy of FFMM in statistical
moment estimation and reliability calculation. The first example is a simple linear
polynomial function (Eq. 22) whose statistical moments can be calculated analyti-
cally. FFMM is applied with several different input distributions and compared with
the exact solution to verify its accuracy. The input settings of the problem are summarized
in Table 3.2.

    g(x_1, x_2, x_3) = 3x_1 - 2x_2 + 5x_3    (22)

The 3ⁿ full factorial DOE has been applied and the results of the moment estimation
are summarized in Table 3.3. It is seen that FFMM provides very accurate results
regardless of the types of input distributions.
The second example of FFMM is an application to the overrunning clutch assembly
known as Fortini's clutch (Fig. 3.4). This problem has been discussed by several authors,
including (Greenwood & Chase 1990) and (Creveling 1997).
Figure 3.3 Overall procedure of the full factorial moment method: given the input
distributions, calculate the first 2m − 1 moments of each Xi; find the levels
{li·1, . . . , li·m} and weights {wi·1, . . . , wi·m} (section 2.2); run the full factorial
DOE (Eqs. 5–8) to obtain µg, σg, √β1g, β2g; apply the Pearson system (Eqs. 19, 20)
to obtain f_g(X)(g(x)) and Pr[g(X) < 0].



Table 3.2 Four different settings of input distributions and their parameters.*

Case   Distribution   x1                             x2                                x3
(a)    Exponential    λ1 = 1.0                       λ2 = 2.0                          λ3 = 3.0
(b)    Gamma          η1 = 1.0, λ1 = 1.5             η2 = 3.0, λ2 = 5.0                η3 = 3.0, λ3 = 1.0
(c)    Lognormal      µ̂1 = 0.1, σ̂1 = 0.1             µ̂2 = 0.5, σ̂2 = 0.1                µ̂3 = 1.0, σ̂3 = 0.1
(d)    Mixed          λ1 = 4.0 (exponential)         µ̂2 = 0.5, σ̂2 = 0.1 (lognormal)    a3 = 4.0, b3 = 1.0 (Weibull)

* The notations of the distribution parameters are from (Hahn & Shapiro 1967).

Table 3.3 Results of moment estimation using FFMM for the linear performance function.

Case   Method     µg       σg       √β1g      β2g      µ5g
(a)    Exact      3.6667   3.5746   1.3412    6.2969   1.3944e4
       Proposed   3.6667   3.5746   1.3412    6.2969   1.3944e4
(b)    Exact      15.800   8.9152   1.0805    4.7962   8.3428e5
       Proposed   15.800   8.9152   1.0805    4.7962   8.3428e5
(c)    Exact      13.678   1.4482   0.25520   3.1306   16.871
       Proposed   13.678   1.4482   0.25520   3.1306   16.871
(d)    Exact      1.9680   1.5131   0.18862   3.2369   21.712
       Proposed   1.9680   1.5131   0.18862   3.2369   21.712

Figure 3.4 The overrunning clutch assembly (Fortini's clutch), with hub, cage and
roller bearings and the component dimensions x1–x4.

The contact angle Y is given in terms of the independent component variables X1, X2,
X3 and X4 as follows:

    Y = \arccos \left[ \frac{X_1 + 0.5(X_2 + X_3)}{X_4 - 0.5(X_2 + X_3)} \right]    (23)

The design requirement for this mechanism is that the contact angle Y must lie
between 0.087264 radian (5°) and 0.157075 radian (9°). The distribution types and
parameters of the random variables are listed in Table 3.4.

Table 3.4 Distribution types and parameters of input random variables in the clutch example.

Component   Distribution   Mean        Standard deviation   Parameters for non-normal variables
X1          Beta           55.29 mm    0.0793 mm            γ1 = η1 = 5.0 (55.0269 ≤ x1 ≤ 55.5531)
X2          Normal         22.86 mm    0.0043 mm            –
X3          Normal         22.86 mm    0.0043 mm            –
X4          Rayleigh       101.60 mm   0.0793 mm            σ̂4 = 0.1211 (x4 ≥ 101.45)

Table 3.5 Results of moment estimation and probability calculation for Fortini's clutch example.

Moment            3ⁿ (S&K*)   3ⁿ (MM**)   5ⁿ (MM**)   FORM           MCS
Mean              0.1219      0.1219      0.1219      •              0.1219
STD               0.0117      0.0117      0.0117      •              0.0117
Skewness          −0.0578     −0.0497     −0.0530     •              −0.0523
Kurtosis          2.9216      2.8488      2.8827      •              2.8822
Pr[y < 5°]        0.00159     0.00124     0.00140     diverge        0.00122
Pr[y < 6°]        0.07266     0.07288     0.07272     0.08777 (40)   0.07388
Pr[y < 7°]        0.50430     0.50467     0.50452     0.52037 (25)   0.50265
Pr[y < 8°]        0.93617     0.93570     0.93595     0.93564 (15)   0.93666
Pr[y < 9°]        0.99925     0.99943     0.99934     0.99921 (15)   0.99926
Pr[5° < y < 9°]   0.99767     0.99819     0.99794     •              0.99804
Function calls    81          81          625         No. in ( )     1,000 k

* Seo & Kwak's three level formula (Eqs. 15, 16). ** Levels and weights from Equation 4.

For comparison, FORM with the HL-RF algorithm (Hasofer & Lind 1974, Rackwitz &
Fiessler 1978) and Monte Carlo simulation (MCS) are applied together with FFMM
to calculate the probability of the contact angle being outside the allowable range. To
check the difference in accuracy made by the selection of levels and weights, Seo & Kwak's
three level DOE (Eqs. 15 and 16) is also tried. The results are summa-
rized in Table 3.5. In this example, FORM has some numerical difficulty in treating
the non-normal distributions when the probability is small. During the calculation of
the probability Pr(y < 5°), the MPFP search point goes far outside the domain of the
non-normal random variables and this results in the divergence of the HL-RF algorithm.
On the contrary, FFMM shows good accuracy throughout the range of y values. There
is a subtle difference between the results by Seo & Kwak's 3 level DOE and the DOE
obtained by directly solving the moment matching equation (Eq. 4). This difference
becomes more significant as the system performance function becomes more
nonlinear. The PDF found by the Pearson system is plotted in Figure 3.5 together with
the histogram obtained by MCS. It is shown that the Pearson system gives a very
accurate PDF estimation in this problem.
The number of function calls required for calculating the probability is also listed
in Table 3.5. The number of function evaluations in FFMM is significantly bigger
than that of FORM, and it increases very rapidly as the number of random variables
increases. This is the weak point of FFMM, which hinders its application to
more practical engineering problems.
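The MCS column of Table 3.5 can be approximately reproduced with a plain sampling sketch like the one below; the Beta and Rayleigh parameterisations are assumptions chosen to match the means and standard deviations of Table 3.4 (they appear consistent with the listed parameters), not necessarily the chapter's exact (Hahn & Shapiro) forms.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 1_000_000

    # Samplers matched to Table 3.4 means/standard deviations (assumption)
    x1 = 55.0269 + (55.5531 - 55.0269) * rng.beta(5.0, 5.0, N)   # Beta on its support
    x2 = rng.normal(22.86, 0.0043, N)
    x3 = rng.normal(22.86, 0.0043, N)
    x4 = 101.45 + rng.rayleigh(0.1211, N)                        # shifted Rayleigh

    y = np.arccos((x1 + 0.5 * (x2 + x3)) / (x4 - 0.5 * (x2 + x3)))  # Eq. (23)
    lo, hi = np.radians(5.0), np.radians(9.0)
    print(np.mean((y > lo) & (y < hi)))   # Pr[5 deg < Y < 9 deg], cf. Table 3.5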

Figure 3.5 Probability density function of contact angle y, comparing the PDF by MCS
with the PDF by the Pearson system; the failure regions lie below 5° and above 9°.

3 Response surface augmented moment method


The FFMM introduced in section 2 provides good accuracy and the ability to treat non-
normal distributions, as shown in section 2.5. However, the number of function
evaluations required by FFMM increases exponentially with the number of random
variables, which is often prohibitive for applications to practical engineering problems
where the evaluation of the system performance function requires considerable time and
computational resources. To tackle this problem, a novel way to integrate the response
surface approximation with the 3ⁿ FFMM was developed and named the response sur-
face augmented moment method (RSMM) (Lee & Kwak 2006). In RSMM, instead of
performing expensive full factorial experiments, experiments are selectively performed
at the points with bigger weights and the rest of the data are approximated by a second
order response surface. This response surface is updated with the addition of experiments
one by one until convergence in the probability of failure is achieved. In this section,
the overall procedure of RSMM is introduced and some important concepts utilized
in the method are explained, together with examples of reliability analysis.

3.1 Overall procedure
Two strategies are taken in developing RSMM. Firstly, to reduce the number of function
evaluations, the experimental data are used not only for moment estimation but also for
function approximation. Experiments important for the probability calculation are selec-
tively performed and the rest of the data in the full factorial design set are approximated
using a response surface. Secondly, the initially simple response surface is updated pro-
gressively with the addition of experiments by introducing new cross product terms
into the approximation model. The overall procedure of RSMM is as follows:

(a) Establish the 3ⁿ full factorial DOE with levels and weights obtained by solving
Equation 4.

Figure 3.6 Experiment layout of RSMM at the initial approximation stage for two
normal variables: the centre point carries weight 4/9, the mid-level axis points 1/9
and the off-axis corner points 1/36; the circled (axis) points are evaluated first.

(b) Calculate the performance function g(x) at the 2n + 1 experimental points located
on the mid-level axes. Usually, the weight on the mid-level is much larger than
the rest. Figure 3.6 shows the example of a case with 2 normal variables. The numbers in
the figure are the weights imposed on the experimental points, calculated by

    w_i = w_{1i} \cdot w_{2i} \cdots w_{ni} = \prod_{j=1}^{n} w_{ji}    (24)

where wi denotes the overall weight imposed on the i-th experimental point and
wji is the weight of the j-th variable at the i-th experimental point. The circled
experimental points in the figure are those at which the experiments for the initial
approximation are performed.
(c) With the 2n + 1 data obtained in step (b), build a quadratic response surface
using the least square estimation (Myers & Montgomery 1995) without cross
product terms as

    \tilde{g}(x) = a + \sum_{i=1}^{n} b_i x_i + \sum_{i=1}^{n} c_i x_i^2    (25)

(d) Using g̃(x), complement the data at the points where experiments are not performed and
calculate the first four statistical moments of g(x) with Equations 5–8. Then
obtain the probability of failure Pr[g(x) < 0] using the Pearson system, just as
done in FFMM.
(e) Calculate the influence index at the points where experiments are not performed.
The influence index κi at the i-th experimental point is defined as follows:

    \kappa_i = \left| \frac{dP_f}{d\tilde{g}(x_i)} \right|    (26)

where xi is the vector x at the i-th experimental point. The influence index is a
measure devised to figure out the relative importance of experiments in calculat-
ing the probability. Detailed explanations about the influence index are given in
the subsequent section.
(f) Perform one additional experiment at the point with the biggest κi.
(g) With the data obtained in step (f), update g̃(x). A new cross product term may
be added into g̃(x) as in Equation 27,

    \tilde{g}(x) = a + \sum_{i=1}^{n} b_i x_i + \sum_{i=1}^{n} c_i x_i^2 + \sum_{k=1}^{n_{mix}} d_k x_{i(k)} x_{j(k)}    (27)

where nmix is the number of cross product terms included in the formulation
and i(k) and j(k) are the indices of the first and second variable in the k-th cross
product term respectively, where i(k) < j(k). The way of updating the response surface
approximation is discussed in the subsequent section.
approximation is discussed in the subsequent section.
(h) With the updated g̃(x), calculate the probability of failure as in step (d).
(i) Repeat the steps from (e) to (h) until the value of the probability of failure
converges.

3.2 Influence index


RSMM tries to find a reliable solution by taking the converged value of the probability
as its final solution after sufficient updates of the approximation with successive addi-
tions of experiments. One important procedure in RSMM is therefore the arrangement of
the order of experiment addition. It is obvious that, to reduce the number of exper-
iments effectively, a higher priority should be given to the experimental point which
can bring the greatest change of probability when the approximation at that point
is replaced with the real experimental data. The influence index is devised to compare the
magnitudes of the expected change of probability at the points where experiments are
not performed.
The change of probability can be roughly estimated as follows:

    \Delta P_f \approx \frac{dP_f}{d\tilde{g}(x_i)} \, (g(x_i) - \tilde{g}(x_i))    (28)

where the term g(xi) − g̃(xi) is the approximation error at the i-th experimental
point and dPf/dg̃(xi) is the derivative of Pf with respect to g̃(x) at the i-th experimental
point, which indicates the importance of the point in the calculation of Pf for the cur-
rently approximated system. Since g(xi) − g̃(xi) cannot be estimated unless the value
of g(xi) is calculated with an additional experiment, the influence index κi is defined
as the absolute value of the coefficient term dPf/dg̃(xi). This derivative denotes the
sensitivity of Pf to a change of the estimated value g̃(x) at xi.
The influence index κi can be calculated very effectively without additional g(x)
evaluations. Pf is a function of the four statistical moments of g(X) and can be expressed
as follows:

    P_f = P_f(\mu_g, \sigma_g, \sqrt{\beta_{1g}}, \beta_{2g})    (29)

Equations 5–8 for µg, σg, √β1g, β2g can be rewritten as follows:

    \mu_g \approx \sum_{i=1}^{n_{exp}} w_i \hat{g}(x_i)    (30)

    \sigma_g \approx \left[ \sum_{i=1}^{n_{exp}} w_i (\hat{g}(x_i) - \mu_g)^2 \right]^{1/2}    (31)

    \sqrt{\beta_{1g}} \approx \frac{\sum_{i=1}^{n_{exp}} w_i (\hat{g}(x_i) - \mu_g)^3}{\sigma_g^3}    (32)

    \beta_{2g} \approx \frac{\sum_{i=1}^{n_{exp}} w_i (\hat{g}(x_i) - \mu_g)^4}{\sigma_g^4}    (33)

where n_exp is the total number of experimental points, which is equal to 3ⁿ, and wi is
the weight imposed on the i-th experimental point calculated by Equation 24. ĝ(xi) is
defined in RSMM as follows:

    \hat{g}(x_i) = g(x_i)  if the experiment at x_i has been performed
    \hat{g}(x_i) = \tilde{g}(x_i)  if the experiment at x_i has not been performed yet    (34)

The derivative of Pf can be written as follows:

    \frac{dP_f}{d\tilde{g}(x_i)} = \frac{\partial P_f}{\partial \mu_g}\frac{d\mu_g}{d\tilde{g}(x_i)} + \frac{\partial P_f}{\partial \sigma_g}\frac{d\sigma_g}{d\tilde{g}(x_i)} + \frac{\partial P_f}{\partial \sqrt{\beta_{1g}}}\frac{d\sqrt{\beta_{1g}}}{d\tilde{g}(x_i)} + \frac{\partial P_f}{\partial \beta_{2g}}\frac{d\beta_{2g}}{d\tilde{g}(x_i)}
        \approx \frac{\Delta P_f}{\Delta \mu_g}\frac{d\mu_g}{d\tilde{g}(x_i)} + \frac{\Delta P_f}{\Delta \sigma_g}\frac{d\sigma_g}{d\tilde{g}(x_i)} + \frac{\Delta P_f}{\Delta \sqrt{\beta_{1g}}}\frac{d\sqrt{\beta_{1g}}}{d\tilde{g}(x_i)} + \frac{\Delta P_f}{\Delta \beta_{2g}}\frac{d\beta_{2g}}{d\tilde{g}(x_i)}    (35)

The terms ΔPf/Δµg, ΔPf/Δσg, ΔPf/Δ√β1g and ΔPf/Δβ2g can be calculated using
the finite difference method from the Pearson system program, and the rest of the terms
can be obtained by differentiating Equations 30–33 as follows:

    \frac{d\mu_g}{d\tilde{g}(x_i)} = w_i    (36)

    \frac{d\sigma_g}{d\tilde{g}(x_i)} = \frac{w_i}{\sigma_g} (\tilde{g}(x_i) - \mu_g)    (37)

    \frac{d\sqrt{\beta_{1g}}}{d\tilde{g}(x_i)} = \frac{3w_i}{\sigma_g^3} (\tilde{g}(x_i) - \mu_g)^2 - \frac{3}{\sigma_g} \left( w_i + \frac{d\sigma_g}{d\tilde{g}(x_i)} \sqrt{\beta_{1g}} \right)    (38)

    \frac{d\beta_{2g}}{d\tilde{g}(x_i)} = \frac{4w_i}{\sigma_g^4} (\tilde{g}(x_i) - \mu_g)^3 - \frac{4}{\sigma_g} \left( w_i \sqrt{\beta_{1g}} + \frac{d\sigma_g}{d\tilde{g}(x_i)} \beta_{2g} \right)    (39)
72 Structural design optimization considering uncertainties

3.3 Update of response surface approximation


Once an additional experiment is performed, we can update the response surface with
the newly obtained data. In RSMM, not only the regression coefficients but also the
regression bases are updated to improve the accuracy of the approximation for the
given set of observations. At the initial stage of RSMM, a total of 2n + 1 polynomial
terms of up to quadratic order are used in the response surface. Cross product
terms are introduced into the approximation model during the update so that interactions
among the random variables can be accounted for. As we may use at most as many
terms in the approximation model as the number of observations, and experiments are
added one by one, one cross product term may be added in each updating step. To select
the appropriate cross product term to be added, a simple procedure is established as
follows.
Let the coordinates of the experimental point where the experiment has been newly
performed be denoted by xN = {x1·N, x2·N, . . . , xn·N}, where xi·N is the value of variable xi
at xN, which takes a value among {li·1, li·2, li·3}. Suppose the response surface before
the update is given as

    \tilde{g}(x) = a + \sum_{i=1}^{n} b_i x_i + \sum_{i=1}^{n} c_i x_i^2 + \sum_{k=1}^{n_{mix}} d_k x_{i(k)} x_{j(k)} = \bar{a} + \sum_{i=1}^{n} \bar{b}_i \xi_i + \sum_{i=1}^{n} \bar{c}_i \xi_i^2 + \sum_{k=1}^{n_{mix}} \bar{d}_k \xi_{i(k)} \xi_{j(k)}    (40)

where ξi is the coded variable defined as

    \xi_i = \frac{x_i - (l_{i \cdot 1} + l_{i \cdot 3})/2}{(l_{i \cdot 3} - l_{i \cdot 1})/2}    (41)

and ā, b̄i, c̄i and d̄k are the regression coefficients when g̃(x) is expressed in the coded
variables.

(a) List the variables which do not take their mid-level at xN, i.e. xi·N ≠ li·2.
(b) Among all possible combinations of the variables listed in step (a), select the
cross product terms that are not included in the former approximation, Equation
40, and add them to C, the set of candidate cross product terms. At the initial
stage of RSMM, C is a null set.
(c) If C is null, that is, all addible cross product terms are already used in the former
approximation, no term is added at this updating step. If C has only one element,
it is added to the regression bases. If C has more than one element, build g̃(x)
including each cross product term in C and compare the residual sum of squares
of each model. If there is a cross product term which makes the regression matrix
singular, it is discarded. Choose the model corresponding to the minimum resid-
ual sum of squares. If all the RS models have the same residual sum of squares,
then go to step (d); else, go to step (e).

(d) Calculate the coefficient sum CSij defined in Equation 42 for all members of C,

    CS_{ij} = |\bar{b}_i| + |\bar{c}_i| + |\bar{b}_j| + |\bar{c}_j|    (42)

where b̄i and c̄i are the coefficients in Equation 40. Choose the xixj that has the
greatest value of CSij and add it to the regression bases.
(e) If a term is added, then remove it from the candidate set C and go to the next
stage of RSMM.

Only cross product terms consisting of variables that are off the mean axis at currently or formerly executed experiments are added; this prevents singularity or ill-conditioning during the least squares estimation. With this adaptive update of the response surface approximation, the number of terms in the response surface model is kept as small as possible while the interaction effects that might exist among the variables are still included.
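The selection procedure of steps (a)–(e) can be sketched schematically as follows; all names are hypothetical, and the residual-sum-of-squares comparison of step (c) is abbreviated so that only the coefficient-sum tie-break of step (d) is shown.

```python
def select_cross_term(x_new, mid_levels, used_pairs, candidates, b, c):
    """Steps (a)-(e): choose one cross product term to add after a new
    experiment at x_new.  All argument names are illustrative.

    x_new      : dict {i: level of variable i at the new experiment}
    mid_levels : dict {i: mid level l_{i,2} of variable i}
    used_pairs : set of (i, j) pairs already in the response surface
    candidates : set C of candidate pairs carried over from earlier stages
    b, c       : dicts of linear / quadratic coded-variable coefficients
    """
    # (a) variables that are off their mid-level at the new point
    off_mid = [i for i, lvl in x_new.items() if lvl != mid_levels[i]]

    # (b) add all new combinations of those variables to the candidate set C
    for i in off_mid:
        for j in off_mid:
            if i < j and (i, j) not in used_pairs:
                candidates.add((i, j))

    if not candidates:        # (c) nothing addable at this updating step
        return None

    # (d) coefficient sum CS_ij (Eq. 42); in the full procedure this is the
    # tie-break applied after comparing residual sums of squares in step (c)
    def cs(pair):
        i, j = pair
        return abs(b[i]) + abs(c[i]) + abs(b[j]) + abs(c[j])

    best = max(candidates, key=cs)
    candidates.discard(best)  # (e) remove the chosen term from C
    return best
```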

3.4 Examples
Two examples are presented to check the accuracy and efficiency of RSMM. For comparison, results obtained by FFMM, MCS, FORM and the method proposed by Zhao & Ono (2001) are provided. In FORM, the HL-RF algorithm (Rackwitz & Fiessler 1978) is used to find the MPFP. The method of Zhao & Ono is a recently reported moment-based reliability method which uses samples solely on the mean axis of each variable, whereas in RSMM non-axial samples are also utilized.
The first example is taken from (Madsen et al. 1986, Kiureghian et al. 1987, Zhao & Ono 2001). The performance function representing the failure in one plastic collapse mechanism of a one-bay frame is given as,

$$g(\mathbf{x}) = x_1 + 2x_2 + 2x_3 + x_4 - 5x_5 - 5x_6 \tag{43}$$

where the variables are statistically independent and log-normally distributed with means µ1 = ··· = µ4 = 120, µ5 = 50, µ6 = 40 and standard deviations σ1 = ··· = σ4 = 12, σ5 = 15, σ6 = 12.
The calculated probabilities Pr[g(x) < 0] are summarized in Table 3.6. It is observed that RSMM and FFMM give equally accurate results for the present example.

Table 3.6 Results of moment estimation and reliability analysis of the first example.

                FORM       Zhao & Ono   FFMM       RSMM       MCS
Mean            •          270.000      270.000    270.000    269.990
STD             •          103.271      103.271    103.271    103.174
Skewness        •          −0.528       −0.523     −0.523     −0.530
Kurtosis        •          3.650        3.612      3.612      3.623
Pr[g(x) < 0]    9.430e-3   1.219e-2     1.212e-2   1.212e-2   1.213e-2
Function calls  50         42           729        16         1,000 k

Figure 3.7 Truss structure with 23 members (loads P1–P6; member properties E1, A1 and E2, A2; displacement DISP1 monitored at point PNT1; height 2 m, six bays at 4 m).

Table 3.7 Input random variables for truss example.

No.  Variable (unit)  Distribution type  Mean         Standard deviation
1    E1 (N/m²)        lognormal          2.1 × 10¹¹   2.1 × 10¹⁰
2    E2 (N/m²)        lognormal          2.1 × 10¹¹   2.1 × 10¹⁰
3    A1 (m²)          lognormal          2.0 × 10⁻³   2.0 × 10⁻⁴
4    A2 (m²)          lognormal          1.0 × 10⁻³   1.0 × 10⁻⁴
5    P1 (N)           gumbel             5.0 × 10⁴    7.5 × 10³
6    P2 (N)           gumbel             5.0 × 10⁴    7.5 × 10³
7    P3 (N)           gumbel             5.0 × 10⁴    7.5 × 10³
8    P4 (N)           gumbel             5.0 × 10⁴    7.5 × 10³
9    P5 (N)           gumbel             5.0 × 10⁴    7.5 × 10³
10   P6 (N)           gumbel             5.0 × 10⁴    7.5 × 10³

In contrast, FORM shows a rather erroneous result due to the non-normality of the variables. Zhao & Ono's method also gives good results for the present example. Since the performance function is approximated exactly by the quadratic response surface without cross product terms, convergence in RSMM is achieved immediately after the initial approximation.
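The Pr[g(x) < 0] column of Table 3.6 can be cross-checked with a brute-force Monte Carlo simulation; the sketch below assumes the quoted means and standard deviations are those of the lognormal variables themselves, so the underlying normal parameters are recovered from them.

```python
import numpy as np

rng = np.random.default_rng(0)

def lognormal_params(mean, std):
    """Underlying normal parameters of a lognormal with given mean/std."""
    s2 = np.log(1.0 + (std / mean) ** 2)
    return np.log(mean) - 0.5 * s2, np.sqrt(s2)

means = np.array([120.0, 120.0, 120.0, 120.0, 50.0, 40.0])
stds = np.array([12.0, 12.0, 12.0, 12.0, 15.0, 12.0])
mu, sig = lognormal_params(means, stds)

x = rng.lognormal(mu, sig, size=(1_000_000, 6))
g = x[:, 0] + 2 * x[:, 1] + 2 * x[:, 2] + x[:, 3] - 5 * x[:, 4] - 5 * x[:, 5]
print(np.mean(g < 0))   # should land near 1.2e-2, in line with Table 3.6
```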
The second example is a truss structure with 23 members, as shown in Figure 3.7. Ten random variables are considered, as summarized in Table 3.7. It is assumed that all the horizontal members have perfectly correlated Young's moduli and cross-sectional areas, and the same holds for the diagonal members. The requirement of this problem is that the displacement at PNT1 in Figure 3.7 should not exceed 0.11 m. The performance function is defined as,

$$g(\mathbf{x}) = 0.11 - \mathrm{DISP1} \tag{44}$$

The displacement at PNT1 is calculated using the commercial software ANSYS 6.0.

Figure 3.8 Convergence history of probability of failure in truss example (Pf versus number of function calls).

RSMM finds a solution with 24 additional experiments after the initial approximation with 21 experiments. At the final stage, the response surface model of g(x), containing 17 cross product terms, is obtained as follows:

$$\begin{aligned}
\tilde{g}(\boldsymbol{\xi}(\mathbf{x})) ={}& 2.8070 + 1.2598\xi_1 + 0.2147\xi_2 + 1.2559\xi_3 + 0.2133\xi_4 - 0.1510\xi_5 \\
& - 0.4238\xi_6 - 0.6100\xi_7 - 0.6100\xi_8 - 0.4238\xi_9 - 0.1510\xi_{10} \\
& - 0.1978\xi_1^2 - 0.0362\xi_2^2 - 0.2016\xi_3^2 - 0.0346\xi_4^2 + 0.0023\xi_5^2 \\
& + 0.0008\xi_6^2 + 0.0036\xi_7^2 + 0.0036\xi_8^2 + 0.0008\xi_9^2 + 0.0023\xi_{10}^2 \\
& - 0.0042\xi_1\xi_2 - 0.3022\xi_1\xi_3 - 0.0110\xi_1\xi_4 + 0.0381\xi_1\xi_5 + 0.0871\xi_1\xi_6 \\
& + 0.1232\xi_1\xi_7 + 0.1232\xi_1\xi_8 + 0.0871\xi_1\xi_9 + 0.0346\xi_1\xi_{10} + 0.0041\xi_2\xi_3 \\
& + 0.0110\xi_3\xi_4 + 0.0261\xi_3\xi_5 + 0.0831\xi_3\xi_6 + 0.1172\xi_3\xi_7 + 0.1172\xi_3\xi_8 \\
& + 0.0832\xi_3\xi_9 + 0.0296\xi_3\xi_{10}
\end{aligned} \tag{45}$$

The history of the failure probability is depicted in Figure 3.8. At the first approximation, the probability is calculated as 0.00451, and the converged value is 0.00880. The finally obtained distribution of g(x) is Pearson type I, a beta distribution, as shown in Figure 3.9. The results are compared with those of the other methods in Table 3.8. As in the previous example, the result of RSMM shows good agreement with those of FFMM and MCS with 100,000 samples, whereas the results of FORM and Zhao & Ono's method show some discrepancy with the other results.

Regarding numerical efficiency, RSMM shows very good performance compared to the other methods. FORM and Zhao & Ono's method also show good numerical efficiency, but their accuracies are not satisfactory for the present example. The total elapsed time for RSMM is 90 seconds on a Pentium 4 machine, while FORM requires 151 seconds.

Figure 3.9 Probability density function of g(x) in truss example (PDF by MCS versus PDF by the Pearson system; the failure region lies at g(x) < 0).

Table 3.8 Results of moment estimation and reliability analysis of truss example.

                FORM           Zhao & Ono's                   FFMM           RSMM           MCS
                               5n             7n
Mean            •              0.0306         0.0307          0.0306         0.0306         0.0307
STD             •              0.0111         0.0110          0.0109         0.0109         0.0111
Skewness        •              −0.4989        −0.5708         −0.2009        −0.2008        −0.4789
Kurtosis        •              3.4289         3.4120          3.0786         3.0785         3.3826
Pr[g(x) < 0]    5.019 × 10⁻³   4.356 × 10⁻³   4.357 × 10⁻³    8.821 × 10⁻³   8.804 × 10⁻³   8.330 × 10⁻³
Function calls  77             50             70              59,049         45             100,000

4 Reliability-based design optimization using moment methods

The reliability-based design optimization (RBDO) problem can generally be formulated as follows:

$$\begin{aligned}
& \text{Minimize} && W(\mathbf{d}, \mathbf{z}) \\
& \text{subject to} && \Pr[g_i(\mathbf{d}, \mathbf{z}, \mathbf{x}) < 0] \le p_i, \quad i = 1, \ldots, m \\
& && \Pr\!\left[\,\bigcup_{i=1}^{m} \{g_i(\mathbf{d}, \mathbf{z}, \mathbf{x}) < 0\}\right] \le p_0
\end{aligned} \tag{46}$$

where W and the gi are the objective function and the limit state functions, and d, z and x are the vectors of design variables, state variables and random variables, respectively. The design

Figure 3.10 Flowchart of RBDO using RSMM or FFMM (initial design and distributions of the random variables → optimization engine (SQP, MMFD, etc.) → evaluate W and Pr[gi < 0] with RSMM or FFMM, together with the sensitivities dW/dd and dPf/dd → check convergence and feasibility → stop).

variable can be a deterministic variable or the mean value of a random variable existing in the system. The first constraint is imposed on the component failure probabilities, and the second one is imposed on the system failure probability, which can be evaluated using the reliability bounds concept (Ditlevsen 1979). The formulation in Equation 46 can also be applied to tolerance synthesis problems (Creveling 1997, Lee & Woo 1990); in this case the tolerance, which is usually defined as some multiple of the standard deviation of a dimension, becomes a design variable. In this section, a RBDO procedure using FFMM and RSMM is introduced together with an approximate design sensitivity analysis for the probabilistic constraints.

4.1 Procedure of RBDO using moment methods

FFMM and RSMM can be combined with mathematical programming for RBDO. The overall procedure is depicted in Figure 3.10. The optimization engine calls RSMM or FFMM whenever it needs to evaluate the probabilistic constraints. The procedure is close to the double loop strategy, in which constraint feasibility is checked at every design point during the optimization. The efficiency of the procedure is discussed in section 5.
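In code, this double loop structure reduces to an ordinary constrained optimization whose constraint functions internally run a moment-based reliability analysis. A minimal sketch using SciPy's SQP implementation is shown below; the callables f and prob_constraints are placeholders for the user's objective and RSMM/FFMM probability evaluators, not part of the original formulation.

```python
from scipy.optimize import minimize

def rbdo_double_loop(f, prob_constraints, d0, p_allow):
    """Schematic double-loop RBDO in the spirit of Fig. 3.10.

    f                : objective W(d)
    prob_constraints : callables pf(d) returning Pr[g_i < 0] via RSMM/FFMM
    p_allow          : allowable failure probability per constraint
    """
    cons = [{"type": "ineq", "fun": lambda d, pf=pf: p_allow - pf(d)}
            for pf in prob_constraints]
    # SQP acts as the "optimization engine"; every constraint evaluation
    # triggers a full moment-based reliability analysis (the inner loop)
    return minimize(f, d0, method="SLSQP", constraints=cons)
```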

4.2 Design sensitivity analysis

Since one evaluation of a probabilistic constraint usually takes a considerable number of performance function evaluations, it is important to provide the design sensitivity of the probabilistic constraint in an analytic or semi-analytic way during the RBDO process. In this section, a semi-analytic design sensitivity analysis based on the moment methods is presented (Lee & Kwak 2005). Both RSMM and FFMM can be adopted in this procedure, and the calculation is done without any additional g(x) evaluations. Instead,

the experimental data or the response surface model previously obtained in the reliability analysis is utilized. The procedure is as follows. Since the probability of failure is a function of the four statistical moments $\mu_g$, $\sigma_g$, $\sqrt{\beta_{1g}}$ and $\beta_{2g}$, the design sensitivity of Pf can be written as follows:

$$\frac{dP_f}{d\mathbf{d}} = \frac{\partial P_f}{\partial \mu_g}\cdot\frac{d\mu_g}{d\mathbf{d}} + \frac{\partial P_f}{\partial \sigma_g}\cdot\frac{d\sigma_g}{d\mathbf{d}} + \frac{\partial P_f}{\partial \sqrt{\beta_{1g}}}\cdot\frac{d\sqrt{\beta_{1g}}}{d\mathbf{d}} + \frac{\partial P_f}{\partial \beta_{2g}}\cdot\frac{d\beta_{2g}}{d\mathbf{d}} \tag{47}$$

The terms $\partial P_f/\partial\mu_g$, $\partial P_f/\partial\sigma_g$, $\partial P_f/\partial\sqrt{\beta_{1g}}$ and $\partial P_f/\partial\beta_{2g}$ can be calculated using the finite difference method with the Pearson system program, and the rest of the terms can be obtained from Equations 5–8, as follows:

$$\frac{d\mu_g}{dd_k} = \sum_{i_k=1}^{3} \frac{\partial \mu_g}{\partial l_{k\cdot i_k}}\frac{dl_{k\cdot i_k}}{dd_k} + \sum_{i_k=1}^{3} \frac{\partial \mu_g}{\partial w_{k\cdot i_k}}\frac{dw_{k\cdot i_k}}{dd_k} \tag{48}$$

$$\frac{d\sigma_g}{dd_k} = \sum_{i_k=1}^{3} \frac{\partial \sigma_g}{\partial l_{k\cdot i_k}}\frac{dl_{k\cdot i_k}}{dd_k} + \sum_{i_k=1}^{3} \frac{\partial \sigma_g}{\partial w_{k\cdot i_k}}\frac{dw_{k\cdot i_k}}{dd_k} \tag{49}$$

$$\frac{d\sqrt{\beta_{1g}}}{dd_k} = \sum_{i_k=1}^{3} \frac{\partial \sqrt{\beta_{1g}}}{\partial l_{k\cdot i_k}}\frac{dl_{k\cdot i_k}}{dd_k} + \sum_{i_k=1}^{3} \frac{\partial \sqrt{\beta_{1g}}}{\partial w_{k\cdot i_k}}\frac{dw_{k\cdot i_k}}{dd_k} \tag{50}$$

$$\frac{d\beta_{2g}}{dd_k} = \sum_{i_k=1}^{3} \frac{\partial \beta_{2g}}{\partial l_{k\cdot i_k}}\frac{dl_{k\cdot i_k}}{dd_k} + \sum_{i_k=1}^{3} \frac{\partial \beta_{2g}}{\partial w_{k\cdot i_k}}\frac{dw_{k\cdot i_k}}{dd_k} \tag{51}$$

where dk is the design variable related to the k-th random variable. The partial derivatives in Equations 48–51 can be calculated by directly differentiating Equations 5–8 as follows:

$$\frac{\partial \mu_g}{\partial l_{k\cdot i_k}} = \sum_{i_1=1}^{3}\cdots\sum_{i_{k-1}=1}^{3}\sum_{i_{k+1}=1}^{3}\cdots\sum_{i_n=1}^{3} w_{1\cdot i_1}\cdots w_{k-1\cdot i_{k-1}}\, w_{k+1\cdot i_{k+1}}\cdots w_{n\cdot i_n}\cdot w_{k\cdot i_k}\, \frac{\partial g(l_{1\cdot i_1}, \ldots, l_{n\cdot i_n})}{\partial l_{k\cdot i_k}} \tag{52}$$

$$\frac{\partial \mu_g}{\partial w_{k\cdot i_k}} = \sum_{i_1=1}^{3}\cdots\sum_{i_{k-1}=1}^{3}\sum_{i_{k+1}=1}^{3}\cdots\sum_{i_n=1}^{3} w_{1\cdot i_1}\cdots w_{k-1\cdot i_{k-1}}\, w_{k+1\cdot i_{k+1}}\cdots w_{n\cdot i_n}\, g(l_{1\cdot i_1}, \ldots, l_{n\cdot i_n}) \tag{53}$$

For lack of space, only the derivation for the mean is presented. The partial derivatives of $\sigma_g$, $\sqrt{\beta_{1g}}$ and $\beta_{2g}$ with respect to lk·ik and wk·ik can be obtained in the same way. It is noted that, for calculating the partial derivatives of the moments with respect to lk·ik, we have to calculate the partial derivative of the performance function g with respect to lk·ik. When FFMM is used, it is calculated using backward or forward difference

schemes on the previously obtained experimental data. For a three-level case, it can be formulated as follows:

$$\frac{\partial g(l_{1\cdot i_1}, \ldots, l_{n\cdot i_n})}{\partial l_{k\cdot 1}} \cong -\frac{g(l_{1\cdot i_1}, \ldots, l_{k\cdot 3}, \ldots, l_{n\cdot i_n})\,h_1}{h_2(h_1+h_2)} + \frac{g(l_{1\cdot i_1}, \ldots, l_{k\cdot 2}, \ldots, l_{n\cdot i_n})(h_1+h_2)}{h_1 h_2} - \frac{g(l_{1\cdot i_1}, \ldots, l_{k\cdot 1}, \ldots, l_{n\cdot i_n})(2h_1+h_2)}{h_1(h_1+h_2)} \tag{54}$$

$$\frac{\partial g(l_{1\cdot i_1}, \ldots, l_{n\cdot i_n})}{\partial l_{k\cdot 2}} \cong \frac{g(l_{1\cdot i_1}, \ldots, l_{k\cdot 3}, \ldots, l_{n\cdot i_n})\,h_1}{h_2(h_1+h_2)} + \frac{g(l_{1\cdot i_1}, \ldots, l_{k\cdot 2}, \ldots, l_{n\cdot i_n})(h_2-h_1)}{h_1 h_2} - \frac{g(l_{1\cdot i_1}, \ldots, l_{k\cdot 1}, \ldots, l_{n\cdot i_n})\,h_2}{h_1(h_1+h_2)} \tag{55}$$

$$\frac{\partial g(l_{1\cdot i_1}, \ldots, l_{n\cdot i_n})}{\partial l_{k\cdot 3}} \cong \frac{g(l_{1\cdot i_1}, \ldots, l_{k\cdot 3}, \ldots, l_{n\cdot i_n})(h_1+2h_2)}{h_2(h_1+h_2)} - \frac{g(l_{1\cdot i_1}, \ldots, l_{k\cdot 2}, \ldots, l_{n\cdot i_n})(h_1+h_2)}{h_1 h_2} + \frac{g(l_{1\cdot i_1}, \ldots, l_{k\cdot 1}, \ldots, l_{n\cdot i_n})\,h_2}{h_1(h_1+h_2)} \tag{56}$$

where h1 = lk·2 − lk·1 and h2 = lk·3 − lk·2. This finite difference scheme can be extended to cases where more than three levels are used in the DOE.
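Equations 54–56 amount to standard three-point finite differences on an unevenly spaced grid; a small sketch (with illustrative names) is:

```python
def three_level_derivatives(g1, g2, g3, h1, h2):
    """Derivatives of g at the three levels l_{k,1} < l_{k,2} < l_{k,3}
    (Eqs. 54-56), with h1 = l_{k,2}-l_{k,1} and h2 = l_{k,3}-l_{k,2};
    g1, g2, g3 are the responses at the three levels."""
    d1 = (-g3 * h1 / (h2 * (h1 + h2))
          + g2 * (h1 + h2) / (h1 * h2)
          - g1 * (2 * h1 + h2) / (h1 * (h1 + h2)))        # Eq. 54
    d2 = (g3 * h1 / (h2 * (h1 + h2))
          + g2 * (h2 - h1) / (h1 * h2)
          - g1 * h2 / (h1 * (h1 + h2)))                   # Eq. 55
    d3 = (g3 * (h1 + 2 * h2) / (h2 * (h1 + h2))
          - g2 * (h1 + h2) / (h1 * h2)
          + g1 * h2 / (h1 * (h1 + h2)))                   # Eq. 56
    return d1, d2, d3
```

For example, with g(l) = l² at the equally spaced levels 0, 1, 2 the sketch returns (0, 2, 4), i.e., the exact derivatives.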
When RSMM is used, the partial derivative can be calculated using the previously
obtained response surface model g̃ as follows:

$$\frac{\partial g(l_{1,i_1}, \ldots, l_{n,i_n})}{\partial l_{k,m}} = \begin{cases} 0 & \text{if } i_k \ne m \\[6pt] \left.\dfrac{\partial g(\mathbf{x})}{\partial x_k}\right|_{\mathbf{x}=(l_{1,i_1},\ldots,l_{k,m},\ldots,l_{n,i_n})} \cong \left.\dfrac{\partial \tilde{g}(\mathbf{x})}{\partial x_k}\right|_{\mathbf{x}=(l_{1,i_1},\ldots,l_{k,m},\ldots,l_{n,i_n})} & \text{if } i_k = m \end{cases} \quad m = 1, 2, 3 \tag{57}$$

The derivatives dlk·ik/ddk and dwk·ik/ddk in Equations 48–51 can be obtained from the relationship between lk·ik, wk·ik and dk. Since lk·ik and wk·ik are determined from the four statistical moments of xk (Eq. 4), the derivatives can be written as follows:

$$\frac{dl_{k\cdot i_k}}{dd_k} = \frac{\partial l_{k\cdot i_k}}{\partial \mu_{x_k}}\frac{d\mu_{x_k}}{dd_k} + \frac{\partial l_{k\cdot i_k}}{\partial \sigma_{x_k}}\frac{d\sigma_{x_k}}{dd_k} + \frac{\partial l_{k\cdot i_k}}{\partial \sqrt{\beta_{1x_k}}}\frac{d\sqrt{\beta_{1x_k}}}{dd_k} + \frac{\partial l_{k\cdot i_k}}{\partial \beta_{2x_k}}\frac{d\beta_{2x_k}}{dd_k} \tag{58}$$

$$\frac{dw_{k\cdot i_k}}{dd_k} = \frac{\partial w_{k\cdot i_k}}{\partial \mu_{x_k}}\frac{d\mu_{x_k}}{dd_k} + \frac{\partial w_{k\cdot i_k}}{\partial \sigma_{x_k}}\frac{d\sigma_{x_k}}{dd_k} + \frac{\partial w_{k\cdot i_k}}{\partial \sqrt{\beta_{1x_k}}}\frac{d\sqrt{\beta_{1x_k}}}{dd_k} + \frac{\partial w_{k\cdot i_k}}{\partial \beta_{2x_k}}\frac{d\beta_{2x_k}}{dd_k} \tag{59}$$

When the explicit formulas for the levels and weights derived by Seo & Kwak (2002) are used, the partial derivatives in Equations 58 and 59 can be obtained by directly differentiating these formulas. If Equation 4 is solved numerically to obtain lk·ik and wk·ik, they can be calculated with the finite difference method.

The derivatives $d\mu_{x_k}/dd_k$, $d\sigma_{x_k}/dd_k$, $d\sqrt{\beta_{1x_k}}/dd_k$ and $d\beta_{2x_k}/dd_k$ are determined from the definition of the optimization problem. In RBDO, the design variable dk is the mean value of xk, that is, µxk, and it is usually assumed that the distribution characteristics do not change during the optimization, so dµxk/ddk becomes 1

and the other derivatives become 0. In a tolerance synthesis problem, dk becomes the tolerance of dimension xk, which is 3σxk for the usual definition of tolerance, and the other moments are assumed invariant, so dσxk/ddk becomes 1/3 while the other derivatives become 0. With these, all the derivatives and partial derivatives necessary to evaluate Equations 48–51 have been derived, and the design sensitivity of the failure probability can be calculated from Equation 47. Although the procedure described in this section may seem somewhat tedious and complex, it is easy to implement and computationally efficient.
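Indeed, once the individual pieces are available, the assembly of Equation 47 is a short matrix-vector chain; a sketch with hypothetical array arguments:

```python
import numpy as np

def dPf_ddk(dPf_dmom, dmom_dlw, dlw_ddk):
    """Assemble dPf/dd_k from Eqs. 47-51 (shapes illustrative only).

    dPf_dmom : (4,)   finite-difference sensitivities of Pf to the four
               moments mu_g, sigma_g, sqrt(beta1g), beta2g (Pearson system)
    dmom_dlw : (4, 6) sensitivities of those moments to the three levels
               and three weights of variable x_k (Eqs. 48-53)
    dlw_ddk  : (6,)   sensitivities of the levels/weights to d_k (Eqs. 58-59)
    """
    return float(np.asarray(dPf_dmom)
                 @ np.asarray(dmom_dlw)
                 @ np.asarray(dlw_ddk))
```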

4.3 Examples

In this section, we present two examples of RBDO performed with RSMM and the design sensitivity analysis explained in the preceding section. The first example is a mathematical problem introduced in (Xiao et al. 1999). The problem is stated as follows:

$$\begin{aligned}
& \text{Minimize} && W(\mathbf{d}) \equiv \pi d_1^2 + d_2 \\
& \text{subject to} && P_{f1} \equiv \Pr[g_1 \equiv X_1^3 X_2 - 95.5 \le 0] \le 0.0010 \\
& && P_{f2} \equiv \Pr[g_2 \equiv X_1^2 X_2 - 70.7 \le 0] \le 0.0010 \\
& && 1.0 \le d_1 \le 2.0 \\
& && 20.0 \le d_2 \le 50.0
\end{aligned} \tag{60}$$

where the design variables d1 and d2 are the mean values of the random variables X1 and X2, respectively, and X1 and X2 follow normal distributions with standard deviations 0.1 and 3.0. The initial design is (1.5, 35.0), and the optimization is performed with sequential quadratic programming (SQP). For comparison, the reliability index approach (RIA) is also applied, and the results are summarized in Table 3.9.
It is seen that the two methods converge to similar solutions with the second constraint active. However, slight differences in the values of d2 and the objective function are noticed. When MCS is applied to the final designs found by RSMM and RIA for verification, the actual value of Pf2 is calculated as 0.001012 for RSMM
Table 3.9 Optimization results of first example.

                RSMM                                RIA
                W        d1       d2               W        d1       d2
Initial         42.069   1.500    35.000           42.069   1.500    35.000
Final           41.564   2.000    28.998           41.459   2.000    28.893

Probability     Initial    Final      Final by MCS*   Initial    Final      Final by MCS*
Pf1             1.7917e-1  2.5110e-6  8.5000e-6       1.7097e-1  7.6580e-6  1.0000e-5
Pf2             2.6086e-1  9.9975e-4  1.0120e-3       2.5203e-1  9.9958e-4  1.1060e-3

Function calls  W: 27    g1: 342   g2: 342          W: 49    g1: 1011  g2: 972
* Sample size: 1,000,000.

and 0.001106 for RIA. The FORM result thus contains a rather larger error than that of RSMM; in fact, RIA violates the constraint slightly more than RSMM does. RSMM provides a more accurate estimate of the probability of failure than FORM, which is why the two methods converge to different solutions. The numbers of function calls are also listed in the table. It should be noted that during the optimization with RIA the design sensitivities are calculated with the finite difference method, so judging the efficiency of the method by the number of function calls may not be appropriate in this case. However, even considering this fact, the RBDO by RSMM shows good performance in terms of efficiency.
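The MCS verification quoted above is easy to reproduce; a sketch for the RSMM final design, assuming the stated normal distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x1 = rng.normal(2.000, 0.1, n)    # X1 at the final RSMM mean d1 = 2.000
x2 = rng.normal(28.998, 3.0, n)   # X2 at the final RSMM mean d2 = 28.998
pf2 = np.mean(x1 ** 2 * x2 - 70.7 <= 0)
print(pf2)   # should land near 1.0e-3, cf. Table 3.9
```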
The second example is the optimization of the truss structure introduced in section 3. In this example, a symmetry condition is imposed on the geometry and load condition with respect to the axis of symmetry (Fig. 3.11). Three new random variables, X1, X2 and X3, which determine the shape of the truss structure, are introduced, so there are 10 random variables in total. X1, X2 and X3 are normally distributed with a standard deviation of 2 cm, and the distribution parameters of the other variables are the same as in Table 3.7. The system requirement is that the displacement at the center point should not exceed 11.5 cm. The optimization problem is formulated as follows:

$$\begin{aligned}
& \text{Minimize} && \mathrm{Weight}(\mathbf{d}) \\
& \text{subject to} && P_f \equiv \Pr[G(\mathbf{X}) < 0] \le 0.05 \\
& \text{where} && G(\mathbf{X}) = 11.5 - |\mathrm{disp1}|, \quad 100 < d_i < 400, \quad i = 1, 2, 3
\end{aligned} \tag{61}$$

where di is the mean value of random variable Xi, with an initial value of 200 cm. We apply SQP to solve this problem, and optimization with RIA is also performed for comparison. The results are summarized in Table 3.10.

It is seen that RIA and RSMM find somewhat different solutions. RIA converges to a solution with the constraint active, whereas in the result of RSMM the constraint is not active. We perform MCS with 100,000 samples to verify the reliability analysis at the final designs found by the two methods; the results are listed in Table 3.10 as Pf_MCS. A greater error in the probability is involved in the result of RIA, and the final design found by RIA actually violates the constraint by a relatively large amount. RSMM finds the solution with 903 function evaluations of G(x), while RIA requires 4004. As in the first example of this section, the design sensitivities are calculated using the finite difference method in the case of RIA, so a direct comparison of the numerical efficiency

Figure 3.11 Truss structure with 23 members (RBDO example); shape variables x1, x2, x3 and symmetric loads P1–P3, with the monitored displacement Disp1.



Table 3.10 Optimization result of truss structure with RSMM and RIA.

                  RSMM                            RIA
d                 (115.143, 155.181, 220.897)     (110.142, 166.210, 204.371)
weight/w0         0.9806                          0.9775
Pf                0.0456                          0.0500
Pf_MCS*           0.0465                          0.0632
G(x) evaluations  903                             4004
* Sample size: 100,000.

via the number of function evaluations is not appropriate. However, even considering that fact, the numerical efficiency of RSMM appears satisfactory in this problem.

5 Conclusions
Moment-based reliability analysis methods, FFMM and RSMM, have been introduced and illustrated by examples. The following strengths of the moment methods can be identified. Firstly, they do not involve the difficulties of searching for the most probable failure point (MPFP) as in FORM/SORM; in particular, the procedure of FFMM is very simple and easy to use. Secondly, not only the probability value but also the PDF and cumulative distribution function (CDF) of a system response function are made available, which can give deeper insight into the statistical characteristics of the engineering system. Thirdly, they do not use any transformation to deal with non-normal distributions and are therefore free from the deterioration of accuracy and efficiency suffered by other existing methods.
Meanwhile, the moment methods have certain drawbacks and limitations, which should also be well recognized before application. The information of the MPFP is very important in calculating small probabilities in the tail region, but the moment methods do not make use of it. Also, the approximation using only a finite number of moments places some limitation on the accuracy. For this reason, moment methods are often considered more suitable for high-probability problems. In most of our test cases, probability calculation using the Pearson system gives reliable results up to about the 4-sigma level, corresponding to a probability of the order of 10⁻⁵. However, at probability levels below 10⁻⁵, the failure probability found by the Pearson system might not be reliable. The non-uniqueness of the PDF mentioned in section 2.3 should also be kept in mind.
The accuracy of the moment estimation is determined by the integration order provided by the method and by the degree of nonlinearity of the performance function in the high-probability region, which is defined in terms of the coefficients of variation of the variables. It is noted that large system nonlinearity and large coefficients of variation of the variables can degrade the accuracy of the moment estimation. The accuracy of the moment calculation can be improved by introducing more levels into the DOE, although at a considerably higher computational cost.
In RSMM, the probability converges to the value that would be found by FFMM, since all the samples are taken from the set of the full factorial design. The convergence is expedited because the samples important to the probability are selectively taken in the early stage of RSMM, while the remaining samples are approximated with the response surface. The initial approximation is made using the samples with high weights, and additional samples are selected considering the impact they will have on the probability. Up to now, we have covered examples with uni-modal distributions, for which the levels with the highest weights are the mid-levels. In the case of a non-uni-modal shape, the selection of the initial set of experiments must be changed accordingly. One difficulty in RSMM is determining the stopping criterion. A tight criterion is necessary, but there must be a compromise between accuracy and computational cost; this is a topic for future study. Since the current version of RSMM is based on the 3^n FFMM, an extension of the method should be made accordingly in case a more accurate DOE is necessary.
A semi-analytic design sensitivity analysis has been proposed in combination with FFMM and RSMM, and has been shown to be robust and accurate through several tests. The proposed RBDO procedure is applied successfully to simple RBDO problems, and its efficiency turns out to be comparable to or even better than that of the conventional RIA. However, it should be noted that RIA is a classical approach, and much more efficient algorithms and strategies are available nowadays in the field of RBDO, such as the performance measure approach (Lee & Kwak 1987–88, Tu et al. 1999, Youn & Choi 2003) and single loop optimization strategies like the sequential optimization and reliability assessment (SORA) method (Du & Chen 2004). A comparison with those methods is not provided in this chapter; however, in our comparative study it was recognized that the efficiency of the RBDO procedure proposed in this chapter cannot match that of a single loop method like SORA, due to its double loop nature. It is not easy to adopt the single loop optimization strategy into RBDO with the moment methods. However, the better accuracy of the moment methods compared to the first order reliability approximation remains a strong point relative to FORM-based RBDO methods.

References

Abramowitz, M. & Stegun, I.A. 1972. Handbook of mathematical functions. 10th ed. New
York: Dover.
Bjerager, P. 1988. Probability integration by directional simulation. ASCE Journal of Engineer-
ing Mechanics 114(8):1285–1302.
Bjerager, P. 1991. Methods for structural reliability computation. In: F. Casciati (ed.), Reliability
problems: general principles and applications in mechanics of solid and structures. New York:
Springer Verlag.
Breitung, K. 1984. Asymptotic approximation for multi-normal integrals. ASCE Journal of
Engineering Mechanics 10(3):357–366.
Bucher, C.G. 1988. Adaptive sampling: an iterative fast Monte-Carlo procedure. Structural
Safety 5(2):119–126.
Bucher, C.G. & Bourgund, U. 1990. A fast and efficient response surface approach for structural
reliability problems. Structural Safety 7:57–66.
Creveling, C.M. 1997. Tolerance design: A handbook for developing optimal specification.
Cambridge, MA: Addison-Wesley.
D’Errico, J.R. & Zaino, N.A. 1988. Statistical tolerancing using a modification of Taguchi’s
method. Technometrics 30(4):397–405.
Ditlevsen, O. 1979. Narrow reliability bounds for structural systems. Journal of Structural
Mechanics 7:453–472.

Du, X. & Chen, W. 2004. Sequential optimization and reliability assessment for efficient
probabilistic design. ASME Journal of Mechanical Design 126(2):225–233.
Enevoldsen, I. & Sørensen, J.D. 1994. Reliability-based optimization in structural Engineering.
Structural Safety 15:169–196.
Engels, H. 1980. Numerical quadrature and qubature. New York: Academic Press.
Evans, D.H. 1972. An application of numerical integration techniques to statistical tolerancing,
III – general distributions. Technometrics 14:23–35.
Faravelli, L. 1989. Response surface approach for reliability analysis. ASCE Journal of
Engineering Mechanics 115(12):2763–2781.
Fiessler, B. et al. 1979. Quadratic limit states in structural reliability. ASCE Journal of
Engineering Mechanics 105(4):661–676.
Frangopol, D.M. & Corotis, R.B. 1996. Reliability-Based Structural System Optimization:
State-of-the-Art versus State-of the-Practice. In F.Y. Cheng (ed.), Analysis and Computation:
Proceedings of the Twelfth Conference held in Conjunction with Structures Congress XIV
pp. 67–78.
Greenwood, W.H. & Chase, K.W. 1990. Root sum squares tolerance analysis with nonlinear
problems. ASME Journal of Engineering for Industry 112:382–384.
Hahn, G.J. & Shapiro, S.S. 1967. Statistical models in engineering. John Wiley & Sons:
New York.
Hasofer, A.M. & Lind, N.C. 1974. Exact and invariant second order code format. ASCE Journal
of Engineering Mechanics 100(1):111–121.
Hohenbichler, M. & Rackwitz, R. 1981. Nonnormal dependent vectors in structural reliability. ASCE Journal of Engineering Mechanics 107:1227–1238.
Hong, H.P. 1996. Point-estimate moment-based reliability analysis. Civil Engineering Systems
13(4):281–294.
Huh, J.S. et al. 2006. Performance evaluation of precision nanopositioning devices caused by uncertainties due to tolerances using function approximation moment method. Review of Scientific Instruments 77:015103.
Johnson, N.L. et al. 1995. Continuous univariate distributions. New York: John Wiley & Sons.
Kiureghian, A.D. et al. 1987. Second-order reliability approximations. ASCE Journal of
Engineering Mechanics 113(8):1208–1225.
Kiureghian, A.D. 1996. Structural reliability methods for seismic safety assessment: a review.
Engineering Structures 95:412–24.
Kiureghian, A.D. & Dakessian, T. 1998. Multiple design points in first and second-order
reliability. Structural Safety 20:37–49.
Koyluoglu, H.U. & Nielsen, S.R.K. 1994. New approximations for SORM integrals. Structural
Safety 13:235–246.
Lee, S.H. & Kwak, B.M. 2005. Reliability based design optimization using response sur-
face augmented moment method. Proceedings of 6th World Congress on Structural and
Multidisciplinary Optimization, Rio de Janeiro, Brazil.
Lee, S.H. & Kwak, B.M. 2006. Response surface augmented moment method for efficient
reliability analysis. Structural Safety 28:261–272.
Lee, T.W. & Kwak, B.M. 1987–88. A reliability-based optimal design using advanced first order
second moment method. Mechanics of Structures and Machines 15(4):523–542.
Lee, W.J. & Woo, T.C. 1990. Tolerances: their analysis and synthesis. ASME Journal of
Engineering for Industry 112:113–121.
Li, K.S. & Lumb, P. 1985. Reliability analysis by numerical integration and curve fitting.
Structural Safety 3:29–36.
Madsen, H.O. et al. 1986. Methods of structural safety. Englewood Cliffs: Prentice-Hall.
Melchers, R.E. 1989. Importance sampling in structural systems. Structural Safety 6(1):3–10.

Moré, J.J. et al. 1980. User guide for MINPACK-1. Argonne National Labs Report ANL-80-74.
Argonne. Illinois.
Mori, Y. & Ellingwood, B.R. 1993. Time-dependent system reliability analysis by adaptive
importance sampling. Structural Safety 12(1):59–73.
Myers, R.H. & Montgomery, D.C. 1995. Response surface methodology: process and product
optimization using designed experiments. New York: John-Wiley & Sons.
Nie, J. & Ellingwood, B.R. 2005. Finite element-based structural reliability assessment using
efficient directional simulation. ASCE Journal of Engineering Mechanics 131(3):259–267.
Rackwitz, R. & Fiessler, B. 1978. Structural reliability under combined random load sequences.
Computers and Structures 9:489–494.
Rajashekhar, M.R. & Ellingwood, B.R. 1993. A new look at the response surface approach for
reliability analysis. Structural Safety 12:205–220.
Rosenblueth, E. 1981. Two-point estimate in probabilities. Applied Mathematical Modeling
5:329–335.
Seo, H.S. & Kwak, B.M. 2002. Efficient statistical tolerance analysis for general distributions
using three-point information. International Journal of Production Research 40(4):931–44.
Taguchi, G. 1978. Performance analysis design. International Journal of Production Research
16:521–530.
Tu, J. et al. 1999. A new study on reliability-based design optimization. ASME Journal of
Mechanical Design 121(4):557–564.
Xiao, Q. et al. 1999. Computational strategies for reliability based Multidisciplinary optimiza-
tion, Proceedings of the 13th ASCE EMD Conference.
Youn, B.D. & Choi, K.K. 2003. Hybrid Analysis Method for Reliability-Based Design
Optimization. ASME Journal of Mechanical Design 125(2):221–232.
Zhao, Y.G. & Ono, T. 2001. Moment methods for structural reliability. Structural Safety 23:
47–75.
Chapter 4

Efficient approaches for system


reliability-based design optimization
Efstratios Nikolaidis
University of Toledo, Toledo, OH, USA

Zissimos P. Mourelatos & Jinghong Liang


Oakland University, Rochester, MI, USA

ABSTRACT: Two efficient approaches for series system reliability-based design optimization
(RBDO) are presented. Both approaches apportion optimally the system reliability among the
failure modes by considering the target values of the failure probabilities of the modes as design
variables. The first approach uses a sequential optimization and reliability assessment (SORA)
approach. It approximates the coordinates of the most probable points of the failure modes
as the design changes through linear extrapolation. The second system RBDO approach uses a
single-loop method where the searches for the optimum design and for the most probable failure
points proceed simultaneously. The efficiency and robustness of the single-loop based approach
is enhanced through an easy to implement active set strategy. The two approaches are illustrated
and compared on design examples. It is shown that both approaches yield more efficient designs
than a conventional component RBDO formulation. Moreover, it is shown that the single-loop
approach is considerably more efficient than the SORA approach.

1 Introduction
This section presents an overview of reliability-based design optimization
(RBDO). Two groups of methods for performing component RBDO efficiently are
explained. The section concludes with an outline of this chapter.

1.1 RBDO: Benefits and challenges


Competitive pressures compel manufacturers to minimize weight and cost while main-
taining an acceptable safety level. Designers face significant uncertainty such as
uncertainty in the operating conditions, material properties and the geometry of a
design. In this chapter, the term “uncertainty’’ refers to both random and epistemic
uncertainties. In deterministic design, uncertainty is traditionally accounted for by
using safety factors and conservative design (characteristic) values for strength and
loads. This empirical approach often leads to overdesign or occasionally to unsafe
designs. Moreover, deterministic design is not adequate for novel designs involving
new materials and geometries.
RBDO provides safer and more efficient designs than deterministic design opti-
mization because it explicitly accounts for uncertainty using probability theory. As a
result, RBDO is being increasingly accepted as an effective design tool for aerospace,
automotive, civil and ocean engineering structures. An overview of RBDO is given in

(Frangopol & Maute 2004) with applications to aerospace, civil and microelectromechanical system design. Studies have demonstrated that RBDO can produce a more
efficient design than a deterministic approach without sacrificing safety, or alterna-
tively, RBDO can yield a safer design than a deterministic approach for a given
maximum allowable cost. Here the safety of a design is measured by its system
reliability.
A designer faces many challenges when applying RBDO to practical problems. The
high computational cost and the consideration of the system failure probability in
RBDO are two principal challenges that the study presented in this chapter tries to
address. Finding the probability of failure of a design requires repeated structural
analyses for different sets of values of the random variables, which may be computa-
tionally expensive, especially when finite element analysis is required (Madsen et al.
1986, Moses 1995, Melchers 2001). Also, the computational cost is grossly com-
pounded when calculating the probability of failure of different designs during the
search for the optimum reliability-based design. For example, if a second moment
method is employed to calculate the probability of failure, then two nested optimiza-
tions must be performed. This approach, called the double loop (DLP) approach, requires
one optimization for finding the optimum values of the design variables (outer loop)
and a second optimization (inner loop) for finding the most probable point (MPP),
which is needed for estimating the probabilities of the failure modes.
Another principal challenge in RBDO is to consider the system reliability. Calculating
system reliability is expensive, especially if one wants to account for the probability of
the intersection of the failure modes. As a result, most RBDO studies constrain only the
safety levels of the individual failure modes. Thus, the user must decide the minimum
allowable safety level (or equivalently safety index) of each failure mode (constraint)
instead of specifying only an acceptable system safety level. This approach has several
shortcomings. First, the required safety levels of the failure modes are usually not
optimal. Second it does not allow consideration of the cost required to achieve a certain
reliability level for each failure mode. Finally, it does not account for the interactions of
the failure modes (e.g., the probability of these modes occurring simultaneously). We
believe that we can obtain considerably better designs by enabling the user to specify
the minimum acceptable safety level of the system only, and allowing the optimizer
to determine the required safety levels the failure modes, instead of asking the user to
select them.

1.2 Progress in reducing the computational cost of RBDO


To address the problem of the high computational cost required by DLP approaches,
two new classes of RBDO formulations have been recently proposed. The first class
decouples the RBDO process into a sequence of cycles consisting of a deterministic
design optimization followed by a reliability assessment of the found optimum (Du &
Chen 2004, Royset et al. 2001). The constraints in the deterministic optimization
dictate that the design does not fail at a checking point, which is some approximation
of the MPP. The Sequential Optimization and Reliability Assessment (SORA) method
uses the reliability information from the previous cycle to shift the violated determinis-
tic constraints in the feasible domain. SORA appears to be similar to the safety-factor
approach reported in (Wu et al. 2001). Another decoupled approach has also been

proposed in (Royset et al. 2001). However, it is restricted to deterministic design


variables in the design optimization loop.
The second class of RBDO methods converts the problem into an equivalent, single-
loop deterministic optimization by integrating the two optimization loops into one. The
approach in (Thanedar & Kodiyalam 1992) uses a mean value first-order reliability
method. However, it is numerically inaccurate or unstable due to a wrong estimation
of the probabilistic constraints. The single-loop, single-vector (SLSV) approach (Chen
et al. 1997) is the first attempt in a truly single-loop approach. It improves the RBDO
computational efficiency by eliminating the inner reliability estimation loops. However,
it requires a probabilistic active set strategy for identifying the active constraints, which
may hinder its practicality.
A single-level RBDO approach has also been reported in (Kuschel & Rackwitz 2000, Streicher & Rackwitz 2004, Agarwal et al. 2004). It integrates the nested optimization loops into one by enforcing the Karush-Kuhn-Tucker (KKT) optimality conditions of the inner loop as equality constraints in the outer design optimization loop. In doing so, however, it increases the number of design variables because it uses the standard normal variates of each constraint as additional design variables. This can increase the
computational cost substantially, especially for practical problems with many design
variables and constraints. Furthermore, the approach requires second-order deriva-
tives that are computationally costly and difficult to calculate accurately. One of
the system RBDO approaches presented in this chapter uses the single-loop RBDO
of Liang et al. (2004). This single-loop system RBDO approach is summarized in
section 2.3.

1.3 Approaches for system RBDO


This chapter presents two RBDO approaches for series systems. The probability of
failure of a series system is approximated using the first-order or the second-order
Ditlevsen upper bounds (Ditlevsen 1979). In the first-order bound, the system failure
probability is approximated by the sum of the failure probabilities of all failure modes.
This approximation is accurate for most systems whose probability of failure is low
(e.g., less than 10⁻⁵). The second-order bound provides a more accurate approximation of the system failure probability than the first-order bound by accounting for the joint probabilities of pairs of failure modes.
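For illustration, both bounds are simple to evaluate once the individual and pairwise joint failure probabilities are available; a sketch with illustrative names:

```python
import numpy as np

def ditlevsen_upper_bounds(pf, pfij):
    """First- and second-order upper bounds on the failure probability of a
    series system (Ditlevsen 1979).

    pf   : (n,) failure probabilities of the individual modes
    pfij : (n, n) joint failure probabilities of mode pairs
    """
    first = float(np.sum(pf))                       # sum of mode probabilities
    second = pf[0] + sum(pf[i] - pfij[i, :i].max()  # subtract the largest
                         for i in range(1, len(pf)))  # pairwise joint term
    return first, second
```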
The first approach, which was first reported in (Ba-abbad et al. 2006), uses a
sequence of deterministic optimization problems. The MPPs of the failure modes are
approximated using linear extrapolation. This approach is a modified formulation
of the SORA method in which the reliabilities of all failure modes are considered as
design variables. As a result, the approach allows for an optimal apportionment of the
reliability of a system among its failure modes.
The second approach for system RBDO utilizes the single-loop RBDO approach of
(Liang et al. 2004) to determine the optimum design. This approach approximates
the MPP’s in each design iteration, using a relation representing the KKT optimality
conditions instead of using linear extrapolation (Ba-abbad et al. 2006). To facilitate
convergence, an active set strategy is used for identifying the critical failure modes
whose failure probabilities affect significantly the system failure probability, in each

iteration. The failure probabilities of the remaining non-critical failure modes are assumed to be zero.
Three major developments are involved in the above two approaches for system
RBDO. The fundamental development is the approach that allows for an optimal
apportionment of the reliability of a system among its components. Although this
approach increases the number of design variables, it does not affect significantly the
algorithmic efficiency because the objective function is not a function of the addi-
tional design variables. Furthermore, the inclusion of the failure probabilities of the
modes in the design variable set does not increase significantly the cost of calculating
the constraints. The second major development is the use of approximations (SORA
or a single-loop approach) to solve efficiently the RBDO problem. The single-loop
approach is more efficient than the SORA approach because the single-loop approach
eliminates the sequence of solutions of deterministic optimization problems. Moreover,
the single-loop approach does not approximate the MPP using extrapolation. Finally,
the third development is an active set strategy used by the single-loop system RBDO
approach. This strategy updates the active constraint set in each iteration to ensure
algorithmic stability.
Both system RBDO approaches presented in this chapter provide more efficient
designs than component RBDO approaches because the system RBDO approaches
account for the relation between the cost (or weight) and the reliability of each failure
mode. Specifically, these approaches optimize the reliabilities of the failure modes using
information about the sensitivity derivatives of the reliabilities of the modes and the
sensitivity derivatives of the cost with respect to the design variables. An optimality
condition for the reliabilities of the modes that involves these sensitivities is presented
in section 4.
The details of the proposed system RBDO approaches for series systems including
algorithms for these approaches are described in section 2. The efficiency and accu-
racy of the two approaches are demonstrated and compared in section 3, using two
examples that involve a cantilever beam and an internal combustion engine. Section 4
presents an optimality condition for a general, system RBDO problem and shows that
the optimum cantilever beam design satisfies this condition. Section 5 presents the
conclusions.

2 System RBDO methods


The two system RBDO approaches are presented in this section. First, a general form
for a series system RBDO problem is presented in subsection 2.1. An equivalent per-
formance measure approach formulation, in which the failure probabilities of the
constraints are considered as design variables, is derived. This formulation is used
as a basis to develop the SORA and single-loop based approaches in sections 2.2
and 2.3.

2.1 Formulation of system RBDO problem

The general system RBDO problem seeks the most efficient design whose system probability of failure does not exceed a maximum allowable value $p_f^{all}$. The formulation of

the system RBDO problem is as follows,

$$\min_{\mathbf{d},\, \boldsymbol{\mu}_X} f(\mathbf{d}, \boldsymbol{\mu}_X, \boldsymbol{\mu}_P) \tag{1}$$

$$\text{subject to } p_{sys} = P\!\left[\,\bigcup_{i=1}^{n} G_i(\mathbf{d}, \mathbf{X}, \mathbf{P}) \le 0\right] \le p_f^{all}$$

$$\mathbf{d}^L \le \mathbf{d} \le \mathbf{d}^U, \quad \boldsymbol{\mu}_X^L \le \boldsymbol{\mu}_X \le \boldsymbol{\mu}_X^U$$

where $\mathbf{d} \in R^k$ is the vector of deterministic design variables, $\mathbf{X} \in R^m$ is the vector of random design variables and $\mathbf{P} \in R^q$ is the vector of random parameters. Symbols µX and µP denote the mean values of the random variables and random parameters, respectively. A bold letter indicates a vector, an upper case letter indicates a random variable or parameter, and a lower case letter indicates a realization of a random variable or parameter. The probability of failure of a system with n failure modes is denoted by $p_{sys}$ and is equal to $P[\bigcup_{i=1}^{n} G_i(\mathbf{d}, \mathbf{X}, \mathbf{P}) \le 0]$, where $G_i(\mathbf{d}, \mathbf{X}, \mathbf{P})$ is the performance function of the ith failure mode. In subsequent formulations of the RBDO problem, the side constraints on the design variables will be omitted for simplicity.
The system failure probability is a function of the probabilities pfi of the failure
modes and their joint probabilities,


$$p_{sys} = \sum_{i=1}^{n} p_{fi} - \sum_{i=2}^{n}\sum_{j<i} p_{fij} + \cdots + (-1)^{(n-1)}\, p_{f1,2,\ldots,n} \tag{2}$$

Let us constrain the probabilities of failure of the modes to be no greater than some
bounds ptfi , called target probabilities, and consider these probabilities as design
variables. Then the system RBDO formulation in Eq. (1) becomes,

$$\min_{\mathbf{d},\, \boldsymbol{\mu}_X,\, p_{f1}^t, \ldots, p_{fn}^t} f(\mathbf{d}, \boldsymbol{\mu}_X, \boldsymbol{\mu}_P) \tag{3}$$

$$\text{subject to } p_{fi} \le p_{fi}^t, \quad i = 1, 2, \ldots, n$$

$$p_{sys} \approx \sum_{i=1}^{n} p_{fi}^t - \sum_{i=2}^{n}\sum_{j<i} p_{fij} + \cdots + (-1)^{(n-1)}\, p_{f1,2,\ldots,n} \le p_f^{all}$$
i=1 i=2 j<i

In this formulation, the optimizer should determine the optimum values of the target
failure probabilities of the modes, besides the values of the original design variables,
d and µX .
Consider a performance measure approach (PMA) formulation (Tu et al. 1999, Youn et al. 2003) of problem (3), in which the constraints on the failure probabilities of the modes are written in terms of their safety indices, βi. In the PMA formulation, instead of checking whether the minimum distance of the MPP from the origin is no less than

the target value of the safety index of each failure mode, we check if the performance
function is nonnegative at the MPPs, XMPP (βit ), PMPP (βit ). This formulation is,

$$\min_{\mathbf{d},\, \boldsymbol{\mu}_X,\, \beta_1^t, \ldots, \beta_n^t} f(\mathbf{d}, \boldsymbol{\mu}_X, \boldsymbol{\mu}_P) \tag{4}$$

$$\text{subject to } G_i[\mathbf{d}, \mathbf{X}_{MPP}(\beta_i^t), \mathbf{P}_{MPP}(\beta_i^t)] \ge 0, \quad i = 1, 2, \ldots, n$$

$$p_{sys} \approx \sum_{i=1}^{n} \Phi(-\beta_i^t) - \sum_{i=2}^{n}\sum_{j<i} p_{fij} + \cdots + (-1)^{(n-1)}\, p_{f1,2,\ldots,n} \le p_f^{all}$$
i=1 i=2 j<i

where $\Phi$ is the cumulative probability distribution of a standard normal random variable (zero mean, unit standard deviation), $\Phi(-\beta_i^t) = p_{fi}^t$, and $\mathbf{X}_{MPP}$ and $\mathbf{P}_{MPP}$ are the values of the random design variables and parameters at the MPP for each constraint. Symbol $\beta_i^t$ denotes the target value of the safety index of the ith failure mode. As Eq. (4) indicates, for each mode, $\mathbf{X}_{MPP}$ and $\mathbf{P}_{MPP}$ are functions of the target value of its safety index.
The above RBDO problem is too expensive to solve for most practical problems
because the probabilities of failure of the modes need to be calculated each time a
design changes. Therefore, two efficient approaches that use approximations of the system failure probability in terms of the failure probabilities of the modes and approximations of the MPP are presented below. The first approach uses linear extrapolation
to find the MPP of a design as a function of its safety index, while the second solves
the optimality conditions for the MPP.

2.2 SORA-based system RBDO approach

2.2.1 Formulation


Assume that the system failure probability is equal to the sum of the probabilities of the failure modes, $p_{sys} \approx \sum_{i=1}^{n} p_{fi} \approx \sum_{i=1}^{n} \Phi(-\beta_i^t)$. This is a conservative approximation, and it is accurate for small failure probabilities, i.e., of the order of 10⁻⁵ (Liang et al. 2007).
The key idea of the SORA approach is to approximate the coordinates of the most probable point as a function of the value of the safety index. Let

$$\mathbf{U}_{MPP}(\beta_i) = T^{-1}\!\begin{bmatrix} \mathbf{X}_{MPP}(\beta_i) \\ \mathbf{P}_{MPP}(\beta_i) \end{bmatrix}$$

denote the vector of the reduced values of the random variables at the MPP, and T the transformation from the space of the reduced variables to the space of the original variables. The MPP, UMPP(βi), is approximated as a function of the value of the safety index, βi, given the MPP, UMPP(βi0), for a baseline value of the safety index, βi0, as follows (Fig. 4.1),

$$\mathbf{U}_{MPP}(\beta_i) = \frac{\beta_i}{\beta_i^0}\,\mathbf{U}_{MPP}(\beta_i^0) \tag{5}$$
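A tiny numerical illustration of Eq. (5), with made-up values:

```python
import numpy as np

beta0 = 3.0
u_mpp_0 = np.array([1.8, -2.4])     # MPP found by PMA at beta0 (illustrative)

beta = 3.5                          # trial target safety index
u_mpp = (beta / beta0) * u_mpp_0    # Eq. (5) radial scaling -> [2.1, -2.8]
```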

Figure 4.1 Approximation of the MPP as a function of the safety index.

This approximation allows recasting the system RBDO problem formulation in Eq. (4) as a deterministic optimization problem as follows:

$$\min_{\mathbf{d},\, \boldsymbol{\mu}_X,\, \beta_1^t, \ldots, \beta_n^t} f(\mathbf{d}, \boldsymbol{\mu}_X, \boldsymbol{\mu}_P) \tag{6}$$

$$\text{subject to } G_i\!\left[\mathbf{d},\, T\!\left(\frac{\beta_i^t}{\beta_i^0}\,\mathbf{U}_{MPP}(\beta_i^0)\right)\right] \ge 0, \quad i = 1, 2, \ldots, n$$

$$p_{sys} \approx \sum_{i=1}^{n} \Phi(-\beta_i^t) \le p_f^{all}$$
i=1

In the above formulation, the target failure probabilities of the modes have been
replaced by the corresponding target values of the safety indices. Since there is a one-
to-one relation between the probability of failure and the safety index of a mode, this
substitution of the design variables does not change the solution of the optimization
problem. The solution of Eq. (6) is a design whose failure modes have safety indices
approximately equal to βit . These approximate values are called herein “projected
values of the safety indices’’.
The approximation of the design point as a function of the safety index in Eq. (5) is
only valid in a trust region around the baseline value of the safety index. The progress
of the optimization should be monitored in each iteration and the change in the value
of the safety index should be constrained within some move limit to remain in the
trust region. Available methods for optimization using trust regions can be used for
this purpose (Moré & Sørensen 1983, Steihaug 1983, Sørensen 1994).

2.2.2 Algorithm

Figure 4.2 describes the system RBDO algorithm. Each cycle of this algorithm consists of three operations: (a) reliability analysis using a First-Order Reliability Method

Figure 4.2 Algorithm of the SORA-based system RBDO method (select initial design → perform PMA analysis to find the MPP for the minimum acceptable value of the safety index → approximate the MPP as a function of the safety index → solve the approximate deterministic optimization problem → repeat until convergence).

(FORM) of the initial design or of the design obtained from the previous cycle to
check if this design has acceptable reliability, (b) PMA reliability analysis of the design
to determine the MPPs of the failure modes, UMPP(βi0), and (c) approximate deterministic optimization to update the optimum design and find the target values of the failure probabilities of the modes, $p_{fi}^t$.
First, we perform a deterministic optimization using a factor of safety to find an
initial design. Then we perform FORM reliability analysis of the deterministic opti-
mum. At this stage we can identify the non-critical failure modes, that is, those failure
modes of the deterministic optimum design that do not affect significantly the system
reliability because they have negligible probability of failure compared to the failure
probabilities of the remaining critical modes. The target values of the safety indices
of these modes are removed from the set of design variables to facilitate convergence.
Then we perform an inverse reliability analysis (PMA) of the deterministic optimum
assuming equal probabilities of failure, which are obtained by dividing the allowable
failure probability of the system by the number of critical failure modes; ptfi = pall f
/nc .
Finally, we perform approximate deterministic optimization considering the target
values of the safety indices of the nc failure modes, β1t , . . . , βnt c , as design variables in
addition to the original design variables. Now the optimizer seeks both the optimum
values of the design variables and the optimum target values of the safety indices to
minimize the objective function (i.e. cost or weight). In this case, the optimization problem formulation (6) becomes,

$$\min_{\mathbf{d},\, \boldsymbol{\mu}_X,\, \beta_1^t, \ldots, \beta_{n_c}^t} f(\mathbf{d}, \boldsymbol{\mu}_X, \boldsymbol{\mu}_P)$$

$$\text{subject to } G_i\!\left[\mathbf{d},\, T\!\left(\frac{\beta_i^t}{\beta_i^0}\,\mathbf{U}_{MPP}(\beta_i^0)\right)\right] \ge 0, \quad i = 1, 2, \ldots, n_c \tag{6.b}$$

$$p_{sys} \approx \sum_{i=1}^{n_c} \Phi(-\beta_i^t) \le p_f^{all}$$

Once we have found the optimum, we check the failure probabilities of all failure modes (including the non-critical ones) using FORM. At this step, we may remove from (or add to) the set of design variables the target values of the safety indices of failure modes with low (or high, respectively) failure probabilities. Then we perform a PMA analysis for the optimum target values of the safety indices found from the deterministic optimization. Finally, we solve the approximate deterministic optimization problem again. We repeat this process until convergence.
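The cycle just described can be summarized in the following control-flow sketch; form, pma, det_opt and the convergence tolerance are placeholders standing in for the analyses described above and are not part of the original formulation.

```python
import numpy as np

def sora_system_rbdo(design0, n_modes, p_allow, form, pma, det_opt, tol=1e-4):
    """Schematic cycle of the SORA-based system RBDO method (Fig. 4.2)."""
    design = np.asarray(design0, dtype=float)
    pf_t = np.full(n_modes, p_allow / n_modes)   # equal initial apportionment
    while True:
        pf = form(design)            # FORM check; non-critical modes could
                                     # be screened out of the design set here
        u_mpp0 = pma(design, pf_t)   # MPPs at the current target indices
        new_design, pf_t = det_opt(design, u_mpp0, p_allow)  # solve Eq. (6.b)
        if np.linalg.norm(new_design - design) < tol:
            return new_design, pf_t
        design = np.asarray(new_design, dtype=float)
```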
2.3 Single-loop RBDO approach
The proposed approach is based on the single-loop RBDO algorithm of Liang et al.
(2004), which is referred to herein as the component, single-loop algorithm. It is based on an
equivalent deterministic optimization formulation, which eliminates the need for inner
reliability loops without increasing the number of design variables. For completeness,
a brief overview of the component, single-loop RBDO algorithm is given below.
2.3.1 Overview of a component single-loop RBDO
Designers replace the constraint on the system reliability with constraints dictating
that the reliabilities of the components are greater than or equal to some target values,
in order to circumvent the calculation of the system reliability. These target values are
often chosen based on judgment and experience. A typical component RBDO problem
is formulated as,

$$\min_{\mathbf{d},\, \boldsymbol{\mu}_X} f(\mathbf{d}, \boldsymbol{\mu}_X, \boldsymbol{\mu}_P)$$
$$\text{subject to } R_i = P[G_i(\mathbf{d}, \mathbf{X}, \mathbf{P}) \ge 0] \ge R_i^t, \quad i = 1, 2, \ldots, n \tag{7}$$

where Ri = 1 − pfi is the actual reliability level of the ith constraint (or failure mode)
and Rti is the corresponding minimum allowable reliability.
A method solving directly the optimization problem (7) constitutes the double-loop
RBDO method. This method employs two nested optimization loops; the design opti-
mization loop (outer) and the reliability assessment loop (inner). The latter is needed for
the evaluation of each probabilistic constraint. If the probability of failure is estimated
using FORM, then every time the design optimization loop calls for a constraint evalu-
ation, a reliability assessment loop is executed that searches for the MPP in the standard
normal space. If the random variables are not normal, a nonlinear transformation maps
the original space to the standard (or reduced) normal space.
Using an R-percentile formulation (Du & Chen 2004), the RBDO problem (7) can
be expressed as,

$$\min_{\mathbf{d},\, \boldsymbol{\mu}_X} f(\mathbf{d}, \boldsymbol{\mu}_X, \boldsymbol{\mu}_P)$$
$$\text{subject to } G_i(\mathbf{d}, \mathbf{X}, \mathbf{P}) \ge 0, \quad i = 1, 2, \ldots, n \tag{8}$$

where the vectors X and P are evaluated at the MPP; i.e. X = XMPP and P = PMPP for
each constraint. The objective function is minimized subject to constraints that are
evaluated in the X space. It is, therefore, necessary to have a consistent relationship between vectors d, µX, µP, for which the objective function is evaluated, and vectors d, X, P, for which the constraints are evaluated. This is done by solving the Karush-Kuhn-Tucker (KKT) optimality conditions (Papalambros & Wilde 2000) of the reliability
loops in the design optimization loop for X and P.
Using the PMA method, the performance measure $G_p = \min_{\mathbf{U}} G(\mathbf{U})$ is minimized on the beta-circle $H(\mathbf{U}) = \|\mathbf{U}\| - \beta^t = 0$ in the standard normal space U. At the optimal point, according to the KKT optimality condition, the gradient ∇G(U) of the limit state and the gradient ∇H(U) of the beta-circle are collinear and point in opposite directions. This condition yields,

$$\mathbf{U} = -\beta^t \cdot \boldsymbol{\alpha} \tag{9}$$

where

$$\boldsymbol{\alpha} = \nabla G_U(\mathbf{d}, \mathbf{X}, \mathbf{P})\, /\, \|\nabla G_U(\mathbf{d}, \mathbf{X}, \mathbf{P})\| \tag{10}$$

is the normalized gradient of the constraint in U-space.


Based on Eq. (9), the following relations between X, P and µX, µP hold for normal random variables,

$$\mathbf{X} = \boldsymbol{\mu}_X - \boldsymbol{\sigma}\cdot\beta^t\cdot\boldsymbol{\alpha}, \quad \mathbf{P} = \boldsymbol{\mu}_P - \boldsymbol{\sigma}\cdot\beta^t\cdot\boldsymbol{\alpha} \tag{11}$$

where $\boldsymbol{\alpha} = \boldsymbol{\sigma}\cdot\nabla G_{X,P}(\mathbf{d}, \mathbf{X}, \mathbf{P})\, /\, \|\boldsymbol{\sigma}\cdot\nabla G_{X,P}(\mathbf{d}, \mathbf{X}, \mathbf{P})\|$.


Using Eqs (11), the double-loop RBDO problem (8) is transformed into the following single-loop, equivalent deterministic optimization problem,

$$\min_{\mathbf{d},\, \boldsymbol{\mu}_X} f(\mathbf{d}, \boldsymbol{\mu}_X, \boldsymbol{\mu}_P)$$
$$\text{subject to } G_i(\mathbf{d}, \mathbf{X}_i, \mathbf{P}_i) \ge 0, \quad i = 1, 2, \ldots, n \tag{12}$$
$$\text{where } \mathbf{X}_i = \boldsymbol{\mu}_X - \boldsymbol{\sigma}\cdot\beta_i^t\cdot\boldsymbol{\alpha}_i; \quad \mathbf{P}_i = \boldsymbol{\mu}_P - \boldsymbol{\sigma}\cdot\beta_i^t\cdot\boldsymbol{\alpha}_i;$$
$$\boldsymbol{\alpha}_i = \boldsymbol{\sigma}\cdot\nabla G_{i\,X,P}(\mathbf{d}, \mathbf{X}_i, \mathbf{P}_i)\, /\, \|\boldsymbol{\sigma}\cdot\nabla G_{i\,X,P}(\mathbf{d}, \mathbf{X}_i, \mathbf{P}_i)\|$$
Symbol αi represents the normalized gradient of the ith constraint and σ is the
standard deviation vector of random variables X and random parameters P.
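As a sketch, one constraint evaluation point in problem (12) can be formed as follows, with the gradient carried over from the previous iteration (all names illustrative):

```python
import numpy as np

def single_loop_point(mu, sigma, beta_t, grad_g):
    """Approximate MPP used by the single-loop method (Eq. 12):
    X = mu - sigma * beta_t * alpha, where alpha is the normalized,
    sigma-scaled gradient of the constraint at the previous point."""
    a = sigma * grad_g                    # sigma . grad G (elementwise)
    alpha = a / np.linalg.norm(a)
    return mu - sigma * beta_t * alpha    # point at which G_i is evaluated
```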
In the single-loop RBDO problem (12), the objective function is evaluated at the
mean point d, µX , µP and the constraints are calculated at point d, X, P. The relation-
ship of Eq. (11) is used to evaluate the constraints consistently with the values of the
design variables. The single-loop method does not search for the MPP of each constraint in each iteration. Instead, in each iteration, an approximation of the MPP of each active constraint is used. The sequence of MPP approximations converges to the correct MPP. This dramatically improves the efficiency of the single-loop method without compromising the accuracy.
Figures 4.3a and 4.3b show a combined flowchart of the component single-loop
method and the proposed system, single-loop RBDO method (see section 2.3.2). The

Figure 4.3a Overview of the single-loop RBDO algorithm. Initialization: set $\mathbf{d}^0$, $\boldsymbol{\mu}_X^0$, $\boldsymbol{\mu}_P$, $p_f^{t,0}$, $\boldsymbol{\sigma}$, lb, ub; assign $\mathbf{X}^0 = \boldsymbol{\mu}_X^0$, $\mathbf{P}^0 = \boldsymbol{\mu}_P$ and CF = 1; calculate $\beta_i^t = \Phi^{-1}(1 - p_{fi}^t)$ and the initial normalized gradients $\boldsymbol{\alpha}_i^0$ from Eq. (13). Then repeat until convergence: update the counter k; determine the active constraint set; calculate the objective function; approximate the MPPs of the random variables; check the active constraint set; perform a new iteration of problem (12) for component RBDO or problem (19) for system RBDO. See Figure 4.3b for the detailed calculations in this part of the algorithm.

component, single-loop algorithm consists only of operations in boxes with solid line
border. Operations in boxes with dashed line border belong to the system, single-
loop algorithm. For the component single-loop method, the initial point d0 , µ0X , µP
and the target safety index vector βt for all constraints are first specified. Also, the
user specifies the standard deviation vector σ for the random variables and random
parameters and the upper and lower bound vectors (ub and lb, respectively) for all
deterministic and probabilistic design variables. The initial point d0, X0, P0 that is
needed to evaluate the constraints is assumed equal to d0, µX0, µP; i.e. X0 = µX0 and
P0 = µP. At this point, the initial normalized gradient vector αi0 for the ith constraint
is calculated as,

αi0 = σ · ∇Gi(X,P)(d0, X0, P0)/||σ · ∇Gi(X,P)(d0, X0, P0)||               (13)

At the kth iteration of the optimization loop, the objective function is calculated at
the point dk, µXk, µP. For the evaluation of the constraints, the algorithm checks
whether the optimizer has changed the design vector µXk compared with the previous
iteration; if not, the current gradient vector αk is used to calculate Xk = µXk − σ · βt · αk
and Pk = µP − σ · βt · αk for each constraint. If µXk has changed from the previous
iteration, the normalized gradient vector αk is updated before it is used to calculate
Xk and Pk, which are needed for the constraint evaluation.

[Figure 4.3b Calculations performed in one iteration of the single-loop algorithm:
check whether dk, µXk changed and, if so, update αik; flag constraints with ptfi ≤ ε
as inactive (CF(i) = 0, βit = 0) and the rest as active (CF(i) = 1, βit = −Φ−1(ptfi));
assign the maximum active safety index (MaxBeta) to inactive constraints; calculate
Xik = µXk − βit · αik · σ and Pik = µP − βit · αik · σ; evaluate Gi(dk, Xik, Pik) for
inactive constraints, re-activating any with Gi ≤ 0 (CF(i) = 1); evaluate the active
constraints and the bound Σ_{i=1}^{n} ptfi − Σ_{i=2}^{n} max_{j<i} pfij; and check
whether f is minimized.]
This is an essential step for keeping the design variable vector µX and the X, P
vectors consistent, resulting in a robust and stable algorithm. Furthermore, it greatly
improves the efficiency, since the algorithm avoids unnecessary gradient evaluations.
When non-normal variables are used, equivalent normal parameters σXN and µXN
(Haldar & Mahadevan 2000) are calculated every time the optimizer updates the
design variables and design parameters.
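A minimal Python sketch of the classical two-parameter equivalent normal computation described in Haldar & Mahadevan (2000), which matches the CDF and PDF values of the non-normal variable at the current point; the lognormal variable below is our own illustrative choice, not one from this chapter:

from scipy.stats import norm, lognorm

def equivalent_normal(dist, x):
    """Equivalent normal mean and std of 'dist' at the point x:
    with z = Phi^{-1}(F(x)), sigma_N = phi(z)/f(x) and mu_N = x - sigma_N*z."""
    z = norm.ppf(dist.cdf(x))
    sigma_n = norm.pdf(z) / dist.pdf(x)
    return x - sigma_n * z, sigma_n

# Illustrative lognormal variable evaluated at its median
d = lognorm(s=0.2, scale=100.0)   # scale = exp(mean of ln X)
mu_n, sigma_n = equivalent_normal(d, 100.0)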
The main advantage of the component single-loop method is that it eliminates the
repeated reliability analysis loops without increasing the number of design variables or
adding equality constraints. Instead of performing nested design optimization and reli-
ability loops, it solves an equivalent single-loop deterministic optimization problem.
The consistency between the design variable vector d, µX, µP and the vector d, X, P
needed to evaluate the constraints makes the single-loop algorithm robust. It should be
noted that the component single-loop RBDO algorithm does not require a user-specified
active constraint set, as is the case with the SLSV algorithm (Chen et al. 1997); the
active constraint set is identified automatically by the algorithm. This is a significant
advantage, which simplifies the implementation of the single-loop algorithm and
enhances its robustness and efficiency.

2.3.2 Single-loop approach for series system RBDO
The component single-loop RBDO approach of the previous subsection handles RBDO
problems in which each critical failure mode of a series system has a predetermined
minimum target safety index βit . Thus the user arbitrarily assigns a minimum safety
level for each failure mode instead of letting the optimizer determine this level in order
to achieve some required system reliability.
A single-loop RBDO approach for series systems is proposed in this subsection.
The optimizer determines the optimal values of the target failure probabilities of all
failure modes in addition to the original design variables d, µX and µP. The user
specifies a system reliability level, and the optimizer allocates the specified system
reliability optimally among the failure modes. The optimal reliability allocation and
the optimal reliable design are determined simultaneously using an equivalent
single-loop RBDO formulation.
The proposed series system approach is a modification of the component single-
loop approach of the previous section. According to Eq. (12), a target safety index
βit = −Φ−1(ptfi) is needed for each constraint (failure mode). However, the optimizer
must determine the target failure probability ptfi of each failure mode by apportioning
the allowable system probability of failure pallf among all failure modes. A natural way
to do this is to include all the target values of the failure probabilities of the constraints,
ptfi, in the design variable set. In each iteration, the optimizer determines each ptfi, and
the corresponding target safety index βit = −Φ−1(ptfi) is calculated so that the transfor-
mations Xi = µX − σ · βit · αi and Pi = µP − σ · βit · αi in Eq. (12) hold. Simultaneously,
we must make sure that the system failure probability does not exceed its maximum
allowable value, psys ≤ pallf. The system failure probability is approximated by the
upper second-order Ditlevsen bound, psys = Σ_{i=1}^{n} ptfi − Σ_{i=2}^{n} max_{j<i} (pfij),
where pfij is the joint probability between the ith and jth failure modes.

Based on first-order reliability analysis (FORM), the failure set is approximated
by a polyhedron bounded by the tangent hyperplanes at the MPP points. In this case,
ptfi = Φ(−βit), where βit is the safety index for the ith failure mode. Similarly, pfij is
obtained by approximating the joint failure set using the tangent hyperplanes at the
MPP points of the two failure modes,

pfij = Φ(−βi, −βj; ρij) = ∫_{−∞}^{−βi} ∫_{−∞}^{−βj} ϕ(x, y; ρij) dx dy        (14)
where ϕ(·, ·; ρ) is the PDF of a bivariate normal vector with zero means, unit variances,
and correlation coefficient ρ; it is given by,

ϕ(−βi, −βj; ρ) = [1/(2π · sqrt(1 − ρ²))] · exp{−(βi² + βj² − 2ρβiβj)/[2(1 − ρ²)]}   (15)
In Eq. (14), Φ(·, ·; ρ) is the bivariate normal CDF; its density has the property,

∂²ϕ/(∂x ∂y) = ∂ϕ/∂ρ                                                       (16)

which, upon integration over the failure quadrant, gives ∂Φ/∂ρ = ϕ.
Combining Eqs (14) to (16) yields,

pfij = Φ(−βi, −βj; 0) + ∫_0^{ρij} ∂Φ/∂ρ (−βi, −βj; z) dz
     = Φ(−βi) Φ(−βj) + ∫_0^{ρij} ϕ(−βi, −βj; z) dz                        (17)
Note that pfij is directly related to the degree of correlation between the failure
modes, expressed by the correlation coefficient ρij, which varies between −1 (fully
negatively correlated) and +1 (fully positively correlated). In this work, pfij is obtained
by approximating the joint failure set by the tangent hyperplanes at the corresponding
MPP points of the two failure modes. If U denotes the MPP point in standard normal
space, the safety margins Gi(U) and Gj(U) are then replaced by the linear safety margins
Mi = βi − Σ_{r=1}^{m} αir Ur and Mj = βj − Σ_{s=1}^{m} αjs Us, so that the correlation
coefficient ρij is given by,

ρij = ρ[Mi, Mj] = Σ_{r=1}^{m} αir αjr = cos(υij)                          (18)
After the correlation coefficient ρij between two active failure modes is calculated,
Eq. (17) provides the joint probability of failure needed in evaluating the constraint
Σ_{i=1}^{n} ptfi − Σ_{i=2}^{n} max_{j<i} (pfij) ≤ pallf.
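As a numerical illustration of Eqs (14)-(17) and of the second-order Ditlevsen bound (a sketch of ours, not the chapter's code; the safety indices and correlation coefficients below are made-up inputs):

import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.integrate import quad

def p_joint(beta_i, beta_j, rho_ij):
    """Joint failure probability of two modes, Eq. (17):
    Phi(-bi)*Phi(-bj) + integral_0^rho of phi2(-bi, -bj; z) dz."""
    phi2 = lambda z: multivariate_normal.pdf(
        [-beta_i, -beta_j], mean=[0.0, 0.0], cov=[[1.0, z], [z, 1.0]])
    tail, _ = quad(phi2, 0.0, rho_ij)
    return norm.cdf(-beta_i) * norm.cdf(-beta_j) + tail

def ditlevsen_upper(betas, rho):
    """Second-order Ditlevsen upper bound on the series-system failure
    probability: sum_i pfi - sum_{i>=2} max_{j<i} pfij."""
    pf = norm.cdf(-np.asarray(betas))
    bound = pf.sum()
    for i in range(1, len(betas)):
        bound -= max(p_joint(betas[i], betas[j], rho[i][j])
                     for j in range(i))
    return bound

# Made-up example: three modes with pairwise correlations
betas = [3.0, 3.2, 3.5]
rho = [[1.0, 0.4, 0.2], [0.4, 1.0, 0.3], [0.2, 0.3, 1.0]]
psys_ub = ditlevsen_upper(betas, rho)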
The component single-loop RBDO algorithm of Eq. (12) is therefore modified as
follows,

min_{d, µX, ptf1, . . . , ptfn}  f(d, µX, µP)                             (19)
subject to  Gi(d, Xi, Pi) ≥ 0,  i = 1, 2, . . . , n
            psys = P[∪_{i=1}^{n} {Gi(d, Xi, Pi) ≤ 0}]
                 ≈ Σ_{i=1}^{n} ptfi − Σ_{i=2}^{n} max_{j<i} (pfij) ≤ pallf

where

Xi = µX − σ · βit · αi;  Pi = µP − σ · βit · αi,
αi = σ · ∇GiX,P(d, Xi, Pi)/||σ · ∇GiX,P(d, Xi, Pi)||,  i = 1, 2, . . . , n

If the target probability of the ith failure mode ptfi is very small and the corresponding
safety index βit = −Φ−1(ptfi) is very large, the optimization algorithm of problem (19)
becomes computationally inefficient and, in most cases, fails to converge because the
constraints become insensitive to the values of the design variables ptfi. This is also true
for any DLP RBDO algorithm based on the PMA approach if a very large target safety
index is used. That is why the PMA approach with a “small’’ target safety index is
superior to the RIA approach, where the safety index of inactive constraints is very
large (Youn et al. 2003). To avoid this problem, the proposed single-loop system
RBDO algorithm uses an active set strategy that is very easy to implement because all
target failure probabilities are included in the design variable set.
In every iteration of the optimization process, ptfi is compared with a small prede-
fined threshold value ε. If ptfi ≤ ε, the ith constraint is probabilistically inactive and its
probability of failure is assumed zero (i.e. ptfi = 0 for all inactive constraints). There-
fore, in each iteration, the active constraint set is easily identified and updated. The
predefined threshold value ε depends on the problem and can be set equal to a fixed
small value or to a small percentage of the largest failure probability of the modes.
In this study the threshold was set equal to 10−7, which corresponds to a safety index
of 5.1993.
The flowchart of the proposed single-loop, series system RBDO algorithm is shown
in Figs 4.3a and 4.3b. First, the algorithm initializes all design variables and parameters
(including the probabilities of failure of each failure mode), and specifies the lower and
upper bounds of the design variables and the standard deviations. Subsequently, the
initial gradients for all constraints are calculated. After initialization, the flowchart is
similar to that of the component single-loop method. The only difference is the
implementation of an active set strategy, which is explained in detail below.
The following two points regarding the efficiency and stability of the proposed
system single-loop algorithm are important.
• The increased number of design variables increases the number of iterations of
  the optimizer, but it does not affect appreciably the computational cost in each
  iteration.
  The new approach adds the target failure probabilities of the constraints, ptfi, to
  the design variable set, thereby increasing the number of design variables by the
  number of constraints. This increases the number of iterations of the optimizer
  needed to achieve convergence. However, the computational cost in each iteration
  does not increase appreciably, because the biggest part of this cost is due to
  gradient calculation, and the added gradients are easy to compute: the objective
  function is not a function of the target failure probabilities of the modes, and each
  constraint is a linear function of its safety index.
• A probabilistic active set strategy is used to increase the efficiency and stability of
  the algorithm.
  It has been mentioned that in any iteration, if the probability of failure of the
  ith constraint is smaller than a threshold value ε, the constraint is assumed prob-
  abilistically inactive. It is very easy to check whether a constraint is below the
  threshold because all failure probabilities are design variables. As indicated in
  Fig. 4.3b, a constraint flag CF(i), which is set equal to zero for inactive constraints
  and to one for active constraints, is used to identify inactive constraints. The safety
  indices of all inactive constraints are set equal to the maximum safety index of the
  active constraints. Fig. 4.3b shows the details of the procedure identifying inactive
  constraints; a minimal sketch of this flagging logic is given below.
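The sketch is ours and the variable names are hypothetical; it also includes a check that the threshold 10−7 indeed corresponds to the safety index of 5.1993 quoted above:

from scipy.stats import norm

def flag_constraints(ptf, eps=1e-7):
    """Probabilistic active-set strategy: CF(i)=0 and beta_t(i)=0 for
    inactive constraints (ptf_i <= eps); otherwise CF(i)=1 and
    beta_t(i) = -Phi^{-1}(ptf_i). Inactive constraints are then assigned
    the largest active safety index (MaxBeta)."""
    cf = [0 if p <= eps else 1 for p in ptf]
    beta = [0.0 if c == 0 else -norm.ppf(p) for c, p in zip(cf, ptf)]
    max_beta = max((b for b, c in zip(beta, cf) if c == 1), default=0.0)
    beta = [max_beta if c == 0 else b for b, c in zip(beta, cf)]
    return cf, beta

cf, beta_t = flag_constraints([1.3e-3, 5.0e-9, 2.4e-3])  # made-up targets
# -norm.ppf(1e-7) evaluates to 5.1993, the threshold safety index in the text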

After a safety index is calculated or assigned for each constraint, the algorithm
calculates the quantities Xi = µX − σ · βit · αi and Pi = µP − σ · βit · αi for the ith
constraint, relating X, P and µX, µP at the current MPP approximation using the
KKT conditions (see section 2.1). At this point, the feasibility of all inactive constraints
is checked by calculating the value of each inactive constraint at (Xi, Pi), allowing us
to update the active constraint set (see Fig. 4.3b) in each iteration. Subsequently, the
value of all active constraints at (Xi, Pi) is obtained, as well as the system probability
of failure psys = Σ_{i=1}^{n} ptfi − Σ_{i=2}^{n} max_{j<i} (pfij). In calculating the joint
failure probability pfij, we first check the value of the correlation coefficient ρij. If ρij
is equal to 1, the two failure modes are fully positively correlated and their joint failure
probability can be approximated by min(ptfi, ptfj). If ρij is equal to −1, the two failure
modes are fully negatively correlated and their joint failure probability can be assumed
zero. If ρij ≈ 0, the failure modes are independent and pfij = Φ(−βi) · Φ(−βj). In any
other case, pfij = Φ(−βi)Φ(−βj) + ∫_0^{ρij} ϕ(−βi, −βj; z) dz is used in problem (19),
and the system single-loop RBDO algorithm continues similarly to the component
single-loop version.

3 Applications

This section demonstrates the accuracy and efficiency of the proposed SORA and
single-loop approaches for series system RBDO problems using a beam example and
an internal combustion engine design example. Deterministic optimization, a
component single-loop method, and the proposed series system single-loop RBDO
approaches are compared. In all cases, the same initial point and similar convergence
criteria are used.

[Figure 4.4 Cantilever beam under vertical and lateral bending: length L = 100 in,
cross-section width w and thickness t, tip loads Y (vertical) and Z (lateral).]
MATLAB was used for the deterministic optimization and the single-loop approaches.
The add-in tool “Solver’’ in MS-Excel was used for the constrained optimization in the
SORA system RBDO approach.

3.1 A cantilever beam example
Consider a cantilever beam in vertical and lateral bending (Wu et al. 2001) (see Fig. 4.4).
The beam is loaded at its tip by vertical and lateral loads Y and Z, respectively. Its
length L is equal to 100 in. The width w and thickness t of the cross-section are
deterministic design variables. The objective is to minimize the weight of the beam.
This is equivalent to minimizing the cross sectional area f = w · t, assuming that the
material density and the beam length are constant.
Two non-linear failure modes are considered. The first failure mode is yielding at
the fixed end of the beam; the other failure mode is that the tip displacement exceeds
the allowable value D0 = 2.2535 in. In the single-loop system RBDO approach the
problem is formulated as,

min_{w, t, ptf1, ptf2}  f = w · t

subject to  P[Gi(X) ≥ 0] ≥ 1 − ptfi,  i = 1, 2

G1(SY, Z, Y, w, t) = SY − [(600/(w t²)) · Y + (600/(w² t)) · Z]

G2(E, Z, Y, w, t) = D0 − [4L³/(E w t)] · sqrt[(Y/t²)² + (Z/w²)²]

Gsys = 0.0027 − psys = 0.0027 − (ptf1 + ptf2 − pf12) ≥ 0

0 ≤ w, t ≤ 5

where G1 and G2 are the limit state functions corresponding to the two failure modes.
In the SORA approach, the problem formulation is identical, but the system probability
of failure is approximated by the sum of the failure probabilities of the two modes.

Table 4.1 Comparison of RBDO methods for the beam example.

                   Deterministic    Component      System single-loop
                   optimization     single-loop    (SORA)
w = x1             2.3520           2.4484         2.6093 (2.6209)
t = x2             3.3263           3.8884         3.6126 (3.6001)
ptf1               –                0.00135        0.002412 (0.002328)
ptf2               –                0.00135        0.000426 (0.000372)
pf12               –                Neglected      0.0001389 (Neglected)
psys               –                0.00270        0.00270
Objective f(X)     7.8235           9.5202         9.4263 (9.4356)
G1(X)              640.3600         0              0
G2(X)              0                0              0
No. of F.E.        83               115            624*
* This number refers to the number of function evaluations of the single-loop
approach. The number of function evaluations for the SORA approach could not be
determined.

[Figure 4.5 Optimization history for the beam example: objective value versus
iteration number for the deterministic optimization, the component single-loop
method and the system single-loop/JPDF method.]
Design variables w and t are deterministic, while Y, Z, SY and E are normally dis-
tributed random parameters with Y ∼ N(1000, 100) lb, Z ∼ N(500, 100) lb,
SY ∼ N(40 000, 2000) psi and E ∼ N(29 · 10⁶, 1.45 · 10⁶) psi; SY is the random yield
strength, Y and Z are mutually independent random loads in the vertical and lateral
directions, respectively, and E is the Young’s modulus. A target safety index βit = 3,
which corresponds to a maximum allowable probability of failure of 0.00135, is used
for both constraints in the component single-loop approach (see Eq. 7).
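The reported probabilities can be checked independently by crude Monte Carlo simulation. The following sketch is ours (not part of the chapter's computations) and estimates the mode and system failure probabilities at the system single-loop optimum of Table 4.1:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
Y = rng.normal(1000.0, 100.0, n)       # vertical load, lb
Z = rng.normal(500.0, 100.0, n)        # lateral load, lb
SY = rng.normal(40000.0, 2000.0, n)    # yield strength, psi
E = rng.normal(29e6, 1.45e6, n)        # Young's modulus, psi
L, D0 = 100.0, 2.2535
w, t = 2.6093, 3.6126                  # system single-loop optimum (Table 4.1)

g1 = SY - 600.0 * Y / (w * t**2) - 600.0 * Z / (w**2 * t)
g2 = D0 - 4.0 * L**3 / (E * w * t) * np.sqrt((Y / t**2)**2 + (Z / w**2)**2)

pf1, pf2 = np.mean(g1 < 0), np.mean(g2 < 0)
psys = np.mean((g1 < 0) | (g2 < 0))    # series system: failure if either mode fails

The estimates can then be compared with the FORM-based values ptf1, ptf2 and psys in Table 4.1.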
Table 4.1 and Fig. 4.5 compare the efficiency of the deterministic, component
single-loop and system single-loop optimizations and the resulting optimum designs.
A common initial point (w = 2.5, t = 2.5) and common convergence conditions were
used for all three optimizations. Both the component and system RBDO problems
have the same allowable system failure probability pallf = 0.0027, which corresponds
to βsysall = 2.78215. The difference is that in component RBDO the probabilities of both
failure modes were constrained to be less than or equal to 0.00135, whereas in sys-
tem RBDO the maximum allowable failure probabilities of the modes were determined
by the optimizer. The component and system single-loop methods converged in six and
nine iterations, respectively (see Fig. 4.5). The deterministic optimization converged in
eight iterations. The SORA system RBDO approach required four cycles each involv-
ing a deterministic optimization and reliability analysis. A total of 25 iterations were
required by the optimizer (seven in each of the first three cycles and four in the fourth).
Due to the optimal apportionment of the allowable system reliability, both sys-
tem RBDO approaches resulted in an optimum cross sectional area of approximately
9.43 in2 , which is better than the component optimum of 9.5202 in2 . The optimizer
allocated a much lower probability of failure to the second failure mode than the
first, indicating that the reliability of this mode can be increased at a much lower cost
(cross sectional area) than the reliability of the first mode. Note that for the compo-
nent approach, the system failure probability is 0.0027 (0.00135 + 0.00135). Both
constraints are active in the component and system approaches while only the second
constraint is active in the deterministic optimization. The single-loop system approach
yielded a more efficient design than SORA because the former approximates more
accurately the system failure probability than the latter, but the difference between the
two optimum designs is small.
The last row of Table 4.1 shows the number of function evaluations for the deter-
ministic, component single-loop and system single-loop optimizations. The number of
function evaluations of the SORA system RBDO approach could not be determined
in the MS-Excel Solver. Each call of the objective function or any constraint counts as
a function evaluation. The system RBDO uses more function evaluations than the
component RBDO due to the active constraint set strategy. Also, the component RBDO
uses more function evaluations than the deterministic optimization. However, both the
component and system single-loop approaches are efficient, as evidenced by the low
number of function evaluations.
The optimal apportionment of the system probability of failure among the failure
modes indicates the significance of each mode in the overall system probability of
failure. In this beam example, the first failure mode is much more significant than the
second because the failure probability of the first mode is about six times larger than
that of the second (0.002412 versus 0.000426 in Table 4.1). Therefore, if we want to
change the problem formulation in order to reduce the system probability of failure
and/or the area of the optimum design, we must allocate more resources to reducing
the probability of failure of the first mode. For example, if we can select a different
material for the beam, it is better to increase the yield stress than the Young’s modulus,
because yielding is the most important mode.
An important advantage of a system RBDO approach is that it allocates the system
reliability among the failure modes by accounting for the cost of increasing reliability.
An optimality condition relating the sensitivity derivatives of the reliabilities of the
failure modes and the sensitivity derivatives of the cost (objective) function will be
presented in section 4. This condition indicates that high reliability is allocated to
failure modes whose reliability can be increased at low cost, that is, a small increase
in the objective function is required in order to increase the reliability of these modes.

3.2 Internal combustion engine example
This example addresses a flat head design of an internal combustion engine from a
thermodynamic viewpoint (Papalambros & Wilde 2000, McAllister & Simpson 2003).
Design variables are the cylinder bore b, compression ratio cr , exhaust valve diameter
dE , intake valve diameter dI , and the revolutions per minute (RPM) at peak power
divided by 1000, ω. The goal is to obtain preliminary values for these variables that
maximize the power output per unit displacement while meeting specific fuel economy
and packaging constraints. The problem is stated as,

Find:  b, dI, dE, cr, ω, ptf1, ptf2, . . . , ptf9

max  f = ω[3688 ηt(cr, b) ηv(ω, dI) − FMEP(cr, ω, b)]/120

FMEP = 4.826(cr − 9.2) + 7.97 + 0.253 · [8V/(πNc)] · ω b^(−2)
       + 9.7 · 10⁻⁶ {[8V/(πNc)] · ω b^(−2)}²
ηt = 0.8595(1 − cr^(−0.33)) − Sv (1.5/ω)^0.5
Sv = 0.83 · [8 + 4cr + 1.5(cr − 1) b³ πNc/V]/[(2 + cr) · b]
ηv = ηvb (1 + 5.96 · 10⁻³ ω²)/{1 + [9.428 · 10⁻⁵ · 4V/(πNc Cs) · (ω/dI²)]²}
ηvb = 1.067 − 0.038 e^(ω−5.25)                      for ω ≥ 5.25
ηvb = 0.637 + 0.13 ω − 0.014 ω² + 0.00066 ω³        for ω ≤ 5.25

subject to:
P[Gi(X) ≥ 0] ≥ 1 − ptfi,  i = 1, 2, . . . , 9
Gsys = 0.006539 − [Σ_{i=1}^{9} ptfi − Σ_{i=2}^{9} max_{j<i} (pfij)] ≥ 0

where:
G1 = 400 − 1.2 Nc b  (min. bore wall thickness),
G2 = b − [8V/(200πNc)]^0.5  (max. engine height),
G3 = 0.82b − dI − dE  (valve geometry & structure),
G4 = dE − 0.83dI  (min. valve diameter ratio),
G5 = 0.89dI − dE  (max. valve diameter ratio),
G6 = 0.6Cs − 9.428 · 10⁻⁵ · (4V/(πNc)) · (ω/dI²)  (max. mechanical index),
G7 = −0.045 b − cr + 13.2  (knock-limit compression ratio),
G8 = 6.5 − ω  (max. torque converter RPM),
G9 = 230.5 Q ηtw − 3.6 · 10⁶  (max. fuel economy),
ηtw = 0.8595 · (1 − cr^(−0.33)) − Sv
V = 1.859 · 10⁶ mm³
Q = 43 958 kJ/kg
Cs = 0.44
Nc = 4
Table 4.2 Distribution parameters and bounds of design variables.

                                    Standard     Lower    Upper
                                    deviation    bound    bound
Cylinder bore, b, mm                0.4          70       90
Intake valve diameter, dI, mm       0.15         25       50
Exhaust valve diameter, dE, mm      0.15         25       50
Compression ratio, cr               0.05         6        12
RPM at peak power/1000, ω           0.25         5        12

Many of the above expressions are valid only within the limited range of bore-to-
stroke ratio of 0.7 ≤ b/s ≤ 1.3. More information on the problem definition can be
found in (Papalambros & Wilde 2000). All design variables are assumed normally
distributed with standard deviation and bounds as shown in Table 4.2.
First, the deterministic optimization problem was solved. For the component single-
loop approach, a target safety index of 3 (which corresponds to a failure probability
of 0.00135) was assumed for each failure mode. The system probability of failure
was therefore equal to 0.00675 (= 5 × 0.00135), assuming that all modes are disjoint.
The assumed system probability of failure of 0.00675 was checked using Monte Carlo
(MC) simulation with importance sampling. It was found that the actual probability
of failure is 0.006539 instead. The latter was used as the maximum allowable system
probability of failure for both the single-loop and SORA system RBDO approaches.
It was also verified using MC simulation that the joint probabilities of failure for all
pairs of active constraints are negligible for this example. The same initial point of
(80, 35, 40, 11, 6) and the same convergence conditions were used in all optimizations.
Table 4.3 compares the results from deterministic optimization, component single-
loop RBDO, system single-loop RBDO and SORA system RBDO. In the deterministic
optimization, the constraint values are given at the optimum point. For the component
and system approaches, the constraint values are given at their corresponding MPPs.
Finally, in the system approaches, the active constraint values are given at their MPP
points corresponding to the different β values calculated by the algorithm, while the
inactive constraint values are given at their approximate MPPs (the point on the
β-circle closest to the limit state) corresponding to the assigned maximum β (see
section 2.3.2). As shown in Table 4.3, the system single-loop problem has the same
active constraint set as the component single-loop problem. Also, the deterministic
optimization has the same active set excluding the sixth constraint.
Table 4.3 shows that the single-loop and SORA-based approaches yielded practically
identical designs. Table 4.3 also shows the apportionment of the specified 0.006539
system probability of failure among all failure modes. The most critical mode is the
sixth one, followed by the third and the first, with probabilities of failure of 0.002370,
0.001665 and 0.001448, respectively.
The deterministic optimum has the highest output power but the least reliability. The
component and the system optima are very similar (see the values of the design
variables) and have almost the same output power (50.9713 and 51.1023, respectively)
and system reliability. However, the system reliability approach is superior to the
component reliability approach for the following two reasons.

Table 4.3 Comparison of results for the engine design example.

Design variables   Deterministic   Component     System        System
                   optimization    single-loop   single-loop   SORA
b                  83.3333         82.1333       82.1419       82.1413
dI                 37.3406         35.8430       35.8456       35.8356
dE                 30.9927         30.3345       30.3641       30.3645
cr                 9.4500          9.3446        9.3174        9.3170
ω                  6.0720          5.3141        5.3598        5.3639
pf1                –               0.00135       0.001448      0.001441
pf3                –               0.00135       0.001665      0.001544
pf4                –               0.00135       0.000811      0.000722
pf6                –               0.00135       0.002370      0.002595
pf7                –               0.00135       0.000232      0.000223
psys               –               0.006539      0.006539      0.006539
Objective f(X)     55.6677         50.9713       51.1023       51.1013
G1(X)              0               0             0             0
G2(X)              6.4088          4.0088        3.548016      3.14*
G3(X)              0               0             0             0
G4(X)              0               0             0             0
G5(X)              2.2404          0.9633        0.700382      0.63*
G6(X)              0.0211          0             0             0
G7(X)              0               0             0             0
G8(X)              0.4280          0.4359        0.0968        0.01*
G9(X)              0.1179          0.0892        0.0771        0.0476*
No. of F.E.        309             471           1290          N/A
* The values of the performance function correspond to a safety index of 4.5 for
modes 2, 5, 8 and 9 for the SORA system RBDO; therefore, they should not be
compared to the results of the single-loop RBDO.

First, the system approach allows
the designer to control directly the system reliability, whereas the component reliability
approach only allows the designer to control the reliabilities of the failure modes. For
example, if the designer did not know that modes 2, 5, 8 and 9 were insignificant
and distributed the minimum allowable reliability equally among the modes, then
he/she would obtain a design with lower power output than the maximum achievable
one and unnecessarily high reliability. Second, the system reliability approach helps
the designer identify the significant failure modes that contribute the most to the over-
all system reliability. This is very important considering that, if we wanted to change
the problem formulation, resources should be targeted towards improving the
reliability of those “critical’’ failure modes.
Figure 4.6 shows the optimization histories, and the last row of Table 4.3 shows
the number of function evaluations for all approaches but SORA. The computational
cost of the single-loop system approach is higher than that of the component approach
due to the active set strategy. In general, the more failure modes in a problem, the
higher the system computational cost is expected to be. The SORA-based RBDO was
considerably slower than its single-loop counterpart. The SORA approach converged
in two cycles involving a total of 22 iterations; the first cycle involved 15 iterations
and the second involved 7.
[Figure 4.6 Optimization histories for the engine design example: objective value
versus iteration number for the deterministic optimization, the component single-loop
method and the system single-loop/JPDF method.]

4 Optimality condition for series system RBDO and validation of the optimum
of the beam example

Consider the general system RBDO problem formulation in Eq. (1). The Lagrangian,
L, of this constrained optimization problem is,

L = f + λ(pallf − psys)                                                   (20)

At the optimum, the constraint on the system safety index is usually active; that
is, the system failure probability assumes its maximum allowable value to minimize
the objective function, f. At the optimum, the gradient of the Lagrangian is zero.
Therefore, the optimality conditions become,

∂L/∂yk = 0  ⇒  ∂f/∂yk = λ · ∂psys/∂yk                                     (21)

where yk is the kth design variable.
From Eq. (21) we obtain the following optimality criterion for the series system
RBDO problem,

(∂f/∂yk)/(∂psys/∂yk) = λ = constant                                       (22)

Equation (21) states that, at the optimum, the iso-cost surface (the locus of all designs
with constant objective function f) and the iso-reliability surface (the locus of all
designs with the same system reliability or, equivalently, the same system failure
probability) have the same slope; that is, they are tangent to each other. This equation
can be used to validate the optimum design obtained by the SORA or single-loop
system RBDO approaches. Note that there is no reason for the failure probabilities
of the modes to be equal at the optimum.
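As a sketch of such a validation (ours, under the simplifying assumptions of mean-value first-order safety indices and a first-order series bound, so only approximate agreement should be expected), the ratio in Eq. (22) can be checked by finite differences at the optimum of Table 4.1; with f = w · t we have ∂f/∂w = t and ∂f/∂t = w:

import numpy as np
from scipy.stats import norm

muv = np.array([40000.0, 1000.0, 500.0, 29e6])    # means of SY, Y, Z, E
sig = np.array([2000.0, 100.0, 100.0, 1.45e6])    # standard deviations
L, D0 = 100.0, 2.2535

def g1(x, w, t):   # yielding at the fixed end
    return x[0] - 600.0 * x[1] / (w * t**2) - 600.0 * x[2] / (w**2 * t)

def g2(x, w, t):   # tip-displacement limit
    return D0 - 4.0 * L**3 / (x[3] * w * t) * np.sqrt(
        (x[1] / t**2)**2 + (x[2] / w**2)**2)

def beta_mv(g, w, t, h=1e-4):
    # mean-value first-order safety index: g(mu) / ||sigma * grad g(mu)||
    g0 = g(muv, w, t)
    grad = np.array([(g(muv + h * sig * e, w, t) - g0) / (h * sig[i])
                     for i, e in enumerate(np.eye(4))])
    return g0 / np.linalg.norm(sig * grad)

def psys(w, t):    # first-order series bound, joint term neglected
    return norm.cdf(-beta_mv(g1, w, t)) + norm.cdf(-beta_mv(g2, w, t))

w0, t0, d = 2.6093, 3.6126, 1e-3
dpsys_dw = (psys(w0 + d, t0) - psys(w0 - d, t0)) / (2 * d)
dpsys_dt = (psys(w0, t0 + d) - psys(w0, t0 - d)) / (2 * d)
ratio_w, ratio_t = t0 / dpsys_dw, w0 / dpsys_dt   # should be roughly equal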
[Figure 4.7 Optimality condition for beam design: the iso-reliability curve (system
failure probability = 0.0027) and the iso-cost curve (area = 9.43 in²) in the (w, t)
plane, with the system reliability optimum and the component RBDO optimum
marked; any deviation from the system reliability optimum results in a heavier or
less reliable design.]

We now check whether the optimum beam design of the first example (Table 4.1)
satisfies the above optimality condition. Figure 4.7 compares the optimum design
obtained by the proposed system RBDO approach and the design obtained by
component RBDO. It is observed that the system reliability optimum has the smallest
area among all the designs with the same system reliability (Pf = 0.0027, system
reliability = 1 − 0.0027, system safety index = 2.782). The iso-reliability curve
(representing designs with system reliability 1 − 0.0027) and the iso-cost curve
(designs with area = 9.43 in²) are tangent at the point representing the optimum
reliability-based design. Any deviation from this optimum will result in a design with
a larger area or an infeasible design (violation of the minimum system reliability
constraint). The design obtained by component RBDO lies on the same iso-reliability
curve as the RBDO optimum but has a larger area. This shows that the proposed
approach saves area by apportioning reliability in an optimal way among the failure
modes of the system.

5 Conclusions
Two approaches for system RBDO were presented that allow for an optimal apportion-
ment of the reliability of a series system among its failure modes (constraints). These
approaches use SORA and single-loop RBDO algorithms to determine the optimum
design. The target values of the failure probabilities of the failure modes are consid-
ered as design variables. An active set strategy is used for algorithmic stability. The
active constraint set is updated in each iteration during the optimization process. The
first-order and second-order Ditlevsen upper bounds are used to approximate conser-
vatively the probability of failure of a series system. The proposed algorithms ensure
overall system reliability rather than an arbitrary reliability for each failure mode, as
is the case with component RBDO methods. The user specifies an acceptable system
reliability level instead of deciding arbitrarily on a minimum reliability level for each
failure mode, which is usually not optimal.
The efficiency and robustness of the two approaches were demonstrated on two
design examples involving a beam and an internal combustion engine. The results
were compared with deterministic optimization and the conventional RBDO formu-
lation. It was shown that both system RBDO approaches identified identical optimal
designs that have the specified system reliability and provide an optimal reliability for
each failure mode. In doing so, the algorithms for the two system RBDO approaches
identified the critical failure modes that contributed the most to the system reliability.
The single-loop system RBDO approach was found considerably more efficient than its
SORA counterpart because the former approach performs only one deterministic opti-
mization, while the latter approach performs a sequence of optimizations. Moreover,
the single-loop approach avoids the extrapolation of the MPP’s of the SORA approach.
The results from the examples also showed that the number of function evaluations is
higher for the system approaches compared with the component approach due to the
active set strategy. In general, the more failure modes are in a problem, the higher the
system computational cost is expected to be.

Acknowledgments
This study was performed with funding for the last two authors from the Gen-
eral Motors Research and Development Center and the Automotive Research Center
(ARC), a U.S. Army Center of Excellence in Modeling and Simulation of Ground
Vehicles at the University of Michigan. The support is gratefully acknowledged. Such
support does not however, constitute an endorsement by the funding agencies of the
opinions expressed in the chapter.

Nomenclature
Latin symbols
d: deterministic design variables
f : objective function
Gi (d, X, P): ith deterministic constraint (performance function of the ith failure mode
of a system)
P: random parameters
PMPP : values of the random parameters at the Most Probable Point
psys : actual system failure probability
pallf: maximum allowable failure probability of a system
pfi : actual failure probability of ith mode of a system
ptfi : target failure probability of ith mode of a system
T: transformation from the space of reduced variables (U or Z) to the space of the
original variables (X)
UMPP : values of the vector of the random variables and the random parameters at the
Most Probable Point in the reduced space (Z- or U-space)
X: random variables
XMPP : values of the random variables at the Most Probable Point in the space of the
original variables (X-space)
y: set of all design variables (both deterministic and random)
Greek symbols
α: normalized gradient of the performance function of a failure mode
βsysall: minimum allowable value of the safety index of a system
βi : actual value of the safety index of the ith mode of a system
βit : target value of the safety index of the ith mode of a system
µX : mean values of random variables
µP : mean values of random parameters
∇: gradient operator.

Abbreviations
CF: Constraint Flag
DLP: Double Loop
FORM: First Order Reliability Method
IC: Internal Combustion
KKT: Karush-Kuhn-Tucker
MPP: Most Probable Point
PMA: Performance Measure Approach
RBDO: Reliability-Based Design Optimization
SLSV: Single-Loop, Single-Vector
SORA: Sequential Optimization and Reliability Assessment.

References

Agarwal, H., Renaud, J.E., Lee, J.C. & Watson, L.T. 2004. A Unilevel Method for Relia-
bility Based Design Optimization. 45th AIAA/ASME/ASCE/AHS/ASC Structures, Structural
Dynamics and Materials Conference. Palm Springs, CA.
Ba-abbad, M., Nikolaidis, E. & Kapania, R. 2006. A New Approach for System Reliability-
Based Design Optimization. AIAA Journal, Vol. 44, No. 5, pp. 1087–1096.
Chen, X., Hasselman, T.K. & Neill, D.J. 1997. Reliability Based Structural Design Optimization
for Practical Applications. Proceedings of the 38th AIAA/ASME/ASCE/AHS/ASC Structures,
Structural Dynamics, and Materials Conference.
Ditlevsen, O. 1979. Narrow Reliability Bounds for Structural Systems. Journal of Structural
Mechanics 7(4):453–472.
Du, X. & Chen, W. 2004. Sequential Optimization and Reliability Assessment Method for
Efficient Probabilistic Design. ASME Journal of Mechanical Design 126(2):225–233.
Frangopol, D.M. & Maute, C. 2004. Reliability-Based Optimization of Civil and Aerospace
Structural Systems. Engineering Design Reliability Handbook. Chapter 24, CRC Press, Boca
Raton, Florida.
Haldar, A. & Mahadevan, S. 2000. Probability, Reliability and Statistical Methods in
Engineering Design. John Wiley & Sons, Inc.
Kuschel, N. & Rackwitz, R. 2000. Optimal Design under Time-Variant Reliability Constraints.
Structural Safety 22(2):113–128.
Liang, J., Mourelatos, Z.P. & Tu, J. 2004. A Single-Loop Method for Reliability-Based
Design Optimization. International Journal of Product Development. Interscience Enterprises
Limited, Great Britain (in press).
Liang, J., Mourelatos, Z.P. & Nikolaidis, E. 2007. A Single-Loop Approach for System
Reliability-Based Design Optimization. ASME Journal of Mechanical Design (accepted).
Madsen, H.O., Krenk, S. & Lind, N.C. 1986. Methods of Structural Safety. Prentice-Hall, Inc.
McAllister, C.D. & Simpson, T.W. 2003. Multidisciplinary Robust Design Optimization of an
Internal Combustion Engine. ASME Journal of Mechanical Design 125(1):124–130.
Melchers, R.E. 2001. Structural Reliability Analysis and Prediction. John Wiley & Sons.
Moré, J.J. & Sorensen, D.C. 1983. Computing a Trust Region Step. SIAM Journal on Scientific
and Statistical Computing, Vol. 3, pp. 553–572.
Moses, F. 1995. Probabilistic Analysis of Structural Systems. Probabilistic Structural Mechanics
Handbook: Theory and Industrial Applications, edited by C. Raj Sundararajan, Chapman &
Hall, 166–187.
Papalambros, P.Y. & Wilde, D.J. 2000. Principles of Optimal Design; Modeling and Computa-
tion. 2nd Edition, Cambridge University Press.
Royset, J.O., Der Kiureghian, A. & Polak, E. 2001. Reliability-based Optimal Design of Series
Structural Systems. Journal of Engineering Mechanics 607–614.
Sørensen, D.C. 1994. Minimization of a Large Scale Quadratic Function Subject to an Ellip-
soidal Constraint. Department of Computational and Applied Mathematics, Rice University.
Technical Report TR94-27.
Steihaug, T. 1983. The Conjugate Gradient Method and Trust Regions in Large Scale
Optimization. SIAM Journal on Numerical Analysis, Vol. 20, pp. 626–637.
Streicher, H. & Rackwitz, R. 2004. Time-Variant Reliability-Oriented Structural Optimization
and a Renewal Model for Life-cycle Costing. Probabilistic Engineering Mechanics 19(1–2):
171–183.
Thanedar, P.B. & Kodiyalam, S. 1992. Structural Optimization Using Probabilistic Constraints.
Structural Optimization 4:236–240.
Tu, J., Choi, K.K. & Park, Y.H. 1999. A New Study on Reliability-Based Design Optimization.
ASME Journal of Mechanical Design 121:557–564.
Wu, Y.-T., Shin, Y., Sues, R. & Cesare, M. 2001. Safety – Factor Based Approach for
Probabilistic – Based Design Optimization. 42nd AIAA/ASME/ASCE/AHS/ASC Structures,
Structural Dynamics and Materials Conference. Seattle, WA.
Youn, B.D., Choi, K.K. & Park, Y.H. 2003. Hybrid Analysis Method for Reliability-Based
Design Optimization. ASME Journal of Mechanical Design 125:221–232.
Chapter 5

Nondeterministic formulations of
analytical target cascading for
decomposition-based design
optimization under uncertainty
Michael Kokkolaras & Panos Y. Papalambros
University of Michigan, Ann Arbor, MI, USA

ABSTRACT: Analytical target cascading (ATC) is a methodology for optimal design of hier-
archically decomposed systems. In this chapter, we present non-deterministic formulations of
ATC to account for uncertainties in decomposition-based optimal system design. Depending
on the amount of available information, we adopt either a probabilistic or a robust optimiza-
tion approach to formulate the multilevel design optimization problem, and use appropriate
techniques to estimate uncertainty propagation. We demonstrate the application of all pre-
sented ATC formulations using an engine optimal design example, and discuss the obtained
results.

1 Introduction
The dictionary definition of a system is “an organized integrated whole made up of
diverse but interrelated and interdependent parts,’’ and complex is one of its synonyms
(Merriam-Webster 2007). It is not surprising then that developing large engineering
systems is accomplished by assigning the task of designing the diverse but interrelated
parts to different teams, and that the challenge is to organize these activities so that
the parts can be integrated successfully to form the whole.
Accordingly, large engineering systems are typically decomposed into subsystems,
subsystems are decomposed into components, components are decomposed into parts,
and so on. This results in a multilevel hierarchy, an example of which is shown in
Figure 5.1.

[Figure 5.1 Example of a hierarchically decomposed multilevel system: the system
(j = 1) at level i = 0 is decomposed into subsystems j = 1, 2, . . . , n at level i = 1,
which are in turn decomposed into components j = 1, 2, . . . , m at level i = 2.]

Different teams (or individuals) are then assigned the optimal design
problem of each element in this hierarchy. If these design teams are not given exact
specifications, they focus on their own objectives without taking into consideration
interactions with other elements. This situation is compounded when design variables
are shared among elements; if the obtained values of shared design variables are not
equal in all elements, the system design is inconsistent and cannot be realized. Hierar-
chical decomposition facilitates the use of decentralized optimization approaches that
aid systems engineers to identify interactions among elements at lower levels and to
transfer this information to higher levels, and has become standard design practice,
as evidenced by the organizational structure of engineering companies (Haimes et al.
1990).
Analytical target cascading (ATC) is a methodology for solving such hierarchi-
cal multilevel optimal design problems. Design targets are cascaded to lower levels
using the model-based hierarchy. An optimization problem is posed and solved for
each design subproblem to minimize deviations from propagated targets. Solving the
subproblems according to an appropriate coordination strategy yields overall system
compatibility.
The deterministic formulation of the ATC methodology assumes that complete infor-
mation of the system design problem is available, and that design decisions can be
implemented precisely. These assumptions imply that optimization results are as good
(and therefore useful) as the design and simulation/analysis models used to obtain
them, and that they are meaningful only if they can be realized exactly.
In reality, these assumptions do not hold. We are rarely in a position to repre-
sent a physical system without using approximations, have complete knowledge on
all of its parameters, or control the design variables with high accuracy. Therefore,
uncertainty is inherently present in simulation-based design of complex engineering
systems. The analysis models used for the simulation depend on assumptions and
include many approximations and empirical constants. Also, advanced yet relatively
immature technologies are often associated with uncertainty. The designer is not sure
about the validity of the decisions he/she has made, and would like to be able to
perform optimization studies under uncertainty. It is therefore imperative to rep-
resent uncertainties and take them into account during the early design assessment
process.
Uncertainty identification, representation, and quantification are the cornerstones
of design optimization under uncertainty. Given the design model and the necessary
analysis/simulation models, the designer must first identify all possible sources of uncer-
tainty. Then, she/he must choose an appropriate means to represent and quantify them.
A popular approach is to represent them as random variables, and quantify them by
means of some probability distribution utilizing expertise and data. This approach
is useful when there are sufficient data to infer probability distributions for the con-
sidered random variables. It should be adopted since a plethora of techniques exists
for solving probabilistically formulated optimal design problems. However, in many
situations the designer does not have the necessary information available. In this case,
he/she must assume that the uncertain quantities can take any value within intervals
that are used to quantify uncertainty.
In this chapter, we review the ATC methodology, and we extend its deterministic
formulation using both probabilistic and interval analysis approaches. We address
the issue of representing uncertain quantities as optimization variables, formulate the
Nondeterministic formulations of analytical target cascading 117

associated nondeterministic design problems appropriately, and present techniques for


estimating uncertainty propagation through the multilevel hierarchy of decomposed
systems. The proposed methodologies are applied to a simple engine design example
to illustrate the introduced concepts.

2 Analytical target cascading


Analytical target cascading (ATC) is a mathematical methodology for translating (“cas-
cading’’) overall system design targets to element specifications based on a hierarchical
multilevel decomposition (Michelena et al. 1999; Papalambros 2001; Kim 2001; Kim
et al. 2003). The objective is to assess interactions and identify possible tradeoffs among
elements early in the design development process, and to determine specifications that
yield consistent system design with minimized deviation from system design targets.
For an engineering corporation, ATC provides a means to dictate technical objectives
to different design teams, knowing a-priori that these goals can be achieved without
conflicting with those of other teams. Consistent system design can then be accom-
plished with minimum communication overhead, i.e., maximum efficiency, avoiding
costly iterations late in the process.
ATC operates by formulating and solving a minimum deviation optimization prob-
lem for each element in the hierarchy. Assuming that responses of higher level elements
are functions of responses of lower-level elements, it aims at minimizing the gap
between what upper-level elements “want’’ and what lower-level elements “can.’’ Sim-
ilarly, if design variables are shared among some elements at the same level, their
consistency is coordinated by their common parent element at the level above.
The ATC process is proven to be convergent when using a specific class of coordina-
tion strategies (Michelena et al. 2003), and has been successfully applied to a variety
of optimal design problems, e.g., (Kim et al. 2002; Kokkolaras et al. 2002; Kim et al.
2003). We refer the reader to the above references for a detailed description of ATC.
Here, we present the concept and the general mathematical formulation.
The key assumption of the ATC methodology is that there is a functional dependency
in the hierarchical, multilevel system decomposition. Assuming that element j at level
i has nij children, this functional dependency is expressed as

rij = fij (r(i+1)1 , . . . , r(i+1)nij , xij , yij ) (1)

where rij are element responses, r(i+1)1 , . . . , r(i+1)nij denote children responses, xij rep-
resent local design variables, and yij denote local shared design variables (i.e.,
design variables that this element shares with other elements at the same level). The
mathematical formulation of problem pij for element j at level i is

min  ||rij(r(i+1)1, . . . , r(i+1)nij, xij, yij) − riju||₂² + ||yij − yiju||₂²
     + Σ_{k=1}^{nij} ||r(i+1)k − r(i+1)kl||₂² + Σ_{k=1}^{nij} ||y(i+1)k − y(i+1)kl||₂²    (2)
with respect to  r(i+1)1, . . . , r(i+1)nij, xij, yij, y(i+1)1, . . . , y(i+1)nij
subject to  gij(rij, xij, yij) ≤ 0

where coordinating variables for the shared design variables of the children are denoted
by y(i+1)1 , . . . , y(i+1)nij , and superscripts u (l) are used to denote response and shared
[Figure 5.2 ATC information flow at element j of level i: response and shared variable
values riju, yiju are cascaded down from the parent, and r(i+1)kl, y(i+1)kl are passed
up from the children, as inputs to the element optimization problem pij, where
rij = fij(r(i+1)k1, . . . , r(i+1)kcij, xij, yij) is provided by the analysis/simulation model;
the outputs rijl, yijl are passed up to the parent, and r(i+1)ku, y(i+1)ku are cascaded
down to the children.]

variable values that have been obtained at the parent (children) problem(s), and have
been cascaded down (passed up) as design targets (consistency parameters). Shared
design variables are restricted to exist only among elements at the same level having the
same parent. The top-level problem of the hierarchy is a special case: at this level (i = 0),
there is only one element (j = 1 – the system), and responses cascaded from above
are the given system design targets T = Ru01 (there is no parent element); also, since
this is the sole element of the level, there exist no shared variables. The bottom-level
problems are also a special case since they have no children. Finally, note that although
communication among levels, i.e., updating parameter values associated with the ATC
process, is bi-directional, functional dependency is strictly hierarchical. Figure 5.2 illus-
trates the information flow of the ATC process at element j in level i. Assuming that
all the parameters have been updated using the solutions obtained at the parent- and
children-problems, Problem (2) is solved to update the parameters of the parent- and
children-problems. This process is repeated until the variables in all optimization
problems do not change significantly after consecutive iterations.
The sequence in which the subproblems are solved is called a coordination strategy.
As in any distributed multidisciplinary optimization methodology, the choice of coor-
dination strategy among the many available alternatives is critical. In contrast to other
methodologies for multilevel system design, global convergence properties have been
proven for a specific class of coordination strategies under standard smoothness and
convexity assumptions (Michelena et al. 2003). Nevertheless, case studies have also
demonstrated that the ATC process may terminate successfully in practice when other
coordination schemes are used (Michelena et al. 2001; Kim et al. 2002; Louca et al.
2002; Kokkolaras et al. 2004).
It is emphasized that ATC should not be viewed either solely or merely as a design
optimization methodology. ATC addresses the early part of the product development
process (cf. Figure 5.3). Its purpose is to account for the interrelations of the system
parts, identify possible tradeoffs, and determine optimal and consistent design speci-
fications to match design targets as closely as possible (i.e., it can also be used to check
whether the given design targets can be achieved by the available means). Once this
is accomplished, the design embodiment for each part can be carried out concurrently
or outsourced.

[Figure 5.3 ATC in the product development process: enterprise target setting feeds
design targets into the analytical target cascading process (system design and part
specifications, each with a feasibility check), which passes part specifications on to
the individual part design embodiments (parts 1, 2, . . . , n) that produce the final
design.]

[Figure 5.4 Hierarchical bi-level system: the engine simulation computes brake-specific
fuel consumption from the power loss due to friction, which is supplied by the
piston-ring/cylinder-liner subassembly simulation; the subassembly also computes oil
consumption, blow-by and liner wear rate from the ring and liner surface roughness
and the liner material properties (Young’s modulus and hardness).]

3 Application to engine design


In this section, we apply the ATC methodology to a simple yet illustrative simulation-
based optimal design example to demonstrate the introduced concepts. Specifically,
we consider a V6 gasoline engine as the system, which is decomposed into six subsys-
tems, each of which represents the piston-ring/cylinder-liner subassembly of a single
cylinder. The system simulation predicts engine performance in terms of brake-specific
fuel consumption. Although the engine has six cylinders, they are all designed to be
identical. For this reason, we can actually consider only one subsystem.
The associated bi-level hierarchy, shown in Figure 5.4, includes the engine as a sys-
tem at the top level and the piston-ring/cylinder-liner subassembly as a subsystem at
the bottom level. The ring/liner subassembly simulation takes as inputs the surface
roughness of the ring and of the liner and the liner’s Young’s modulus and hardness,
and computes the power loss due to friction, oil consumption, blow-by, and liner wear
rate. The engine simulation then takes as input the power loss and computes the
brake-specific fuel consumption of the engine. Commercial software packages were
used to perform the simulations. A detailed description of the problem can be found
in (Chan et al. 2004).
Due to the simplicity of the given problem structure, we use a simplified version of
the notation introduced earlier. Since there are only two levels with only one element
in each, we skip element indices and denote the upper-level element with subscript
0 and the lower-level element with subscript 1. We use second indices to denote the
components of the design variable vector of the lower-level element optimization prob-
lem. The design problem is to find optimal values for the piston-ring and cylinder-liner
surface roughness design optimization variables x11 and x12 , respectively, and opti-
mal values for the design optimization variables representing the material properties
(Young’s modulus x13 and hardness x14 ) of the liner that yield minimized brake-specific
fuel consumption, i.e., system response R0 . The optimal design problem includes con-
straints on liner wear rate, oil consumption, and blow-by. The power loss due to
friction, i.e., subsystem response R1 , links the two levels. The top- and bottom-level
ATC problems are formulated as

min_{R1}  (R0(R1) − T)² + (R1 − R1l)²                                     (3)
and

min_{x11, x12, x13, x14}  (R1(x11, x12, x13, x14) − R1u)²
subject to  liner wear rate = g11(x11, x12, x13, x14) ≤ 2.4 × 10⁻¹² m³/s
            blow-by = g12(x11, x12, x13, x14) ≤ 4.25 × 10⁻⁵ kg/s
            oil consumption = g13(x11, x12, x13, x14) ≤ 15.3 × 10⁻³ kg/hr    (4)
            1 µm ≤ x11, x12 ≤ 10 µm
            80 GPa ≤ x13 ≤ 340 GPa
            150 BHV ≤ x14 ≤ 240 BHV

respectively. The fuel consumption target T was set to zero to achieve the best fuel
economy possible.
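To make the coordination concrete, the alternating solution of problems (3) and (4) can be sketched as follows. This is our illustration only: the two simulations are replaced by hypothetical stand-in surrogate functions (the chapter used commercial simulation codes), and the design constraints g11-g13 of problem (4) are omitted for brevity, so only the structure of the loop should be read from it:

import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in surrogates: BSFC vs. power loss, and ring/liner
# power loss vs. the design variables x = [x11, x12, x13, x14]
bsfc = lambda R1: 0.4 + 0.002 * R1
ploss = lambda x: 0.1 * x[0] + 0.05 * x[1] + 1e-3 * x[2] / 80.0 + 1e-3 * x[3]

T = 0.0                                    # fuel-consumption target
bounds = [(1, 10), (1, 10), (80, 340), (150, 240)]
R1_low = ploss([1.0, 3.5, 80.0, 175.0])    # initial "can" from the bottom level

for _ in range(10):                        # ATC coordination loop
    # Top level, problem (3): choose R1 close to what the bottom level "can"
    top = minimize(lambda R: (bsfc(R[0]) - T)**2 + (R[0] - R1_low)**2,
                   x0=[R1_low])
    R1_up = top.x[0]
    # Bottom level, problem (4): match the cascaded target R1_up
    bot = minimize(lambda x: (ploss(x) - R1_up)**2,
                   x0=[5.0, 5.0, 200.0, 200.0], bounds=bounds)
    if abs(ploss(bot.x) - R1_low) < 1e-6:  # responses no longer change
        break
    R1_low = ploss(bot.x)

The loop mirrors the bi-directional communication described in section 2: the parent cascades a target R1u down, the child passes its achievable response R1l up, and iteration continues until the deviations stop changing.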

3.1 Deterministic design optimization results
It is desired to minimize power loss due to friction in order to optimize engine
operation and thus maximize fuel economy. Therefore, it was anticipated that the
bottom-level optimization problem would yield a design with surfaces as smooth as possible (low surface roughness) without violating the bounds or the nonlinear design constraints.
The ATC process of solving Problems (4) and (3) iteratively converged after two
iterations. The obtained deterministic optimal ring/liner subassembly design is shown
in Table 5.1. The ring surface roughness and the liner’s Young’s modulus optimal
values are at their lower bounds; the liner surface roughness and hardness have interior
optimal values. Figure 5.5 shows the two-dimensional projection of the design space

Table 5.1 Deterministic optimal ring/liner subassembly design.

Variable   Description                      Value
x11        Ring surface roughness, [µm]     1.0
x12        Liner surface roughness, [µm]    3.5
x13        Liner Young's modulus, [GPa]     80
x14        Liner hardness, [BHV]            175

Figure 5.5 Two-dimensional projection of the design space: ring surface roughness vs. liner surface roughness [µm], with brake-specific fuel consumption contours, the liner wear rate, blow-by, and oil consumption constraint boundaries, and the optimal design marked.

spanned by the two surface roughness variables when the liner Young’s modulus and
the liner hardness are kept fixed at 80 GPa and 175 BHV, respectively. The liner surface
roughness is not at its lower bound because the oil consumption constraint is active:
increased liner surface roughness is required to maintain an optimal oil film thickness
in order to avoid excessive oil consumption.

4 The probabilistic ATC formulation


In this section, the ATC formulation is extended to account for uncertainties. Adopt-
ing a probabilistic framework, we model uncertain quantities as random variables

(denoted by upper case Latin symbols). In general, we use the terms random design opti-
mization variable and random design optimization parameter to differentiate between
random variables that are design optimization variables and random variables that are
design optimization parameters in the optimization problems. Here, to avoid confu-
sion, and without loss of generality, we assume that all design optimization parameters
are deterministic, and we omit them in the mathematical formulations.
We use the means of random design variables as optimization variables and assume that their standard deviations are known or have been estimated with sufficient accuracy. The objective and the constraints must be reformulated. We replace the objective
function with its expectation, and we now require that the probability of violating a
constraint is less than some pre-specified probability of failure. The probabilistic
formulation of Problem (2) is (Kokkolaras et al. 2006)

$$\begin{aligned}
\min \quad & \left\|\mathrm{E}[R_{ij}] - \mu^{u}_{R_{ij}}\right\|_2^2 + \left\|\mu_{Y_{ij}} - \mu^{u}_{Y_{ij}}\right\|_2^2
+ \sum_{k=1}^{n_{ij}} \left\|\mu_{R_{(i+1)k}} - \mu^{l}_{R_{(i+1)k}}\right\|_2^2
+ \sum_{k=1}^{n_{ij}} \left\|\mu_{Y_{(i+1)k}} - \mu^{l}_{Y_{(i+1)k}}\right\|_2^2 \\
\text{with respect to} \quad & \mu_{R_{(i+1)1}}, \ldots, \mu_{R_{(i+1)n_{ij}}}, \mu_{X_{ij}}, \mu_{Y_{ij}}, \mu_{Y_{(i+1)1}}, \ldots, \mu_{Y_{(i+1)n_{ij}}} \\
\text{subject to} \quad & P[g_{ijk}(R_{ij}, X_{ij}, Y_{ij}) > 0] \le P_{f_k}, \quad k = 1, 2, \ldots, M_{ij} \\
\text{with} \quad & R_{ij} = f_{ij}(R_{(i+1)1}, \ldots, R_{(i+1)n_{ij}}, X_{ij}, Y_{ij})
\end{aligned} \tag{5}$$
where $M_{ij}$ is the number of design constraints, $P[\,\cdot\,]$ denotes the probability measure, and $P_{f_k}$ is a pre-specified probability of failure for design constraint $k$. Liu et al. considered more than one moment to represent random variables in the ATC optimization problems (Liu et al. 2006).

4.1 Uncertainty propagation


In a multilevel hierarchy, responses (outputs) of lower-level elements are inputs to
higher-level elements. This is an issue of utmost importance in design optimization
of hierarchically decomposed systems under uncertainty, since the solution of prob-
abilistic optimization problems requires moment estimation of high-level random
optimization variables that are functions of low-level random optimization variables.
In other words, we need appropriate techniques for uncertainty propagation.
Consider element $j$ at level $i$. By solving Problem (5), we obtain optimal values $\mu^*_{R_{(i+1)1}}, \ldots, \mu^*_{R_{(i+1)n_{ij}}}$, $\mu^*_{X_{ij}}$, and $\mu^*_{Y_{ij}}$. Using the functional dependency relation $R_{ij} = f_{ij}(R_{(i+1)1}, \ldots, R_{(i+1)n_{ij}}, X_{ij}, Y_{ij})$, we must now estimate the moments (typically the first two, mean and standard deviation) of the responses $R_{ij}$, since the latter constitute random optimization variables of the parent probabilistic optimal design problem.
This needs to be done for all problems at all levels of the hierarchy. An efficient and
accurate technique is therefore required for propagating uncertainties through the mul-
tilevel hierarchy. We assume that all element responses in the multilevel hierarchy are
uncorrelated.
Many probabilistic design methods and software packages use a first-order Taylor expansion about the current mean design to estimate the mean and standard deviation of propagated random responses. We have found that while the mean values can be estimated relatively accurately, standard deviation estimates are unacceptably inaccurate in many cases (Youn et al. 2004; Kokkolaras et al. 2004). Thus, we propose

an uncertainty propagation technique based on the highly efficient and accurate Advanced Mean Value (AMV) method (Wu et al. 1990).
The AMV method was originally proposed as a computationally efficient method for generating the cumulative distribution function (CDF) of a response $R = f(X)$ that is a random variable (Wu et al. 1990). It uses a simple correction to compensate for the errors introduced by the utilized Taylor series approximation.
Based on the CDF definition, we have the following first-order relation between the CDF value of $R$ at a particular value $f_0$ and the reliability index $\beta$:

$$P[f \le f_0] = P[g \le 0] = \Phi(-\beta) \tag{6}$$

where $g(X) = f(X) - f_0$ and $\Phi$ is the standard normal cumulative distribution function.
According to the AMV method, if the random variables $X$ are uncorrelated and normally distributed with means $\mu_X$ and standard deviations $\sigma_X$, the most probable point (MPP) of failure (or design point) in the standard normal space can be computed by

$$U^* = -\beta\,\frac{\Sigma_X \nabla g_{\text{lin}}(\mu_X)}{|\Sigma_X \nabla g_{\text{lin}}(\mu_X)|} = -\beta\,\frac{\Sigma_X \nabla f(\mu_X)}{|\Sigma_X \nabla f(\mu_X)|} \tag{7}$$

where $g_{\text{lin}}(X)$ is a linear approximation of $g(X)$ at $\mu_X$ and $\Sigma_X$ is a diagonal matrix whose diagonal is the vector $\sigma_X$. In the original space the MPP coordinates are

$$X^* = \Sigma_X U^* + \mu_X \tag{8}$$

Note that for random variables that are not normally distributed, a nonlinear transfor-
mation is needed according to the Rackwitz-Fiessler method (Haldar and Mahadevan
2000).
The AMV method corrects the CDF value of $R$ in Equation (6) with

$$P[f \le f(X^*)] = \Phi(-\beta) \tag{9}$$

by replacing the $f_0$ value corresponding to the reliability index $\beta$ with $f(X^*)$. The process of Equations (6) through (9) is repeated for a few (different) $\beta$ values, so that a region of the CDF of $R$ is constructed. The derivative of that CDF region provides the corresponding probability density function (PDF) values. The obtained CDF and PDF values are finally used to compute an equivalent mean and standard deviation at the current design point.
This AMV-based technique is used to estimate the mean and standard deviation of
each response for all the elements of the multilevel hierarchy according to the discussion
in Section 4.1. The technique is computationally efficient since it requires only a single
linearization of the performance function at the mean value and an additional function
evaluation at each required CDF level. Reference (Wu 1994) provides more details
regarding the accuracy and efficiency of the AMV method on several applications.
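To make the procedure concrete, here is a minimal Python sketch of the AMV-based moment estimation, assuming uncorrelated normal inputs and a gradient obtained by central finite differences. The function `amv_moments` and all of its arguments are illustrative names of ours, not part of the chapter or of any particular software package; a discrete integration of the piecewise-linear CDF stands in for the equivalent-moment computation described above.

```python
import numpy as np
from scipy.stats import norm

def amv_moments(f, mu, sigma, betas=np.linspace(-4.0, 4.0, 33), h=1e-6):
    """AMV-based estimate of the mean and standard deviation of R = f(X)
    for uncorrelated normal X (a sketch of Equations (6)-(9))."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    # Single linearization: gradient of f at the mean design
    grad = np.array([(f(mu + h * e) - f(mu - h * e)) / (2.0 * h)
                     for e in np.eye(mu.size)])
    alpha = sigma * grad / np.linalg.norm(sigma * grad)   # MPP direction, Eq. (7)
    # One extra f-evaluation per CDF level: X* = Sigma_X U* + mu_X, Eq. (8)
    r = np.array([f(mu - b * sigma * alpha) for b in betas])
    F = norm.cdf(-betas)                                  # corrected CDF, Eq. (9)
    order = np.argsort(r)
    r, F = r[order], F[order]
    # Equivalent moments from the reconstructed CDF region
    dF, rm = np.diff(F), 0.5 * (r[:-1] + r[1:])
    mean = np.sum(rm * dF) / np.sum(dF)
    std = np.sqrt(np.sum((rm - mean) ** 2 * dF) / np.sum(dF))
    return mean, std
```

Note that the cost structure matches the description above: one gradient evaluation at the mean plus a single additional function evaluation per CDF level.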

4.1.1 Illustrative examples


The linearization (or MVFOSM-based or method of moments) and AMV-based tech-
niques were used to estimate the first two moments of several nonlinear functions. All

random variables were assumed to be normal. Test functions and input statistics are
presented in Table 5.2 and results are summarized in Table 5.3. One million samples
were used for the Monte Carlo simulations.
By inspecting Table 5.3, it can be seen that while the mean-related errors of the linearization approach are within acceptable limits, the standard deviation errors can be quite large. The AMV-based moment estimation method always performs better and never exhibits unacceptable errors.
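As a quick plausibility check, the Monte Carlo entries for test function 2 can be reproduced in a few lines of Python (the variable names are ours; the sample size matches the one million samples quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal(6.0, 0.8, 1_000_000)   # X1 ~ N(6, 0.8)
X2 = rng.normal(6.0, 0.8, 1_000_000)   # X2 ~ N(6, 0.8)
R = -np.exp(X1 - 7.0) - X2 + 10.0      # test function 2 of Table 5.2
print(R.mean(), R.std())               # approx. 3.49 and 0.93, cf. Table 5.3
```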

4.2 Probabilistic engine design


We now apply the probabilistic ATC methodology to our bi-level engine design problem. Here, the root mean square (RMS) of asperity height is used to represent surface roughness, which is assumed to be normally distributed. Thus, the surface roughness design variables are now normal random design optimization variables. The probabilistic formulations of the top- and bottom-level ATC problems are

$$\begin{aligned}
\min_{\mu_{R_1}} \quad & \left(\mathrm{E}[R_0] - T\right)^2 + \left(\mu_{R_1} - \mathrm{E}[R_1]^{l}\right)^2 \\
\text{with} \quad & R_0 = f_0(R_1)
\end{aligned} \tag{10}$$

Table 5.2 Test functions and input statistics.

#   Function                                              Input statistics
1   $X_1^2 + X_2^2$                                       $X_1 \sim N(10, 2)$, $X_2 \sim N(10, 1)$
2   $-\exp(X_1 - 7) - X_2 + 10$                           $X_{1,2} \sim N(6, 0.8)$
3   $1 - X_1^2 X_2 / 20$                                  $X_{1,2} \sim N(5, 0.3)$
4   $1 - (X_1 + X_2 - 5)^2/30 - (X_1 - X_2 - 12)^2/120$   $X_{1,2} \sim N(5, 0.3)$
5   $1 - 80/(X_1^2 + 8X_2 + 5)$                           $X_{1,2} \sim N(5, 0.3)$

Table 5.3 Estimated moments and errors relative to Monte Carlo simulation (MCS) results.

#            1        2        3         4         5
µ_lin        200.0    3.6321   −5.25     −1.0333   −0.1428
µ_AMV        203.4    3.6029   −5.3495   −1.0380   −0.1454
µ_MCS        205.0    3.4921   −5.3114   −1.0404   −0.1448
ε_lin [%]    −2.44    4.00     −1.15     −0.68     −1.30
ε_AMV [%]    −0.78    3.17     0.71      −0.23     0.41
σ_lin        44.72    1.9386   0.8385    0.1166    0.00627
σ_AMV        45.20    0.9013   0.8423    0.1653    0.00631
σ_MCS        45.10    0.9327   0.8407    0.1653    0.00630
ε_lin [%]    −0.84    107.85   −0.26     29.46     −0.47
ε_AMV [%]    0.22     −3.36    0.19      0         0.15

and

$$\begin{aligned}
\min_{\mu_{X_{11}},\,\mu_{X_{12}},\,x_{13},\,x_{14}} \quad & \left(\mathrm{E}[R_1] - \mu_{R_1}^{u}\right)^2 \\
\text{subject to} \quad & P[\text{liner wear rate} = G_{11}(X_{11}, X_{12}, x_{13}, x_{14}) > 2.4 \times 10^{-12}\ \mathrm{m^3/s}] \le P_f \\
& P[\text{blow-by} = G_{12}(X_{11}, X_{12}, x_{13}, x_{14}) > 4.25 \times 10^{-5}\ \mathrm{kg/s}] \le P_f \\
& P[\text{oil consumption} = G_{13}(X_{11}, X_{12}, x_{13}, x_{14}) > 15.3 \times 10^{-3}\ \mathrm{kg/hr}] \le P_f \\
& P[X_{11} < 1\ \mu\mathrm{m}] \le P_f, \quad P[X_{11} > 10\ \mu\mathrm{m}] \le P_f \\
& P[X_{12} < 1\ \mu\mathrm{m}] \le P_f, \quad P[X_{12} > 10\ \mu\mathrm{m}] \le P_f \\
& 80\ \mathrm{GPa} \le x_{13} \le 340\ \mathrm{GPa} \\
& 150\ \mathrm{BHV} \le x_{14} \le 240\ \mathrm{BHV} \\
\text{with} \quad & R_1 = f_1(X_{11}, X_{12}, x_{13}, x_{14})
\end{aligned} \tag{11}$$

respectively. The standard deviation of the surface roughnesses was assumed to be 1.0 µm and remained constant throughout the ATC process. The assigned probability of failure $P_f$ was 0.13%, which corresponds to the target reliability index $\beta = 3$. The fuel consumption target $T$ was simply set to zero to achieve the best fuel economy possible.
Note that since the random variables are normally distributed, the associated linear
probabilistic bound constraints are reformulated as deterministic. For example,

$$P[X_{11} < 1\ \mu\mathrm{m}] \le P_f \;\Leftrightarrow\; P[X_{11} - 1\ \mu\mathrm{m} < 0] \le P_f \;\Leftrightarrow\;
\Phi\!\left(\frac{0 - (\mu_{X_{11}} - 1\ \mu\mathrm{m})}{\sigma_{X_{11}}}\right) \le \Phi(-\beta) \;\Rightarrow\;
-\frac{\mu_{X_{11}} - 1\ \mu\mathrm{m}}{\sigma_{X_{11}}} \le -\beta$$
$$\Leftrightarrow\; \frac{\mu_{X_{11}} - 1\ \mu\mathrm{m}}{\sigma_{X_{11}}} \ge \beta \;\Leftrightarrow\;
\mu_{X_{11}} - 1\ \mu\mathrm{m} \ge \beta\,\sigma_{X_{11}} \;\Leftrightarrow\;
\mu_{X_{11}} \ge 1\ \mu\mathrm{m} + \beta\,\sigma_{X_{11}} \;\Leftrightarrow\; \mu_{X_{11}} \ge 4\ \mu\mathrm{m}$$

Similarly, the other three probabilistic bound constraints in Problem (11) are reformulated as

$$\mu_{X_{11}} \le 7\ \mu\mathrm{m}; \qquad \mu_{X_{12}} \ge 4\ \mu\mathrm{m}; \qquad \mu_{X_{12}} \le 7\ \mu\mathrm{m}$$

The obtained probabilistic optimal ring/liner subassembly design is shown in


Table 5.4. The ring surface roughness optimal value is at its probabilistic lower

Table 5.4 Probabilistic optimal ring/liner subassembly design.

Variable   Description                      Value
µX11       Ring surface roughness, [µm]     4.00
µX12       Liner surface roughness, [µm]    6.15
x13        Liner Young's modulus, [GPa]     80
x14        Liner hardness, [BHV]            240

Table 5.5 Reliability analysis results.

Constraint        Active   Pf        MCS Pf
Liner wear rate   No       <0.13%    0%
Blow-by           No       <0.13%    0%
Oil consumption   Yes      0.13%     0.16%

Table 5.6 Estimated moments and errors relative to Monte Carlo simulation (MCS).

Response     Power loss [kW]   Fuel consumption [kg/kWhr]
µ_lin        0.3950            0.5341
µ_AMV        0.3922            0.5431
µ_MCS        0.3932            0.5432
ε_lin [%]    0.45              −0.01
ε_AMV [%]    −0.25             −0.01
σ_lin        0.0481            0.00757
σ_AMV        0.0309            0.00760
σ_MCS        0.0311            0.00759
ε_lin [%]    54.6              −0.25
ε_AMV [%]    −0.64             0.13

bound, while the liner's Young's modulus and hardness optimal values are at their deterministic lower and upper bounds, respectively. The liner surface roughness variable has an interior optimal value because the oil consumption constraint is probabilistically active. Constraint activity in probabilistic design optimization indicates that the constraint's MPP lies on the target reliability circle. The probabilistic optimal values of the surface roughness optimization variables have changed relative to their deterministic counterparts to accommodate the uncertainty, i.e., the optimum shown in the two-dimensional projection of the design space (Figure 5.5) moved to the inside (we cannot show the location of the probabilistic optimum in the same figure because it lies in a different two-dimensional projection of the design space due to the change in the liner hardness optimal value).
A Monte Carlo simulation was performed to assess the accuracy of the reliability
analyses of the probabilistic constraints. One million samples were generated using
the mean and standard deviation values of the design variables, and the constraints
were evaluated using these samples to calculate the probability of failure. Results
are summarized in Table 5.5. For the active probabilistic constraint, the obtained design is 0.03% less reliable than targeted. This error is due to the first-order reliability approximation used in the probabilistic optimization problem.
Propagation of uncertainty was modeled using the AMV-based technique described
in Section 4.1. Table 5.6 summarizes the estimated moments for the two responses of
the bi-level hierarchy. Results obtained using the first-order approximation approach
(linearization) are included to illustrate the large error that may be introduced.
Specifically, it can be seen that the standard deviation estimate of the power loss
(necessary for solving the top-level probabilistic optimization problem) is 0.0481 kW

Figure 5.6 Power loss uncertainty: (a) PDF obtained using the AMV-based technique and (b) frequency diagram obtained using Monte Carlo simulation.

when using a first-order approximation. This value is 54.6% larger than the Monte
Carlo simulation estimate of 0.0311 kW. Such large errors will be propagated during
the ATC process and yield useless design results. Using the AMV-based approach, we
obtained an estimate of 0.0309 kW, which is only 0.64% smaller than the Monte Carlo
estimate.
Using the AMV-based technique is advantageous because CDFs and PDFs can be
generated with high efficiency. In our example, power loss (the subsystem response)
is a highly nonlinear function of the subsystem’s inputs. In fact, its PDF is multi-
modal, as shown in Figure 5.6. This figure depicts a) the PDF obtained using the
AMV-based technique and b) the frequency diagram generated from a histogram that
was obtained using Monte Carlo simulation with one million samples. The agree-
ment is quite satisfactory and illustrates the usefulness of the AMV-based approach to
propagate uncertainty for highly nonlinear functions.

5 The ATC formulation for interval uncertainty quantification
The probabilistic approach is very useful and should be adopted when the designer has sufficient data to model uncertain quantities as random variables with appropriate probability distributions. When this is not the case, it is imperative to assume that the uncertain quantities can take any value within a range. Note that this is not equivalent to assuming a uniform distribution, as it does not imply that every value within the range is equally probable. We
view the interval analysis approach as a special case of possibility theory (Dubois and
Prade 1988), where information availability is limited to a minimum. Designs obtained
using possibility-based design optimization (PBDO) methods are typically conservative
compared to the ones obtained using probabilistic design optimization, also known
as reliability-based optimization (RBDO), methods. Possibility-based designs sacrifice

additional optimality compared to RBDO designs to account for lack of uncertainty


information and avoid constraint violation.
According to possibility theory, the possibility $\pi(A)$ of event $A$ occurring provides an upper bound on the probability $P(A)$ of that event occurring, i.e., $P(A) \le \pi(A)$. From the design point of view, we can conclude that what is possible may not be probable, and what is impossible is also improbable. If the possibility of violating a constraint is zero, then the probability of violating the same constraint will also be zero. If feasibility of a constraint $g$ is formulated in negative null form ($g \le 0$), the constraint is always satisfied if $\pi(g > 0) = 0$. By introducing the notion of membership functions and $\alpha$-cuts, we can relax this requirement as $\pi(g > 0) \le \alpha$, provided that $0 < \alpha \ll 1$ (Zadeh 1978). It can be shown that if the maximum possibly attainable value of the constraint $g$ at the corresponding $\alpha$-cut is less than or equal to zero, i.e., $g_{\max}^{\alpha} \le 0$, the possibility of violating this constraint is less than $\alpha$ (Mourelatos and Zhou 2005). In general,
membership functions express how the ranges of values that bound the uncertain quantities decrease with an increasing amount of information. The $\alpha$-cuts denote levels of information, starting at the lowest ($\alpha = 0$), where the range is largest, and increasing to the highest ($\alpha = 1$), where the range is the smallest (possibly a crisp value). In this work, we will assume that the lowest level of information is available, i.e., $\alpha$ equal to zero. Therefore, we do not have to consider membership functions and higher $\alpha$-cuts, thus eliminating ad hoc selections, but also maximizing the conservative nature of the obtained designs.
Given an interval uncertainty in a design variable $X$, the process of identifying the maximum attainable value $g_{\max}$ of a constraint $g(X)$ requires the solution of an optimization problem. Given a nominal value $X_N$ for the design variable $X$, we first identify the uncertainty interval $[(1 - \delta_X)X_N,\ (1 + \delta_X)X_N]$, where $\delta_X$ denotes the relative deviation from the nominal value $X_N$. Then, we solve the simple bound-constrained problem

$$\begin{aligned}
\max_{x} \quad & g(x) \\
\text{subject to} \quad & (1 - \delta_X)X_N \le x \le (1 + \delta_X)X_N
\end{aligned} \tag{12}$$

to compute $g_{\max}$.
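The inner-loop problem (12) is a plain bound-constrained maximization. A minimal Python sketch using SciPy is given below; `worst_case` and its arguments are illustrative names of ours, the nominal values are assumed positive (so that the interval endpoints stay ordered), and the multi-start loop is a pragmatic hedge for the global-solution requirement discussed in the conclusions.

```python
import numpy as np
from scipy.optimize import minimize

def worst_case(g, x_nominal, delta, n_starts=8, seed=0):
    """Maximize g(x) over [(1-delta)*xN, (1+delta)*xN], Eq. (12):
    the 'anti-optimization' inner loop, via multi-start local search."""
    xN = np.atleast_1d(np.asarray(x_nominal, float))   # assumes xN > 0
    lo, hi = (1.0 - delta) * xN, (1.0 + delta) * xN
    rng = np.random.default_rng(seed)
    best = -np.inf
    for x0 in rng.uniform(lo, hi, size=(n_starts, xN.size)):
        res = minimize(lambda x: -g(x), x0, bounds=list(zip(lo, hi)))
        best = max(best, -res.fun)
    return best   # g_max: the maximal (worst) constraint value
```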
In a design optimization problem with many constraints where design variables are
subject to interval uncertainty, finding the optimal design involves a nested optimiza-
tion process known as robust optimization. An outer-loop optimization generates a
sequence of iterates of nominal value vectors XN for the uncertain design variables X.
For each iterate XN , an inner-loop optimization problem like the one formulated in
Equation (12) is solved for each constraint. These worst-case optimization problems
(also referred to as “anti-optimization’’ problems (Elishakoff et al. 1994)) may involve
a larger number of optimization variables, but are only bound-constrained.
The primary purpose of solving these problems is to obtain the maximal (worst)
value of each constraint g that may be attained due to the uncertainty in X. These
constraint values are used in the outer-loop optimization, where the worst objective
value is maximized and the worst constraint value must be feasible. Nevertheless, the inner-loop optimal values $x^*$ can be used to attempt to control uncertainty, i.e., to indicate what values to strive for and what values to avoid, if possible.
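Nesting `worst_case` from the sketch above inside an outer design loop gives exactly this structure. The following schematic sketch uses a hypothetical objective `f0` and constraint list `gs` (all in negative null form, $g \le 0$); in practice the nonsmoothness of the inner maxima calls for more careful treatment than the default gradient-based outer solver used here.

```python
from scipy.optimize import minimize

def robust_design(f0, gs, x_start, delta, bounds):
    """Outer loop: optimize the worst-case objective subject to
    worst-case feasibility of every constraint (schematic only)."""
    cons = [{'type': 'ineq',
             'fun': lambda xN, g=g: -worst_case(g, xN, delta)}
            for g in gs]   # feasible iff the worst constraint value is <= 0
    return minimize(lambda xN: worst_case(f0, xN, delta),
                    x_start, bounds=bounds, constraints=cons)
```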
The ATC formulation for design variables and parameters that are subject to inter-
val uncertainty is a straightforward application of the robust optimization problem

formulation. The implication of dealing with intervals is that two values must be
matched for each uncertain quantity that links two elements: the “worst-case’’ value
(computed solving a maximization problem of the form presented in Equation (12)),
and the “best-case’’ value (computed by solving a minimization problem). The ATC
formulation for interval uncertainties is

$$\begin{aligned}
\min \quad & \left\|R_{ij,w} - R^{u}_{ij,w}\right\|_2^2 + \left\|R_{ij,b} - R^{u}_{ij,b}\right\|_2^2
+ \left\|Y_{ij,w} - Y^{u}_{ij,w}\right\|_2^2 + \left\|Y_{ij,b} - Y^{u}_{ij,b}\right\|_2^2 \\
& + \sum_{k=1}^{n_{ij}} \left\|R_{(i+1)k,w} - R^{l}_{(i+1)k,w}\right\|_2^2
+ \sum_{k=1}^{n_{ij}} \left\|R_{(i+1)k,b} - R^{l}_{(i+1)k,b}\right\|_2^2 \\
& + \sum_{k=1}^{n_{ij}} \left\|Y_{(i+1)k,w} - Y^{l}_{(i+1)k,w}\right\|_2^2
+ \sum_{k=1}^{n_{ij}} \left\|Y_{(i+1)k,b} - Y^{l}_{(i+1)k,b}\right\|_2^2 \\
\text{with respect to} \quad & R_{(i+1)1,N}, \ldots, R_{(i+1)n_{ij},N}, X_{ij,N}, Y_{ij,N}, Y_{(i+1)1,N}, \ldots, Y_{(i+1)n_{ij},N} \\
\text{subject to} \quad & g_{ij,\max}(R_{(i+1)1,N}, \ldots, R_{(i+1)n_{ij},N}, X_{ij,N}, Y_{ij,N}) \le 0 \\
\text{with} \quad & \{R_{ij,w}, R_{ij,b}\} = f_{ij}(R_{(i+1)1,N}, \ldots, R_{(i+1)n_{ij},N}, X_{ij,N}, Y_{ij,N})
\end{aligned} \tag{13}$$

5.1 ATC-based optimization results


The ATC process for design optimization problems with interval uncertainty variables
is illustrated in this section using the same engine design problem (Kokkolaras et al.
2006). As in the probabilistic case, the considered uncertain quantities are ring and
liner surface roughnesses; root mean square (RMS) of asperity height is used to repre-
sent and quantify surface roughness. Here, let us assume that we do not have sufficient
data to infer that surface roughness is normally distributed. Instead, we assume that it
exhibits deviations from nominal values that can be quantified by an interval. This sur-
face roughness interval uncertainty is propagated through the simulation hierarchy to
estimate intervals for power loss and fuel consumption. Since uncertainty information is available at the bottom level, we first formulate and solve the bottom-level problem

$$\begin{aligned}
\min_{X_{11,N},\,X_{12,N},\,x_{13},\,x_{14}} \quad & \left(R_{1,w} - R_{1,w}^{u}\right)^2 + \left(R_{1,b} - R_{1,b}^{u}\right)^2 \\
\text{subject to} \quad & \text{max. liner wear rate} = G_{11,\max}(X_{11,N}, X_{12,N}, x_{13}, x_{14}) \le 2.4 \times 10^{-12}\ \mathrm{m^3/s} \\
& \text{max. blow-by} = G_{12,\max}(X_{11,N}, X_{12,N}, x_{13}, x_{14}) \le 4.25 \times 10^{-5}\ \mathrm{kg/s} \\
& \text{max. oil consumption} = G_{13,\max}(X_{11,N}, X_{12,N}, x_{13}, x_{14}) \le 15.3 \times 10^{-3}\ \mathrm{kg/hr} \\
& 2\ \mu\mathrm{m} \le X_{11,N} \le 9\ \mu\mathrm{m} \\
& 2\ \mu\mathrm{m} \le X_{12,N} \le 9\ \mu\mathrm{m} \\
& 80\ \mathrm{GPa} \le x_{13} \le 340\ \mathrm{GPa} \\
& 150\ \mathrm{BHV} \le x_{14} \le 240\ \mathrm{BHV}
\end{aligned} \tag{14}$$

where $X_{11}$ and $X_{12}$ are the (uncertain) ring and liner surface roughness design variables, respectively, $x_{13}$ and $x_{14}$ are the (deterministic) liner Young's modulus and hardness design variables, respectively, and $R_1$ is the power loss due to friction (subscripts $w$ and $b$ denote worst and best possible values due to interval uncertainty, respectively, while superscript $u$ denotes a target value from the upper level). According to the interval analysis approach, at the outer-loop optimization we determine the nominal values $X_{11,N}$ and $X_{12,N}$ (as well as optimal values for $x_{13}$ and $x_{14}$), while solving five inner-loop optimization

problems given the (assumed invariant) surface roughness interval uncertainty: one
best-case scenario for the power loss, one worst-case scenario for the power loss, and
one worst-case scenario each for oil consumption, blow-by, and wear rate. Since we do not have information from the top-level problem yet, i.e., target values for $R_{1,w}^{u}$ and $R_{1,b}^{u}$, we assume these to be equal to zero.
Once the power loss uncertainty interval [R1b , R1w ] has been obtained, we compute
the midpoint and the percentage deviation from the endpoints to pass this uncertainty
information to the top-level problem, which is formulated as

$$\min_{R_{1,N}} \; \left(R_{0,w} - T_w\right)^2 + \left(R_{0,b} - T_b\right)^2 + \omega\left(\left(R_{1,w} - R_{1,w}^{l}\right)^2 + \left(R_{1,b} - R_{1,b}^{l}\right)^2\right) \tag{15}$$

where $R_0$ denotes fuel consumption. The symbol $T$ denotes fixed engine design target values, while the superscript $l$ denotes interval target values from the lower level, so that the top-level problem does not consider solutions that are too far from what the bottom level can provide. The weight $\omega$ can be adjusted to emphasize consistency rather than fuel consumption optimality.
At the outer-loop optimization of this problem we determine nominal values of
power loss while solving two inner-loop optimization problems given the quantified
(at the lower level) power loss interval uncertainty: one best-case scenario for the fuel
consumption and one worst-case scenario for the fuel consumption. After the top-level problem is solved (note that the desired fuel consumption interval target values may not be achieved), the power loss interval and the corresponding uncertainty are updated and passed down to the bottom-level problem, which is then solved again, and so on. We assume that the ATC coordination process has converged when none of the quantities change significantly anymore.
Table 5.7 reports the results obtained assuming δX = 0.1 (10%) for both the ring
and the liner surface roughness uncertainty. The power loss links the two problems.
In order to achieve the best (minimal) fuel consumption possible, we set the top-level
problem target values for both the worst and the best fuel consumption equal to zero.
Of course, these target values are unattainable. Therefore, the power loss interval
computed by solving the bottom-level problem ([0.277, 0.369]) cannot be matched
exactly when solving the top-level problem. By increasing the values of the weight
ω, we increase consistency, i.e., interval matching for the power loss ([0.263, 0.356]
for ω = 1000). It is interesting that while the power loss uncertainty is invariantly
quantified at 15% around the interval midpoint, the fuel consumption uncertainty

Table 5.7 Results of the ring/liner problem using the interval ATC formulation.

Bottom level:

X11,N [µm]   X12,N [µm]   x13 [GPa]   x14 [BHV]   R1,b [kW]   R1,w [kW]   δR1 [%]
2.06         5.87         80          240         0.277       0.369       15

Top level:

           R1,b [kW]   R1,w [kW]   δR1 [%]   R0,b [kg/kWhr]   R0,w [kg/kWhr]   δR0 [%]
ω = 1      0.176       0.238       15        0.486            0.499            1.3
ω = 10     0.253       0.343       15        0.502            0.522            2
ω = 1000   0.263       0.356       15        0.504            0.525            2

changes for different weight values (from 1.3% to 2% around the interval midpoint).
This implies that uncertainty is not always invariant with respect to the design point,
as assumed in many design under uncertainty methodologies.

6 Conclusions
We presented how analytical target cascading (ATC), a methodology for design
optimization of hierarchically decomposed multilevel systems, can account for uncer-
tainties. We first assumed that we have sufficient information available to model
the uncertain quantities as random variables and used the popular and powerful
probabilistic framework to reformulate the ATC problems as reliability-based design
optimization (RBDO) problems. We used the moments of the random variables as opti-
mization variables. Recognizing that first-order approximations may yield inaccurate
estimates of standard deviations of propagated random variables, we developed an
uncertainty propagation technique that is based on the advanced mean value (AMV)
method. This technique can be used to generate approximate CDFs and PDFs that yield
sufficiently accurate estimations of means and standard deviations of propagated ran-
dom variables. A simple yet illustrative bi-level example was used to demonstrate the
probabilistic ATC methodology. The results showed that the probabilistic formulation
of the ATC process can be applied successfully using a bottom-up coordination. The
computationally efficient AMV-based technique for the required propagation of uncer-
tainties produced standard deviation estimates that were much more accurate relative
to the ones obtained using first-order approximations, ensuring the meaningfulness of
the ATC results.
We then considered the case where we have incomplete uncertainty information
available, and we assumed ranges for the uncertain quantities, adopting an inter-
val analysis approach to formulate and solve robust optimization ATC problems
(also known as worst-case optimization or anti-optimization). The interval analysis
approach yields design solutions that are conservative relative to the ones obtained
using a probabilistic design approach, especially as interval uncertainty increases.
However, the interval analysis approach ensures feasibility at all times. In terms of
computational cost, the nested optimization of the interval analysis approach seems to
be less expensive than the required reliability analysis (analytical or simulation-based)
in the probabilistic approach. It is also less challenging numerically since the inner-loop
optimization problems are simple bound-constrained problems. The main challenge is
that the inner-loop problems require global solutions to ensure consideration of the
worst-case scenario. One of the advantages of the interval analysis approach is that the solution of the inner-loop problems provides information to the designer with respect to the beneficial or adverse effects of uncertainty so that, if possible, resources can be allocated to control critical uncertainty quantities. A significant finding is that interval uncertainty does not necessarily propagate symmetrically or invariantly.

References

Chan, K.Y., Kokkolaras, M., Papalambros, P.Y., Skerlos, S.J. & Mourelatos, Z. 2004. Prop-
agation of uncertainty in optimal design of multilevel systems: Piston-ring/cylinder-liner
case study. In Proceedings of the SAE World Congress, Detroit, Michigan, Paper No.
2004-01-1559.
132 Structural design optimization considering uncertainties

Dubois, D. & Prade, H. 1988. Possibility Theory. New York: Plenum Press.
Elishakoff, I., Haftka, R.T. & Fang, J.J. 1994. Structural design under bounded uncertainty –
optimization with anti-optimization. International Journal of Computers and Structures
53(6):1401–1405.
Haimes, Y.Y., Tarvainen, K., Shima, T. & Thadathil, J. 1990. Hierarchical Multiobjective
Analysis of Large-Scale Systems. Hemisphere Publishing Corporation, pages 41–42.
Haldar, A. & Mahadevan, S. 2000. Probability, Reliability, and Statistical Methods in
Engineering Design. John Wiley & Sons, p. 205.
Kim, H.M. 2001. Target Cascading in Optimal System Design. PhD thesis, University of
Michigan.
Kim, H.M., Kokkolaras, M., Louca, L.S., Delagrammatikas, G.J., Michelena, N.F.,
Filipi, Z.S., Papalambros, P.Y., Stein, J.L. & Assanis, D.N. 2002. Target cascading in vehicle
redesign: A class VI truck study. International Journal of Vehicle Design 29(3):1–27.
Kim, H.M., Michelena, N.F., Papalambros, P.Y. & Jiang, T. 2003. Target cascading in optimal
system design. ASME Journal of Mechanical Design 125(3):474–480.
Kim, H.M., Rideout, D.G., Papalambros, P.Y. & Stein, J.L. 2003. Analytical target cascading
in automotive vehicle design. ASME Journal of Mechanical Design 125(3):481–489.
Kokkolaras, M., Fellini, R., Kim, H.M., Michelena, N.F. & Papalambros, P.Y. 2002. Exten-
sion of the target cascading formulation to the design of product families. Structural and
Multidisciplinary Optimization 24(4):293–301.
Kokkolaras, M., Louca, L.S., Delagrammatikas, G.J., Michelena, N.F., Filipi, Z.S.,
Papalambros, P.Y., Stein, J.L. & Assanis, D.N. 2004. Simulation-based optimal design of
heavy trucks by model-based decomposition: An extensive analytical target cascading case
study. International Journal of Heavy Vehicle Systems 11(3-4):402–432.
Kokkolaras, M., Mourelatos, Z.P. & Papalambros, P.Y. 2004. Design optimization of hierarchi-
cally decomposed multilevel systems under uncertainty. In Proceedings of the ASME Design
Engineering Technical Conferences, Salt Lake City, Utah, Paper No. DETC2004/DAC-57357.
Kokkolaras, M., Mourelatos, Z.P. & Papalambros, P.Y. 2006. Design optimization of hierarchi-
cally decomposed multilevel systems under uncertainty. ASME Journal of Mechanical Design
128(2):503–508.
Kokkolaras, M., Mourelatos, Z.P. & Papalambros, P.Y. 2006. Impact of uncertainty quantifica-
tion on design decisions for a hydraulic-hybrid powertrain engine. In Proceedings of the 47th
AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference,
Newport, Rhode Island, Paper No. AIAA-2006-2001.
Liu, H., Chen, W., Kokkolaras, M., Papalambros, P.Y. & Kim, H.M. 2006. Probabilistic analytical target cascading – a moment matching formulation for multilevel optimization under uncertainty. ASME Journal of Mechanical Design 128(4):991–1000.
Louca, L.S., Kokkolaras, M., Delagrammatikas, G.J., Michelena, N.F., Filipi, Z.S.,
Papalambros, P.Y. & Assanis, D.N. 2002. Analytical target cascading for the design of an
advanced technology heavy truck. In Proceedings of the 2002 ASME International Mechanical
Engineering Congress and Exposition, New Orleans, LA. Paper No. IMECE-2002-32860.
Merriam-Webster on-line (www.m-w.com), accessed April 2007.
Michelena, N.F., Kim, H.M. & Papalambros, P.Y. 1999. A system partitioning and optimiza-
tion approach to target cascading. In Proceedings of the 12th International Conference on
Engineering Design, Munich, Germany.
Michelena, N.F., Louca, L., Kokkolaras, M., Lin, C.-C., Jung, D., Filipi Z., Assanis, D.,
Papalambros, P.Y., Peng, H., Stein, J. & Feury, M. 2001. Design of an advanced heavy
tactical truck: A target cascading case study. SAE 2001 Transactions – Journal of Commercial
Vehicles. Also appeared in the Proceedings of the 2001 SAE International Truck and Bus
Meeting and Exhibition, Chicago, IL, Paper No. 2001-01-2793.
Michelena, N.F., Park, H. & Papalambros, P.Y. 2003. Convergence properties of analytical
target cascading. AIAA Journal 41(5):897–905.
Nondeterministic formulations of analytical target cascading 133

Mourelatos, Z.P. & Zhou, J. 2005. Reliability estimation and design with insufficient data based
on possibility theory. AIAA Journal 43(8):1696–1705.
Papalambros, P.Y. 2001. Analytical target cascading in product development. In Proceedings
of the 3rd ASMO UK/ISSMO Conference on Engineering Design Optimization, Harrogate,
North Yorkshire, England.
Wu, Y.T. 1994. Computational methods for efficient structural reliability and reliability
sensitivity analysis. AIAA Journal 32(8):1717–1723.
Wu, Y.T., Millwater, H.R. & Cruse, T.A. 1990. Advanced probabilistic structural analysis method for implicit performance functions. AIAA Journal 28(9):1663–1669.
Youn, B.D., Kokkolaras, M., Mourelatos, Z.P., Papalambros, P.Y., Choi, K.K. & Gorsich, D.
2004. Uncertainty propagation techniques for probabilistic design of multilevel systems.
In Proceedings of the 10th AIAA/ISSMO Symposium on Multidisciplinary Analysis and
Optimization, Albany, New York, Paper No. AIAA-2004-4470.
Zadeh, L.A. 1978. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and Systems 1:3–28.
Chapter 6

Design optimization of stochastic dynamic systems by algebraic reduced order models
Gary Weickum, Matt Allen & Kurt Maute
University of Colorado at Boulder, Boulder, CO, USA

Dan M. Frangopol
Lehigh University, Bethlehem, PA, USA

ABSTRACT: This chapter addresses the need for efficient numerical stochastic techniques in
the analysis and design optimization of dynamic systems. Most stochastic analysis techniques
result in a heavy computational burden, the cost of which is amplified if embedded into a design
optimization framework. This work seeks to alleviate the computational costs of analyzing
dynamic systems by reduced order modeling techniques. The key to utilizing reduced order
models for stochastic analysis and optimization lies in making them adaptable to design changes
and variations in random parameters. This chapter presents an extended reduced order modeling
method approximating the response of a dynamic system in the space of design and random
parameters. The extended reduced order modeling technique is embedded into a stochastic
analysis and design optimization framework. The accuracy and computational efficiency of
extended reduced order models are verified with the stochastic analysis and design optimization
of a linear structural dynamic system. Stochastic analyses are performed using Monte Carlo
simulation, the first-order reliability method, and polynomial chaos expansion. The utility of
the extended reduced order modeling method for design optimization purposes is illustrated
by solving deterministic and reliability-based design optimization problems. Comparing the
stochastic analyses and design optimization results using full and reduced order models show
that the overall computational costs can be significantly diminished by the extended reduced
order modeling method presented.

1 Introduction
Stochastic and reliability analyses of static structures are well explored, and mature
computational procedures have been developed (Bjerager 1990; Ghanem and Spanos 1991; Schuëller 1997; Schuëller 2001). The integration of these analysis tools into
design optimization processes has been widely accomplished in the design of static
structural systems, as shown by (Enevoldsen and Sørensen 1994a,b; Chandu and
Grandhi 1995; Yu, Choi, and Chang 1997a,b, 1998; Grandhi and Wang 1998; Luo
and Grandhi 1997), among others. However, limited work has been done on formal
methods to include reliability analyses in the design of dynamic systems. There has
been a considerable amount of work done on optimization of the harmonic response
of a dynamic system, such as optimization with eigenvalue criteria (Haug and Choi
1982; Masur 1984; Diaz and Kikuchi 1992). The dynamic systems of interest in this
work are those that require a time integration to find the transient response of the system. The computational costs associated with dynamic analyses of realistic models
with a large number of degrees of freedom hinder their inclusion into stochastic-
based optimization methods. Stochastic analysis techniques, such as Monte Carlo
simulation and the first-order reliability method, attempt to characterize a system’s
probabilistic response due to random or uncertain inputs. A deterministic analysis
of a system, assuming no input uncertainty, requires one system analysis. In con-
trast, the common factor to all stochastic-based analysis methods is the requirement
of multiple deterministic analyses of the system at various points in the uncertain or
random variable space. Design optimization seeks to find the optimal system within
a design space satisfying a set of constraints. The common thread of all optimiza-
tion algorithms is the necessity of multiple system analyses within the design variable
space. One link between stochastic analysis and optimization is the requirement of
multiple analyses of altered system configurations in a parameter space. This require-
ment is emphasized in a stochastic-based design optimization framework, as design
criteria in the optimization procedure are now stochastic in nature. Therefore, the
key to incorporating any computationally expensive system into a stochastic design
framework is to decrease the expense of analyzing systems altered in a parameter
space.
Surrogate models have been developed allowing the approximation of the system
response as a function of the design parameters based on performance predictions
from high-fidelity simulation models. Surrogate models may be broadly character-
ized as data fit (local, multi-point, or global approximations), multi-fidelity (omitted
physics, coarsened discretization or tolerances), or reduced order model (ROM) sur-
rogates. A ROM mathematically reduces the system modeled, while still capturing
the physics of the governing partial differential equations (PDEs), by projecting the
original system response using a computed set of basis functions. For example, the
projection reduces the number of degrees of freedom (DOFs) in a large finite element or finite volume model ($O(10^4$ to $10^9)$ DOFs) down to a handful of basis coordinates (typically $O(10^0$ to $10^2)$). Thus, the ROM case is distinguished from
the data fit case in that it is still intimately tied to the original PDEs and retains
the physics, and is distinguished from the multi-fidelity case in that it is derived
directly from the original high fidelity model and does not require multiple mod-
els of differing fidelity. ROM models have proven a successful means of reducing
the computational costs of a system’s response in time (Ravindran 1999; Thomas,
Dowell, and Hall 2001; Legresley and Alonso 2001; Willcox and Peraire 2001).
However, ROMs typically approximate the response of only one particular config-
uration and are therefore of limited use for design optimization and stochastic analysis
purposes.
The utility of these ROMs lies in a particular system’s time integration, and any
changes in the design may render the ROM inaccurate. The key missing component
for the application of a ROM within a reliability-based design optimization (RBDO) framework, and the focus of this work, is the extension of ROMs into the space of the design and uncertainty parameters. To date, most approximation methods
used in this field consider the physical analysis as a black-box tool, and build a response
surface on the results. In contrast, the objective of this work is to build an approxi-
mation technique capturing the physical nature of a system through the inclusion of

the partial differential equations governing the system response. The feasibility and
differences, in terms of both computational cost and accuracy, of the approximation
methods will be studied herein.

1.1 Reduced order models


The construction of the ROM surrogate model from the dynamic analysis of the finite element model is discussed. The governing equation of interest is the structural dynamic response

$$M\ddot{u} + F^{\text{int}}(u, \dot{u}) = f^{\text{ext}}(u, \dot{u}, t) \tag{1}$$

where $M$ is the mass matrix, $F^{\text{int}}$ is the internal force, $u$ is the displacement, and $f^{\text{ext}}(u, \dot{u}, t)$ is the external force. The dynamic response is either linear or nonlinear depending on the internal forces $F^{\text{int}}$ and the external forcing function $f^{\text{ext}}(u, \dot{u}, t)$, which depend on the time $t$ as well as the displacements $u$ and velocities $\dot{u}$.
For large systems, the calculation of the linear dynamic response is costly and the
cost is further increased for nonlinear systems. The cost of the dynamic response
is reduced using a reduced order model, which is a low dimensional approxima-
tion. Following a Galerkin type projection scheme, the displacements of the system
response are approximated by k basis vectors () and generalized variables η as
follows:


k
u(t) = ηj (t)φj = η(t) (2)
j=1

The reduction of the dynamic response is performed by using the approximation of (2) in (1) and premultiplying by $\Phi^T$, as shown below:

$$\Phi^T M \Phi\,\ddot{\eta}(t) + \Phi^T F^{\text{int}}(\Phi\eta(t), \Phi\dot{\eta}(t)) = \Phi^T f^{\text{ext}}(u, \dot{u}, t) \tag{3}$$

The system, originally $n \times n$, is reduced to a $k \times k$ system, where $k < n$. The force vectors are reduced from $n \times 1$ for the full system to $k \times 1$ for the reduced system. The reduced system may be written as:

$$M_R\,\ddot{\eta} + F_R^{\text{int}}(\eta, \dot{\eta}) = f_R^{\text{ext}}(\eta, \dot{\eta}, t) \tag{4}$$

where $M_R$, $F_R^{\text{int}}$, and $f_R^{\text{ext}}$ depend upon the basis vectors and the system matrices for a particular design. For linear systems, $F_R^{\text{int}}(\eta, \dot{\eta})$ and $f_R^{\text{ext}}(\eta, t)$ are linear functions of $\eta$ and $\dot{\eta}$ and are calculated as follows:

$$F_R^{\text{int}}(\eta, \dot{\eta}) = \Phi^T K^{\text{int}} \Phi\,\eta(t) + \Phi^T D\,\Phi\,\dot{\eta}(t) \tag{5}$$

$$f_R^{\text{ext}}(\eta, t) = \Phi^T f^{\text{ext}}(t) + \Phi^T K^{\text{ext}} \Phi\,\eta(t) \tag{6}$$

where $K^{\text{int}}$ and $K^{\text{ext}}$ are the stiffness matrices associated with the internal and external forces, and $D$ is the damping matrix accounting for viscous damping.

In nonlinear systems, the internal and external forces are dependent on the displacements, and the reduced force vectors are updated by one of the following two methods. "On-line" approximation evaluates the forces in the full order model using the approximation of the displacements from (2). The second method approximates the forces $F_R^{\text{int}}(\eta, \dot{\eta})$ and $f_R^{\text{ext}}(\eta, \dot{\eta}, t)$ by explicit functions of $\eta$ and $t$ a priori, that is "off-line", and does not require any computations involving the full order model when using the ROM. The advantage of using an on-line approximation is that the system response is calculated more accurately. Using an off-line approximation, the CPU time is decreased at the cost of accuracy. In the following, only linear dynamic problems will be considered and both off-line and on-line approaches studied.
The key to building an effective ROM is to identify a set of basis vectors capturing
the physics of a system. Two of the most common methods of identifying appropri-
ate vectors are eigen analyses and snapshot methods, such as the proper orthogonal
decomposition (POD). This chapter uses eigenmodes as the basis.

1.2 Eigenmodes
In structural dynamics, eigenmodes are the most common means of reducing a system. The eigenmodes $\phi$ and eigenvalues $\omega^2$ of an undamped system are the results of the following eigenvalue problem:

$$\left(K - \omega_i^2 M\right)\phi_i = 0 \tag{7}$$

where $\omega_i$ is the frequency (rad/sec) of the eigenmode $\phi_i$. The greatest benefit of using eigenmodes as a reduced basis is the uncoupling of equation (3) due to the orthogonality with respect to the system matrices. The projected mass matrix is $\phi_i^T M \phi_j = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker symbol. The projected stiffness matrix is $\phi_i^T K \phi_j = \delta_{ij}\,\omega_i^2$. Using all the modes yields the same response as the full order model.
In most cases, higher modes tend not to contribute much to the system response so
only low modes are considered. For large-scale numerical models, computing eigen-
modes can be prohibitively costly. In this case as well as for nonlinear problems, other
approaches for constructing basis vectors must be used, such as proper orthogonal
decomposition.
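For the linear case, the projection of Equations (2)-(6) together with the eigenproblem (7) condenses into a few lines. The sketch below assumes a linear system with a viscous damping matrix D and uses SciPy's generalized symmetric eigensolver; all names are illustrative and do not reflect the authors' MATLAB/CALFEM implementation.

```python
import numpy as np
from scipy.linalg import eigh

def build_rom(M, D, K, k):
    """Eigenmode-based Galerkin ROM of M u'' + D u' + K u = f(t)."""
    # (K - w^2 M) phi = 0: eigh solves the generalized symmetric problem,
    # returning ascending eigenvalues w^2 and M-orthonormal modes.
    w2, Phi = eigh(K, M, subset_by_index=[0, k - 1])
    MR = Phi.T @ M @ Phi              # ~ identity by M-orthonormality
    DR = Phi.T @ D @ Phi
    KR = Phi.T @ K @ Phi              # ~ diag(w^2)
    reduce_f = lambda f: Phi.T @ f    # project a force vector, Eqs. (5)-(6)
    expand = lambda eta: Phi @ eta    # recover u(t) = Phi eta(t), Eq. (2)
    return MR, DR, KR, reduce_f, expand
```

Time integration is then carried out on the $k \times k$ reduced system (4), which is where the savings reported for the connecting rod example below originate.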

2 Extended ROM in structural dynamics


The goal of this work is to use an "extended reduced order model" (E-ROM) to approximate the system response with respect to changes in the design and uncertainty parameters. Using a full order model tends to be computationally expensive for optimization, and even more so for an uncertainty analysis.
The proposed method involves building a ROM, using eigenmodes as a basis, for the original design (in optimization cases) or the mean design (in uncertainty analyses). This ROM is only applicable to the original design. When performing an uncertainty analysis or an optimization, the E-ROM must be capable of capturing the system response in the spaces of the uncertainty parameters ($r$) and design variables ($p$). An extension of a ROM into a physical parameter space has been considered by (Kirsch 2002) as a reanalysis approach. This work is an extension of these ideas into the field of stochastic

Figure 6.1 Comparison of matrix and eigenmode approximation methods in terms of computational cost and accuracy. "Ini" is the initial design, "TS1" and "TS2" are the first and second order Taylor Series approximations respectively, "SCA" and "FCA" are a single and a full CA, and "Full" is a fully recomputed basis and matrices.

analysis and design optimization. To formulate an effective E-ROM, the set of basis functions ($\phi$) and the system matrices ($M$, $D$ and $K$) are considered.
Five methods are studied to deal with the changes in the eigenmodes and four for
updating the system matrices. Using the initial modes or recomputing the modes are
obvious options. Following classical perturbation methods, the eigenmodes at the
altered design can be approximated by Taylor Series expansion (Kleiber and Hien 1992;
Nieuwenhof and Coyotte 2002) and a combined approximation (CA) method (Kirsch
2002, Kirsch 2003; Kirsch 2001; Kirsch 1999; Kirsch 2000). There are two methods
analyzed within CA, a single and full CA. The difference between single and full CA
is explained in Section 2.4. Figure 6.1 shows the approximate computational cost
and accuracy of each combination of updating the system matrices and eigenmodes.
The reader may note utilizing updated eigenmodes is not a practical option due to
the computational costs of an eigen analysis. The main costs for approximating the
system matrices and the eigenmodes are due to the gradient computations. The first and
second order sensitivities of M and K can be evaluated either based on the analytically
derived finite element formulations or by finite differencing at comparable costs. The
sensitivities of the eigenmodes are discussed in the next section.

2.1 Eigenmode sensitivity analysis


In an effort to approximate the change of a system, the derivatives of the eigenmodes of
the initial system with respect to the design variables are needed to build a first order
approximation of the new design. Since finite difference methods would drastically
increase the computational costs, by requiring the solution of additional eigensystems
for each design variable, an analytical approach is used (Adelman and Haftka 1986;
Mills-Curran 1988; Dailey 1989).

The derivative of the eigensystem (7) with respect to a system parameter $p_j$, expanded using the product rule, results in:

$$\left(\frac{\partial K}{\partial p_j} - 2\omega_i\frac{d\omega_i}{dp_j}M - \omega_i^2\frac{\partial M}{\partial p_j}\right)\phi_i + \left(K - \omega_i^2 M\right)\frac{d\phi_i}{dp_j} = 0; \qquad \frac{d}{dp_j}\left(\phi_i^T M \phi_i\right) = 0 \tag{8}$$

Premultiplying (8) by $\phi_i^T$ eliminates the second term, and the gradients of the eigenvalues with respect to the system parameter are found:

$$\frac{d\omega_i}{dp_j} = \frac{\phi_i^T\left(\dfrac{\partial K}{\partial p_j} - \omega_i^2\dfrac{\partial M}{\partial p_j}\right)\phi_i}{2\,\omega_i\,\phi_i^T M \phi_i} \tag{9}$$

The solution for the sensitivities of the eigenvalues is substituted into $d\omega_i/dp_j$ of the first term in (8), resulting in the following system:

$$\left(K - \omega_i^2 M\right)\frac{d\phi_i}{dp_j} = -\left(\frac{\partial K}{\partial p_j} - 2\omega_i\frac{d\omega_i}{dp_j}M - \omega_i^2\frac{\partial M}{\partial p_j}\right)\phi_i; \qquad \frac{d}{dp_j}\left(\phi_i^T M \phi_i\right) = 0 \tag{10}$$

This system is solved for $d\phi_i/dp_j$, accounting for the norm of the eigenvector, by

$$\frac{d\phi_i}{dp_j} = \frac{d\tilde{\phi}_i}{dp_j} - \left(\phi_i^T M \frac{d\tilde{\phi}_i}{dp_j} + \frac{1}{2}\,\phi_i^T\frac{\partial M}{\partial p_j}\phi_i\right)\phi_i; \qquad \frac{d\tilde{\phi}_i}{dp_j} = -\tilde{K}^{+}\left(\frac{\partial K}{\partial p_j} - 2\omega_i\frac{d\omega_i}{dp_j}M - \omega_i^2\frac{\partial M}{\partial p_j}\right)\phi_i \tag{11}$$

where $\tilde{K}^{+}$ is the generalized inverse (Adelman and Haftka 1986; Dailey 1989) of the singular matrix $\tilde{K} = (K - \omega_i^2 M)$.
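Assuming M-normalized modes ($\phi_i^T M \phi_i = 1$), Equations (8)-(11) translate into the short sketch below; using NumPy's Moore-Penrose pseudo-inverse for $\tilde{K}^{+}$ is one possible realization of the generalized inverse (structured solutions are more efficient in practice), and all names are illustrative.

```python
import numpy as np

def eig_sensitivity(K, M, dK, dM, w2, phi):
    """Sensitivities of one eigenpair (eigenvalue w2, mode phi) with
    respect to a parameter p, given dK = dK/dp and dM = dM/dp."""
    w = np.sqrt(w2)
    dw = phi @ (dK - w2 * dM) @ phi / (2.0 * w)       # Eq. (9), phi^T M phi = 1
    rhs = -(dK - 2.0 * w * dw * M - w2 * dM) @ phi    # right-hand side of Eq. (11)
    dphi_t = np.linalg.pinv(K - w2 * M) @ rhs         # generalized inverse of K~
    # Enforce the normalization condition d(phi^T M phi)/dp = 0
    dphi = dphi_t - (phi @ M @ dphi_t + 0.5 * phi @ dM @ phi) * phi
    return dw, dphi
```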

2.2 Numerical model: connecting rod


The application of the aforementioned methodology is tested on a rod, shown in Figure 6.2, used in various past studies (Bennet and Botkin 1985; Zhang, Beckers, and Fleury 1995). The rod is clamped at the inner circumference of the left hole, and a transient force is applied to the inner circumference of the right hole. The rod is lightly damped using Rayleigh damping, with $\alpha = 10^{-5}$ and $\beta = 10^{-5}$. The rod is modeled using 400 isoparametric 4-node elements, resulting in a total of 936 degrees of freedom. All computations are performed within MATLAB utilizing the CALFEM finite

Figure 6.2 Finite element model of the connecting rod, showing the design parameters $p_1$–$p_5$, the applied force history, and the 3 mm thickness.



element toolbox (Austrell, Dahblom, Lindemann, Olsson, Olsson, Persson, Petersson,


Ristinmaa, Sandberg, and Wernbergk 1999). Two geometric parameters, p1 and p2 ,
control the horizontal positions of the center points of the left and right circular seg-
ments of the center hole, as depicted in Figure 6.2. The radii of the circular segments
are kept constant. The rod has an overall length of 51 mm, a thickness of 3 mm, a Poisson ratio of 0.3, and a Young's modulus of $E = 7.2 \times 10^5\ \mathrm{N/mm^2}$. Modal and sensitivity analyses are performed on the initial system and are used to approximate the
response associated with different design changes. The ROM is based on four eigen-
modes, resulting in a decrease from 936 degrees of freedom to 4. The greatest benefit
of this reduction lies within the decreased computational cost of the time integration
reduced by a factor of 110.

2.3 First order Taylor series approximation of eigenmodes


Once the geometry of the rod is altered, for optimization or uncertainty analysis, the eigenmodes themselves change. In an effort to track the change, a first order Taylor series approximation about the original system is used to describe the basis at any set of system parameters $p$:

$$\Phi(p) \approx \Phi_0 + \frac{\partial \Phi_0^{T}}{\partial p}\,(p - p_0) \tag{12}$$
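A direct transcription of (12) is a one-liner; in the sketch below (illustrative names), `dPhi0[j]` holds the derivative of the basis with respect to parameter $p_j$:

```python
def basis_taylor1(Phi0, dPhi0, p, p0):
    # First-order Taylor update of the mode basis, Eq. (12)
    return Phi0 + sum(dP * (pj - p0j)
                      for dP, pj, p0j in zip(dPhi0, p, p0))
```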
The derivatives of the basis are found following the method described in Section 2.1. The utility of the approximation is studied in the following example. The reduced system is built at the original design. The parameter $p_1$ in Figure 6.2 is altered in the design of the rod, and the eigenmodes are approximated at the design change. The design change represents a shift in the left circular segment of the center hole by 0.75 mm (full parameter range $-4 \le p_1 \le 4$). To isolate the effects of the approximated eigenmodes on the design change, the actual mass and stiffness matrices of the altered system are used. The plot on the left of Figure 6.3 uses the eigenmodes from the initial design to approximate the system response at the design change. The plot on the right of Figure 6.3

0.5 0.5
0.4 0.4
0.3 0.3
Displacement (mm)

0.2 0.2
0.1 0.1
0 0
0.1 0.1
0.2 0.2
0.3 0.3 Initial design – FA
0.4 0.4 Altered design – E-ROM
0.5
Altered design – FA
0.5
0 1 2 3 4 5 6 0 1 2 3 4 5 6

Time (ms) Time (ms)

Figure 6.3 Actual and approximated responses: vertical displacement at right end of the rod over
time for initial and altered design. “FA’’ and “E-ROM’’ is an analysis through a full model
analysis and a E-ROM using the no update (left) and update (right) of the basis.

demonstrates the accuracy of the first order Taylor series approximation of the eigenmodes. Using the eigenmodes from the initial design is therefore not sufficiently accurate, but a Taylor series approximation leads to acceptable approximation errors.

2.4 Combined approximation of eigenmodes


Combined approximation (CA) (Kirsch 2002; Kirsch 2003; Kirsch 2001; Kirsch 1999; Kirsch 2000) is a reanalysis method used to approximate the basis vectors due to a change in system parameters. The new basis is approximated as a linear combination of another basis:

$$\tilde{\phi}_i(p) = y_1 r_1 + y_2 r_2 + \cdots + y_n r_n \tag{13}$$

where the $y_i$ are constants and $r$ is the basis used for CA. A binomial series expansion about the original design is often chosen as the reduced basis (Kirsch 2002; Kirsch 2003). In this study, two different methods are used, both of which require eigenmodes and their derivatives with respect to the design/random variables. The first method approximates the $i$th eigenmode $\tilde{\phi}_i$ through the corresponding mode $\phi_i$ and its derivatives $\partial\phi_i/\partial p_j$:

$$\tilde{\phi}_i(p) \approx y_1 \phi_i + y_2 \frac{\partial \phi_i}{\partial p_1} + \cdots + y_{n+1} \frac{\partial \phi_i}{\partial p_n} \tag{14}$$

This approach is labeled "single CA" and is equivalent to a first order Taylor series expansion if $y_1 = 1$ and $y_{i>1} = \Delta p_{i-1}$. The second method uses all modes and their derivatives for approximating the $i$th eigenmode $\tilde{\phi}_i$:

$$\tilde{\phi}_i(p) \approx y_1 \phi_i + y_2 \frac{\partial \phi_i}{\partial p_1} + \cdots + y_{n+1} \frac{\partial \phi_i}{\partial p_n} + \cdots + y_m \phi_m + y_{m+1} \frac{\partial \phi_m}{\partial p_1} + \cdots + y_k \frac{\partial \phi_m}{\partial p_n} \tag{15}$$

This approach is labeled "full CA".
To find the coefficients $y$, the newly assembled system matrices are reduced by the basis $r$. Once the reduced matrices $M_{CA}$ and $K_{CA}$ are found, where $M_{CA} = r^T M r$ and $K_{CA} = r^T K r$, the following eigenvalue problem is solved to find $y$:

$$K_{CA}\,y = \lambda\,M_{CA}\,y \tag{16}$$

where the $y$ are the eigenmodes of (16). In a single CA, only the first eigenmode from (16) is used in (14) to approximate the modes at the design change. This is done for each of the $i$ modes, requiring $i$ separate eigen analyses. A full CA requires only one eigen analysis, but its size equals the number of modes and derivatives. A single CA thus returns the same number of modes ($i$), whereas a full CA returns as many modes as there are modes and derivatives.
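A sketch of the single CA for one mode at a new design follows directly from (14) and (16); `single_ca` and its arguments are illustrative names, and the new design's assembled matrices are taken as given.

```python
import numpy as np
from scipy.linalg import eigh

def single_ca(phi_i, dphi_i, M_new, K_new):
    """Single combined approximation of one eigenmode.
    phi_i: initial mode (n,); dphi_i: its parameter derivatives (n, n_p)."""
    r = np.column_stack([phi_i, dphi_i])   # CA basis [phi, dphi/dp1, ...]
    M_ca = r.T @ M_new @ r                 # reduced matrices at the new design
    K_ca = r.T @ K_new @ r
    lam, Y = eigh(K_ca, M_ca)              # small eigenproblem, Eq. (16)
    return r @ Y[:, 0]                     # keep the first eigenvector only
```

A full CA would instead stack all modes and all of their derivatives into `r` and retain every mode of the single, larger eigenproblem.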
A study is performed to determine which approximation technique (Taylor series, single or full CA) yields the better approximation of the eigenmodes with respect to the full order response. To do this, the E-ROM is calibrated at an initial design and then the
Figure 6.4 Dissipation energy [mJ] and percent error over $\Delta p$ for eigenmodes approximated by Taylor series ("TS"), single CA ("SCA"), and full CA ("FCA"), compared with the full model ("Full"), for two different starting designs.

system parameter $p_1$ from Figure 6.2 is varied by a $\Delta p$ of 1 mm. The energy dissipation
from t = 0.4 ms to t = 1.0 ms is used as the performance metric. The results of the
study are illustrated in Figure 6.4. The top and bottom figures represent two different
studies, started at different values of p0 . The figures on the left represent the recorded
energy dissipation values for the three different approximation methods compared
with the full order updated model. The figures on the right represent the percent error
of the three approximation methods with respect to the full system analysis. In the top
two graphs of Figure 6.4, both CA techniques are able to approximate the dissipation energy better over the entire $\Delta p$ range. In the bottom two figures, the Taylor series captures the response well up to a $\Delta p$ of approximately 0.85 but then diverges rapidly, whereas both CA techniques are more consistent in approximating the response. In this example, the full CA leads to better approximations over the parameter intervals considered than the single CA. For the remainder of this study, both CA approximation techniques will
be used. The reader may note a linear approximation of the eigenmodes captures well
the nonlinear behavior of the system response with respect to system parameters. This
illustrates the idea that an approximation before the physical solution of the system,
i.e. the eigenmodes, is better than a similar approximation of the output of the system.

2.5 Approximation of mass and stiffness matrices


While the computation of eigenmodes after a parameter change is not feasible, calculating the altered mass and stiffness matrices is a possibility, as the computational cost of doing so does not increase as drastically with the number of degrees of freedom as do the time integration and eigen-analysis. Therefore, first-order and second-order approximations of the system matrices are compared against the actual mass and stiffness matrices in the analysis of an altered system. In these comparisons, the approximated eigenvalues will be used for all altered responses, as their utility has already been illustrated.
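
As a concrete illustration, the sketch below applies first- and second-order Taylor updates of a system matrix about the calibration design for a single scalar parameter change; the derivative matrices are assumed precomputed (e.g. by finite differences at calibration time), and all names are hypothetical:

```python
# Hypothetical sketch of first- and second-order Taylor updates of a system
# matrix about the calibration design p0 (derivatives assumed precomputed).
def updated_matrix(A0, dA, d2A, dp, order=2):
    """Taylor approximation of A(p0 + dp) for a scalar parameter change dp.

    A0 : matrix assembled at the calibration design p0
    dA : dA/dp at p0;  d2A : d^2A/dp^2 at p0
    """
    A = A0 + dp * dA                  # first-order update
    if order == 2:
        A = A + 0.5 * dp**2 * d2A     # second-order correction
    return A

# Usage: approximate the altered matrices instead of reassembling, e.g.
# M_alt = updated_matrix(M0, dM, d2M, dp)
# K_alt = updated_matrix(K0, dK, d2K, dp)
```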
The positive effect of using the actual M and K is illustrated in Figure 6.5, which establishes the difference between not updating M and K, a first order approximation, and a second order approximation. The negative effect of using the initial matrices for the analysis of an altered system is illustrated in the top left of Figure 6.5, where the poor approximation yields drastically smaller displacements. The attempt to approximate M and K with a first order approximation results in an even worse description of the altered system, as shown in the top right of Figure 6.5. The reader may note that the first order approximation results in a drastic reduction in stiffness and a large erroneous displacement.
A linear approximation for the stiffness matrix is therefore not an option. The
efforts to use a second order approximation for M and K are shown in bottom left
of Figure 6.5. The approximated model fails to deviate significantly from the original
model and capture the change of the structural response due to the design change.
However, some utility in the second order approximation is demonstrated if the design
change between the altered and initial system is small. The results for a system change
1/5 the size used in the other examples (original change of 0.75 mm) are illustrated in
the bottom right of Figure 6.5. Although the change in the response of the system is
significantly smaller, the second order approximation effectively captures the change.
The study of approximating the mass and stiffness matrices illustrates that updating the system matrices is a versatile and effective method. However, there is still utility in a second order estimation of the matrices for small system changes, due to the computational savings of forgoing additional assembly computations. The recommended E-ROM approximation is therefore a full CA with recalculation of M and K, accepting a second order approximation of M and K for small design changes. Further application of the suggested E-ROM will be discussed in the subsequent sections.

3 E-ROMs for uncertainty analysis


To illustrate the utility of the E-ROM in uncertainty analysis, the rod in Figure 6.2 is
used, and the horizontal locations of the centers of the circular segments of the center
hole are modeled as a manufacturing uncertainty by the two uncertainty variables r1
and r2 . The radii of the circular segments are kept constant. A normal distribution
is assigned to the horizontal positions of the centers of both circular segments with a
standard deviation of 0.2 mm. As a performance measure, the amount of energy dissi-
pated from the system between 0.4 ms and 1.0 ms is measured and utilized to evaluate
altered design configurations. The E-ROM is used in three different uncertainty analysis methods in the subsequent sections, and the results are compared against those of the full order system.

Figure 6.5 Approximated response for first and second order approximations of the system matrices M and K. (Panels: original matrices (Δp = 0.75 mm); first order (Δp = 0.75 mm); second order (Δp = 0.75 mm); second order (Δp = 0.15 mm). Axes: displacement (mm) vs. time (ms). Legend: initial design – FA; altered design – E-ROM; altered design – FA.)

3.1 Monte Carlo Simulation


The most general uncertainty analysis method is Monte Carlo Simulation (MCS). In practice, MCS is intractable for most realistic dynamic systems due to the computational cost of each simulation and the large number of simulations required for an accurate solution. This cost is significantly reduced by utilizing the E-ROM, due to the reduced cost of time integration. However, the proposed E-ROM still requires assembly of the altered system matrices, which is expensive for a large number of samples. MCS is performed here not as a proposed means of alleviating the computational burden, but as a means of demonstrating the effectiveness of the E-ROM in the uncertainty space.
A Monte Carlo analysis is performed on both the full order and the E-ROM system.
10,000 samples are taken in all, and the same sample points are used for both the full

order model and E-ROM. Each sample point represents one particular realization of
the uncertainty parameters, for which a dynamic analysis is carried out and the energy
dissipated is recorded. To test the framework in terms of an uncertainty analysis, a
failure surface is created by picking a critical energy dissipation level of −16.5 mJ.
If a design fails to dissipate at least 16.5 mJ of energy, that is Ed ≥ −16.5 mJ, then
the design is considered unsafe. The E-ROM, calibrated at the initial design, was first
tested within the range of possible sample points, the mean design plus or minus 4
standard deviations, and the results were sufficiently accurate.
The MCS using the full order system analysis returned a probability of failure of 19.72%, while the MCS utilizing the E-ROM returned a probability of failure of 19.4% for the single CA and 19.8% for the full CA. The difference between these predictions is 0.3% for the single CA and 0.05% for the full CA. This is the error between the full order model and the E-ROM analyses, since the same sample points were used.
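
A minimal sketch of this Monte Carlo estimate is given below; dissipated_energy() is a hypothetical stand-in for one (E-ROM or full order) transient analysis returning Ed in mJ for a sampled realization of the two uncertain centre positions:

```python
# Sketch of the MCS failure-probability estimate for the rod example;
# dissipated_energy() is a hypothetical stand-in for one transient analysis.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
samples = rng.normal(loc=0.0, scale=0.2, size=(N, 2))    # centre offsets, mm

E_d = np.array([dissipated_energy(r) for r in samples])  # Ed in mJ (negative)
failed = E_d >= -16.5          # design fails to dissipate at least 16.5 mJ

p_f = failed.mean()
cov = np.sqrt((1.0 - p_f) / (N * p_f))   # c.o.v. of the MCS estimator
print(f"P_f = {p_f:.2%}  (c.o.v. = {cov:.3f})")
```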

3.2 First Order Reliability Method


To address the computational burden of Monte Carlo analysis for uncertainty quan-
tification, the First Order Reliability Method (FORM) is studied with the E-ROM
approximation. FORM employs an approximation of the limit state function at the
most probable point (MPP) of failure (Hasofer and Lind 1974). FORM requires the
first-order derivatives to linearize the failure surface at the MPP, and therefore it is con-
sidered accurate as long as the curvature of the failure surface in the space of the random
variables is not too large at the MPP. The MPP is determined by solving an optimiza-
tion process in the standard normal space of the random parameters. FORM based
RBDO methods are often used within the structural design community (Enevoldsen
and Sørensen 1994b; Enevoldsen and Sørensen 1994a; Frangopol and Corotis 1996;
Yu, Choi, and Chang 1997a; Grandhi and Wang 1998; Luo and Grandhi 1997). How-
ever, FORM based RBDO methodologies are still in their infancy for multi-physics and
dynamic systems due to the additional cost of RBDO being magnified by high analysis
costs (Allen, Raulli, Maute, and Frangopol 2004; Allen and Maute 2004).
The major expense of FORM lies in the MPP search. The MPP search has certain
characteristics suitable for the E-ROMs. The search is conducted in the standard nor-
mal space of the random parameters centered about the mean design. Therefore, the
E-ROM is calibrated at the mean design. Also, the objective of the optimization pro-
cess is to minimize the distance to the origin, aiming to keep the process close to the
calibration point. For low reliability requirements the MPP lies close to the origin and
no recalibration is required. However, for high reliability requirements a recalibration
and trust region framework are typically required.
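
The chapter does not state which MPP search algorithm is used; one common choice is the Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration, sketched below with finite-difference gradients and a hypothetical limit state function g(u) defined in the standard normal space (with g(u) ≤ 0 indicating failure in this sketch):

```python
# Sketch of an MPP search with the HL-RF iteration (a common choice; the
# chapter does not specify its algorithm). g(u) is a hypothetical limit
# state in standard normal space, with g(u) <= 0 indicating failure.
import numpy as np
from scipy.stats import norm

def grad_fd(g, u, h=1e-4):
    """Forward finite-difference gradient of g at u."""
    g0 = g(u)
    return np.array([(g(u + h * e) - g0) / h for e in np.eye(len(u))])

def hlrf_mpp(g, n_dim, tol=1e-4, max_iter=50):
    u = np.zeros(n_dim)                   # start the search at the mean design
    for _ in range(max_iter):
        gu, grad = g(u), grad_fd(g, u)
        # HL-RF update: project onto the linearized limit state surface
        u_new = (grad @ u - gu) * grad / (grad @ grad)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)              # reliability index
    return u, beta, norm.cdf(-beta)       # MPP, beta, and P_f = Phi(-beta)
```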
FORM analyses were performed using both the full order system and the E-ROM. The analyses were performed on the initial configuration of the rod, and the failure criterion was, as previously, failing to dissipate at least 16.5 mJ of energy (Ed ≥ −16.5 mJ).
The quantitative results of the FORM analyses are summarized in Table 6.1. The
two optimization problems converged to slightly different MPPs leading in turn to a
change in the reliability index and the calculated probability of failure. However, the
difference between the two model results is small. To improve the convergence of the
E-ROM based MPP search towards the solution of the full order model, the E-ROM
could be recalibrated at the MPP initially found and the MPP search continued.

Table 6.1 FORM results of full and E-ROM model analysis.

Analysis method     MPP s1   MPP s2   Beta    Pf
Full model          0.837    0.194    0.859   19.52%
E-ROM, single CA    0.848    0.179    0.867   19.30%
E-ROM, full CA      0.839    0.192    0.861   19.46%

Table 6.2 PCE based MCS results of full and E-ROM model analysis.

Analysis method     MCS (%)   FORM (%)   PCE 1st order (%)   PCE 2nd order (%)   PCE 3rd order (%)
Full model          19.7      20.38      20.8                20.1                19.8
E-ROM, single CA    19.4      20.42      21.8                19.8                19.5
E-ROM, full CA      19.8      20.32      20.7                20.1                19.7

3.3 Polynomial chaos expansion based Monte Carlo Simulation
The obvious drawback of running MCS is the number of simulations to obtain an
error low enough to accurately represent the PDF of the model. The major drawbacks
of FORM are the inability to capture nonlinearities in the failure surface, and the
limitation of having only one particular probability measure instead of a complete
PDF. The third uncertainty analysis technique discussed utilizes a polynomial chaos
expansion (PCE) on the system output to allow a MCS to be used at a relatively low
computational cost compared to a full MCS (Xiu, Lucor, Su, and Karniadakis 2002;
Field, Red-Horse, and Paez 2000; Field 2002; Nurdin 2002).
PCE uses system analyses at collocation points to build a polynomial approximation of the model response. Depending on the accuracy needed, a different number of collocation points is used; the more collocation points, the more accurately the PCE represents the model response. The computational cost of this method lies in the system analyses at the collocation points, and here the E-ROM is utilized to reduce the cost of these analyses. Once the model is sampled and the PCE is built, a MCS can be conducted on the PCE approximation to obtain the PDF of the model output at a low computational cost. If a sufficiently large number of samples is taken for the MCS, the majority of the error is due to the PCE approximation, and not the MCS.
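
A minimal sketch of this procedure is given below, using a least-squares (regression) fit of a tensor-product Hermite basis; the chapter's actual collocation scheme may differ, and model() is a hypothetical E-ROM analysis evaluated at standard normal inputs:

```python
# Sketch of a PCE surrogate of the dissipation energy fitted by least
# squares on a tensor-product Hermite basis (a regression variant; the
# chapter's collocation scheme may differ). model() is a hypothetical
# E-ROM analysis evaluated at standard normal inputs xi.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)

def basis(Xi, order):
    # Tensor product of probabilists' Hermite polynomials in two variables
    V = np.einsum('ij,ik->ijk',
                  hermevander(Xi[:, 0], order),
                  hermevander(Xi[:, 1], order))
    return V.reshape(len(Xi), -1)

def pce_fit(model, order=2, n_colloc=30):
    Xi = rng.normal(size=(n_colloc, 2))     # sampled "collocation" points
    Y = np.array([model(xi) for xi in Xi])
    coef, *_ = np.linalg.lstsq(basis(Xi, order), Y, rcond=None)
    return coef

# MCS on the cheap surrogate:
# coef = pce_fit(model)
# E_d  = basis(rng.normal(size=(100_000, 2)), 2) @ coef
# p_f  = np.mean(E_d >= -16.5)
```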
Considering the same limit state function as used previously, first, second and third order PCEs are built and the probability of failure is calculated. The results are shown in Table 6.2. The probabilities of failure from MCS, FORM and PCE are all within one percent of each other. Between the two E-ROM variants, the full CA predicts the stochastic response more accurately, with an error of at most 0.1% with respect to each of the methods.

4 Deterministic optimization with E-ROMs


While the intended use of the proposed method is to alleviate the computational costs
of stochastic-based optimization, its utility is demonstrated first with a deterministic
optimization problem. The E-ROM approach sufficiently models a system within a cer-
tain region around the design point for which the E-ROM was calibrated. Most design
optimization problems, unlike FORM optimization problems, contain variables with
bounds larger than the trust region of the E-ROM. Therefore, an adaptation strat-
egy for updating the trust region is used to incorporate the E-ROM into optimization
problems with large bounds. In this study, the trust region framework of Giunta & Eldred (2000) and Eldred et al. (2004) is used. The initial bounds of the trust region are from −2 to 2 for both design variables, while the global bounds range from −4 to 4 for both design variables.
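
The trust-region bookkeeping can be sketched as follows. The resizing rules and thresholds shown are illustrative of surrogate model management in the spirit of the cited framework, not its exact implementation; make_erom() and minimize_on_box() are hypothetical helpers:

```python
# Illustrative trust-region model management (resizing thresholds are
# assumptions, not the chapter's exact rules); make_erom() recalibrates an
# E-ROM at the current centre and minimize_on_box() is a hypothetical
# inner solver over the (clipped) trust region.
import numpy as np

def trust_region_step(x, half_width, f_full, make_erom, lb=-4.0, ub=4.0):
    f_erom = make_erom(x)                     # recalibrate E-ROM at centre x
    lo = np.maximum(x - half_width, lb)       # trust region clipped to the
    hi = np.minimum(x + half_width, ub)       # global design bounds

    x_new = minimize_on_box(f_erom, lo, hi)   # hypothetical inner solver

    # Agreement ratio: actual versus surrogate-predicted improvement
    rho = (f_full(x) - f_full(x_new)) / max(f_erom(x) - f_erom(x_new), 1e-12)
    if rho < 0.25:                            # poor surrogate: shrink, reject
        return x, 0.5 * half_width
    if rho > 0.75:                            # good surrogate: accept, expand
        return x_new, min(2.0 * half_width, (ub - lb) / 2.0)
    return x_new, half_width                  # acceptable step: keep size
```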
For optimization, the connecting rod’s energy dissipation is used as the objective to
maximize in the design problem. The two design variables are the horizontal positions
of the center points of the left and right circular segments of the center hole, as depicted
in their initial configuration in Figure 6.2. The deterministic optimization problem is
as follows:

min_s (−Ediss) subject to −4 ≤ si ≤ 4 (17)

where Ediss is the energy dissipated and si are the optimization variables describing
the center hole geometry. The contour of the dissipation energy is seen on the right
of Figure 6.6. The contour lines in the figure are for illustrative purposes only, and
obtained by sampling the design space to compare the optimization results. The results
from the deterministic optimum are shown in Table 6.3.
All three models converged to the same deterministic optimum. Each recalibration is more expensive than a function evaluation of the full model, roughly twice its cost, because obtaining the basis requires derivatives with respect to the design variables. The function evaluations of the single and full CA also accrue computational costs, but not as much as a function evaluation of the full model; for large systems, the function evaluations of the full order model would be significantly more costly than those of the E-ROM. Comparing single and full CA, each recalibration requires the same cost, but a full CA is more costly per evaluation than a single CA because more basis vectors are used with the full CA.

Table 6.3 Full and E-ROM model deterministic optimization results.

Analysis method     s1      s2    Function evaluations   Equivalent recalibration cost
Full model          1.548   4.0   27                     –
E-ROM, single CA    1.551   4.0   122                    16
E-ROM, full CA      1.549   4.0   144                    16

5 RBDO with E-ROMs


In general, the solution of optimization problems with stochastic criteria is signifi-
cantly more expensive than the solution of problems with purely deterministic criteria.
Virtually all stochastic analysis procedures require additional analyses of the system
for various points within the uncertainty space around the mean design. These anal-
yses are required for each evaluation of a stochastic criterion within each iteration of
the design optimization. If these analyses are solved using an E-ROM instead of a full
system model, with similar convergence results, then significant computational savings
are realized.
The framework is tested on a RBDO of the rod in Figure 6.2. The energy dissipated
is again used as the objective to be maximized in the design problem. However, a
reliability-based constraint is imposed on the system. The constraint limits the standard deviation of the dissipation energy to less than 300 µJ, making the RBDO problem as follows:

min_s (−Ediss) subject to σE − 300 ≤ 0 and −4 ≤ si ≤ 4 (18)

where Ediss is the energy dissipated and σE is the standard deviation of the energy
dissipated. The constraint on the standard deviation forces the optimization to a more
robust design, limiting the sensitivity of the system performance to uncertainties.
The standard deviation is found by a Monte Carlo simulation based on PCE, as
highlighted in Section 3. This method was chosen due to its computational efficiency
and its ability to obtain the entire PDF of the output response. The derivatives of
the standard deviation are obtained by finite differencing. The positions of the cen-
ters of the circular segments of the center hole are now treated as both the design
variables and the random variables. The design variables represent the mean or
intended design, and a normal distribution is assigned to the horizontal positions
of the center points of the left and right circular segments of the center hole, each
with a standard deviation of 0.2 mm. The radii of the circular segments are kept
constant.
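
A sketch of this constraint evaluation is given below; pce_surrogate() is a hypothetical helper that builds a PCE of the dissipation energy about the current mean design s, and the finite-difference step size is illustrative:

```python
# Sketch of the stochastic constraint sigma_E(s) <= 300 uJ of (18) and a
# finite-difference gradient; pce_surrogate() is a hypothetical helper that
# builds a PCE of the dissipation energy about the mean design s.
import numpy as np

rng = np.random.default_rng(2)

def sigma_E(s, n_mcs=50_000):
    surrogate = pce_surrogate(s)            # PCE calibrated at mean design s
    xi = rng.normal(size=(n_mcs, len(s)))   # standard normal MCS samples
    return np.std(surrogate(xi))            # std. dev. of dissipation energy

def constraint(s):
    return sigma_E(s) - 300.0               # feasible when <= 0, as in (18)

def constraint_grad(s, h=1e-2):
    g0 = constraint(s)
    return np.array([(constraint(s + h * e) - g0) / h
                     for e in np.eye(len(s))])
```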
To illustrate this academic example problem, the constraint is explored in the design
and uncertainty space by sampling uniformly throughout. The results of the sampling
are illustrated in the contour plot in Figure 6.6. The plot on the left of Figure 6.6
represents the contour plots for the standard deviation of the energy dissipated in
µJ. The reader may note the constraint boundary has been highlighted in the figure,
and the feasible and infeasible regions identified. The initial design is feasible, but
the deterministic design is infeasible. Figure 6.6 on the right overlays the constraint
boundaries onto the contour plot of the objective, to give the reader the general idea
of the RBDO problem.

5.1 Results
The RBDO problem is solved using the full order system analyses and the two E-ROM approximation methods, single and full CA. The optimization results are summarized in Table 6.4.

Figure 6.6 Contour plots of objective and stochastic constraint of RBDO problem in the space of the optimization variables. (Left: contours of the standard deviation of the dissipation energy [µJ]; right: energy contours with the constraint boundaries overlaid; markers: deterministic optimum, RBDO optimum, infeasible solution, initial design (starting point).)

Table 6.4 Full and E-ROM model RBDO results.

Analysis method     s1     s2     Iterations   Time (min)
Full model          1.31   2.81   16           130
E-ROM, single CA    1.35   2.84   25           50.8
E-ROM, full CA      1.27   2.86   13           21.5

Each of the methods has similar search directions, step sizes and converged solutions. The E-ROM is recalibrated within each trust region step, and within each trust region the stochastic analyses and function evaluations are computed using the E-ROM. The E-ROM framework converges quickly to the general solution of the RBDO problem, as it did to the deterministic optimum. However, it is recommended to impose a weak convergence criterion on the E-ROM optimization and to switch to the full model to fine-tune the optimization variables in the vicinity of the solution.
The benefit of reliability-based design optimization is demonstrated through the standard deviation of the dissipation energy. The standard deviation is 579 µJ at the deterministic optimum, 299 µJ at the RBDO optimum and 171 µJ at the initial design. The stochastic response is characterized by a probability density function with a mean and standard deviation, as opposed to a deterministic formulation where the output is characterized by a single value. The objective of the optimization is to maximize the energy dissipated by the system, i.e. to shift the mean of the dissipation-energy distribution to a greater value. With no stochastic constraint, the deterministic optimization achieves a large level of energy dissipation, but results in a design whose performance is highly susceptible to uncertainties.

Table 6.5 Computational costs of E-ROM and full model analysis.

                     Full model   E-ROM
Assembly             1.11 sec     1.11 sec
Time integration     5.59 sec     0.051 sec

5.2 Computational cost


The computational costs, measured by CPU time, of the above examples are reported in Tables 6.4 and 6.5. Since the overall analysis and optimization times are proportional
to the computational time required to analyze one altered design configuration, the
computational costs associated with each individual analysis are reported in Table 6.5.
The cost for each analysis consists of two main components: the cost to assemble
the mass and stiffness matrices for the new design and the cost to perform the time
integration for the transient analysis of the new design.
The computational savings of the reduced time integration over the full time integration is a factor of 110. The overall computational savings per analysis is significantly smaller, approximately 43%, due to the relatively large assembly time. As the number of degrees of freedom of a system increases, the full integration time generally increases at a higher rate than the required assembly time, thus increasing the savings with the E-ROM approach. The overall computational savings of the E-ROM in an optimization framework depend upon the cost of recalibration and the frequency of recalibration required. Again, the E-ROM is most beneficial in the RBDO framework, which requires many analyses about a mean design that is used as the recalibration point. Table 6.4 demonstrates the effectiveness of the E-ROM in saving CPU time for the RBDO problem. The single CA E-ROM saves approximately 39% of the time it takes the full order model to run and the full CA E-ROM saves 71%. When moving to larger models, the time saved by running an E-ROM will be even more significant.

6 Conclusions
A computational framework has been presented allowing reliability-based design opti-
mization of dynamic systems by reducing the associated computational costs. The
framework utilizes reduced order models extended into the parameter space of the
design and random variables. This extension allows for the analysis of altered designs
by the E-ROM at a significant reduction in computational cost. Various approaches for
constructing an E-ROM were studied. Numerical studies showed that the system matrices need to be recomputed at each altered design and cannot be approximated by a Taylor series expansion. In contrast, the reduced basis can be well approximated with a first order Taylor series expansion, using the full combined approximation approach. The utility of the approach was tested on a linear elastic rod subjected to a time-varying load. The E-ROM was used for various stochastic analysis techniques and compared against the full model analyses. The E-ROM's ability to capture the essential characteristics of

the system was demonstrated by both deterministic and reliability-based design opti-
mization examples. The E-ROM framework proved to converge to the solution with
significantly less computational effort than the full system model.

Acknowledgments
The authors acknowledge the support by the National Science Foundation under grant
DMI-0300539. The opinions and conclusions presented are those of the authors and
do not necessarily reflect the views of the sponsoring organizations.

References

Adelman, H. & Haftka, R. 1986. Sensitivity analysis of discrete structural systems. AIAA
Journal 24:823–832.
Allen, M. & Maute, K. 2004. Reliability-based design optimization of aeroelastic structures.
Structural and Multidisciplinary Optimization 27(4):228–242.
Allen, M., Raulli, M., Maute, K. & Frangopol, D. 2004. Reliability-based analysis and design
optimization of electrostatically actuated MEMS. Computers and Structures 82(13–14):
1007–1020.
Austrell, P.-E., Dahblom, O., Lindemann, J., Olsson, A., Olsson, K.-G., Persson, K.,
Petersson, H., Ristinmaa, M., Sandberg, G. & Wernbergk, P.-A. CALFEM: A finite ele-
ment toolbox to MATLAB, Version 3.3., Structural Mechanics, LTH, Sweden, 1999.
http://www.byggmek.lth.se/Calfem/index.htm.
Bennet, J. & Botkin, M. 1985. Structural shape optimization with geometric description and
adaptive mesh refinement. AIAA Journal 23:458–464.
Bjerager, P. 1990. On computational methods for structural reliability analysis. Structural
Safety 9(2):79–96.
Chandu, S. & Grandhi, R. 1995. General purpose procedure for reliability based structural
optimization under parametric uncertainties. Advances in Engineering Software 23:7–14.
Dailey, R. 1989. Eigenvector derivatives with repeated eigenvalues. AIAA Journal 27:
486–491.
Diaz, A. & Kikuchi, N. 1992. Solution to shape and topology eigenvalue optimization prob-
lems using a homogenization method. International Journal for Numerical Methods in
Engineering 35:1487–1502.
Eldred, M.S., Giunta, A.A. & Collis, S. 2004. Second-order corrections for surrogate-based opti-
mization with model hierarchies. In AIAA 2004-44570, 10th AIAA/ISSMO Multidisciplinary
Analysis and Optimization Conference, 30 August – 1 September 2004, Albany, NY.
Enevoldsen, I. & Sørensen, J.D. 1994a. Reliability-based optimization as an information tool.
Mechanics of Structures & Machines 22(1):117–135.
Enevoldsen, I. & Sørensen, J.D. 1994b. Reliability-based optimization in structural engineering.
Structural Safety 15:169–196.
Field, R. 2002. Numerical methods to estimate the coefficients of the polynomial chaos
expansion. In 15th Engineering Mechanics Conference, Columbia University, NY. ASCE.
Field, R., Red-Horse, J. & Paez, T. 2000. A nondeterministic shock and vibration application
using polynomial chaos expansions. In 8th ASCE Joint Specialty Conference on Probabilistic
Mechanics and Structural Reliability, South Bend, IN.
Frangopol, D. & Corotis, R. 1996. Reliability-based structural system optimization: State-
of-the-art versus state-of-the-practice. In Cheng (ed.), Analysis and Computation: Pro-
ceedings of the Twelfth Conference held in Conjunction with Structures Congress XIV,
pp. 67–78.

Ghanem, R.G. & Spanos, P.D. 1991. Stochastic finite element: a spectral approach,
Springer.
Giunta, A.A. & Eldred, M.S. 2000. Implementation of a trust region model management
strategy in the dakota optimization toolkit. In AIAA/USA/NASA/ISSMO Symposium on
Multidisciplinary Analysis and Optimization, Long Beach, CA.
Grandhi, R. & Wang, L. 1998. Reliability-based structural optimization using improved
two-point adaptive nonlinear approximations. Finite Elements in Analysis and Design,
35–48.
Hasofer, A. & Lind, N. 1974. Exact and invariant second-moment code format. J. of
Engineering Mechanics 100:111–121.
Haug, E.J. & Choi, K.K. 1982. Systematic occurrence of repeated eigenvalues in structural
optimization. Journal of Optimization Theory and Applications 38:251–274.
Kirsch, U. 1999. Efficient, accurate reanalysis for structural optimization. AIAA Jour-
nal 37(12):1663–1669.
Kirsch, U. 2000. Combined approximations – a general reanalysis approach for structural
optimization. Structural and Multidisciplinary Optimization 20(2):97–106.
Kirsch, U. 2001. Exact and accurate solutions in the approximate reanalysis of structures. AIAA
Journal 39(11):2198–2205.
Kirsch, U. 2002. A unified reanalysis approach for structural analysis, design, and optimization.
Structural and Multidisciplinary Optimization 25(1):67–85.
Kirsch, U. 2003. Approximate vibration reanalysis of structures. AIAA Journal 41(3):
504–511.
Kleiber, M. & Hien, T. 1992. The Stochastic Finite Element Method, Basic Perturbation
Technique and Computer Implementation. Wiley.
Legresley, P. & Alonso, J. 2001. Investigation of nonlinear projection for POD based reduced
order models for aerodynamics. In AIAA 2001-16737, 39th Aerospace Sciences Meeting &
Exhibit, January 8–11, 2001, Reno, NV.
Luo, X. & Grandhi, R. 1997. Astros for reliability-based multidisciplinary structural analysis
and optimization. Computers and Structures 62:737–745.
Masur, E. 1984. Optimal structural design under multiple eigenvalue constraints. International
Journal of Solids and Structures 20:117–120.
Mills-Curran, W. 1988. Calculation of eigenvector derivatives for structures with repeated
eigenvalues. AIAA Journal 26(7):867–871.
Nieuwenhof, B. & Coyotte, J. 2002. A perturbation stochastic finite element method for
the time-harmonic analysis of structures with random mechanical properties. In 5th World
Congress on Computational Mechanics, Vienna, Austria. WCCM.
Nurdin, H. 2002. Mathematical modeling of bias and uncertainty in accident risk assessment.
Mathematical Sciences, University of Twente, The Netherlands.
Ravindran, S. 1999. Proper orthogonal decomposition in optimal control of fluids. Technical
report, NASA TM-1999-209113.
Schuëller, G. 1997. A state-of-the-art report on computational stochastic mechanics. Proba-
bilistic Engineering Mechanics 12:197–321.
Schuëller, G. 2001. Computational stochastic mechanics – recent advances. Computers &
Structures 79:2225–2234.
Thomas, J., Dowell, E. & Hall, K. 2001. Three-dimensional transonic aeroelastic-
ity using proper orthogonal decomposition based reduced order models. In 42nd
AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials (SDM) Con-
ference, April 2001, Seattle, WA, AIAA Paper 2001-1526.
Willcox, K. & Peraire, J. 2001. Balanced model reduction via the proper orthogonal decom-
position. In 15th AIAA Computational Fluid Dynamics Conference, June 11–14, Anaheim,
CA, AIAA 2001-2611.

Xiu, D., Lucor, D., Su, C.-H. & Karniadakis, G. 2002. Stochastic modeling of flow-structure
interactions using generalized polynomial chaos. Journal of Fluids Engineering 124:51–59.
Yu, X., Chang, K. & Choi, K. 1998. Probabilistic structural durability prediction. AIAA
Journal 36(4):628–637.
Yu, X., Choi, K. & Chang, K. 1997a. A mixed design approach for probabilistic structural
durability. Journal of Structural Optimization 14(2–3):81–90.
Yu, X., Choi, K. & Chang, K. 1997b, January. Reliability and durability based design sensitiv-
ity analysis and optimization. Technical Report R97-01, Center for Computer Aided Design, University of Iowa.
Zhang, W.-H., Beckers, P. & Fleury, C. 1995. A unified parametric design approach to structural
shape optimization. International Journal for Numerical Methods in Engineering 38:2283–
2292.
Chapter 7

Stochastic system design optimization


using stochastic simulation
Alexandros A. Taflanidis & James L. Beck
California Institute of Technology, CA, USA

ABSTRACT: Engineering design in the presence of uncertainties often involves optimization problems that include as objective function the expected value of a system performance mea-
sure, such as expected life-cycle cost or failure probability. For complex systems, this expected
value can rarely be evaluated analytically. In this study, it is calculated using stochastic simu-
lation techniques which allow explicit consideration of nonlinear characteristics of the system
and excitation models, as well as complex failure modes. At the same time, though, these tech-
niques involve an unavoidable estimation error and significant computational cost which make
the associated optimization challenging. An efficient framework, consisting of two stages, is
presented here for such optimal system design problems. The first stage implements a novel
approach, called Stochastic Subset Optimization, for iteratively identifying a subset of the origi-
nal design space that has high plausibility of containing the optimal design variables. The second
stage adopts some stochastic optimization algorithm to pinpoint, if needed, the optimal design
variables within that subset. Topics related to the combination of the two different stages for
overall enhanced efficiency are discussed. An illustrative example is presented that shows the
efficiency of the proposed methodology; it considers the retrofitting of a four-story structure
with viscous dampers. The minimization of the expected lifetime cost is adopted as the design
objective. The expected cost associated with damage caused by future earthquakes is calculated
by stochastic simulation using a realistic probabilistic model for the structure and the ground
motion.

1 Introduction
In engineering design, the knowledge about a planned system is never complete. First it
is not known in advance which design will lead to the best system performance in terms
of the specified metric; it is therefore desirable to optimize the performance measure
over the space of design variables that define the set of acceptable designs. Second,
modeling uncertainty arises because no mathematical model can capture perfectly the
behavior of a real system and its future excitation. In practice, the designer chooses a
model that he or she feels will adequately represent the behavior of the system when
built; however, there is always uncertainty about which values of the model parameters
will give the best representation of the system, so this parameter uncertainty should be
quantified. Furthermore, whatever model is chosen, there will always be an uncertain
prediction error between the model and system responses. For an efficient engineering
design, all uncertainties, involving future events as well as the modeling of the system,
must be explicitly accounted for. A probability logic approach provides a rational and

consistent framework for this purpose (Jaynes 2003). In this case, this process is often
called stochastic system design.
In this context, consider some controllable parameters that define the system design,
referred to herein as design variables, ϕ = [ϕ1 , ϕ2 , . . . , ϕnϕ ] ∈ Φ ⊂ Rnϕ , where Φ denotes
the bounded admissible design space. Also consider a model class that is chosen to
represent a system design and its future excitation, where each model in the class is
specified by a nθ -dimensional vector θ = [θ1 , θ2 , . . . , θnθ ] lying in Θ ⊂ Rnθ , the set of
possible values for the model parameters. Because there is uncertainty in which model
best represents the system behavior, a PDF (probability density function) p(θ|ϕ), which
incorporates available knowledge about the system, is assigned to these parameters.
The objective function for a robust-to-uncertainties design is, then, expressed as:

Eθ[h(ϕ, θ)] = ∫Θ h(ϕ, θ) p(θ|ϕ) dθ (1)

where Eθ [·] denotes expectation with respect to the PDF for θ and h(ϕ, θ) : Rnϕ × Rnθ →
R denotes the performance measure of the system, referred to also as the loss function;
possible examples for h(ϕ, θ) are the life-cycle cost (see (37) later) or the indicator
function for system failure so that (1) gives the failure probability (see (6) later). We
then have the optimal stochastic design problem:

Minimize Eθ[h(ϕ, θ)]
subject to fc(ϕ) ≥ 0 (2)

where f c (ϕ) corresponds to a vector of constraints. Such optimization problems,


arising in decision making under uncertainty, are typically referred to as stochastic
optimization problems (e.g. Ruszczynski & Shapiro 2003, Spall 2003).
In structural engineering, stochastic design problems are usually related to the
expected life-cycle cost of a structure (e.g. Ang & Lee 2001) or to its reliability,
quantified in terms of the probability of failure P(F|ϕ) (e.g. Enevoldsen & Sørensen
1994, Gasser & Schuëller 1997). Many variants of such problems have been posed,
typically expressed in one of three following forms: (a) optimization of the system
reliability given deterministic constraints, (related, for example, to construction cost),
(b) optimization of the cost of the structure given reliability constraints, or (c) optimiza-
tion of the expected life-cycle cost of the structure. Approaches have been suggested
for transforming the latter problem to one of the former two. This is established by
approximating the cost related to future damages to the structure in terms of its failure
probability (Sørensen et al. 1994). In this setting, Reliability-Based Design Optimiza-
tion (RBDO), i.e. design considering reliability measures in the objective function or
the design constraints, has emerged as one of the standard tools for robust and cost-
effective design of structural systems. An alternative design methodology that also
accounts for probabilistic system response is the Robust Design Optimization (RDO).
RDO primarily seeks to minimize the influence of stochastic variations on the mean
design; as such, it focuses on reduction of the mean performance rather than looking
at optimizing the response that exceeds some acceptable thresholds, as RBDO does.
Still, RBDO and RDO represent only a portion of the potential problems encountered
in robust-to-uncertainties system design optimization.

In this study we discuss general stochastic system design problems that involve as
objective function the expected value of a system performance measure. Some special
attention is given to problems with reliability objectives, i.e. when that expected value
corresponds to a failure probability. This class of problems, which belongs to RBDO,
will be referred to herein as ROP (reliability objective problems). We focus on analysis
methods that are applicable to complex systems, involving, for example, nonlinear
models with high-dimensional uncertainties. These types of problems appear often in
the study of dynamic systems when the excitation is modeled as a stochastic process,
for example, in the field of dynamic reliability (Au & Beck 2003b). An efficient frame-
work for analysis and optimization is presented here for such design problems using
stochastic simulation techniques to evaluate the system performance.

2 Optimal stochastic system design using stochastic


simulation

2.1 General case


We consider the optimization described by (2), which may be equivalently formu-
lated as:

ϕ∗ = arg min_{ϕ∈Φ} Eθ[h(ϕ, θ)] (3)

where the deterministic constraints are taken into account by appropriate definition
of the admissible design space Φ. In the probabilistic setting described earlier, model
uncertainties may be incorporated in the system description as a model prediction error,
i.e. an error between the response of the actual system and that of the assumed model.
This error can be modeled probabilistically as a random variable (Beck & Katafygiotis 1998) and augmented into θ to form an uncertain parameter vector comprising both the uncertain model parameters and the model prediction error.
For optimization (3), the integral in (1) must be evaluated. A particular source of
difficulty for structural design when complex systems are considered is the high com-
putational cost associated with this evaluation. To reduce this computational effort,
many specialized approximate approaches have been proposed for structural opti-
mizations (e.g. Enevoldsen & Sørensen 1994, Gasser & Schuëller 1997, Jensen 2005).
These approaches include using response surface methods to approximate the struc-
tural behavior or using some proxy for the structural reliability in ROP (for example,
a reliability index obtained through FORM or SORM). These specialized approaches
may work satisfactorily under certain conditions, but are not proved to converge to the
solution of the original design problem. For this reason such approaches are avoided in
this study. Instead, evaluation of the integral in (1) through stochastic simulation tech-
niques is considered. In this setting, an unbiased estimate of the expected value in (1)
can be obtained using a finite number, N, of random samples of θ, drawn from p(θ|ϕ):

Êθ,N[h(ϕ, ΩN)] = (1/N) ∑_{i=1}^{N} h(ϕ, θi) (4)

where ΩN = [θ1 . . . θN ], and vector θi denotes the sample of the uncertain parameters
used in the ith simulation. This estimate involves an unavoidable error eN (ϕ, ΩN ). The
optimization in (3) is, then, only approximately equivalent to:

ϕ∗N = arg min_{ϕ∈Φ} Êθ,N[h(ϕ, ΩN)] (5)

However, if the stochastic simulation procedure is a consistent one, then as N → ∞, Êθ,N[h(ϕ, ΩN)] → Eθ[h(ϕ, θ)] and ϕ∗N → ϕ∗. The existence of the estimation error
eN (ϕ, ΩN ), which may be considered as noise in the objective function, contrasts
with classical deterministic optimization where it is assumed that one has perfect
information. Figure 7.1a illustrates the difficulties associated with eN (ΩN , ϕ). The
curves corresponding to simulation-based evaluation of the objective function have
non-smooth characteristics, a feature which makes application of gradient-based algo-
rithms challenging. Also, the estimated optimum depends on the exact influence of
the estimation error, which is not the same for all evaluations; different runs of the
algorithm converge to different solutions, which do not necessarily correspond to the
true optimum.
An efficient framework, consisting of two stages, is discussed in the following sec-
tions for such optimizations. The first stage implements a novel approach, called
Stochastic Subset Optimization (SSO) (Taflanidis & Beck 2007a, Taflanidis & Beck
2007b) for efficiently exploring the sensitivity of the objective function to the design
variables and iteratively identifying a subset of the original design space that has high
plausibility of containing the optimal design variables. The second stage adopts some
appropriate stochastic optimization algorithm to pinpoint the optimal design variables
using information from the first stage. Topics related to the combination of the two
different stages for enhanced overall efficiency are discussed. Before presenting this
framework some special characteristics of ROP are considered.

Figure 7.1 (a) Analytical and simulation-based (sim, N = 1000 and N = 4000) evaluation of objective function and (b) comparison between the two candidate loss functions, IF(ϕ, θ) and Pε(g̃(ϕ, θ)), for reliability objective problems.

2.2 Reliability objective problems


In a reliability context, the robust probability of failure (Papadimitriou et al. 2001)
can be employed to include probabilistic model uncertainties when evaluating the
performance of a system. This probability quantifies the performance by giving a
measure of the plausibility of the occurrence of system failure, based on all available
information, and it is expressed as:

P(F|ϕ) = Eθ[IF(ϕ, θ)] = ∫Θ IF(ϕ, θ) p(θ|ϕ) dθ (6)

where IF (ϕ, θ) is the indicator function of failure, which is 1 if the system that cor-
responds to (ϕ, θ) fails, i.e. its response departs from the acceptable performance set,
and 0 if it does not.
An equivalent expression can also be used for the robust failure probability when
a model prediction error, ε(ϕ, θ), is assumed. Let g(ϕ) > 0 and g̃(ϕ, θ) > 0 be the limit
state quantities defining the system’s and model’s failure respectively, and let the model
prediction error be defined in such a way that the relationship ε(ϕ, θ) = g̃(ϕ, θ) − g(ϕ)
holds; then, if Pε (·) is the conditional on (ϕ, θ) cumulative distribution function for the
model prediction error ε(ϕ, θ) and noting that g(ϕ) > 0 is equal to ε(ϕ, θ) < g̃(ϕ, θ), the
robust failure probability can be equivalently expressed as (Taflanidis & Beck 2007a):

P(F|ϕ) = ∫Θ Pε(g̃(ϕ, θ)) p(θ|ϕ) dθ (7)

where in this case the vector θ corresponds solely to the uncertain parameters for
the system and excitation model, i.e. excluding the prediction error. Thus, the loss
function in ROP corresponds either to (a) h(ϕ, θ) = IF (ϕ, θ) or (b) h(ϕ, θ) = Pε (g̃(ϕ, θ)),
depending on which formulation is adopted, (6) or (7). Both of these formulations
are used in the two stages of the framework suggested. In Figure 7.1b these two loss
functions are compared when ε is Normally distributed with mean 0 and standard
deviation 0.01.
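
Both candidate loss functions are a few lines of code; the sketch below follows the chapter's convention that g > 0 indicates failure and assumes, as in Figure 7.1b, a zero-mean Normal prediction error with standard deviation 0.01:

```python
# The two candidate ROP loss functions of Figure 7.1b: the failure indicator
# of (6) and the smoothed version of (7), for eps ~ N(0, 0.01^2).
import numpy as np
from scipy.stats import norm

def loss_indicator(g):
    # I_F = 1 when the system limit state indicates failure (here g > 0,
    # following the chapter's convention)
    return (np.asarray(g) > 0).astype(float)

def loss_smoothed(g_tilde, sigma_eps=0.01):
    # P_eps(g_tilde) = P(eps < g_tilde) with eps ~ N(0, sigma_eps^2):
    # a smooth counterpart of the indicator function
    return norm.cdf(np.asarray(g_tilde) / sigma_eps)
```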

3 Stochastic subset optimization


SSO is an efficient algorithm for exploring the sensitivity of stochastic design opti-
mization problems using a small number of system analyses (Taflanidis & Beck 2007a,
Taflanidis & Beck 2007b).

3.1 Augmented problem


Consider, initially, the modified positive loss function hs (ϕ, θ) : Rnϕ × Rnθ → R+ defined
for a constant s as:

hs(ϕ, θ) = h(ϕ, θ) − s where s < min_{ϕ,θ} h(ϕ, θ) (8)

and note that Eθ [hs (ϕ, θ)] = Eθ [h(ϕ, θ)] − s. Since the two expected values differ only
by a constant, optimization of the expected value of h(·) is equivalent, in terms of the

optimal design choice, to optimization of the expected value of hs (·). In the SSO setting
we focus on the latter optimization.
The basic idea in SSO is the formulation of an augmented problem where the design
variables are artificially considered as uncertain with distribution p(ϕ) over the design
space Φ. In the setting of this augmented stochastic design problem, define the auxiliary
PDF π(ϕ, θ) as:

π(ϕ, θ) = hs(ϕ, θ) p(ϕ, θ) / Eϕ,θ[hs(ϕ, θ)] (9)

where p(ϕ, θ) = p(ϕ)p(θ|ϕ) and the normalizing integral in the denominator corre-
sponds to the expected value in the augmented uncertain space:
Eϕ,θ[hs(ϕ, θ)] = ∫Φ ∫Θ hs(ϕ, θ) p(ϕ, θ) dθ dϕ (10)

This expected value will not be explicitly needed, but it can be obtained through
stochastic simulation, which leads to an expression similar to (4) but with the pair
[ϕ, θ] defining the uncertain parameters. The transformation of the loss function in
(8) may be necessary to ensure that π(ϕ, θ) ≥ 0. For most structural design problems
h(ϕ, θ) ≥ 0 and the transformation in (8) is usually unnecessary, which is always the
case for ROP. However, in some cases it may be advantageous to choose s near the
minimum of h(ϕ, θ) to increase efficiency of SSO (see later).
In terms of the auxiliary PDF, the objective function Eθ [hs (ϕ, θ)] can be expressed as:

Eθ[hs(ϕ, θ)] = Eϕ,θ[hs(ϕ, θ)] π(ϕ)/p(ϕ) (11)

where the marginal PDF π(ϕ) is equal to:



π(ϕ) = ∫Θ π(ϕ, θ) dθ (12)

Define, now, J(ϕ) as:

J(ϕ) = Eθ[hs(ϕ, θ)] / Eϕ,θ[hs(ϕ, θ)] = π(ϕ)/p(ϕ) (13)

Since Eϕ,θ [hs (ϕ, θ)] can be viewed simply as a normalizing constant, minimization of
Eθ [hs (ϕ, θ)] is equivalent to the minimization of the quotient J(ϕ) = π(ϕ)/p(ϕ). For this
purpose the marginal PDF π(ϕ) in the numerator must be evaluated. Samples of this
PDF can be obtained through stochastic sampling/simulation techniques (Robert & Casella 2004). These techniques will give sample pairs [ϕ, θ] that are distributed
according to π(ϕ, θ). Their ϕ component corresponds to samples from the marginal
distribution π(ϕ). Appendix A briefly discusses two appropriate sampling algorithms,
one using a direct approach to Monte Carlo (MC) simulation and one using Markov
Chain Monte Carlo (MCMC) simulation. SSO is based on exploiting the information
in these samples.
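
For ROP (detailed in Section 3.4), direct MC sampling from π(ϕ) amounts to retaining the ϕ-components of failed samples; a sketch follows, with fails(ϕ, θ) a hypothetical system analysis returning True on failure:

```python
# Sketch of direct MC sampling from pi(phi) for an ROP: with h = I_F the
# auxiliary density is p(phi, theta | F), so samples from pi(phi) are the
# phi-components of failed samples. fails() is a hypothetical analysis.
import numpy as np

rng = np.random.default_rng(3)

def sample_pi_phi(n_failed, sample_phi, sample_theta, fails):
    """Accept-reject sampling: on average 1/P_F trials per accepted sample."""
    phi_F = []
    while len(phi_F) < n_failed:
        phi = sample_phi(rng)              # draw phi from uniform p(phi)
        theta = sample_theta(rng, phi)     # draw theta from p(theta | phi)
        if fails(phi, theta):              # accept: (phi, theta) ~ pi
            phi_F.append(phi)
    return np.array(phi_F)
```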

3.2 Subset analysis


An analytical approximation for π(ϕ) based on these samples for ϕ can be established
using, for example, the maximum entropy method (Ching & Hsieh 2007), histograms
(Au 2005) or kernel density estimators. Experience indicates that for challenging prob-
lems, including, for example, cases where the dimension nϕ is not small (e.g. larger
than two) or the sensitivity for a design variable is complex, such methods may be
problematic and are generally unreliable as means of approximating π(ϕ) (Taflanidis
& Beck 2007a). In the SSO framework, such an approximation for π(ϕ) is avoided.
The sensitivity analysis is performed by looking at the average value of J(ϕ) over I,
H(I), which for any subset of the design space I ⊂ Φ with volume VI is defined as:
H(I) = (∫I Eθ[hs(ϕ, θ)] dϕ)/(VI Eϕ,θ[hs(ϕ, θ)]) = (1/VI) ∫I J(ϕ) dϕ = (1/VI) ∫I (π(ϕ)/p(ϕ)) dϕ (14)

To simplify the evaluation of H(I), a uniform distribution is chosen for p(ϕ). Note
that p(ϕ) does not reflect the uncertainty in ϕ but is simply a device for formulating the
augmented problem and thus can be selected according to convenience. Finally H(I)
and an estimate of it based on the samples from π(ϕ), obtained as described previously,
are given, respectively, by:

H(I) = (VΦ/VI) ∫I π(ϕ) dϕ and Ĥ(I) = (NI/VI)/(NΦ/VΦ) (15)

where NI and NΦ denote the number of samples from π(ϕ) belonging to the sets I and
Φ, respectively, and VΦ is the volume of the design space Φ. The estimate for H(I) is
equal to the ratio of the volume density of samples from π(ϕ) in sets I and Φ. The
coefficient of variation (c.o.v.) for this estimate depends on the simulation technique
used for obtaining the samples from π(ϕ). For a broad class of sampling algorithms
this c.o.v. may be expressed as:
c.o.v.(Ĥ(I)) = √[(1 − P(ϕ ∈ I))/(N · P(ϕ ∈ I))] ≈ √[(1 − NI/NΦ)/(N · NI/NΦ)], P(ϕ ∈ I) = ∫I π(ϕ) dϕ (16)

where N = NΦ/(1 + γ), γ ≥ 0, is the effective number of independent samples. If direct MC techniques are used then γ = 0, but if MCMC sampling is selected then γ > 0
because of the correlation of the generated samples. Ultimately, the value of γ depends
on the characteristics of the algorithm used. See (Au & Beck 2003b) for a formula for
γ when the Metropolis-Hasting algorithm is used.
For the uniform PDF for p(ϕ), note that H(I) is equal to the ratio:
H(I) = (∫I Eθ[hs(ϕ, θ)] dϕ/VI) / (∫Φ Eθ[hs(ϕ, θ)] dϕ/VΦ) (17)

where the integrals in the numerator and denominator could be considered as the “aver-
age set content’’ in I and Φ respectively. Thus H(I) expresses the average sensitivity of
Eθ [hs (ϕ, θ)] to ϕ within the set I ⊂ Φ.

3.3 Subset optimization


Consider a set of admissible subsets A in Φ that have some predetermined shape and
some size constraint, for example related to the set volume, and define optimization:

I∗ = arg min_{I∈A} H(I) (18)

This definition is motivated by the fact that, as explained above, minimization of Eθ[hs(ϕ, θ)] is equivalent to minimization of J(ϕ) and that H(I) corresponds to the volume average integral of J(ϕ) over subset I.
Based on the estimate in (15), optimization (18) is approximately equal to
identification of the subset that contains the smallest volume density NI /VI of samples:

Î = arg min_{I∈A} Ĥ(I) = arg min_{I∈A} NI/VI (19)

Note that the number of sample points inside a set I ∈ A is a non-differentiable function of the position of the set in the design space. Thus, methods appropriate for non-smooth optimization problems, such as genetic algorithms, should be chosen for optimization (19). The evaluation of the objective function for this problem involves little computational effort, so the optimization can be solved efficiently if an appropriate algorithm is chosen.
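
A crude sketch of such a subset identification over axis-aligned hyper-rectangles follows; random search stands in for the genetic algorithm suggested above, and the tolerance enforcing a fixed sample ratio NI/NΦ ≈ ρ (anticipating Section 3.5.2) is illustrative:

```python
# Sketch of the subset identification of (19) over axis-aligned
# hyper-rectangles; crude random search stands in for a genetic algorithm.
import numpy as np

rng = np.random.default_rng(4)

def identify_subset(phi_F, lb, ub, rho=0.15, tol=0.02, n_trials=5000):
    """Among boxes holding a fraction ~rho of the samples phi_F from pi(phi),
    return the one of largest volume, i.e. smallest sample density N_I/V_I."""
    best_vol, best_box = -np.inf, None
    for _ in range(n_trials):
        centre = rng.uniform(lb, ub)
        half = rng.uniform(0.0, (ub - lb) / 2)
        lo = np.maximum(centre - half, lb)
        hi = np.minimum(centre + half, ub)
        inside = np.all((phi_F >= lo) & (phi_F <= hi), axis=1)
        if abs(inside.mean() - rho) > tol:   # keep N_I/N_Phi close to rho
            continue
        vol = np.prod(hi - lo)
        if vol > best_vol:                   # larger volume, same sample count
            best_vol, best_box = vol, (lo, hi)
    return best_box
```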
If set A is properly chosen, for example if its shape is “close’’ to the contours of
Eθ [hs (ϕ, θ)] in the vicinity of ϕ∗ , then ϕ∗ ∈ I ∗ for the optimization in (18). This argument
is not necessarily true for the optimization in (19) because only estimates of H(I)
are used. Î is simply the set, within the admissible subsets A, that has the largest
likelihood, in terms of the information available through the obtained samples, of
including ϕ∗ . This likelihood defines the quality of the identification and ultimately
depends (Taflanidis & Beck 2007b) on the ratio of average set content, given by H(I)
(see (17)). Taking into account the fact that the set content in the neighborhood of
the optimal solution is the smallest in Φ, it is evident that smaller values of H(Î)
correspond to greater plausibility for the set Î to include ϕ∗ . Since only estimates of
H(Î) are available in the stochastic identification, the quality depends, ultimately, on
both: (a) the estimate Ĥ(Î) and (b) its coefficient of variation (defining the accuracy
of that estimate). Smaller values for these parameters correspond to better quality of
identification. Both of them should be used as a measure of this quality.

3.4 Details for ROP
When SSO is implemented for ROP, selection of IF (ϕ, θ) as the loss function is ben-
eficial because it simplifies the task of simulating samples from π(ϕ, θ). In this case
these samples correspond simply to failed samples, i.e. samples that lead to failure of
the system (IF (ϕ, θ) = 1), and the auxiliary PDF π(ϕ, θ) is simply the PDF for the aug-
mented uncertain parameter vector conditioned on failure of the system, i.e. p(ϕ, θ|F).
Similarly, the marginal π(ϕ) corresponds to p(ϕ|F). Monte Carlo simulation can then
be used for simulating samples from p(ϕ|F). For design problems that involve small
failure probabilities this approach may be inefficient because 1/PF trials are needed on

the average in order to simulate one failed sample, where PF is the failure probability
in the augmented design problem, defined, similarly to (10), as:
PF = ∫Φ ∫Θ IF(ϕ, θ) p(ϕ, θ) dθ dϕ (20)

Other stochastic simulation methods, such as Subset Simulation (Au & Beck 2001)
should be preferred in such cases.

3.5 Implementation issues

3.5.1 Resolution for design variables and iterative identification


The size of the admissible subsets I defines (a) the resolution of ϕ∗ and (b) the information about the accuracy of Ĥ(I) that is extracted from the samples from π(ϕ). Selecting a smaller size for the admissible sets leads to better resolution for ϕ∗. At the same time, though, this selection leads to smaller values for the ratio NI/NΦ (since fewer samples are included in smaller sets) and thus it increases the c.o.v. (reduces accu-
racy) of the estimation, as seen from (16). In order to maintain the same quality for
the estimate, the effective number of independent samples must be increased, which
means that more simulations must be performed. Since we are interested in subsets in
Φ with small average set content, the required number of simulations to gather accu-
rate information for subsets with small size is large. To account for this characteristic
and to increase the efficiency of the identification process, an iterative approach can be
adopted. At iteration k, additional samples in Îk−1 (where I0 = Φ) that are distributed
according to π(ϕ) are obtained. A region Îk ⊂ Îk−1 for the optimal design parameters
is then identified as above. The quality of the identification is improved by applying
such an iterative scheme, since the ratio of the samples in Îk−1 to the one in Îk is larger
(and thus the c.o.v. for Ĥ(Îk ) smaller) than the equivalent ratio when comparing Îk and
the original design space Φ. The samples [ϕ, θ] available from the previous iteration,
whose ϕ component lies inside set Îk−1 , can be exploited for improving the efficiency
of the sampling process. In terms of the algorithms described in Appendix A this may
be established for example by (a) forming better proposal PDFs and/or (b) using the
samples already available as seeds for MCMC simulation. Some guidelines for the
MCMC simulation are given later on, in the context of the example considered.
Another way to improve the efficiency in this iterative process is to continually
update hs (ϕ, θ) in (8) by re-defining s:

hs,k(ϕ, θ) = h(ϕ, θ) − sk where sk = min_{ϕ∈Îk,θ} h(ϕ, θ) (21)

Figure 7.2 illustrates this concept. For choice hs,2 (ϕ, θ), which corresponds to a larger
value of s, the sensitivity of the objective function, in the SSO setting, is larger and
a candidate region for the optimal choice is more easily discernible (better quality
is established) based on samples from π(ϕ). If hs (ϕ, θ) is reformulated, though, the
ancillary density π(ϕ, θ) changes and the samples from the previous iteration cannot
provide useful information for the next iteration unless the previous and the next loss

Figure 7.2 Influence of selection of s in SSO: (a) objective function Eθ[hs(ϕ, θ)] and (b) histograms of samples from π(ϕ) obtained through MC simulation. (Left: s = 0, hs,1(ϕ, θ) = h(ϕ, θ); right: s = 0.10, hs,2(ϕ, θ) = h(ϕ, θ) − 0.10.)

functions hs (ϕ, θ) are similar. For cases where the sensitivity of the objective function
is small, our experience indicates that the re-formulation of the loss function can be
beneficial (assuming that s can be set to a larger value). When the sensitivity is quite
high, though, it is preferable to keep the same loss function and use the samples
available to improve the efficiency when generating new samples.

3.5.2 Selection of admissible subsets
Proper selection of the geometrical shape and size of the admissible sets is important
for the efficiency of SSO.
The geometrical shape should be chosen so that the challenging, non-smooth opti-
mization (19) can be efficiently solved and still the sensitivity of the objective function
to each design variable is fully explored. For example, a hyper-rectangle or a hyper-
ellipse can be appropriate choices, with the latter expected to be closer to the objective
function contours but entailing greater computational cost, especially in high
dimensions.
The size of admissible subsets is related to the quality of identification as discussed
earlier. Selection of size for the admissible subsets can be determined by incorporating
a constraint for either (i) the volume ratio δ = VI /VΦ or (ii) the number of samples
ratio ρ = NI /NΦ . The first choice cannot be directly related to any of the measures of
quality of identification; thus proper selection of δ is not straightforward. The second
choice allows for directly controlling the coefficient of variation (see (16)) and thus one
of the parameters influencing the quality of identification. This characterization for
the admissible subsets is adopted here. The subset optimization in (19) corresponds,
finally, to identification of a subset that has smallest estimated average value, within
the class of subsets that guarantee a specific c.o.v. for that estimate:

Î = arg min_{I∈Aρ} NI/VI = arg max_{I∈Aρ} VI,  Aρ = {I ⊂ Φ: ρ = NI/NΦ}   (22)

The same comment as for the optimization in (19) applies in this case; an algorithm
appropriate for non-smooth optimizations should be selected.
The volume (size) of the admissible subsets in this scheme is adaptively chosen so that the ratio of samples in the identified set is equal to ρ. The choice of the value for ρ affects the efficiency of the identification. If ρ is large, fewer samples are required for the same level of accuracy (c.o.v. in (16)). However, a large value of ρ also means that the size of the identified subsets decreases slowly (larger sets are identified), and a slow sequence requires more steps to converge to the optimal solution. The choice of the constraint ρ is thus a trade-off between the number of samples required in each step and the number of steps required to converge to the optimal design choice. In the applications we have investigated so far, choosing ρ = 0.1–0.2 was found to yield good efficiency.
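To make the role of the constraint ρ concrete, the following minimal Python sketch illustrates one crude way to attack the subset optimization in (22) for hyper-rectangular sets: a random search over candidate boxes that retains, among those containing (approximately) a fraction ρ of the samples, the one with the largest volume. The function name identify_subset and the tolerance on ρ are illustrative assumptions; the chapter's examples solve (22) with a proper non-smooth optimizer (a genetic algorithm is used in Section 6.4.1).

```python
import numpy as np

def identify_subset(phi, rho, n_trials=20000, tol=0.01, rng=None):
    """Crude random-search sketch of (22): among hyper-rectangles I whose
    sample ratio N_I/N is within `tol` of rho, keep the one with maximal
    volume (i.e. minimal sample density N_I/V_I)."""
    rng = np.random.default_rng() if rng is None else rng
    lo0, hi0 = phi.min(axis=0), phi.max(axis=0)
    best_vol, best_box = -np.inf, None
    for _ in range(n_trials):
        # candidate box: random bounds drawn inside the sample range
        corners = rng.uniform(lo0, hi0, size=(2, phi.shape[1]))
        lo, hi = corners.min(axis=0), corners.max(axis=0)
        inside = np.all((phi >= lo) & (phi <= hi), axis=1)
        if abs(inside.mean() - rho) > tol:
            continue  # violates the sample-ratio constraint in (22)
        vol = float(np.prod(hi - lo))
        if vol > best_vol:
            best_vol, best_box = vol, (lo, hi)
    return best_box  # may be None if no candidate met the constraint
```

As a usage example, with phi an N × nϕ array holding the ϕ components of samples from π(ϕ, θ), identify_subset(phi, rho=0.2) returns an approximation of Î.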

3.5.3 Influence of the dimension of the design variables vector


For a specific reduction of the volume of the search space in some step of the set identification, δk = VIk/VIk−1, the mean reduction of the length for each design variable is δk^(1/nϕ). The mean total length reduction over all variables in niter iterations is:

∏(k=1 to niter) δk^(1/nϕ) ≈ (δmean^(1/nϕ))^niter = (δmean)^(niter/nϕ)   (23)

where δmean is the geometric mean of the volume reductions over all of the iterations. It is evident from (23) that, for the same mean total length reduction, the number of iterations is proportional to the dimension of the design space. This proportionality relationship has been verified in the examples considered in (Taflanidis & Beck 2007a; Taflanidis & Beck 2007b). Assuming that the mean total length reduction over all variables adequately describes the computational efficiency of SSO, this argument shows that the efficiency decreases only linearly with the dimension of the design space, so SSO can be considered appropriate for problems that involve a large number of design variables.
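For instance, if the mean volume reduction per iteration is δmean ≈ 0.32 (close to the values observed in the example of Section 3.5.4), then a mean total length reduction of about 0.18 per variable requires, from (23), niter = nϕ ln(0.18)/ln(0.32) ≈ 1.5nϕ iterations: roughly three iterations for nϕ = 2 but six for nϕ = 4.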

3.5.4 SSO algorithm and sample implementation example


The SSO algorithm is summarized as follows (Figure 7.3 illustrates some of the key
steps of the algorithm).
Initialization: Define the desired geometrical shape for the subsets I. Decide on the
constraint value ρ in (22) and the desired number of samples N. The latter should be
Figure 7.3 Illustration of some key steps in SSO for an example with a two-dimensional design space (design variables w1, w2): (a) initial samples through MC; (b) identification of set Î1; (c) retained samples in set Î1; (d) magnification of previous plot; (e) new samples through MCMC and set Î2; (f) last step, stopping criteria are satisfied. The X in the plots corresponds to the optimal solution.

chosen such that the c.o.v. for Ĥ(I) in (16) is equal to some pre-specified value for the
given value of ρ. For example, for c.o.v. 5% and choice of ρ = 0.2, N should be 1600
if direct MC techniques are used.
Step 1: Simulate, for example using MC simulation, NΦ = N samples from π(ϕ, θ).
Identify subset Î1 as the solution of the optimization problem (22) and keep only the
NÎ1 samples whose ϕ component belongs to the subset Î1 .
Step k: Use some sampling technique, such as Metropolis-Hastings algorithm, to
obtain in total N samples from π(ϕ, θ) inside the subset Îk−1 . Identify subset Îk :

Îk = arg max_{I∈Aρ,k} VI,  Aρ,k = {I ⊂ Îk−1: ρ = NI/N}   (24)

Keep only the NÎk samples whose ϕ component belongs to the subset Îk and exploit
them in the next iteration.
Stopping criteria: At each step, estimate ratio

Ĥ(Îk) = (NÎk/NÎk−1) · (VÎk−1/VÎk)   (25)

and its coefficient of variation according to the appropriate expression (depending on the algorithm used). Based on these two quantities and the desired quality of
the identification (see next section), decide on whether to stop or to proceed to
step k + 1.
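As a compact illustration of the loop just described, the following Python sketch strings the steps together. The helpers sample_pi (sampling from π(ϕ, θ) restricted to a box, via MC or MCMC as in Appendix A; the retained samples can serve as seeds) and identify_box (the subset optimization (22)/(24), e.g. the identify_subset sketch of Section 3.5.2) are problem-specific placeholders, not part of any library.

```python
import numpy as np

def box_volume(box):
    lo, hi = box
    return float(np.prod(np.asarray(hi) - np.asarray(lo)))

def in_box(phi, box):
    lo, hi = box
    return np.all((phi >= lo) & (phi <= hi), axis=1)

def sso(sample_pi, identify_box, box0, rho=0.2, N=2000, H_stop=0.8):
    """Sketch of the SSO iteration. sample_pi(N, box, seeds) must return
    (phi, theta) samples from pi restricted to `box`; identify_box(phi, rho)
    must solve the subset optimization (22)/(24)."""
    box, seeds = box0, None
    while True:
        phi, theta = sample_pi(N, box, seeds)      # Step 1 / Step k sampling
        new_box = identify_box(phi, rho)           # subset identification
        inside = in_box(phi, new_box)
        # ratio estimate (25): the previous set holds all N samples here
        H_hat = (inside.sum() / N) * (box_volume(box) / box_volume(new_box))
        box = new_box
        seeds = (phi[inside], theta[inside])       # exploit retained samples
        if H_hat > H_stop:    # quality of identification degraded: stop
            return box, seeds
```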
Table 7.1 presents the results for a sample run of the SSO algorithm for the design
problems considered in (Taflanidis & Beck 2007a). Problem D1 involves 2 design
Table 7.1 Results from a sample run of the SSO algorithm for two design problems.

Problem D1 (nϕ = 2):
  Iteration of SSO        1       2       3
  δk = VÎk/VÎk−1          0.370   0.314   0.270
  δk^(1/nϕ)               0.608   0.561   0.489
  Ĥ(Îk)                   0.541   0.636   0.835
  (VÎ3/VΦ)^(1/nϕ) = 0.167

Problem D2 (nϕ = 4):
  Iteration of SSO        1       2       3       4       5       6
  δk = VÎk/VÎk−1          0.442   0.378   0.346   0.323   0.269   0.223
  δk^(1/nϕ)               0.815   0.784   0.767   0.754   0.720   0.687
  Ĥ(Îk)                   0.453   0.529   0.577   0.619   0.743   0.896
  (VÎ6/VΦ)^(1/nϕ) = 0.183

variables, nϕ = 2, whereas problem D2 involves 2 more design variables, in total nϕ = 4.


Figure 7.3 presents graphically the results for the sample run in problem D1 . In these
examples the shape of the subsets I was selected as hyper-rectangles and ρ was chosen
equal to 0.2.
Figure 7.3 clearly illustrates the dependence of the quality of the identification on
Ĥ(Îk ) which expresses (see (15)) the difference in volume density of samples from π(ϕ)
inside and outside of the identified set Îk (corresponding to the interior rectangle in these
plots). In the first iteration, this difference is clearly visible. As SSO evolves, the differ-
ence becomes smaller and by the last iteration (Figure 7.3f), it is difficult to visually dis-
criminate which region in the set has smaller volume density of samples from π(ϕ). This
corresponds to a decrease in the quality of the identification. In order to maintain plau-
sibility for the identified set to contain the optimal solution, the iterative process stops.
This figure also shows the capability of SSO to explore the sensitivity of the objective
function with respect to each design variable. Within the initial design space (Fig-
ure 7.3a) the sensitivity with respect to design variable ϕ2 appears to be significantly
higher, based on the density of failed samples. The set identified by SSO corresponds
to larger size reduction for that variable (Figure 7.3b), thus it efficiently captures the
difference in sensitivities. Note that in order to take advantage of this capability, no
proportionality relationship should be enforced for the dimensions of the admissible
subsets in different directions.
Looking now at the results in Table 7.1, it is clear that as the identification process
in SSO evolves, the reduction in the size of the identified subsets, expressed by δk ,
becomes larger and δk approaches the value for ρ (selected here as 0.2). Also the value
of Ĥ(Îk ) increases, which corresponds to reduction in the quality of identification.
All these patterns can be theoretically justified (Taflanidis & Beck 2007a) assuming
that as the SSO identification progresses, regions of the design space with smaller
sensitivity to the objective function are approached. The influence of the number of
design variables in the efficiency of√ SSO is also evident from the results in Table 7.1.
As mentioned earlier, the quantity nϕ δk corresponds to the mean length reduction per
design variable. For D2 that reduction is much smaller, even though the values for δk
are of the same level for the two design problems, which leads to more iterations until
a set with small sensitivity to the objective function is identified. The average total
length reduction is similar (look at the last row in Table 7.1 which is equivalent to the
expression (23)) but the number of required iterations for D2 is double. Since design
case D2 has double the number of design variables, this verifies the proportionality
dependence (argued in Section 3.5.3) between required number of iterations and the
number of design variables to establish the same average reduction per design variable.
Note also that the mean reduction in size per design variable (last row of Table 7.1)
is significant. This means that the size of the subset identified by SSO is considerably
smaller than the original design space. Since this identification requires only a small number of iterations, it verifies the efficiency of the SSO algorithm.

3.6 Convergence to optimal solution


The algorithm described above can adaptively identify a relatively small sub-region
for the optimal design variables ϕ∗ within the original design space. The efficiency of
convergence to ϕ∗ depends a lot on the sensitivity of the objective function around the
optimal point. If that sensitivity is large then SSO will ultimately converge to a “small’’
set Î, satisfying at the same time the accuracy requirements that make it highly likely
that ϕ∗ is in Î. The center point of this set, denoted herein as ϕSSO , gives the estimate
for the optimal design variables. In cases where this sensitivity is not large enough, such convergence will be problematic and will require an increased number of samples in order to satisfy the requirement for the quality of identification. Another important issue in such cases is that no measure of the quality of the identified solution, i.e. of how close ϕSSO is to ϕ∗, can be directly established through the SSO results. If the identification is performed multiple times and a sequence {ϕSSO,i} is obtained, the c.o.v. of {Êθ[h(ϕSSO,i, θ)]} could be considered a good candidate for characterizing this quality. This might not always be a good measure, though. For
example, if the choice for admissible subsets is inappropriate for the problem consid-
ered, it could be the case that consistent results are obtained for ϕSSO (small c.o.v.) that
are far from the optimal design choice ϕ∗ . Also, this approach involves higher compu-
tational cost because of the need to perform the identification multiple times. For such
cases, it would be more computationally efficient (instead of increasing N in SSO and
performing the identification multiple times) and more accurate (in terms of identify-
ing the true optimum), to combine SSO with some other optimization algorithm for
pinpointing ϕ∗ . A discussion of topics related to such algorithms is presented next.

4 Stochastic optimization algorithms


We go back to the original formulation of the objective function, i.e. (1). In principle, though, the techniques discussed here are also applicable when the loss function h(ϕ, θ) is replaced by the hs(ϕ, θ) used in the SSO setting (given by (8)).

4.1 Common random numbers


The efficiency of stochastic optimizations such as (5) can be enhanced by the reduction
of the absolute and/or relative importance of the estimation error eN (ϕ, ΩN ). The
absolute importance may be reduced by obtaining more accurate estimates of the
objective function, i.e. by reducing the variance of the estimates. This can be established
in various ways, for example by using importance sampling or by selecting a larger
sample size N in (4), but these typically involve extra computational effort. It is, thus,
more efficient to seek a reduction in the relative importance of the estimation error.
This means reducing the variance of the difference of the estimates Êθ,N [h(ϕ1 , Ω1N )]
and Êθ,N [h(ϕ2 , Ω2N )] that correspond to two different design choices ϕ1 and ϕ2 . This
variance can be decomposed as:

var(Êθ,N [h(ϕ1 , Ω1N )] − Êθ,N [h(ϕ2 , Ω2N )]) = var(Êθ,N [h(ϕ1 , Ω1N )])
+ var(Êθ,N [h(ϕ2 , Ω2N )]) − 2cov(Êθ,N [h(ϕ1 , Ω1N )], Êθ,N [h(ϕ2 , Ω2N )]) (26)

If Êθ,N[h(ϕ1, Ω1N)] and Êθ,N[h(ϕ2, Ω2N)] are evaluated independently, their covariance is zero; deliberately introducing dependence increases the covariance (i.e. increases their correlation) and thus decreases their variability (the variance on the left). This decrease in the variance improves the efficiency of the comparison of the two estimates;
it may be considered as creating a consistent estimation error. In a simulation-based
context this task is achieved by adopting common random number streams (CRN), i.e.
Ω1N = Ω2N , in the simulations generating the two different estimates. Figure 7.4a shows
the influence of such a selection: the curves that correspond to CRN are characterized
by consistent estimation error and are smoother. Also note that the absolute influence
of the estimation error for the case that corresponds to larger N (curve (iii)) is, as
expected, smaller.
Two important questions regarding the use of CRN are: will the variance be reduced
(efficiency)? Is this the best one can do (optimality)? The answer to both these questions
depends on the way the random sample θ (input) influences the sample value of the loss
function h(ϕ, θ) (output) in each simulation. Optimality can be proved only in special
cases but efficiency can be guaranteed under mild conditions (Glasserman & Yao
1992). Continuity and monotonicity of the output with respect to the random number
input are key issues for establishing efficiency. If h(ϕ, θ) is sufficiently smooth then the
two aforementioned requirements for CRN-based comparisons can be guaranteed,

Figure 7.4 Illustration of some points in CRN-based evaluation of the objective function for (a) general stochastic design problems (curves: (i) analytical; (ii) simulation with N = 4000; (iii) CRN simulation with N = 4000; (iv) CRN simulation with N = 1000) and (b) a reliability objective problem (curves: (i) analytical; (ii) CRN simulation using IF; (iii) CRN simulation using Pε).
as long as the design choices compared are not too far apart in the design variable
space. In such cases it is expected that use of CRN will at least be advantageous (if not
optimal). If the systems compared are significantly different, i.e. correspond to ϕ that
are not close, then CRN does not necessarily guarantee a consistent estimation error.
This might occur if the regions of Θ that contribute most to the integral of the expected
value for the two systems are drastically different and the CRN streams selected do
not efficiently represent both of these regions. This feature is also indicated in Figure
7.4a; the estimation error is not consistent along the whole range of ϕ for the CRN
curves (compare the objective function for curve (iv) for large and small values of ϕ)
but for local comparisons it appears to be consistent.
For ROP, CRN does not necessarily have a similar effect on the calculated output
if formulation (6) is adopted since the indicator function IF (ϕ, θ) is discontinuous.
Thus the aforementioned requirements for establishing efficiency of CRN cannot be
guaranteed. It is thus beneficial to use the formulation (7) for the probability of failure
in CRN-based optimizations. For design problems where no prediction error in the
model response is actually assumed, a small fictitious error should be chosen so that
the optimization problems with and without the model prediction error are practically
equivalent, i.e. correspond to the same optimum. Figure 7.4b illustrates this concept;
the influence on P(F|ϕ) of the two different loss functions in Figure 7.1b and the
advantage of selecting Pε (g̃(ϕ, θ)) is clearly demonstrated.
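The effect described above is easy to reproduce. The following Python sketch compares two nearby designs with independent streams and with CRN; the toy loss h is an illustrative stand-in (smooth and monotonic in the uncertain parameter, so the conditions for CRN efficiency hold), not one of the chapter's structural models.

```python
import numpy as np

def h(phi, theta):
    # toy loss: smooth in phi and theta (illustrative stand-in only)
    return (phi - 12.0) ** 2 * 0.01 + 0.05 * np.abs(theta) * phi / 10.0

def estimate(phi, rng, N=1000):
    theta = rng.standard_normal(N)      # samples of the uncertain parameter
    return h(phi, theta).mean()

phi1, phi2, runs = 14.0, 14.5, 200
diff_indep, diff_crn = [], []
for seed in range(runs):
    # independent streams: different seeds for the two estimates
    d1 = estimate(phi1, np.random.default_rng(seed))
    d2 = estimate(phi2, np.random.default_rng(seed + runs))
    diff_indep.append(d1 - d2)
    # CRN: the same stream Omega_N for both estimates
    c1 = estimate(phi1, np.random.default_rng(seed))
    c2 = estimate(phi2, np.random.default_rng(seed))
    diff_crn.append(c1 - c2)

print("std of difference, independent:", np.std(diff_indep))
print("std of difference, CRN:        ", np.std(diff_crn))
```

Running this typically shows a much smaller standard deviation for the CRN differences, i.e. the consistent estimation error seen in the CRN curves of Figure 7.4a.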

4.2 Exterior sampling approximation


Solution approaches to optimization problems using stochastic simulation are based
on either interior or exterior sampling techniques (Ruszczynski & Shapiro 2003). Inte-
rior sampling methods resample ΩN at each iteration of the optimization algorithm.
On the other hand, exterior sampling approximations (ESA) adopt the same stream
of random numbers throughout all iterations in the optimization process, thus trans-
forming problem (5) into a deterministic one, which can be solved by any appropriate
routine. These methods are also commonly referred to as sample average approxima-
tions (Royset & Polak 2004) and they are closely related to CRN. The CRN cases in
Figure 7.4 correspond actually to ESA. Several asymptotic results are available for ESA
and their rate of convergence under weak assumptions. For finite sample sizes, the optimal solution depends on the sample ΩN selected. Figure 7.4a clearly
demonstrates this issue (compare the optimum values in the CRN curves (iii) and (iv)).
Usually ESA is implemented by selecting N “large enough’’, typically much higher than
it would be for interior sampling methods, in order to get better quality estimates for
the objective function and thus more accurate solutions to the optimization problem.
See (Ruszczynski & Shapiro 2003) for more details and (Royset & Polak 2007) for a
computationally efficient iterative approach that adaptively implements higher accu-
racy estimates as the algorithm converges to the optimal solution. The quality of the
solution obtained through ESA is commonly assessed by solving the optimization prob-
lem multiple times, for different independent random sample streams. Even though the
computational cost for the ESA deterministic optimization is typically smaller than that
of the original stochastic search problem, the overall efficiency may be worse because
of the requirement to perform the optimization multiple times.
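For illustration, a minimal ESA sketch follows (reusing the toy loss stand-in of the previous sketch): fixing one stream ΩN turns (5) into a deterministic problem that an off-the-shelf routine can solve, and re-running with a different seed yields a (slightly) different optimum, which is the sample dependence noted above.

```python
import numpy as np
from scipy.optimize import minimize

def h(phi, theta):
    # same illustrative stand-in loss as in the CRN sketch
    return (phi - 12.0) ** 2 * 0.01 + 0.05 * np.abs(theta) * phi / 10.0

theta_fixed = np.random.default_rng(0).standard_normal(2000)  # fixed Omega_N

# sample-average objective: deterministic once the stream is fixed
objective = lambda phi: h(phi[0], theta_fixed).mean()
res = minimize(objective, x0=[15.0], method="Nelder-Mead")
print(res.x)   # optimum of the sample-average problem; depends on the seed
```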
4.3 Appropriate stochastic optimization algorithms


Both gradient-based and gradient-free algorithms can be used in conjunction with CRN
or ESA and can be appropriate for stochastic optimizations.
Gradient-based algorithms use derivative information to iterate in the direction of
steepest descent for the objective function. Only local designs are compared in each
iteration, which makes the implementation of CRN efficient and allows for applica-
tion of stochastic approximation which can significantly improve the computational
efficiency of stochastic search methods (Kushner & Yin 2003). The latter can be estab-
lished by applying an equivalent averaging across the iterations of the algorithm instead
of establishing higher accuracy estimates at each iteration. In simple examples, the loss
function h(ϕ, θ) (or even the limit state function g̃(ϕ, θ) in ROP) are such that the gra-
dient of the objective function with respect to ϕ can be obtained through a single
stochastic simulation analysis (Royset & Polak 2004, Spall 2003). In many structural
design problems though, the models used are generally complex, and it is difficult,
or impractical, to develop an analytical relationship between the design variables and
the objective function. Finite difference numerical differentiation is often the only
possibility for obtaining information about the gradient vector but this involves com-
putational cost which increases linearly with the dimension of the design parameters.
Simultaneous-perturbation stochastic approximation (SPSA) (Kleinmann et al. 1999,
Spall 2003) is an efficient alternative search method. It is based on the observation
that one properly chosen simultaneous random perturbation in all components of ϕ
provides as much information for optimization purposes in the long run as a full set
of one at a time changes of each component. Thus, it uses only two evaluations of
the objective function, in a direction randomly chosen at each iteration, to form an
approximation to the gradient vector.
Gradient-free optimization methods, such as evolutionary algorithms, direct search
and objective function approximation methods are based on comparisons of design
choices that are distributed in large regions of the design space. They require informa-
tion only for the objective function which makes them highly appropriate for stochastic
optimizations (Beck et al. 1999, Lagaros et al. 2002) because they avoid the difficulty
of obtaining derivative information. They involve, though, significant computational
effort if the dimension of the design variables is high. Use of CRN in these algorithms
may only improve the efficiency of the comparisons in special cases; for example, if
the size (volume) of the design space is “relatively small’’ and thus the design variables
being compared are always close to each other.
More detailed discussion of algorithms for stochastic optimization can be found
in (Spall 2003; Ruszczynski & Shapiro 2003). Only SPSA is briefly summarized
here.

4.3.1 Simultaneous-perturbation stochastic approximation using common random numbers
The implementation of SPSA using CRN takes the iterative form:

ϕk+1 = ϕk − αk gk(ϕk, ΩkN),  ϕk+1 ∈ Φ   (27)


where ϕ1 is the chosen point to initiate the algorithm and the jth component for
the CRN simultaneous perturbation approximation to the gradient vector in the kth
iteration, gk (ϕ, ΩkN ), is given by:

gk,j = [Êθ,N(ϕk + ck∆k, ΩkN) − Êθ,N(ϕk − ck∆k, ΩkN)]/(2ck∆k,j)   (28)

where ∆k ∈ Rnϕ is a vector of mutually independent random variables that defines the
random direction of simultaneous perturbation for ϕk and that satisfies the statistical
properties given in (Spall 2003). A symmetric Bernoulli ±1 distribution is typically cho-
sen for the components of ∆k . The selection for the sequences {ck } and {αk } is discussed
in detail in (Kleinmann et al. 1999). A choice that guarantees asymptotic convergence to
ϕ∗ is αk = α/(k + w)β and ck = c1 /kζ , where 4ζ − β > 0, 2β − 2ζ > 1, with w, ζ > 0 and
0 < β < 1. This selection leads to a rate of convergence that asymptotically approaches
k−β/2 when CRN is used (Kleinmann et al. 1999). The asymptotically optimal choice
for β is, thus, 1. In applications where efficiency with a small number of iterations is sought, the use of smaller values for β is suggested in (Spall 2003). For complex structural design optimizations, where the computational cost for each iteration of the algorithm is high, the latter suggestion should be adopted. Implementation of CRN contributes to reducing the variance of the gradient approximation in (28) and thus the variability in estimating ϕk; for example, the rate of convergence is k−β/3 when CRN is not used.
Regarding the rest of the parameters for the sequences {ck } and {αk }: w is typically
set to 10% of the number of iterations selected for the algorithm and the initial step
c1 is chosen “close’’ to the standard deviation of the measurement error eN (ΩN , ϕ1 ).
This last selection prevents the finite difference gradient from getting excessively large
in magnitude but might be inefficient if the standard deviation of the error changes
dramatically with ϕ. The value of α can be determined based on the estimate of g1 and
the desired step size for the first iteration. Some initial trials are generally needed in
order to make a good selection for α, especially when little prior insight is available for
the sensitivity of the objective function to each of the design variables. Typically SPSA
is implemented adopting interior sampling techniques. Convergence of the iterative process is judged based on the value ‖ϕk+1 − ϕk‖ in the last few steps, for an appropriately selected vector norm. Note that since the progress of the algorithm at each step depends on the sample ΩkN and the randomly chosen perturbation direction, convergence cannot be judged based on the value of |Êθ,N[h(ϕk+1, Ωk+1N)] − Êθ,N[h(ϕk, ΩkN)]| (because the two estimates are evaluated using different streams of random samples and thus include different estimation errors) or on the value of ‖ϕk+1 − ϕk‖ at the last step only
(because this value depends on the random search direction chosen). This notion of
convergence, though, depends on the selection of the sequence {αk }; for example, selec-
tion of small step sizes might in some cases give a false impression that convergence has
been established, even though this is not true. Such problems can be avoided by restart-
ing the SPSA algorithm at the converged optimal solution to monitor the behavior for
some small number of iterations. Blocking rules can also be applied in order to avoid
potential divergence of the algorithm, especially in the first iterations (Spall 2003).
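A minimal Python sketch of the CRN-based SPSA iteration (27)–(28) follows. The estimate(phi, seed) callable is a user-supplied placeholder returning Êθ,N[h(ϕ, ΩN)] for the stream identified by seed (so the same stream serves both perturbed designs within an iteration); the exponents β = 0.7, ζ = 0.19 are one choice satisfying the conditions stated above, and blocking/restarting rules are omitted for brevity.

```python
import numpy as np

def spsa_crn(estimate, phi0, lo, hi, n_iter=50, a=0.5, c1=0.1,
             beta=0.7, zeta=0.19, rng=None):
    """SPSA iteration (27) with the CRN gradient approximation (28).
    `estimate(phi, seed)` must return E_hat[h(phi, Omega_N)] computed with
    the random-number stream identified by `seed` (a placeholder)."""
    rng = np.random.default_rng() if rng is None else rng
    w = 0.1 * n_iter                      # typical choice: 10% of n_iter
    phi = np.asarray(phi0, dtype=float)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    for k in range(1, n_iter + 1):
        ak = a / (k + w) ** beta          # gain sequences {a_k}, {c_k}
        ck = c1 / k ** zeta
        delta = rng.choice([-1.0, 1.0], size=phi.shape)  # Bernoulli +/-1
        seed = int(rng.integers(1 << 30))                # stream Omega_N^k
        g = (estimate(phi + ck * delta, seed)
             - estimate(phi - ck * delta, seed)) / (2.0 * ck * delta)
        phi = np.clip(phi - ak * g, lo, hi)              # stay inside Phi
    return phi
```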
5 Framework for stochastic optimization using stochastic


simulation

5.1 Outline of the framework


As already mentioned, a two-stage framework for stochastic system design may be
established by combining the algorithms presented in the previous two sections. In
the first stage, SSO is implemented in order to efficiently explore the sensitivity of
the objective function and adaptively identify a subset ISSO ⊂ Φ containing the opti-
mal design variables. In the second stage, any appropriate stochastic optimization
algorithm is implemented in order to pinpoint the optimal solution within ISSO . The
specific algorithm selected for the second stage determines the level of quality that
should be established in the SSO identification. If a method is chosen that is restricted
to search only within ISSO (typically characteristic of gradient-free methods), then bet-
ter quality is needed. The iterations of SSO should stop at a larger size set, and establish
greater plausibility that the identified set includes the optimal design point. If, on the
other hand, a method is selected that allows for convergence to points outside the
identified set, lower quality may be acceptable in the identification. Our experience
indicates that a value around 0.75–0.80 for Ĥ(Îk ) with a c.o.v. of 4% for that estimate,
indicates high certainty that Îk includes the optimal solution. Of course, this depends
on the characteristics of the problem too and particularly on the selection of the shape
of admissible subsets, but this guideline has proved to be robust in the applications we
have considered so far.
The efficiency of the stochastic optimization considered in the second stage is influ-
enced by (a) the size of the design space Φ defined by its volume VΦ , and, depending
on the characteristics of the algorithm chosen, by (b) the initial point ϕ1 at which the
algorithm is initiated, and (c) the knowledge about the local behavior of the objec-
tive function in Φ. For example, topic (b) is important for gradient-based algorithms
whereas topic (c) is relevant for iterative algorithms that require user insight for select-
ing appropriate step sizes (like SPSA). The SSO stage gives valuable insight for all these
topics and can, therefore, contribute to increasing the efficiency of convergence to the
optimal solution ϕ∗ . The set ISSO has smaller size (volume) than the original design
space Φ. Also, it is established that the sensitivity of the objective function with respect
to all components of ϕ is small. This allows for efficient normalization of the design
space (in selecting step sizes) and choice of approximating functions. In Taflanidis &
Beck (2007a), 60% reduction of the overall computational cost for convergence to
the optimal solution was reported when comparing the combined framework dis-
cussed here to SPSA optimization (without the SSO stage). In that study, the following
guidelines were suggested for tuning of the SPSA parameters using information from
SSO: ϕ1 should be selected as the center of the set ISSO and parameter α chosen so
that the initial step for each component of ϕ is smaller than a certain fraction (chosen
as 1/10) of the respective size of ISSO , based on the estimate for g1 from (28). This
estimate should be averaged over ng (chosen as 6) evaluations because of its impor-
tance in the efficiency of the algorithm. Also, no movement in any direction should be
allowed that is greater than a quarter of the size of the respective dimension of ISSO
(blocking rule).
The information from the SSO stage can also be exploited in order to reduce the
variance of the estimate Eθ [h(ϕ, ΩN )] by using importance sampling. This choice is
discussed next.

5.2 Importance sampling


Importance sampling (IS) is an efficient variance reduction technique. It is based on
choosing an importance sampling density pis (θ|ϕ) to generate samples in regions of Θ
that contribute more to the integral of Eθ [h(ϕ, θ)]. The estimate for Eθ [h(ϕ, θ)] is given
in this case by:

Êθ,N[h(ϕ, ΩN)] = (1/N) Σ(i=1 to N) h(ϕ, θi) R(θi|ϕ)   (29)

where the samples θi are simulated according to pis (θ|ϕ) and

R(θi|ϕ) = p(θi|ϕ)/pis(θi|ϕ)   (30)

is the importance sampling quotient. The main problem is how to choose a good IS
density. The optimal density is simply the PDF that is proportional to the absolute
value of the integrand of (1), |h(ϕ, θ)|p(θ|ϕ) (Robert & Casella 2004), leading to the selection:
pis,opt(θ|ϕ) = |h(ϕ, θ)| p(θ|ϕ)/Eθ[|h(ϕ, θ)|]   (31)

Samples for θ that are distributed proportional to hs (ϕ, θ)p(θ|ϕ) when ϕ ∈ ISSO are
available from the last iteration of the SSO stage. They simply correspond to the θ
component of the available sample pairs [ϕ, θ]. Re-sampling can be performed within
these samples, using weighting factors |h(ϕi , θi )|/hs (ϕi , θi ) for each sample, in order
to approximately simulate samples proportional to |h(ϕ, θ)|p(θ|ϕ) when ϕ ∈ ISSO . The
efficiency of this re-sampling procedure depends on how different hs (ϕi , θi ) and h(ϕi , θi )
are. In most cases the difference will not be significant and good efficiency can be
established. Alternatively, hs (ϕi , θi ) can be used as loss function in the second stage of
the optimization. In this case there is no need to modify the samples from SSO. This
choice would be inappropriate if s was negative because it makes the loss function
less sensitive to the uncertain parameters θ, thus possibly reducing the efficiency of IS.
In such design problems it is better to use the original loss function h(ϕ, θ).
The samples simulated proportional to |h(ϕ, θ)|p(θ|ϕ) can be finally used to create
an importance sampling density pis (θ|ϕ) to use in (30), since the set ISSO is small.
Various strategies have been discussed in the literature for such an adaptive importance
sampling (see for example (Au & Beck 1999)).
For problems with high-dimensional vector θ, the efficiency of IS can be guaranteed
only under strict conditions (Au & Beck (2003a)). An alternative approach can be
applied for such cases: the uncertain parameter vector is partitioned into two sets, Θ1
and Θ2 . Θ1 is comprised of parameters that individually do not significantly influence
the loss function (they have significant influence only when viewed as a group), for
example, the white noise sequence modeling the stochastic excitation in dynamic reli-
ability problems, while Θ2 is comprised of parameters that have individually a strong
influence on h(ϕ, θ). The latter set typically corresponds to a low-dimensional vector. IS
is applied for the elements of Θ2 only. This approach is similar to the one discussed in
(Pradlwarter et al. 2007) and circumvents the problems that may appear when applying
IS to design problems involving a large number of uncertain parameters.
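A brief Python sketch of the estimator (29)–(30) follows, using the partitioning idea just described: IS is applied only to a low-dimensional influential parameter (a magnitude-like scalar m here), while a high-dimensional white-noise block is sampled from its own PDF. The densities and the toy loss are illustrative stand-ins, not the chapter's models.

```python
import numpy as np
from scipy import stats

p_m = stats.truncnorm(-2, 2, loc=6.0, scale=0.5)   # nominal PDF p(m), stand-in
q_m = stats.truncnorm(-2, 2, loc=6.8, scale=0.4)   # IS density p_is(m), stand-in

def loss(phi, m, zw):
    # toy loss increasing sharply with m; zw mimics a white-noise sequence
    return phi * np.exp(2.0 * (m - 6.0)) * (1.0 + 0.1 * zw.mean(axis=-1))

rng = np.random.default_rng(1)
N, phi = 5000, 0.5
m = q_m.rvs(size=N, random_state=rng)              # samples from p_is
zw = rng.standard_normal((N, 100))                 # sampled from its own PDF
R = p_m.pdf(m) / q_m.pdf(m)                        # IS quotient (30)
est = np.mean(loss(phi, m, zw) * R)                # estimator (29)
print(est)
```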

6 Illustrative example: optimization of the life-cycle


cost of an office building
The retrofitting of a symmetric, four-story, office building with linear viscous dampers
is considered. The building is a non-ductile reinforced concrete, perimeter moment-
frame structure. The plan dimensions of the building are 45 m × 45 m and the height of each
story is 3.9 m. The perimeter frames in the two building directions are separated from
each other, which allows structural analysis in each direction to be done separately.
Because of the symmetry of the building, analysis of only one of the directions is
necessary.

6.1 Probabilistic structural model


A class of shear-frame models (illustrated in Figure 7.5) with hysteretic behavior and deteriorating stiffness and strength is assumed (using a distributed element model assumption for the deteriorating part (Iwan & Cifuentes 1986)). The lumped mass of the top story is 935 ton while it is 1215 ton for the bottom three. The initial inter-story stiffnesses ki of all the stories are parameterized by ki = k̂i θk,i, i = 1, . . . , 4, where [k̂i] = [700.0, 616.1, 463.6, 281.8] MN/m are the most probable values and θk,i are nondimensional uncertain parameters, assumed to be correlated Gaussian variables with mean value one and covariance matrix with elements Σij = (0.1)² exp[−(i − j)²/2²]. For each story, the post-yield stiffness coefficient αi, stiffness deterioration coefficient βi, over-strength factor γi, yield displacement δy,i and

Figure 7.5 Structural model assumed in the study: shear-frame with viscous damper retrofitting scheme and illustration of the deteriorating stiffness and strength (restoring force) characteristics of the i-th story.


displacement coefficient ηi have mean values 0.1, 0.2, 0.3, 0.22% of story height and 2, respectively (see Figure 7.5 for the definition of some of these parameters). All these parameters are treated as independent Gaussian variables with c.o.v. 10%. The structure is assumed to be modally damped. The damping ratios for all modes are similarly treated as Gaussian variables with mean values 5% and coefficients of variation 10%.

6.2 Probabilistic site seismic hazard and ground motion model


In order to estimate the earthquake losses, probability models are established for the
seismic hazard at the structural site and for the ground motion, as in (Au & Beck
2003b). Seismic events are assumed to occur following a Poisson distribution and so
are independent of previous occurrences. The uncertainty in moment magnitude M
is modeled by the Gutenberg-Richter relationship (Kramer 2003) truncated on the
interval [Mmin , Mmax ] = [5.5, 8], leading to a PDF:

p(M) = b exp(−b · M)/[exp(−b · Mmin) − exp(−b · Mmax)]   (32)

and expected number of events per year

v = exp (a − bMmin ) − exp (a − bMmax ) (33)

The regional seismicity factors are selected as b = 0.9 loge (10) and a = 4.35 loge (10),
leading to v = 0.25. For the uncertainty in the event location, the epicentral distance, r,
for the earthquake events is assumed to follow a lognormal distribution with median
20 km and logarithmic standard deviation 0.4. Figure 7.7a illustrates the PDFs for M
and r.
For modeling the ground motion, the methodology described in Boore (2003) is
adopted (also characterized as the “stochastic method’’). This methodology, which
was initially developed for generating synthetic ground motions, is reinterpreted here
to form a probabilistic model for the earthquake excitation. According to this model,
the time-history (output) for a specific event magnitude, M, and source distance, r,
is obtained by modulating a white-noise sequence Zw (input) through the following
steps: (i) the sequence Zw is multiplied by an envelope function e(t; M, r); (ii) this mod-
ified sequence is then transformed to the frequency domain; (iii) it is normalized by the
square root of the mean square of the amplitude spectrum; (iv) the normalized sequence
is multiplied by a radiation spectrum A(f ; M, r); and finally (v) it is transformed back
to the time domain to yield the desired acceleration time history. The characteristics
for A(f ; M, r) and e(t; M, r) are presented in Appendix B. Figure 7.6a shows functions
A(f ; M, r) and e(t; M, r) for different values of M and r = 15 km. It can be seen that
as the moment magnitude increases, the duration of the envelope function also
increases and the spectral amplitude becomes larger at all frequencies with a shift of
dominant frequency content towards the lower-frequency regime. Figure 7.6b shows
a sample ground motion for M = 6.7 and r = 15 km.
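A minimal Python sketch of steps (i)–(v) follows. The functions envelope and radiation_spectrum stand in for e(t; M, r) and A(f; M, r) of Appendix B; the crude closed forms used here are placeholders so that the pipeline runs, not the actual Boore/Atkinson–Silva expressions.

```python
import numpy as np

def envelope(t, M, r):
    # placeholder for e(t; M, r): duration grows with M
    return t * np.exp(-t / (1.0 + 0.5 * (M - 5.5)))

def radiation_spectrum(f, M, r):
    # placeholder for A(f; M, r): corner frequency decreases with M
    fc = 10.0 ** (2.0 - 0.3 * M)
    return f ** 2 / (1.0 + (f / fc) ** 2)

def ground_motion(M, r, n=2048, dt=0.01, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(n) * dt
    Zw = rng.standard_normal(n)               # white-noise sequence (input)
    x = envelope(t, M, r) * Zw                # (i) apply envelope
    X = np.fft.rfft(x)                        # (ii) to frequency domain
    X /= np.sqrt(np.mean(np.abs(X) ** 2))     # (iii) normalize amplitude
    f = np.fft.rfftfreq(n, dt)
    X *= radiation_spectrum(f, M, r)          # (iv) impose A(f; M, r)
    return t, np.fft.irfft(X, n)              # (v) back to time domain

t, acc = ground_motion(M=6.7, r=15.0)         # cf. Figure 7.6b
```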
Figure 7.6 (a) Radiation spectrum A(f; M, r) and envelope function e(t; M, r) for various M and r = 15 km and (b) sample ground motion for M = 6.7, r = 15 km.

6.3 Expected life-cycle cost


The objective function in the stochastic design problem is the expected life-cycle cost
of the structure for a life-time of tlife = 60 years after the retrofit. This cost, C(ϕ), as a
function of the design variables is given by (Porter et al. 2004):

 
C(ϕ) = Cd(ϕ) + ∫Θ L(ϕ, θ) [(1 − e−rd tlife)/(rd tlife)] vtlife p(θ)dθ   (34)

where Cd (ϕ) is the cost of the viscous dampers, rd equals the discount rate (taken here
as 2.5%) and L(ϕ, θ) is the expected cost given the earthquake event and the system
specified by the pair [ϕ, θ]. The uncertain parameter vector in this design problem
consists of the structural model parameters, θs , the seismological parameter θg = [M, r]
and the white noise sequence, Zw , so θ = [θs , θg , Zw ].
The term in the brackets in (34) is the present worth factor, which is used in
order to calculate the present value of the expected future earthquake losses (Porter
et al. (2004)). The earthquake damage and loss are calculated assuming that after
each event the building is quickly restored to its undamaged state. The cost of the
dampers at each floor is estimated based on their maximum force capacity Fud,i as
Cd,i (ϕ) = $80(Fud,i )0.8 . This simplified relationship comes from fitting a curve to the
cost of some commercially-available dampers. The viscosity of the dampers is selected
assuming that the maximum force capacity is established at a velocity of 0.2 m/sec.
The earthquake losses are estimated adopting the methodology described in (Goulet
et al. 2007, Porter et al. 2004). The components of the structure are grouped into nas
damageable assemblies. For each assembly j, nd,j different damage states are desig-
nated and a fragility function is established for each damage state dk,j . These functions
quantify the probability that the component has reached or exceeded that damage state
conditional on some engineering demand parameter (EDPj ). Damage state 0 always
corresponds to an undamaged condition. Each fragility function is a conditional cumu-
lative log-normal distribution with median xm and logarithmic standard deviation bm, as
Table 7.2 Characteristics of fragility functions and expected repair costs for each story.

  Assembly / dk,j              xm               bm     nel        $/nel
  Structural components:
    1 (light)                  1.4δy,i          0.2    22         2000
    2 (moderate)               (δy,i + δp,i)/2  0.35   22         9625
    3 (significant)            δp,i             0.4    22         18200
    4 (severe)                 δu,i             0.4    22         21600
    5 (collapse)               3%               0.5    22         34300
  Partitions:
    1 (patch)                  0.33%            0.2    500        180
    2 (replace)                0.7%             0.25   500        800
  Acoustical ceiling:
    1 (damage)                 1g               0.7    10³ m²     25
  Contents:
    1 (damage)                 0.6g             0.3    100        3000
  Paint:
    1 (damage)                 0.33%            0.2    3500 m²    25

presented in Table 7.2. Indirect losses because of (a) fatalities and (b) building down-
time, i.e. loss of revenue while the building is being repaired, are ignored in this study.
The expected losses in the event of the earthquake are given by:
L(ϕ, θ) = Σ(j=1 to nas) Σ(k=1 to nd,j) P[dk,j|ϕ, θ] Ck,j   (35)

where P[dk,j |ϕ, θ] is the probability that the assembly j will be in its kth damage state
and Ck,j is the corresponding expected repair cost. Table 7.2 summarizes the charac-
teristics of the fragility functions (xm, bm) and the expected cost $/nel. The nel in this
table corresponds to the number of elements that belong to each damageable assembly
in each direction of each floor. For the structural contents and the acoustical ceiling,
the maximum story absolute acceleration is used as EDP and for all other assemblies
the maximum inter-story drift ratio is used. For estimating the total wall area requir-
ing a fresh coat of paint, the simplified formula developed in (Goulet et al. 2007)
is adopted. According to this formula a fraction of the undamaged wall area is also
repainted, considering the desire of the owner to achieve a uniform appearance. This
fraction depends on the extent of the damaged area and is chosen here based on a
lognormal distribution with median 0.25 and logarithmic standard deviation 0.5.
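To make the loss mechanics concrete, a short Python sketch of (35) for a single damageable assembly follows, using the "partitions" entries of Table 7.2. P[dk,j|ϕ, θ] is obtained by differencing the cumulative lognormal fragilities at the EDP produced by the structural simulation; the scalar edp below is such a demand value, assumed given.

```python
import numpy as np
from scipy.stats import norm

# "partitions" row of Table 7.2 (EDP: inter-story drift ratio)
xm = np.array([0.33, 0.70]) / 100.0     # medians of the fragility curves
bm = np.array([0.20, 0.25])             # logarithmic standard deviations
cost = 500 * np.array([180.0, 800.0])   # n_el * $/n_el per damage state

def expected_loss_partitions(edp):
    # P[state >= k | EDP]: conditional cumulative lognormal fragilities
    p_exceed = norm.cdf(np.log(edp / xm) / bm)
    # P[exactly state k] = P[>= k] - P[>= k+1]  (state 0 = undamaged)
    p_state = p_exceed - np.append(p_exceed[1:], 0.0)
    return np.sum(p_state * cost)

print(expected_loss_partitions(0.005))  # expected repair cost at 0.5% drift
```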

6.4 Optimal damper design


The maximum force capacities of the dampers in each floor are the four design variables
ϕ = [Fud,i : i = 1, . . . , 4]. The initial design space for each variable is set to [0, 13000] kN
for Fud,1 and Fud,2 and [0, 8000] kN for Fud,3 and Fud,4 . Results for a sample run of the
optimization algorithm are presented in Table 7.3. For the SSO stage the sets I3 and
ISSO are reported here only. Also lIi denotes the length of set I in the direction of the
ith design variable.

6.4.1 Stochastic subset optimization


The objective function (34) can be written as:
  
C(ϕ) = Eθ[hs(ϕ, θ)] = Cd(ϕ) + ∫Θ L(ϕ, θ) [(1 − e−rd tlife)/(rd tlife)] vtlife p(θ)dθ   (36)
Table 7.3 Results from the optimization (sample run).

  ϕ       I3 (kN)          ISSO (kN)        ϕSSO (kN)   ϕ∗ (kN)   lISSO,i/lΦ,i
  Fud,1   [3610, 7657]     [5857, 6980]     6418        6420      0.094
  Fud,2   [3557, 7756]     [4539, 6045]     5292        5195      0.126
  Fud,3   [4034, 7095]     [4085, 5517]     4801        4481      0.179
  Fud,4   [1566, 4751]     [1841, 2959]     2400        2060      0.139

  Êθ[h(ϕ∗, θ)] = 0.430 × 10⁶ $;  Êθ[h(ϕSSO, θ)] = 0.438 × 10⁶ $;  (VISSO/VΦ)^(1/nϕ) = 0.131

Thus, the loss function used in the SSO stage of the optimization is:

hs(ϕ, θ) = Cd(ϕ) + L(ϕ, θ) [(1 − e−rd tlife)/(rd tlife)] vtlife   (37)

The parameter selections for SSO are: ρ = 0.2, N = 2000, s = 0. The shape for the
sets I is selected as a hyper-rectangle and the adaptive identification is stopped when
Ĥ(Îk ) becomes larger than 0.80. The optimization in (24) is performed using a genetic
algorithm. In total, 6 iterations of the SSO algorithm are performed. After 3 iterations the loss function hs(ϕ, θ) is reformulated by choosing s = $200,000. Algorithm
1 (Appendix A) is used for sampling in the 1st and 4th iterations and Algorithm 2 in
all others. For the MCMC simulation (Algorithm 2) a global proposal PDF equal to
p(Zw ) is chosen for the white noise sequence, to avoid the problems with the high-
dimensionality of the uncertain parameter vector, and local random walk proposal
PDFs for all other parameters. A uniform PDF centered at the current sample, with
wide spread (covering 0.7 of the current subset Îk−1 at each iteration k), is chosen for
the proposal PDF for ϕ. This is a proposal PDF that is easy to sample from and still
approximates the form of π(ϕ), which is expected to look like a convex function with
small sensitivity as the identification converges to a set near the optimal design vari-
ables. A global uniform proposal PDF could also be chosen for ϕ, as regions with small
sensitivity are approached. Such a global proposal PDF avoids rejecting samples due
to their ϕ component, in the candidate sampling step, falling outside the given subset
Îk−1 at iteration k, which can occur with a local uniform PDF and which increases
the correlation in the generated Markov Chain. For the rest of the uncertain param-
eters, θs and θg , independent conditional Gaussian distributions are chosen, centered
at the current sample with standard deviation equal to ½ the standard deviation of
the samples retained from the previous step. Ultimately, the efficiency of the MCMC
simulation (Algorithm 2) depends strongly on the quality of the selected proposal PDFs. In cases where such PDFs cannot be easily chosen, MC simulation can be used instead.
The results in Table 7.3 show that SSO efficiently identifies a subset for the optimal
design variables and leads to a significant reduction of the size (volume) of the search
space (look at the last two columns of Table 7.3). The converged optimal solution in
the second stage, ϕ∗ , is close to the center ϕSSO of the set that is identified by SSO; also
the objective function at that center point Eθ [h(ϕSSO , θ)] is not significantly different
from the optimal value Eθ [h(ϕ∗ , θ)]. Thus, selection of ϕSSO as the design choice leads
to a sub-optimal design that is, though, close to the true optimum in terms of both
the design vector selection and its corresponding performance. This agrees with the
findings of all of our other studies and indicates that the sole use of SSO might be adequate for many problems (see (Taflanidis & Beck 2007a) for a more thorough comparison and discussion).

6.4.2 Simultaneous-perturbation stochastic approximation with common random numbers
For the second stage of the optimization framework the formulation of the objective
function in (34) is adopted. Stochastic simulation is used in order to estimate only the
second part, since the cost of the dampers can be deterministically evaluated, so:

h(ϕ, θ) = L(ϕ, θ) [(1 − e−rd tlife)/(rd tlife)] vtlife   (38)

Following the discussion in Section 5.2, importance sampling densities are estab-
lished for the structural model parameters and the seismological parameters, M and r,
but not for the high-dimensional white-noise sequence. Figure 7.7b illustrates this con-
cept for M and r. A truncated lognormal distribution is selected for the IS PDF for M
(with median 7 and logarithmic standard deviation 0.1) and a lognormal for r (with
median 15 and logarithmic standard deviation 0.4). Note that the IS PDF for M is
significantly different from its initial distribution; since M is expected to have a strong
influence on h(θ, ϕ), the difference between the distributions is expected to have a big
effect on the accuracy of the estimation. The respective difference between the PDFs for
r is much smaller. For the structural model parameters this difference is negligible, and
the IS PDFs were approximated to be Normal distributions, like p(θs ), with a slightly
shifted mean value but the same variance. The c.o.v. for Êθ,N [h(ϕ, ΩN )] for a sample

Figure 7.7 Details of the importance sampling density formulation for M and r: (a) initial PDFs p(M), p(r) and samples and (b) samples from the SSO stage and IS PDFs pis(M), pis(r).
size N = 1000 is 16% without using IS and 4% when IS is used. This c.o.v. is of the same level for all values of ϕ ∈ ISSO, since the ISSO set is relatively small. Note that the c.o.v. varies according to 1/√N (Robert & Casella 2004); thus, the sample size for direct estimation (i.e. without use of IS) of the objective function with the same level of accuracy as in the case when IS is applied would be approximately 16 times larger. This illustrates the efficiency increase that can be established by the IS scheme discussed earlier. The converged optimal solution in the second stage is included in Table 7.3. Forty iterations were needed in the second stage of the framework using a sample size of N = 1000. This computational cost can be characterized as small. Convergence is judged by looking at the norm ‖ϕk+1 − ϕk‖∞ for each of the 5 last iterations. If that norm is less than 0.2% (normalized by the dimension of the initial design space), then we assume that convergence to the optimal solution has been established.

6.4.3 Efficiency of the two-stage optimization framework


To evaluate the efficiency of the optimization framework, the same optimization was
performed without the use of SSO in the first stage. In this case the starting point for
SPSA was selected as the center of the design space Φ and α was chosen so that in the first
iteration the movement for any design variable is not larger than 5% of the respective
dimension of the design space. In this case IS is not implemented; since search inside
the whole design space Φ is considered, it is unclear how samples of θ can be obtained
to form the IS densities and separately establishing an IS density for each design choice
ϕ is too computationally expensive. The larger variability of the estimates caused the gradient-based algorithm to diverge in the first couple of iterations. Thus a larger value for the sample size, N = 3000, was used. The required number of iterations for convergence of the algorithm in a sample run (and the total number of system simulations) was 102 (612,000). When the combined framework was used, the corresponding numbers were 40 (80,000). This comparison illustrates the efficiency of the proposed two-stage optimization framework. The better starting point for the algorithm and the smaller search space (which allows for better normalization) that the SSO subset identification provides are the features that contribute to this improvement in efficiency.

6.5 Efficiency of seismic protection system


The expected lifetime cost for the structure in each direction without the dampers is esti-
mated as $1.1 million. The expected lifetime cost of the retrofitted system is $430,000,
so the addition of the viscous dampers improves significantly the performance of the
structural system. Of this amount, $267,000 corresponds to the cost of installing the viscous dampers and $163,000 to the present worth of the expected repair
cost for damage from future earthquakes. Figure 7.8 shows the decomposition of the
expected lifetime repair cost into its different components for both the initial structure
and the retrofitted structure. Only minor changes occur in the distribution of the total
cost over its different components. Note that the relative importance of the repair
cost for acceleration-sensitive assemblies increases by the addition of the dampers, as
expected, but still the importance of this cost remains small, practically negligible.
Figure 7.8 Breakdown of expected lifetime repair costs for (a) the initial structure (structural 36%, partitions 32%, paint 29%, contents 2%, ceiling 1%) and (b) the retrofitted structure (structural 33%, paint 33%, partitions 30%, contents 3%, ceiling 1%).

7 Conclusions
The robust-to-uncertainties design of engineering systems is of great importance. In
this study, we discussed stochastic optimization problems that entail as objective func-
tion the expected value of a general system performance measure. We focused on
problems that involve complex models and high-dimensional uncertain parameter vec-
tors. Stochastic simulation was considered for evaluation of the system performance.
This simulation-based approach allows explicit consideration of (a) nonlinearities in
the models assumed for the system and its future excitation and (b) complex failure
modes. The only constraint in the complexity of the system description stems from
the accessible computational power, since a large number of simulations of the system
response is needed. The constant advances in computer technology (hardware and
software related) are continuously reducing the significance of this constraint.
A two-stage framework for the associated optimization problem was discussed. The
first stage implements an innovative approach, called Stochastic Subset Optimization (SSO), for efficiently exploring the sensitivity of the objective function to the design
variables and adaptively identifying sub-regions within the original design space that
(a) have high likelihood of including the optimal design variables and (b) are char-
acterized by small sensitivity with respect to each design variable. SSO is combined
in the second stage with some other stochastic optimization algorithms for overall
enhanced efficiency and accuracy of the optimization process. Simultaneous pertur-
bation stochastic approximation was considered for this purpose in this study and
suggestions for enhanced efficiency of the overall framework were given. With respect
to SSO, guidelines for establishing good quality in the identification and stopping cri-
teria for the iterative process were suggested. Topics related to the use of common
random numbers for the second stage of the optimization framework were extensively
discussed. Implementation of importance sampling for this stage was also considered
by using the information available in the last iteration of SSO. In all discussions, special
attention was given to optimization problems that involve the reliability of a system
as the objective function.
An example was presented that shows the efficiency of the proposed methodology
and illustrates a systematic way to design structural systems under stochastic earth-
quake excitation considering all important probabilistic information. The example
considered the retrofitting of a four-story non-ductile reinforced concrete office build-
ing with viscous dampers. The minimization of the expected lifetime cost was adopted
as the design objective. A realistic probabilistic model was presented for represent-
ing future ground motions. An efficient and accurate methodology for estimating the
damages caused by earthquake events was adopted. The structural performance was
evaluated by nonlinear simulation that incorporates all important characteristics of
the behavior of the structural system and all available information about the struc-
tural model and the expected future earthquakes. In this example, SSO was shown to
efficiently identify a set that contains the optimal design variables and to improve the
efficiency of SPSA when combined in the context of the suggested optimization frame-
work. The center of the set identified by SSO was found to be close to the true optimal
values in terms of both the design variables and the corresponding performance. Thus,
use of SSO solely would lead to a sub-optimal design that is close, though, to the
optimal one. For better resolution and accuracy the combined two-stage framework
should be preferred.

Appendix A
Two algorithms that can be used for simulating samples from π(ϕ, θ) are discussed
here.
Algorithm 1: Accept-reject method, which can be considered a direct Monte Carlo
approach. First, choose an appropriate proposal PDF f (ϕ, θ) and then generate a
sequence of independent samples as follows:

(1) Randomly simulate candidate sample [ϕc , θc ] from f (ϕ, θ) and u from uniform
(0,1).
(2) Accept [ϕ, θ] = [ϕc, θc] if

hs(ϕc, θc) p(ϕc, θc)/(M f(ϕc, θc)) > u,  where  M > max_{ϕ,θ} hs(ϕ, θ) p(ϕ, θ)/f(ϕ, θ)   (39)

(3) Return to (1) otherwise.
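A direct Python transcription of Algorithm 1 follows as a sketch. The functions h_s, p, the proposal density f_pdf with its sampler f_sample, and the bounding constant M are user-supplied placeholders.

```python
import numpy as np

def accept_reject(h_s, p, f_pdf, f_sample, M, n, rng=None):
    """Accept-reject sketch of Algorithm 1. M must satisfy (39), i.e. bound
    h_s * p / f from above over the whole parameter space."""
    rng = np.random.default_rng() if rng is None else rng
    out = []
    while len(out) < n:
        phi_c, theta_c = f_sample(rng)           # (1) candidate from f
        u = rng.uniform()
        # (2) accept with probability h_s * p / (M * f), cf. (39)
        if h_s(phi_c, theta_c) * p(phi_c, theta_c) \
                > u * M * f_pdf(phi_c, theta_c):
            out.append((phi_c, theta_c))
        # (3) otherwise the loop simply retries
    return out
```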

Algorithm 2: Metropolis-Hastings algorithm, which belongs to Markov Chain


Monte Carlo methods (MCMC) and is expressed through the iterative form:

(1) Randomly simulate a candidate sample [ϕ̃k+1 , θ̃k+1 ] from a proposal PDF
q(ϕ̃k+1 , θ̃k+1 |ϕk , θk ).
(2) Compute acceptance ratio:

\[
r_{k+1} = \frac{h_s(\tilde{\phi}_{k+1}, \tilde{\theta}_{k+1})\, p(\tilde{\phi}_{k+1}, \tilde{\theta}_{k+1})\, q(\phi_k, \theta_k \,|\, \tilde{\phi}_{k+1}, \tilde{\theta}_{k+1})}{h_s(\phi_k, \theta_k)\, p(\phi_k, \theta_k)\, q(\tilde{\phi}_{k+1}, \tilde{\theta}_{k+1} \,|\, \phi_k, \theta_k)}
\tag{40}
\]
(3) Simulate u from uniform (0,1) and set



\[
[\phi_{k+1}, \theta_{k+1}] =
\begin{cases}
[\tilde{\phi}_{k+1}, \tilde{\theta}_{k+1}] & \text{if } r_{k+1} \ge u\\
[\phi_k, \theta_k] & \text{otherwise}
\end{cases}
\tag{41}
\]

In this case the samples are correlated (the next sample depends on the previous one)
but follow the target distribution after a burn-in period, i.e. after the Markov chain
reaches stationarity. The algorithm is particularly efficient when samples that follow
the target distribution are already available since then no burn-in period is needed.
Assume, in this setting, that there are Na samples [ϕ, θ] and a total N > Na are desired.
Starting from each of the Na original samples, [N/Na ] samples are generated by the
above process. Since the initial samples are distributed according to π(ϕ, θ) the Markov
Chain generated in this way is always in its stationary state and all samples simulated
follow the target distribution. Note that knowledge of the normalizing constant in the
denominator of π(ϕ, θ) is not needed for either of the two algorithms.
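Under the same toy assumptions as before, the sketch below implements a random-walk Metropolis-Hastings chain; with a symmetric proposal the q terms cancel in the ratio (40), and seeding the chain with a sample that already follows the target removes the burn-in period, as just discussed. The target is handled through its logarithm, so its normalizing constant never appears.

```python
# Random-walk Metropolis-Hastings targeting pi ~ hs*p (sketch, toy target).
import numpy as np

def log_target(th):
    # log of hs(th)*p(th) for the same toy choices as before (unnormalized)
    return -0.5 * th**2 - np.log1p(th**2)

def mh_chain(seed, n_steps, step=0.5, rng=np.random.default_rng(0)):
    """If `seed` already follows the target, the chain is stationary from
    the start and all generated samples follow the target distribution."""
    chain = [float(seed)]
    for _ in range(n_steps):
        cur = chain[-1]
        cand = cur + step * rng.standard_normal()   # step (1): local proposal
        log_r = log_target(cand) - log_target(cur)  # step (2): ratio (40)
        # step (3): accept with probability min(1, r), as in (41)
        chain.append(cand if np.log(rng.uniform()) < log_r else cur)
    return np.array(chain)

chain = mh_chain(seed=0.0, n_steps=5000)
```

The proposal spread (`step` above) plays exactly the role discussed next: too large a spread lowers the acceptance rate, while too small a spread produces strongly correlated samples.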
The efficiency of both these sampling algorithms depends on the proposal PDFs
f (ϕ, θ) and q(ϕ, θ). These PDFs should be chosen to closely resemble hs (ϕ, θ)p(ϕ, θ)
and still be easy to sample from. If the first feature is established then most samples
are accepted and the efficiency of the algorithm is high. For Metropolis-Hastings the
proposal PDFs can either be global (independent), i.e. q(·) = q(ϕ̃k+1 , θ̃k+1 ), or establish
a local random walk, i.e. q = q(ϕ̃k+1 , θ̃k+1 |ϕk , θk ). In the latter case, the spread of the
proposal PDFs is particularly important because it affects the size of the region covered
by the Markov Chain samples. Excessively large spread may reduce the acceptance
rate, increasing the number of repeated samples and thus slowing down convergence
and creating correlation between samples. Small spread does not allow for efficient
investigation of the whole region of the uncertain parameters and creates correlation
between samples because of their proximity.
If the dimension of the uncertain parameter vector is high, a typical character-
istic for dynamic problems where the excitation is modeled using a white-noise
sequence, the efficiency of the MCMC simulation process might be reduced (Au &
Beck (2001)) because high correlation might exist between the current and the next
chain state. For ROP the modified Metropolis-Hastings algorithm, discussed in detail
in (Au & Beck 2003b) can be used in these cases (assuming that the loss function is
described by the indicator function of failure). For general stochastic design problems, a
global PDF should be chosen for parameters that individually do not significantly influ-
ence the objective function, but have significant influence only when viewed as a group.
The white-noise sequence in dynamic problems typically belongs to this category.

Appendix B
According to the stochastic method (Boore 2003), the total amplitude spectrum
A(f ; M, r) for the acceleration time history is expressed as a product of the source,
path and site contributions:

\[
A(f; M, r) = (2\pi f)^2\, S(f; M)\, \frac{1}{R}\, \exp\!\left[-\pi f R/(Q(f)\beta_s)\right]
\frac{\exp(-\pi k_o f)}{\sqrt{1 + (f/f_{\max})^8}}\, A_m
\tag{42}
\]
Here S(f ; M) is the “equivalent two point-source spectrum’’ given by (Atkinson & Silva
2000):

\[
S(f; M) = C M_w \left[\frac{1-e}{1+(f/f_a)^2} + \frac{e}{1+(f/f_b)^2}\right]
\tag{43}
\]

where Mw is the seismic moment (in dyn-cm), which is connected to the moment magnitude,
M, by the relationship log10 Mw = 1.5(M + 10.7), and the constant C is given
by C = RΦVF/(4πRoρsβs³); RΦ = 0.55 is the average radiation pattern, V = 1/√2 represents
the partition of the total shear-wave velocity into horizontal components, F = 2
is the free surface amplification, ρs = 2.8 g/cm³ and βs = 3.5 km/s are the density and
shear-wave velocity in the vicinity of the source, and Ro = 1 is a reference distance.
The frequencies fa and fb in (43) are given by log10 fa = 2.181 − 0.496M and log10 fb =
2.41 − 0.408M, respectively, and e is a weighting parameter described by the expression
log10 e = 0.605 − 0.255M. For the rest of the parameters in (42), the term 1/R
is the geometric spread factor, where R = [h² + r²]^1/2 is the radial distance from the
earthquake source to the site, with log10 h = 0.15M − 0.05 representing a moment-dependent,
nominal "pseudo-depth''. The term exp[−πfR/(Q(f)βs)] accounts for elastic
attenuation through the earth's crust, with Q(f) = 180f^0.45 a regional quality factor.
The quotient factor in (42) is related to near-surface attenuation with parameters
fmax = 10 and ko = 0.03. Finally Am is a near-surface amplification factor which is
described through the empirical curves for generic rock sites given by (Boore & Joyner
1997). An alternative approach suggested by (Au & Beck 2003b) (instead of using the
empirical curves) would be to set Am to an average constant value equal to 2.
The envelope function for the earthquake excitation is represented by (Boore 2003):

\[
e(t; M, R) = a\, (t/t_n)^b \exp\left(-c\,(t/t_n)\right)
\tag{44}
\]

where b = −λ ln(η)/[1 + λ(ln(λ) − 1)], c = b/λ, a = [exp(1)/λ]^b and tn = 0.1R + 1/fa,
with λ = 0.2 and η = 0.05.
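For concreteness, the following Python sketch evaluates the spectrum (42)-(43) and the envelope (44) with the parameter values quoted above. It is a schematic transcription under two assumptions flagged in the comments: the shear-wave velocity entering cubed in the constant C (standard in the stochastic method), and the constant amplification Am = 2, i.e. the alternative suggested by (Au & Beck 2003b). Unit bookkeeping is left to the reader.

```python
# Sketch of the stochastic-method spectrum (42)-(43) and envelope (44).
import numpy as np

def source_spectrum(f, M):
    """Equivalent two-corner point-source spectrum S(f; M) of Eq. (43)."""
    Mw = 10 ** (1.5 * (M + 10.7))              # seismic moment (dyn-cm)
    fa = 10 ** (2.181 - 0.496 * M)
    fb = 10 ** (2.41 - 0.408 * M)
    e = 10 ** (0.605 - 0.255 * M)
    R_phi, V, F = 0.55, 1.0 / np.sqrt(2.0), 2.0
    rho_s, beta_s, Ro = 2.8, 3.5, 1.0
    # assumption: beta_s enters cubed, as is standard for this constant
    C = R_phi * V * F / (4 * np.pi * rho_s * beta_s**3 * Ro)
    return C * Mw * ((1 - e) / (1 + (f / fa) ** 2) + e / (1 + (f / fb) ** 2))

def amplitude_spectrum(f, M, r, Am=2.0):
    """Total acceleration amplitude spectrum A(f; M, r) of Eq. (42);
    Am = 2 is the constant-amplification alternative (an assumption here)."""
    beta_s, fmax, ko = 3.5, 10.0, 0.03
    h = 10 ** (0.15 * M - 0.05)                # nominal pseudo-depth
    R = np.sqrt(h**2 + r**2)                   # radial distance to the site
    Q = 180.0 * f ** 0.45                      # regional quality factor
    path = np.exp(-np.pi * f * R / (Q * beta_s)) / R
    site = np.exp(-np.pi * ko * f) / np.sqrt(1 + (f / fmax) ** 8)
    return (2 * np.pi * f) ** 2 * source_spectrum(f, M) * path * site * Am

def envelope(t, M, r):
    """Time envelope e(t; M, R) of Eq. (44)."""
    lam, eta = 0.2, 0.05
    fa = 10 ** (2.181 - 0.496 * M)
    h = 10 ** (0.15 * M - 0.05)
    R = np.sqrt(h**2 + r**2)
    b = -lam * np.log(eta) / (1 + lam * (np.log(lam) - 1))
    c = b / lam
    a = (np.e / lam) ** b
    tn = 0.1 * R + 1 / fa
    return a * (t / tn) ** b * np.exp(-c * (t / tn))
```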

References

Ang, H.-S.A. & Lee, J.-C. 2001. Cost optimal design of R/C buildings. Reliability Engineering
and System Safety 73:233–238.
Atkinson, G.M. & Silva, W. 2000. Stochastic modeling of California ground motions. Bulletin
of the Seismological Society of America 90(2):255–274.
Au, S.K. 2005. Reliability-based design sensitivity by efficient simulation. Computers and
Structures 83:1048–1061.
Au, S.K. & Beck, J.L. 1999. A new adaptive importance sampling scheme. Structural Safety
21:135–158.
Au, S.K. & Beck, J.L. 2001. Estimation of small failure probabilities in high dimensions by
subset simulation. Probabilistic Engineering Mechanics 16:263–277.
Au, S.K. & Beck, J.L. 2003a. Importance sampling in high dimensions. Structural Safety 25(2):
139–163.
Au, S.K. & Beck, J.L. 2003b. Subset simulation and its applications to seismic risk based on
dynamic analysis. Journal of Engineering Mechanics 129(8):901–917.
Beck, J.L., Chan, E., Irfanoglu, A. & Papadimitriou, C. 1999. Multi-criteria optimal structural
design under uncertainty. Earthquake Engineering and Structural Dynamics 28(7):741–761.
Beck, J.L. & Katafygiotis, L.S. 1998. Updating models and their uncertainties. I: Bayesian
statistical framework. Journal of Engineering Mechanics 124(4):455–461.
Boore, D.M. 2003. Simulation of ground motion using the stochastic method. Pure and
Applied Geophysics 160:635–676.
Boore, D.M. & Joyner, W.B. 1997. Site amplifications for generic rock sites. Bulletin of the
Seismological Society of America 87:327–341.
Ching, J. & Hsieh, Y.-H. 2007. Local estimation of failure probability function and its confidence
interval with maximum entropy principle. Probabilistic Engineering Mechanics 22:39–49.
Enevoldsen, I. & Sørensen, J.D. 1994. Reliability-based optimization in structural engineering.
Structural Safety 15(3):169–196.
Gasser, M. & Schuëller, G.I. 1997. Reliability-based optimization of structural systems.
Mathematical Methods of Operations Research 46:287–307.
Glasserman, P. & Yao, D.D. 1992. Some guidelines and guarantees for common random
numbers. Management Science 38:884–908.
Goulet, C.A., Haselton, C.B., Mitrani-Reiser, J., Beck, J.L., Deierlein, G., Porter, K.A. &
Stewart, J.P. 2007. Evaluation of the seismic performance of a code-conforming reinforced-concrete
frame building, from seismic hazard to collapse safety and economic losses.
Earthquake Engineering and Structural Dynamics 36(13):1973–1997.
Iwan, W.D. & Cifuentes, A.O. 1986. A model for system identification of degrading structures.
Earthquake Engineering and Structural Dynamics 14:877–890.
Jaynes, E.T. 2003. Probability theory: the logic of science. Cambridge, UK: Cambridge
University Press.
Jensen, H.A. 2005. Structural optimization of linear dynamical systems under stochastic excita-
tion: a moving reliability database approach. Computer Methods in Applied Mechanics and
Engineering 194:1757–1778.
Kleinmann, N.L., Spall, J.C. & Naiman, D.C. 1999. Simulation-based optimization with
stochastic approximation using common random numbers. Management Science 45(11):
1570–1578.
Kramer, S.L. 2003. Geotechnical earthquake engineering. New Jersey: Prentice Hall.
Kushner, H.J. & Yin, G.G. 2003. Stochastic approximation and recursive algorithms and
applications. New York: Springer.
Lagaros, N.D., Papadrakakis, M. & Kokossalakis, G. 2002. Structural optimization using
evolutionary algorithms. Computers and Structures 80(7–8):571–589.
Papadimitriou, C., Beck, J.L. & Katafygiotis, L.S. 2001. Updating robust reliability using
structural test data. Probabilistic Engineering Mechanics 16:103–113.
Porter, K.A., Beck, J.L., Shaikhutdinov, R.V., Au, S.K., Mizukoshi, K., Miyamura, M.,
Ishida, H., Moroi, T., Tsukada, Y. & Masuda, M. 2004. Effect of seismic risk on lifetime
property values. Earthquake Spectra 20:1211–1237.
Pradlwarter, H.J., Schuëller, G.I., Koutsourelakis, P.S. & Charmpis, D.C. 2007. Application
of line sampling simulation method to reliability benchmark problems. Structural Safety
29(3):208–221.
Robert, C.P. & Casella, G. 2004. Monte Carlo Statistical Methods. New York, NY: Springer.
Royset, J.O. & Polak, E. 2004. Reliability-based optimal design using sample average
approximations. Probabilistic Engineering Mechanics 19:331–343.
Royset, J.O. & Polak, E. 2007. Efficient sample size in stochastic nonlinear programming.
Journal of Computational and Applied Mathematics (in press).
Ruszczynski, A. & Shapiro, A. 2003. Stochastic Programming. New York: Elsevier.
Sørensen, J.D., Kroon, I.B. & Faber, M.H. 1994. Optimal reliability-based code calibration.
Structural Safety 15:197–208.
Spall, J.C. 2003. Introduction to stochastic search and optimization. New York: Wiley-
Interscience.
Taflanidis, A.A. & Beck, J.L. 2007a. Stochastic subset optimization for optimal reliability
problems. Probabilistic Engineering Mechanics (in press).
Taflanidis, A.A. & Beck, J.L. 2007b. Stochastic subset optimization for stochastic design.
ECCOMAS Thematic Conference on Computational Methods in Structural Dynamics and
Earthquake Engineering, Rethymno, Greece, 13–16 June.
Chapter 8

Numerical and semi-numerical methods for reliability-based design optimization

Ghias Kharmanda
Aleppo University, Aleppo, Syria

ABSTRACT: In the Reliability-Based Design Optimization (RBDO) model for robust system
design, the mean values of uncertain system variables are usually used as design variables, and
the cost is optimized subject to prescribed probabilistic constraints, as defined by a nonlinear
mathematical programming problem. Therefore, an RBDO solution that reduces the structural
weight in uncritical regions not only provides an improved design but also a higher level
of confidence in the design. In this work, we review developments of the RBDO model from
two points of view, reliability and optimization, present our own recent developments of the
model, and finally demonstrate the efficiency of our methods on different applications.

1 Introduction
When Deterministic Design Optimization (DDO) methods are used, deterministic optimum
designs are frequently pushed to the design constraint boundary, leaving little
or no room for tolerances (or uncertainties) in design, manufacture, and operating
processes. Deterministic optimum designs obtained without consideration of uncertainties
can therefore be unreliable, which calls for Reliability-Based Design Optimization
(RBDO), whose objective is to design structures that are both economic and reliable.
However, the coupling between the mechanical modeling, the reliability analyses and
the optimization methods leads to very high computational cost and weak convergence
stability (Feng & Moses 1986). To overcome these difficulties, two points of view have been
considered. From a reliability view point, RBDO involves the evaluation of probabilis-
tic constraints, which can be executed in two different ways: either using the Reliability
Index Approach (RIA) or the Performance Measure Approach (PMA) (see Tu et al.
1999, Youn et al. 2003). The major difficulty lies in the evaluation of the probabilistic
constraints, which is prohibitively expensive and even diverges for many applications.
However, from an optimization view point, we have two categories of methods: numer-
ical and semi-numerical methods. For the first category, a hybrid method based on
simultaneous solution of the reliability and the optimization problem has successfully
reduced the computational time problem (Kharmanda et al. 2002). Next, an improved
hybrid method has been recently proposed to improve the optimum value of the objec-
tive function more than the resulting value by the hybrid method (Mohsine et al. 2005).
However, the hybrid and improved hybrid RBDO problems are more complex than
that of deterministic design and may not lead to local optima. For the second category,
an Optimum Safety Factor (OSF) method has been proposed to compute safety factors
satisfying a required reliability level without demanding additional computing cost for
the reliability evaluation (Kharmanda et al. 2004b). However, the OSF method can-
not be used for all cases such as modal analysis. So a safest point method has been
proposed to deal with these problems (Kharmanda et al. 2006). We finally note that
the developments based on the reliability point of view are less efficient than those based
on the optimization point of view, because the latter provide reliability-based
optimum designs without additional computing cost for the probabilistic (reliability)
constraints and lead, at least, to local optima. The numerical methods need a much higher
computing time than the semi-numerical ones; however, to improve the optimum values
further, the numerical methods, with their very expensive operations, must be used.

2 Two points of view for developing the RBDO model

2.1 Reliability view point
The work of (Tu et al. 1999, Youn et al. 2003) depends on the development of sev-
eral approaches based on a reliability view point. Here, two design requirements are
coupled for each probabilistic constraint: the performance requirement is described
implicitly by the performance measure, and the reliability requirement is approximated
explicitly by the first- or second-order reliability index. The conventional Reliability
Index Approach (RIA) for RBDO has been developed and applied to design against
fatigue crack initiation of a road arm of the M1A1 tank, successfully obtaining an
optimum shape design of the component, see (Youn et al. 2003). However, it was found that
the computational requirement of RIA is extremely intensive because the evaluation
of each probabilistic constraint during an overall RBDO iteration is quite expensive.
To alleviate this computational burden, it was proposed to develop a Performance
Measure Approach (PMA) for RBDO. In this approach, the reliability constraint is
defined from the design perspective (rather than from the reliability analysis perspec-
tive) to measure the design constraint violation. The prescribed reliability requirement
(such as six-sigma design, see Koch et al. 2004) was assumed to be satisfied, and the
probabilistic performance measure (the value of the limit-state function) that satisfies
this prescribed reliability requirement was used to measure the degree of the reliability
constraint violation. Using PMA, an inverse reliability analysis problem was associ-
ated with the evaluation of the reliability constraint and a nonlinear ball constraint
optimization problem was proposed for this inverse reliability analysis problem. The
inverse reliability analysis problem in the proposed PMA was solved in a far more
efficient and stable way than the conventional RIA. From a broader perspective, it
was shown that the probabilistic constraints can be evaluated using either RIA or the
PMA. However, there are two major advantages in PMA compared to RIA, see (Youn
et al. 2004). First, it is found that the performance measure approach is inherently
robust and is more effective when the reliability constraint is inactive. This fact is
not surprising, since it is easier to minimize a complicated cost function subject to a
simple constraint function than minimizing a simple cost function subject to a com-
plicated constraint function. The inverse reliability analysis problem of PMA provides
this benefit. Secondly, which is more significant, the PMA always yields a solution,
whereas RIA does not yield solutions for certain types of distributions. The major dif-
ficulty is associated with the reliability evaluation. So, we found that it is more efficient
to select the optimality criterion as a point of view for the developments.

2.2 Optimization view point
Not surprisingly, efforts were directed towards the development of efficient techniques
and general purpose programs to perform the reliability analysis. These programs and
procedures compute the reliability index of a structure for a defined failure mode, but
do not provide an optimum set of design parameters for improving the reliability of a
structure for defined reliability information. Since the reliability index is computed iter-
atively, an enormous amount of computer time is involved in the whole design process.
Two categories of methods have been developed. For the first category, called numer-
ical methods, a hybrid method based on simultaneous solution of the reliability and
the optimization problem has successfully reduced the computational time problem
(Kharmanda et al. 2002). Next, an improved hybrid method has been recently pro-
posed to improve the optimum value of the objective function more than the resulting
value by the hybrid method (Mohsine et al. 2005). However, the hybrid and improved
hybrid RBDO problems are more complex than that of deterministic design and may
not lead to local optima. For the second category, called semi-numerical methods, an
optimum safety factor (OSF) method has been proposed to compute safety factors sat-
isfying a required reliability level without demanding additional computing cost for the
reliability evaluation (Kharmanda et al. 2004a). However, the OSF method cannot be
used for all cases such as modal analysis. So a safest point method has been proposed
to deal with these problems (Kharmanda et al. 2006). In the next sections, we present
our developments and some applications.

3 Numerical RBDO methods

3.1 Classical method (CM)

3.1.1 Basic formulation
The classical reliability-based optimization is performed by nesting the following two
problems:

1. Optimization problem:

\[
\begin{aligned}
\min_x \ & f(x)\\
\text{subject to } & g_k(x) \le 0, \quad k = 1, \ldots, K\\
& \beta(x, u) \ge \beta_t
\end{aligned}
\tag{1}
\]

where f (x) is the objective function, gk (x) ≤ 0 are the associated constraints,
β(x, u) is the reliability index of the structure, and βt is the target reliability.
Figure 8.1 (a) Physical space or X-Space and (b) normalized space or U-Space.

2. Reliability analysis: the reliability index β(x, u) is equal to the minimum distance
between the limit state function H(x, u) and the origin, see Figure 8.1b. This index
is determined by solving the minimization problem:


\[
\begin{aligned}
\min_u \ & d(u) = \sqrt{\textstyle\sum_i u_i^2}\\
\text{subject to } & H(x, u) \le 0
\end{aligned}
\tag{2}
\]

where d(u) is the distance in the normalized random space, defined as above,
and H(x, u) is the performance function (or limit state function) in the normal-
ized space, defined such that H(x, u) ≤ 0 implies failure, see Figure 8.1b. In
the physical space, the image of H(x, u) is the limit state function G(x, y), see
Figure 8.1a.
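As an illustration of the inner problem (2), the following Python sketch computes the design point and the reliability index for a hypothetical linear limit state in the normalized space; the function H and the starting point are toy choices, not one of the chapter's examples.

```python
# Sketch: reliability index as the minimum distance to the limit state, Eq. (2).
import numpy as np
from scipy.optimize import minimize

def H(u):
    # toy limit state in U-space; H(u) <= 0 implies failure
    return 3.0 - (u[0] + 0.5 * u[1])

res = minimize(
    lambda u: np.sqrt(np.sum(u**2)),                         # d(u) = ||u||
    x0=np.array([0.1, 0.1]),                                 # start away from origin
    constraints=[{"type": "ineq", "fun": lambda u: -H(u)}],  # enforce H(u) <= 0
)
u_star = res.x    # design point P*
beta = res.fun    # reliability index = minimum distance to the origin
```

For this linear H the result can be checked by hand: beta = 3/||(1, 0.5)|| ≈ 2.683.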

3.1.2 Further development
According to the sub-problems (1) and (2), the classical solution consists in minimizing
two Lagrangians:


\[
\min \; L_1(x, u, \lambda_k, \lambda_\beta) = f(x) + \lambda_\beta\,[\beta_t - \beta(x, u)] + \sum_k \lambda_k\, g_k(x)
\tag{3a}
\]
\[
\min \; L_2(u, \lambda_H) = d(u) + \lambda_H\, H(x, u)
\tag{3b}
\]

where λk , λβ and λH are, respectively, the Lagrangian multipliers for the constraints, the
reliability index and the limit state function; (λk ≥ 0, λβ ≥ 0 and λH ≥ 0). The optimality
conditions of these two Lagrangians are, respectively,

\[
\frac{\partial L_1}{\partial x_i} = \frac{\partial f}{\partial x_i} - \lambda_\beta \frac{\partial \beta}{\partial x_i} + \sum_k \lambda_k \frac{\partial g_k}{\partial x_i} = 0
\tag{4a}
\]
\[
\frac{\partial L_1}{\partial \lambda_\beta} = \beta_t - \beta(x, u) = 0
\tag{4b}
\]
\[
\frac{\partial L_1}{\partial \lambda_k} = g_k(x) = 0
\tag{4c}
\]
and
\[
\frac{\partial L_2}{\partial u_j} = \frac{\partial d}{\partial u_j} + \lambda_H \frac{\partial H}{\partial u_j} = 0
\tag{5a}
\]
\[
\frac{\partial L_2}{\partial \lambda_H} = H(x, u) = 0
\tag{5b}
\]

It has been demonstrated that the classical approach needs a high computational
time and may lead to weak convergence stability. Furthermore, it is very difficult to
implement on a computer (see Kharmanda et al. 2001, 2002).

3.2 Hybrid method (HM)

3.2.1 Basic formulation
The solution procedure in two separate spaces requires large computational time,
especially for large-scale structures (Feng & Moses 1986). In order to improve the
numerical performance, the hybrid approach consists in minimizing a new form of the
objective function F(x, y) subject to a limit state as well as deterministic and reliability
constraints, i.e.,

\[
\begin{aligned}
\min_{x,y} \ & F(x, y) = f(x) \cdot d_\beta(x, y)\\
\text{subject to } & G(x, y) \le 0\\
& g_k(x) \le 0, \quad k = 1, \ldots, K\\
& d_\beta(x, y) \ge \beta_t
\end{aligned}
\tag{6}
\]

The minimization of the function F(x, y) is carried out in the Hybrid Design Space
(HDS) of deterministic variables x and random variables y. Here, dβ (x, y) is the distance
in the hybrid space between the optimum point and the design point, dβ (x, y) = d(u).
Since the random variables and the deterministic ones are treated in the same space
(HDS), it is very important to know the types of the used random variables (continuous
and/or discrete) and the distribution law that has been used.
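To make the structure of (6) concrete, the sketch below solves a hypothetical scalar instance: the cost is f(x) = x, where x is the mean of a normal variable y with σ = 0.1x (so dβ(x, y) = (x − y)/(0.1x) when the design point lies below the mean), the limit state is G(x, y) = 1 − y (failure when the realization y drops below 1), and βt = 3. All of these choices are illustrative, not taken from the chapter.

```python
# Sketch of the hybrid RBDO problem (6) for a toy scalar case.
import numpy as np
from scipy.optimize import minimize

beta_t = 3.0
f = lambda x: x                              # cost function f(x)
d_beta = lambda x, y: (x - y) / (0.1 * x)    # distance in the hybrid space
G = lambda x, y: 1.0 - y                     # limit state, G(x, y) <= 0

res = minimize(
    lambda v: f(v[0]) * d_beta(v[0], v[1]),  # F(x, y) = f(x) * d_beta(x, y)
    x0=np.array([2.0, 1.2]),                 # feasible starting guess
    constraints=[
        {"type": "ineq", "fun": lambda v: -G(v[0], v[1])},            # G <= 0
        {"type": "ineq", "fun": lambda v: d_beta(v[0], v[1]) - beta_t},
    ],
    bounds=[(0.5, 5.0), (0.5, 5.0)],
)
x_opt, y_design = res.x   # optimum mean value and associated design point
```

At the optimum both constraints are active (the design point lies on the limit state and dβ = βt), which is the behaviour described for the optimality conditions below.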
The normalized variable u is used to evaluate the reliability index (2). However, the
reliability index can be obtained in terms of the probability of failure as:

\[
\beta = -\Phi^{-1}(P_f)
\tag{7}
\]
Figure 8.2 Hybrid design spaces: (a) normal law, (b) lognormal law and (c) uniform law.

where Φ is the cumulative distribution function of the standard normal law and Pf is
the probability of failure. In many engineering applications, the evaluation of the
failure probability can be carried out in several ways (Ditlevsen & Madsen 1996).
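As a quick numeric illustration of relation (7), here is a sketch using SciPy's standard normal distribution:

```python
# Sketch: converting between failure probability and reliability index, Eq. (7).
from scipy.stats import norm

Pf = 1e-3
beta = -norm.ppf(Pf)        # beta = -Phi^{-1}(Pf); about 3.09 here
Pf_back = norm.cdf(-beta)   # the inverse map Pf = Phi(-beta)
```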

3.2.2 Further development
The hybrid Lagrangian is written as

\[
L_H(x, y, \lambda) = f(x) \cdot d_\beta(x, y) + \lambda_\beta\,[\beta_t - d_\beta(x, y)] + \lambda_G\, G(x, y) + \sum_k \lambda_k\, g_k(x)
\tag{8}
\]
The optimality conditions of this Lagrangian are

\[
\frac{\partial L_H}{\partial x_i} = d_\beta(x, y)\frac{\partial f}{\partial x_i} + [f(x) - \lambda_\beta]\frac{\partial d_\beta}{\partial x_i} + \lambda_G\frac{\partial G}{\partial x_i} + \sum_k \lambda_k \frac{\partial g_k}{\partial x_i} = 0
\tag{9a}
\]
\[
\frac{\partial L_H}{\partial y_i} = [f(x) - \lambda_\beta]\frac{\partial d_\beta}{\partial y_i} + \lambda_G \frac{\partial G}{\partial y_i} = 0
\tag{9b}
\]
\[
\frac{\partial L_H}{\partial \lambda_\beta} = \beta_t - d_\beta(x, y) = 0
\tag{9c}
\]
\[
\frac{\partial L_H}{\partial \lambda_G} = G(x, y) = 0
\tag{9d}
\]
\[
\frac{\partial L_H}{\partial \lambda_k} = g_k(x) = 0
\tag{9e}
\]

The optimality conditions (9) represent the optimal solution by a linear combination
of the different gradients of f, dβ, G and gk. At convergence, the distance dβ tends
toward the reliability index β, which in turn tends toward βt when the associated
constraint is active. By comparing the conditions (9) with the optimality conditions
of the classical formulation (see (4) and (5)), we note that the only difference in
the search direction lies in the coupled term ∂G/∂xi. In fact, two cases may occur
depending on the type of the optimization variables xi.

Case 1: xi is a deterministic mechanical parameter (e.g. xi is a parameter of the limit
state). In this case, the limit state sensitivity takes the form (Ditlevsen & Madsen 1996)

\[
\frac{\partial G}{\partial x_i} = \eta\, \frac{\partial d_\beta}{\partial x_i}
\tag{10}
\]

with the norm η


\[
\eta = \left\| \frac{\partial H}{\partial u_j} \right\| = \left\| \frac{\partial G}{\partial y_j}\, \frac{\partial T_j^{-1}(x, u)}{\partial u_j} \right\|
\tag{11}
\]

Case 2: xi is a probability distribution parameter of the random variable yi (e.g. xi
is the mean of yi). In this case, xi is a pure probability variable and has no effect on
the limit state function, leading to ∂G/∂xi = 0. In this case, we obtain

\[
\frac{\partial H}{\partial x_i} =
\begin{cases}
\dfrac{\partial G}{\partial y_j} & \text{for } i = j\\[4pt]
0 & \text{for } i \ne j
\end{cases}
\tag{12a}
\]

where

\[
\frac{\partial G}{\partial y_i} = \eta\, \frac{\partial d_\beta}{\partial y_i}
\tag{12b}
\]
From (10) and (12), we can see that the gradient vectors of G and dβ are
co-directional, which means that there is no modification of the search direction. The
introduction of this result into the first optimality condition of the hybrid Lagrangian
(9a) leads to
\[
\frac{\partial L_H}{\partial x_i} = d_\beta(x, y)\frac{\partial f}{\partial x_i} + [f(x) - \lambda_\beta + \eta\lambda_G]\frac{\partial d_\beta}{\partial x_i} + \sum_k \lambda_k \frac{\partial g_k}{\partial x_i} = 0
\tag{13}
\]

The comparison of the optimality conditions for classical and hybrid approaches
gives the relationships between the Lagrangian multipliers in the two formulations:

\[
\lambda_\beta = \frac{\lambda_\beta - f(x) - \eta\lambda_G}{d_\beta(x, y)}
\tag{14}
\]

and
\[
\lambda_H = \frac{\lambda_G}{f(x) - \lambda_\beta}
\tag{15}
\]

These developments show that the solution of problem (8) respects exactly the
optimality conditions of the initial problem, given by (4) and (5), where the two
phenomena were separated. In other words, the hybrid Lagrangian definition does not
introduce any modification of the optimality conditions.
In the literature, the hybrid method has been successfully applied to several examples
(Kharmanda et al. 2001–2003). An industrial application to a lorry brake system
design (for the KNORR-BREMSE Company) was successfully carried out in the
PhD thesis of Mohsine (2006).

3.3 Improved hybrid method (IHM)

3.3.1 Basic formulation
Using the hybrid method, we can obtain local optima, and the designer may then select the
best optimum. In the improved hybrid method, we introduce the design point and the
optimum solution into the objective function, and evaluate the constraints at the design
point and at the optimum solution, as follows:

\[
\begin{aligned}
\min_{x,y} \ & F(x, y) = f(x) \cdot d_\beta(x, y) \cdot f(m_y)\\
\text{subject to } & G(x, y) = 0\\
& g_k(x) \le 0\\
& g_j(m_y) \le 0\\
\text{and } & d_\beta(x, y) \ge \beta_t
\end{aligned}
\tag{16}
\]

The random vector y has mean values my and standard-deviations σy; f(my) is the
objective function evaluated at the mean values, and gj(my) are the constraints with
which we can control the optimal configuration. The solution of this problem depends on
these two important points and can be carried out simultaneously in the hybrid design space (HDS).
3.3.2 Further development
We show the equivalence of the improved method and the classical (initial) one.
The improved hybrid Lagrangian is written as

\[
L_I(x, y, \lambda) = f(x) \cdot d_\beta(x, y) \cdot f(m_y) + \lambda_\beta\,[\beta_t - d_\beta(x, y)] + \lambda_G\, G(x, y) + \sum_k \lambda_k\, g_k(x) + \sum_j \lambda_j\, g_j(m_y)
\tag{17}
\]

In order to write the optimality conditions of the improved hybrid Lagrangian, we
note that the derivatives of f(my) and of g(my) with respect to y are nil:

\[
\left.\frac{\partial f(m_y)}{\partial y}\right|_{y^*} = 0
\tag{18}
\]

and

\[
\left.\frac{\partial g(m_y)}{\partial y}\right|_{y^*} = 0
\tag{19}
\]

This holds because the value my coincides with the optimal solution of the objective
function and we differentiate with respect to the random variables; proposing a function
Q such that my = Q(y), this gives:

\[
\left.\frac{\partial (f \circ Q)}{\partial y}(y)\right|_{y^*} = 0
\tag{20}
\]

So the optimality conditions of the improved hybrid Lagrangian are:

\[
\frac{\partial L_I}{\partial x_i} = d_\beta(x, y) \cdot f(m_y)\,\frac{\partial f}{\partial x_i} + [f(x) \cdot f(m_y) - \lambda_\beta]\frac{\partial d_\beta}{\partial x_i} + \lambda_G\frac{\partial G}{\partial x_i} + \sum_k \lambda_k\frac{\partial g_k}{\partial x_i} = 0
\tag{21a}
\]
\[
\frac{\partial L_I}{\partial y_i} = [f(x) \cdot f(m_y) - \lambda_\beta]\frac{\partial d_\beta}{\partial y_i} + \lambda_G\frac{\partial G}{\partial y_i} = 0
\tag{21b}
\]
\[
\frac{\partial L_I}{\partial \lambda_\beta} = \beta_t - d_\beta(x, y) = 0
\tag{21c}
\]
\[
\frac{\partial L_I}{\partial \lambda_G} = G(x, y) = 0
\tag{21d}
\]
\[
\frac{\partial L_I}{\partial \lambda_k} = g_k(x) = 0
\tag{21e}
\]
\[
\frac{\partial L_I}{\partial \lambda_j} = g_j(m_y) = 0
\tag{21f}
\]
The optimality conditions (21) represent the optimal solution by a linear combination
of the different gradients of f, dβ, G and gk. At convergence, the distance
dβ tends toward the reliability index β, which in turn tends toward βt when the
associated constraint is active. By comparing the conditions (21) with the optimality
conditions of the classical formulation (see (4) and (5)), we note that the only
difference in the search direction lies in the coupled term ∂G/∂xi. In fact, two cases may
occur depending on the type of the optimization variables xi.

Case 1: xi is a deterministic mechanical parameter (e.g. xi is a parameter of the limit
state). In this case, the limit state sensitivity takes the form (Ditlevsen & Madsen 1996)

\[
\frac{\partial G}{\partial x_i} = \eta\, \frac{\partial d_\beta}{\partial x_i}
\tag{22}
\]

with the norm η


\[
\eta = \left\| \frac{\partial H}{\partial u_j} \right\| = \left\| \frac{\partial G}{\partial y_j}\, \frac{\partial T_j^{-1}(x, u)}{\partial u_j} \right\|
\tag{23}
\]

Case 2: xi is a probability distribution parameter of the random variable yi (e.g. xi
is the mean of yi). In this case, xi is a pure probability variable and has no effect on
the limit state function, leading to ∂G/∂xi = 0. In this case, we obtain

\[
\frac{\partial H}{\partial x_i} =
\begin{cases}
\dfrac{\partial G}{\partial y_j} & \text{for } i = j\\[4pt]
0 & \text{for } i \ne j
\end{cases}
\tag{24}
\]

where

\[
\frac{\partial G}{\partial y_i} = \eta\, \frac{\partial d_\beta}{\partial y_i}
\tag{25}
\]

From (22) and (24), we can see that the gradient vectors of G and dβ are
co-directional, which means that there is no modification of the search direction. The
introduction of this result into the first optimality condition of the improved hybrid
Lagrangian (21a) leads to
\[
\frac{\partial L_I}{\partial x_i} = d_\beta(x, y) \cdot f(m_y)\,\frac{\partial f}{\partial x_i} + [f(x) \cdot f(m_y) - \lambda_\beta + \eta\lambda_G]\frac{\partial d_\beta}{\partial x_i} + \sum_k \lambda_k\frac{\partial g_k}{\partial x_i} = 0
\tag{26}
\]

The comparison of the optimality conditions for classical and improved hybrid
approaches gives the relationships between the Lagrangian multipliers in the two
formulations:

\[
\lambda_\beta = \frac{\lambda_\beta - f(x) \cdot f(m_y) - \eta\lambda_G}{d_\beta(x, y) \cdot f(m_y)}
\tag{27}
\]
and

\[
\lambda_G = \frac{\lambda_G}{f(x) \cdot f(m_y) - \lambda_\beta}
\tag{28}
\]

These developments show that the solution of problem (17) respects exactly the
optimality conditions of the initial problem, given by (4) and (5), where the two
phenomena were separated. In other words, the improved hybrid Lagrangian definition
does not introduce any modification of the optimality conditions. Applications in
transient analysis have demonstrated the main benefit of the improved hybrid method:
the structural performance is improved by reducing the objective function further than
the hybrid method does (Mohsine et al. 2005). To conclude this section, we can compare
the three kinds of numerical methods: the classical, hybrid and improved hybrid RBDO
methods. The classical method leads to very high computational cost and weak
convergence. The hybrid method has successfully reduced the computing time relative
to the classical one (Kharmanda et al. 2001–2003). The improved hybrid method can
improve the optimum value of the objective function beyond the hybrid method
(Mohsine et al. 2005). In the presented numerical methods, the reliability-based design
optimization problem has two kinds of variables, random and deterministic, which still
leads to expensive procedures and may not yield even local optima. In the next section,
we present two semi-numerical methods that reduce the scale of the optimization problem.

4 Semi-numerical RBDO methods

4.1 Optimum safety factor method (OSF)

4.1.1 Basic formulation
It is our aim that the safety factors should be independent of the engineering experience.
In fact the engineering experience is based on experimental work, design knowledge,
etc. However, when designing a new type of structure, we usually need some experi-
mental background for proposing suitable safety factors. When applying safety factors
the initial cost will increase, and this increase should not be too large. Given that sen-
sitivity analysis plays a very important role and can provide us with the influence
of the parameters on the structure studied, we will use this concept in the proper
direction and combine it with the reliability analysis. The main disadvantage of the
Deterministic Design Optimization (DDO) procedure is that it may not satisfy an
appropriate required reliability level. Although we improve the reliability level of the
structure when using the hybrid RBDO, this approach also leads to a saving of
computational time (which may then be made available for the reliability analysis). Thus,
our OSF approach consists in using both sensitivity analysis and reliability analysis to
overcome the disadvantages of DDO and of RBDO by numerical methods (Kharmanda &
Olhoff 2003, Kharmanda et al. 2003, 2004c, Kharmanda & Olhoff 2007). Table 8.1
shows the different formulations of optimum safety factors for normal, lognormal and
uniform distributions.
Table 8.1 Optimum safety factors for normal, lognormal and uniform distributions.

Law         OSF
Normal      \(S_{fi} = 1 + \gamma_i\, u_i^*\)
Lognormal   \(S_{fi} = \dfrac{1}{\sqrt{1+\gamma_i^2}}\,\exp\!\left(\sqrt{\ln(1+\gamma_i^2)}\; u_i^*\right)\)
Uniform     \(S_{fi} = 1 - \sqrt{3}\,\gamma_i\,\bigl(1 - 2\Phi(u_i^*)\bigr)\)

Figure 8.3 (a) Design point modeling and (b) optimum solution modeling.

4.1.2 Further development
Let us consider an example of only two normalized variables u1 and u2 (see Figure 8.3a).
For an assumed failure scenario H(u) ≤ 0, the design point P∗ is calculated by

\[
\begin{aligned}
\min_u \ & d^2 = u_1^2 + u_2^2\\
\text{subject to } & H(u_1, u_2) \le 0
\end{aligned}
\tag{29}
\]

The Lagrangian function for the problem (29) can be written as:

\[
L(u, \lambda, s) = d^2(u) + \lambda \cdot [H(u) + s^2]
\tag{30}
\]

where the inequality constraint in (29) is adjoined by the Lagrange multiplier λ, after
having converted the inequality constraint into the equality H(u) + s² = 0 by introducing
the real slack variable s. The optimality conditions for this Lagrangian are:

\[
\frac{\partial L}{\partial u_i} = \frac{\partial d^2}{\partial u_i} + \lambda \frac{\partial H}{\partial u_i} = 0, \quad i = 1, 2
\tag{31a}
\]
\[
\frac{\partial L}{\partial \lambda} = H(u) + s^2 = 0
\tag{31b}
\]
\[
\frac{\partial L}{\partial s} = 2 s \lambda = 0
\tag{31c}
\]

The optimality condition for L with respect to s yields the so-called switching condition
sλ = 0, and the necessary condition ∂²L/∂s² ≥ 0 for a minimum of L implies that
the Lagrangian multiplier λ must be non-negative, i.e., λ ≥ 0. So, due to the condition
(31c), we distinguish between two cases:

Case 1: If the real slack variable is non-zero (s ≠ 0), the Lagrangian multiplier has to
be zero (λ = 0) and the limit state constraint must be less than zero (H(u) < 0), which
corresponds to the case of failure.
Case 2: If the real slack variable is zero (s = 0), the Lagrangian multiplier is non-negative
(λ ≥ 0) and the limit state is defined by the equality constraint H(u) = 0. The
solution here is found on the limit state function and represents the design point.
The first case is not suitable for our reliability-based study, whereas the second one
is basic for our approach. Since we have only two normalized variables u1 and u2,
equation (31a) can be written as:

\[
\frac{\partial L}{\partial u_1} = \frac{\partial d^2}{\partial u_1} + \lambda \frac{\partial H}{\partial u_1} = 0
\tag{32a}
\]
\[
\frac{\partial L}{\partial u_2} = \frac{\partial d^2}{\partial u_2} + \lambda \frac{\partial H}{\partial u_2} = 0
\tag{32b}
\]

Using the square distance d² from equation (29), we get:

\[
2u_1 + \lambda \frac{\partial H}{\partial u_1} = 0 \;\Leftrightarrow\; u_1 = -\frac{\lambda}{2}\frac{\partial H}{\partial u_1}
\tag{33a}
\]
\[
2u_2 + \lambda \frac{\partial H}{\partial u_2} = 0 \;\Leftrightarrow\; u_2 = -\frac{\lambda}{2}\frac{\partial H}{\partial u_2}
\tag{33b}
\]

From Figure 8.3, at the design point P*, the tangent of α is given by tan α = u2/u1,
and using equations (33a) and (33b), we get:

\[
\tan\alpha = \frac{u_2}{u_1} = \frac{\partial H/\partial u_2}{\partial H/\partial u_1}
\tag{34}
\]

Equation (34) shows the relationship between the distribution of the normalized
vector components and the sensitivity of the limit state function. Problem (29) gives us
the reliability index β as the minimum distance between the limit state function and the
origin (Hasofer & Lind 1974). This way the resulting reliability index may be lower
or higher than the target reliability index βt . As we wish to satisfy a required target
reliability level, we now write:

\[
\beta_t^2 = u_1^2 + u_2^2
\tag{35}
\]

Using equations (34) and (35), we get:



\[
\beta_t^2 = u_2^2\, \frac{(\partial H/\partial u_1)^2}{(\partial H/\partial u_2)^2} + u_2^2
\tag{36}
\]
or
\[
u_2^2 \left[ \frac{(\partial H/\partial u_1)^2}{(\partial H/\partial u_2)^2} + 1 \right] = \beta_t^2
\tag{37}
\]
Here, the values of the normalized vector components principally depend on the
relative magnitudes of the limit state gradient components. So u2 is written as:

\[
u_2 = \beta_t \sqrt{\frac{(\partial H/\partial u_2)^2}{(\partial H/\partial u_1)^2 + (\partial H/\partial u_2)^2}}
\tag{38}
\]

In general, when considering the normal distribution law, the normalized variable
ui is given by:
\[
u_i = \frac{y_i - m_i}{\sigma_i}, \quad i = 1, \ldots, n
\tag{39}
\]

The standard deviation σi can be related to the mean value mi by:

\[
\sigma_i = \gamma_i \cdot m_i, \quad i = 1, \ldots, n
\tag{40}
\]

This way we introduce the safety factors Sfi corresponding to the design variables
xi . The design point can be expressed by:

\[
y_i = S_{fi} \cdot m_i, \quad i = 1, \ldots, n
\tag{41}
\]

By (40) and (41), we replace σi and yi in equation (39) and get:

\[
u_i = \frac{S_{fi} - 1}{\gamma_i}, \quad i = 1, \ldots, n
\tag{42}
\]
Using equation (42), we can write (38) in the following form:


\[
\frac{S_{f2} - 1}{\gamma_2} = \beta_t \sqrt{\frac{(\partial H/\partial u_2)^2}{(\partial H/\partial u_1)^2 + (\partial H/\partial u_2)^2}}
\tag{43}
\]

or in terms of Sf2 :
\[
S_{f2} = 1 + \gamma_2 \cdot \beta_t \sqrt{\frac{(\partial H/\partial u_2)^2}{(\partial H/\partial u_1)^2 + (\partial H/\partial u_2)^2}}
\tag{44}
\]

The calculation of the normalized gradient ∂H/∂u is not directly accessible because
the mechanical analysis is carried out in the physical space, not in the standard space.
The computation of the normalized gradient is carried out by applying the chain rule
on the physical gradient ∂G/∂x:

\[
\frac{\partial H}{\partial u_i} = \frac{\partial G}{\partial y_k}\, \frac{\partial T_k^{-1}(u, y)}{\partial u_i}, \quad i = 1, \ldots, n, \; k = 1, \ldots, K
\tag{45}
\]

where T⁻¹(y, u) is the probabilistic transformation function. After some algebra, the
normalized gradient can be written as:

\[
\frac{\partial H}{\partial u_i} = \sigma_i\, \frac{\partial G}{\partial y_i}, \quad i = 1, \ldots, n
\tag{46}
\]

The distribution of the components of the vector u can be measured by the sensitivity
analysis of the limit state function with respect to the design point vector y.
\[
S_{f2} = 1 \pm \gamma_2 \cdot \beta_t \sqrt{\frac{\left|\partial G/\partial y_2\right|}{\left|\partial G/\partial y_1\right| + \left|\partial G/\partial y_2\right|}}
\tag{47}
\]

For a single limit state problem of n design variables, with the sum taken from j = 1
to n, equation (47) can thus be written in the following form:

\[
S_{fi} = 1 \pm \gamma_i \cdot \beta_t \sqrt{\frac{\left|\partial G/\partial y_i\right|}{\sum_{j=1}^{n}\left|\partial G/\partial y_j\right|}}, \quad i = 1, \ldots, n
\tag{48}
\]
with the optimum values u_i^* of the normalized vector:

\[
u_i^* = \pm \beta_t \sqrt{\frac{\left|\partial G/\partial y_i\right|}{\sum_{j=1}^{n}\left|\partial G/\partial y_j\right|}}, \quad i = 1, \ldots, n
\]

Here, the sign ± depends on the sign of the derivative, i.e.

\[
\frac{\partial G}{\partial y_i} > 0 \;\Leftrightarrow\; S_{fi} > 1, \quad i = 1, \ldots, n
\tag{49}
\]
\[
\frac{\partial G}{\partial y_i} < 0 \;\Leftrightarrow\; S_{fi} < 1, \quad i = 1, \ldots, n
\tag{50}
\]

Using these safety factors, we can satisfy the required reliability level and avoid the
complexity of the problem.
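The following Python sketch evaluates the optimum safety factors of (48) together with the sign rule (49)-(50) for the normal law of Table 8.1; the gradient values are illustrative placeholders (borrowed from the first column of Table 8.3), so the resulting factors are not meant to reproduce the chapter's results.

```python
# Sketch of the OSF evaluation: Eq. (48) with the sign rule (49)-(50).
import numpy as np

def optimum_safety_factors(grad_G, gamma, beta_t):
    """Normal-law OSF: S_fi = 1 +/- gamma_i * beta_t
    * sqrt(|dG/dy_i| / sum_j |dG/dy_j|), sign following that of dG/dy_i."""
    g = np.abs(np.asarray(grad_G, dtype=float))
    u_star = beta_t * np.sqrt(g / g.sum())   # optimum |u_i*| values
    return 1.0 + np.sign(grad_G) * np.asarray(gamma) * u_star

grad = np.array([-1.0520, -0.2160, -0.7318])   # illustrative dG/dy_i values
gamma = np.array([0.1, 0.1, 0.1])              # sigma_i = gamma_i * m_i
Sf = optimum_safety_factors(grad, gamma, beta_t=3.0)
y_design = Sf * np.array([10.0, 30.0, 10.0])   # design point y_i = S_fi * m_i
```

Because all the sensitivities are negative in this illustration, all the safety factors come out below one, as condition (50) requires.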
In the literature, the OSF method has been successfully applied for several static
examples (Kharmanda et al. 2003–2004c). For the transient analysis, (Yang et al.
2005) from Ford Motor Company compared the results and efficiencies of different
RBDO methods on an exhaust system. The objective was to minimize the weight of the
system subject to constraints that the reliability of the resultant forces in each frequency
region should be less than specified values. All in all 144 constraints were imposed, but
many of them were inactive. (Yang et al. 2005) tested several RBDO methods. They
concluded that: ‘(Kharmanda et al. 2004c) also used structural safety factors, based
on the sensitivity of the limit-state function, for RBDO. In addition to its simplified
computational framework to completely decouple the optimization and the reliability
analyses, the method has two advantages:

1. It incorporates the partial safety-factor concept with which most designers are
familiar. And, theoretically, safety factors do not have to be tied to the individual
random variables and thus the MPPs (Most Probable Points).
2. It produces progressively improved reliable designs in the initial steps that help
designers keep track of their designs.’

According to the experience of Ford Motor Company, our method is considered
a very good active constraint strategy (for problems with many constraints). For
modal analysis, it has been applied to a special case (Kharmanda et al. 2004d),
where the reliability-based optimum solution was determined subject to a prescribed
eigen-frequency fn. But if a failure interval [fa, fb] is given, it is very difficult to
determine the safest solution using the OSF method. So we have to develop an efficient
method to find the best point corresponding to the eigen-frequency for a given
frequency interval.
Table 8.2 Mean values by the safest point method for normal, lognormal and uniform distributions.

Law         Mean values
Normal      \(m_i = (y_i^a + y_i^b)/2, \; i = 1, \ldots, n\)
Lognormal   \(m_i = \sqrt{1+\gamma_i^2}\,\exp\!\bigl(\ln(y_i^a y_i^b)/2\bigr), \; i = 1, \ldots, n\)
Uniform     \(m_i = (y_i^a + y_i^b)/2, \; i = 1, \ldots, n\)

Figure 8.4 The safest point at frequency fn.

4.2 Safest point method (SP)

4.2.1 Basic formulation
The safest structure under free vibrations, for a given interval of eigen-frequency, is
found at the safest position of this interval, where the safest point has the same reliability
index relative to both sides of the interval. The use of the hybrid method here requires
multiple procedures and high computing time (Kharmanda et al. 2007). Thus, we
present efficient formulations of the safest point method for normal, lognormal and
uniform distributions (see Table 8.2).

4.2.2 Further development

Let us consider a given interval [fa, fb]. For the first mode shape, to get the reliability-based
optimum solution for the given interval, we consider the equality of the reliability
indices:

\[
\beta_a = \beta_b
\tag{51a}
\]
with

\[
\beta_a = \sqrt{\sum_{i=1}^{n} (u_i^a)^2} \quad \text{and} \quad \beta_b = \sqrt{\sum_{i=1}^{n} (u_i^b)^2}
\tag{51b}
\]

To verify the equality (51a), we propose the equality of each term. So we have:

\[
u_i^a = -u_i^b, \quad i = 1, \ldots, n
\tag{52}
\]

According to the normal distribution law, the normalized variable ui is given by (39);
using (52), we get:

\[
\frac{y_i^a - m_i}{\sigma_i} = -\frac{y_i^b - m_i}{\sigma_i}, \quad i = 1, \ldots, n
\tag{53}
\]

To obtain equality between the reliability indices (see equation (51a)), the mean value
of each variable corresponds to the structure at fn. So the mean values of the safest
solution are located in the middle of the variable interval [y_i^a, y_i^b] as follows:

\[
x_i = m_i = \frac{y_i^a + y_i^b}{2}, \quad i = 1, \ldots, n
\tag{54}
\]
In recent publications (Kharmanda et al. 2006, 2007), we found the safest point
method more suitable for modal analysis than the other methods, which are
complex to implement and converge with difficulty in this kind of study.
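A minimal sketch of the safest-point mean values of Table 8.2 follows; the function name and the input vectors (two hypothetical optimized designs at the interval bounds fa and fb) are illustrative.

```python
# Sketch of the safest-point mean values of Table 8.2 / Eq. (54).
import numpy as np

def safest_point_means(y_a, y_b, law="normal", gamma=None):
    """Mean values m_i of the safest structure for the interval [y_a, y_b]."""
    y_a, y_b = np.asarray(y_a, float), np.asarray(y_b, float)
    if law in ("normal", "uniform"):        # midpoint rule, Eq. (54)
        return 0.5 * (y_a + y_b)
    if law == "lognormal":                  # Table 8.2, lognormal law
        g = np.asarray(gamma, float)
        return np.sqrt(1 + g**2) * np.exp(0.5 * np.log(y_a * y_b))
    raise ValueError(f"unknown law: {law}")

m = safest_point_means([0.12331, 0.24105], [0.14441, 0.24120])
```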
To conclude this section, the OSF method is simple to implement, can satisfy required
reliability levels, has only a single type of variable y, and needs only a single simple
optimization process to determine the design point. This method can be successfully
used for static and transient analysis problems, but for a modal analysis, where the aim
is to optimize a structure for a given eigen-frequency interval, the safest point method
is very suitable for finding the best structure.
Finally, comparing the numerical and semi-numerical methods, we note that the
computational time of the numerical methods is very high relative to the semi-numerical
ones, because the former deal with two kinds of variables while the latter deal with only
one. As a result, the numerical methods solve both the optimization and the reliability
problems by numerical procedures, whereas the semi-numerical methods solve the
reliability problem in analytical form and the optimization problem by a numerical
procedure, which leads to a reduction of the computing time. In the next section, we
present some numerical examples that compare the different methods, with the object
of showing their advantages depending on the studied case.

5 Numerical applications
We study three examples: static, modal and transient cases in order to provide the
designer with the suitable method for each case.
Figure 8.5 Layout of tri-material cantilever beam (layer heights H1, H2, H3).

5.1 Static analysis: A cantilever tri-material beam
The objectives of the following static analysis are to demonstrate:

1. the DDO procedure cannot satisfy the required reliability level,
2. semi-numerical methods such as OSF are simple to implement relative to
numerical ones such as the hybrid method,
3. semi-numerical methods such as OSF efficiently reduce the problem scale relative
to numerical ones such as the hybrid method, which leads to a reduction of the
computing time.

The design problem under consideration pertains to a short tri-material cantilever


beam of length L = 100 mm, height H = 50 mm and width T = 20 mm, which is loaded
by a distributed pressure q = 15 N/mm2 . The beam structure is composed of three lay-
ers of material (Figure 8.5) of different Young’s moduli E1 = 200 GPa, E2 = 100 GPa
and E3 = 150 GPa, Poisson's ratios ν1 = 0.3, ν2 = 0.1 and ν3 = 0.2, and yield stresses
σ1^y = 48 MPa, σ2^y = 18 MPa and σ3^y = 42 MPa. The heights of the three layers are:
H1 = 10 mm, H2 = 30 mm, and H3 = 10 mm. To optimize the tri-material beam struc-
ture, the mean values mH1 , mH2 and mH3 of the heights H1, H2 and H3 are the design
variables. The physical heights H1, H2 and H3 are elements of the vector of random
variables. The target reliability index is taken to be: βt = 3, and the standard-deviations
are given by σH1 = 0.1mH1 , σH2 = 0.1mH2 and σH3 = 0.1mH3 . During the subsequent
design optimization processes, we consider all variables to be bounded by upper and
lower limits.

5.1.1 Optimization procedures

DDO procedure: The objective is to minimize the volume subject to the design
constraints, considering a safety factor Sf that is applied to the stress and based on
engineering experience. The structure has to be designed by considering the maximum
allowable values σj^w = σj^y/Sf, j = 1, 2, 3 for the von Mises stresses σj^max, j = 1, 2, 3 in the
most critical points in each of the three layers of different material. Thus, the structural
optimization problem with the safety factor taken into account, can be written as

\[
\begin{aligned}
\min_{H1, H2, H3} \ & \text{Volume}(H1, H2, H3)\\
\text{subject to } & \sigma_1^{\max}(H1, H2, H3) \le \sigma_1^w = \sigma_1^y/S_f\\
& \sigma_2^{\max}(H1, H2, H3) \le \sigma_2^w = \sigma_2^y/S_f\\
& \sigma_3^{\max}(H1, H2, H3) \le \sigma_3^w = \sigma_3^y/S_f
\end{aligned}
\tag{55}
\]

The associated reliability evaluation without consideration of the safety factor can
be written in the form

\[
\begin{aligned}
\min_{u_{H1}, u_{H2}, u_{H3}} \ & d(u_{H1}, u_{H2}, u_{H3})\\
\text{subject to } & \sigma_1^y - \sigma_1^{\max}(H1, H2, H3; u_{H1}, u_{H2}, u_{H3}) \le 0\\
& \sigma_2^y - \sigma_2^{\max}(H1, H2, H3; u_{H1}, u_{H2}, u_{H3}) \le 0\\
& \sigma_3^y - \sigma_3^{\max}(H1, H2, H3; u_{H1}, u_{H2}, u_{H3}) \le 0
\end{aligned}
\tag{56}
\]

Here, we take the value of the global safety factor applied to the yield stresses to be
Sf = 1.5. This way the allowable stresses will be σ1w = 32, σ2w = 12 and σ3w = 28 MPa.
After having optimized the structure according to (55), the resulting volume is found
to be V_DDO = 43 252 mm³. The reliability index depends on the distribution law, and
the optimum value of the reliability index is found to be β_DDO = 3.5127. Using DDO
we cannot control a required reliability level. However, by integrating the reliability
concept into the design optimization process (thereby performing RBDO), we can
satisfy the reliability constraint.

Hybrid procedure: The classical method implies very high computational cost and
exhibits weak convergence stability, so we use the hybrid method to satisfy the required
reliability level (within an admissible tolerance of 1%). In the hybrid procedure of RBDO,
we minimize the product of the volume and the reliability index subject to the limit state
functions and the required reliability level. The hybrid RBDO problem is written as

min Volume(H1, H2, H3) · dβ (mH1 , mH2 , mH3 , H1, H2, H3)
mH1 ,mH2 ,mH3 ,H1,H2,H3
y
subject to σ1max (H1, H2, H3) ≤ σ1
y (57)
σ2max (H1, H2, H3) ≤ σ2
y
σ3max (H1, H2, H3) ≤ σ3
dβ (mH1 , mH2 , mH3 , H1, H2, H3) ≥ βt

This optimization process is carried out in a hybrid design space. The resulting
optimal value of the reliability index is found to be dβ = 3.0001 ≈ βt (i.e., 0.03%
higher than the target reliability index). The resulting optimum volume is determined
as Vol_hybrid = 41 782 mm³. The experience of the designer with finite element software
Table 8.3 Safety factor values (β = 3).

Variables   ∂σ1/∂yi    ∂σ2/∂yi    ∂σ3/∂yi    Sf
H1          −1.0520    −0.2160    −0.7318    0.8255
H2          −0.7452    −0.2041    −0.6119    0.8458
H3          −0.8432    −0.6796    −0.8271    0.8108

plays a very important role in improving the objective function and controlling the
convergence. Although the method yields results that satisfy the required reliability
level within admissible tolerances, the problem is a complex optimization problem
and needs a large number of iterations to converge and improve the value of the
objective function.
OSF procedure: This method includes three main steps:

1. The first step is to obtain the design point (the Most Probable Point). Here, we
minimize the volume subject to the design constraints without consideration of
the safety factors. This way the optimization problem is simply written as:

\[
\begin{aligned}
\min_{H1, H2, H3} \ & \text{Volume}(H1, H2, H3)\\
\text{subject to } & \sigma_1(H1, H2, H3) \le \sigma_1^y\\
& \sigma_2(H1, H2, H3) \le \sigma_2^y\\
& \sigma_3(H1, H2, H3) \le \sigma_3^y
\end{aligned}
\tag{58}
\]

The design point is found to correspond to the maximum von Mises stresses
σ1max = 47.335 MPa, σ2max = 17.177 MPa and σ3max = 41.999 MPa, which are almost
equivalent to the yield stresses σ1^y, σ2^y and σ3^y.
2. The second step is to compute the optimum safety factors for normal distribution.
In this example, the number of the deterministic variables is equal to that of the
random ones. During the optimization process, we obtain the sensitivity values
of the limit state with respect to all variables. So there is no need for additional
computational cost. Table 8.3 shows the results leading to the values of the safety
factors, namely the sensitivity results for the different limit state functions.
3. The third step is to calculate the optimum solution. This encompasses including
the values of the safety factors in the values of the design variables in order to
evaluate the optimum solution.

5.1.2 Discussion
Table 8.4 presents the different results of the DDO and RBDO procedures. Both RBDO
procedures can satisfy the required reliability level βt = 3 but the DDO cannot. The
DDO may lead to high or low reliability levels because it does not control the reliability.
In order to demonstrate the efficiency of the OSF (semi-numerical) method relative to
the hybrid (numerical) procedure, we discuss below the results obtained by these pro-
cedures. The resulting design obtained by the OSF method is the best solution relative
to the design obtained by hybrid RBDO procedure as the objective is to provide the
Table 8.4 Results for DDO and RBDO procedures.

Variables   DDO       RBDO: Hybrid method   RBDO: OSF method
mH1         8.6285    8.3992                9.3974
mH2         25.232    24.753                21.851
mH3         9.3910    8.6298                9.1176
σ1max       31.152    33.096                34.347
σ2max       11.026    12.059                12.134
σ3max       27.999    29.718                30.915
H1          7.5034    7.4942                7.7576
H2          18.961    18.726                18.482
H3          7.4072    7.4368                7.3930
σ1y         47.488    47.009                47.335
σ2y         17.007    17.075                17.177
σ3y         41.598    41.997                41.999
β           3.5127    3.0001                3.0000
Volume      43 252    41 782                40 366

best compromise between cost and safety. The OSF methodology satisfies the required
reliability level βt = 3 and gives a smaller structural volume than the hybrid method for
the same reliability level. In order to improve the structure resulting from the hybrid
method, the designer can obtain several local optima and then select the best solution. The
resulting optimum volume obtained by OSF (V_OSF = 40 366 mm³) is smaller than the
volume determined by the hybrid method by 3.39%.
In general the DDO is simple to implement, but it has two kinds of optimization
variables, x and u, and also needs two optimization procedures: the first determines the
optimal solution using a safety factor, and the second yields the value of the reliability
index. Note that DDO cannot perform design subject to a required reliability level.
The hybrid method, as a numerical method, can generally satisfy the required reliability
level, but it has two types of optimization variables, x and y, and needs to solve
a single but complex optimization problem. This means that the designer needs more
iterations to obtain several local optima in order to improve the objective function, and
the hybrid method is complex to implement exactly. The OSF, as a semi-numerical method,
is simple to implement, can satisfy required reliability levels, has only a single type of
variable y, and needs only a single, simple optimization process to determine the design
point. It is demonstrated that the OSF method possesses several advantages: a smaller
number of optimization variables, good convergence stability, lower computing time,
and satisfaction of required reliability levels (see also Kharmanda et al. 2002, 2004c).

5.2 Modal analysis: An aircraft wing
The objectives of the following modal analysis are to demonstrate that the safest
point method is the most suitable for the modal cases because of its simple
implementation and its reduction of computing time relative to the other methods.

Figure 8.6 Aircraft wing.

The
wing is uniform along its length, with cross-sectional area as illustrated in Figure 8.6a.
It is firmly attached to the body of the airplane at one end. The chord of the airfoil
has dimensions and orientation as shown in Figure 8.6b. The wing is made of low-density
polyethylene with a Young's modulus of 38×10³ psi, a Poisson's ratio of 0.3, and
a density of 8.3×10⁻⁵ lbf·s²/in⁴. Assume the side of the wing connected to the plane is
completely fixed in all degrees of freedom. The wing is solid, and the material properties
are constant and isotropic.
Here, we can consider three structures: the first structure must be optimized subject
to the first frequency value fa of the given interval, the second one must be optimized at
the end frequency value fb of the interval, and the third structure must be optimized
subject to a frequency value fn that verifies the equality of the reliability indices relative
to both sides of the given interval (see Figure 8.4). Let us consider the interval [16, 18] Hz
as the given interval for designing the wing structure. This way, we consider the frequency
values as follows: fa = 16 Hz, fb = 18 Hz and fn = ? Hz, where fn must verify the equality
of the reliability indices βa = βb. Table 8.5 shows that the safest point method provides
the solution with a good computational time relative to the HM.

5.3 Transient analysis: A triangular plate

The objective of the following transient analysis is to demonstrate that the improved
hybrid method can provide the designer with a better optimum value than the hybrid
method. A triangular plate structure, illustrated in Figure 8.7, is submitted to a
pressure of 200 MPa. The Young's modulus is 207 GPa and the Poisson's ratio is 0.3.
The thickness parameters of the plate are R0 = 10 mm and T1 = 30 mm, and the fillet
radius is FIL = 10 mm. The yield stress is σy = 235 MPa.
The optimization problem is to find the optimum value of the structural volume
subject to the maximum stress (transient response). The hybrid RBDO problem can be
expressed as:

\[
\begin{aligned}
\min_{x,y} \ & \text{Volume}(x) \cdot d_\beta(x, y)\\
\text{subject to } & \sigma_{\max}(y) - \sigma_y = 0\\
& \sigma_k(y) - \sigma_y \le 0\\
\text{and } & d_\beta(x, y) \ge \beta_t
\end{aligned}
\tag{59a}
\]
Table 8.5 Results for Hybrid and SP procedures when βa = βb.

Parameter     Initial    Hybrid method   SP method
Fn:  A        0.13295    0.13391         0.12300
     B        0.24112    0.20138         0.22838
     C        0.30834    0.29656         0.29963
     D        0.26316    0.20562         0.22668
Fa:  A1       0.11295    0.12331         0.11301
     B1       0.20112    0.24105         0.21578
     C1       0.24834    0.28214         0.27162
     D1       0.18316    0.26306         0.23855
Fb:  A2       0.15295    0.14441         0.13320
     B2       0.28112    0.24120         0.24121
     C2       0.36834    0.31071         0.30939
     D2       0.34316    0.26330         0.26406
Fn            17.9030    17.1080         16.9790
Fa            14.3580    16.0990         16.0000
Fb            21.8460    17.9530         17.9510
Volume        6.18645    5.55177         5.83910
Time (s)      —          25              151

and the improved hybrid RBDO problem can be presented as follows:

\[
\begin{aligned}
\min_{x,y} \ & \text{Volume}(x) \cdot d_\beta(x, y) \cdot \text{Volume}(m_y)\\
\text{subject to } & \sigma_{\max}(y) - \sigma_y = 0\\
& \sigma_k(y) - \sigma_y \le 0\\
\text{and } & d_\beta(x, y) \ge \beta_t
\end{aligned}
\tag{59b}
\]

Here, T1, R0 and FIL are grouped into a random vector y; to optimize the design,
their means mT1, mR0 and mFIL are grouped into a deterministic vector x, with fixed
standard deviations equal to 0.1 mx. The normalized variable ui is given by:

ui = (yi − mi)/σi ,   i = 1, . . . , n                         (60)

where the mean mi and the standard deviation σi are the two parameters of the distri-
bution, usually estimated from the available data. Table 8.6 shows the hybrid and
improved hybrid results. Both the improved hybrid and the hybrid RBDO satisfy the
required reliability level βt. However, the optimal volume obtained by the improved
hybrid method is less than that obtained by the hybrid method: the volume reduction
is almost 26%, which leads to more economical structures.
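As a small illustration of the transformation in equation (60), the following Python
sketch maps physical realizations y of (T1, R0, FIL) onto normalized variables u,
using the initial means of this example and the fixed standard deviations σ = 0.1 mx
(a minimal sketch; the sample values are illustrative, not results from Table 8.6):

```python
import numpy as np

# Initial means of T1, R0 and FIL (mm), from the problem statement above.
m_x = np.array([30.0, 10.0, 10.0])
sigma = 0.1 * m_x                  # fixed standard deviations, 0.1 * m_x

def to_normalized(y):
    """Equation (60): u_i = (y_i - m_i) / sigma_i."""
    return (y - m_x) / sigma

def to_physical(u):
    """Inverse mapping back to the physical space."""
    return m_x + sigma * u

print(to_normalized(np.array([24.0, 9.8, 9.1])))   # a sample realization y
```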

Figure 8.7 Geometry and boundary conditions of the triangular plate structure
(dimensions in mm).

Table 8.6 Results for RBDO procedures by HM and IHM.

Parameter   HM        IHM

T1          24.985    24.058
FIL         8.5833    9.1013
R0          7.3251    9.8216
σy          234.92    235.04
mT1         29.678    26.092
mFIL        10.600    9.1062
mR0         7.6991    6.0869
σw          204.51    216.42
Volume      105 874   78 250
β           3.8096    3.8

In this example, we demonstrate that the improved hybrid method can improve the
structural performance relative to the hybrid method, but it requires a more complex
model (and a more complex implementation) than the hybrid method.

6 Conclusions
For the static analysis, we first demonstrated that the DDO procedure may lead to
low or high reliability levels, because it relies on a global safety factor chosen from
engineering experience and therefore cannot control the reliability level. By contrast,
all RBDO (Reliability-Based Design Optimization) methods respect the required
reliability level. Comparing the RBDO methods, it has been demonstrated that the
classical approach needs a high computing time relative to the hybrid method and
has weak convergence stability (see Kharmanda et al. 2001, 2002).

Table 8.7 Advantages and disadvantages of DDO and RBDO procedures.

DDO
  Advantages: simple to implement.
  Disadvantages: no satisfaction of reliability requirements; two optimization
  processes; two types of optimization variables x, u; may lead to local optima.

RBDO – 1. Numerical methods

CM
  Advantages: satisfaction of reliability requirements.
  Disadvantages: weak convergence stability; high computing time; very complicated
  to implement; two types of optimization variables x, u; may lead to local optima.

HM
  Advantages: satisfaction of reliability requirements.
  Disadvantages: single, complex optimization process; two types of optimization
  variables x, y; complicated to implement; may lead to local optima.

IHM
  Advantages: satisfaction of reliability requirements; improvement of the objective
  function.
  Disadvantages: single, complex optimization process; two types of optimization
  variables x, y; very complicated to implement; more iterations to improve the
  objective; may lead to local optima.

RBDO – 2. Semi-numerical methods

OSF
  Advantages: simple to implement; satisfaction of reliability requirements; single,
  simple optimization process; reduction of computing time; single type of
  variables y.
  Disadvantages: leads, at least, to local optima.

SP
  Advantages: simple to implement; satisfaction of reliability requirements; two
  simple optimization processes; reduction of computing time; single type of
  variables y.
  Disadvantages: used only for modal analysis.

The improved hybrid method needs a complex model to improve the optimum value
of the objective function relative to the hybrid method. The hybrid method has good
convergence stability, which makes it suitable for RBDO problems as a numerical
method. However, the hybrid RBDO problem is more complex than the deterministic
design problem and may fail to reach even a local optimum. To overcome both draw-
backs, the Optimum Safety Factor (OSF) method has been proposed: it provides
reliability-based optimum designs without additional computing cost for the proba-
bilistic (reliability) constraints and leads, at least, to local optima (Kharmanda et al.
2004c). As a result, the OSF method, being semi-numerical, is very efficient for RBDO
problems in static cases because of its simple implementation and the reduced number
of optimization variables. If the designer needs to improve the objective function, the
hybrid and improved hybrid methods are suitable, by testing several starting points
and then selecting the best solution. The improved hybrid method provides us with a better

solution than the hybrid method but needs a complex implementation (Mohsine et al.
2005, Kharmanda & Olhoff 2007).
For modal analysis, the hybrid method has been applied to the special case of a
structure performing free vibrations (Kharmanda et al. 2003), where the reliability-
based optimum solution was determined subject to a prescribed eigen-frequency fn.
The optimum safety factor method has also been applied to this case (Kharmanda
et al. 2004a). But if the failure interval [fa, fb] is given, the reliability-based optimum
solution cannot be determined using the optimum safety factor method, and the
hybrid method necessitates a complex procedure that optimizes three structures
simultaneously to obtain equality between the reliability indices. The semi-numerical
method called the Safest Point (SP) method is very suitable for modal cases because
of its simple implementation and small computing time (Kharmanda et al. 2006,
2007).
For transient analysis, the hybrid and improved hybrid methods (numerical methods)
and the OSF method (semi-numerical method) are all suitable. When computing time
must be saved and/or a simple implementation is needed, the OSF method is the best
approach. However, to obtain several solutions and improve the optimum value of
the objective function, the hybrid and improved hybrid methods are used (Mohsine
et al. 2006). The improved hybrid method provides the designer with a better local
optimum than the hybrid one, but the hybrid method is simpler to implement than
the improved hybrid method.
As a general conclusion, the DDO is simple to implement, but it involves two kinds
of optimization variables x and u and needs two optimization procedures: the first
determines the optimal solution using a safety factor, and the second yields the value
of the reliability index. Note that DDO cannot perform design subject to a required
reliability level. All numerical and semi-numerical RBDO methods satisfy the required
reliability level, but they differ in computing time, convergence stability, simplicity of
implementation, improvement of the objective function value, types of variables and
suitable uses.

References

Ditlevsen, O. & Madsen, H. 1996. Structural Reliability Methods. John Wiley & Sons.
Feng, Y.S. & Moses, F. 1986. A method of structural optimization based on structural system
reliability. J. Struct. Mech. 14:437–453.
Kharmanda, G., Mohamed, A. & Lemaire, M. 2001. New hybrid formulation for reliability-
based optimization of structures. The Fourth World Congress of Structural and Multidisci-
plinary Optimization, WCSMO-4, Dalian, China, 4–8 June 2001.
Kharmanda, G., Mohamed, A. & Lemaire, M. 2002. Efficient reliability-based design opti-
mization using hybrid space with application to finite element analysis. Structural and
Multidisciplinary Optimization 24:233–245.
Kharmanda, G., Mohamed, A. & Lemaire, M. 2003. Integration of reliability-based design
optimization within CAD and FE models. In: Recent Advances in Integrated Design and
Manufacturing in Mechanical Engineering, Kluwer Academic Publishers.
Kharmanda, G., El-Hami, A. & Olhoff, N. 2004a. Global Reliability–Based Design Optimiza-
tion. In: Frontiers on Global Optimization, C.A. Floudas (ed.), 255 (20), Kluwer Academic
Publishers.

Kharmanda, G. 2004b. Two points of view for developing reliability-based design optimization.
NT2F4 (New Trends in Fatigue and Fracture IV), Aleppo, Syria, 10–12 May 2004.
Kharmanda, G., Olhoff, N. & El-Hami, A. 2004c. Optimum values of structural safety fac-
tors for a predefined reliability level with extension to multiple limit states. Structural and
Multidisciplinary Optimization, 27:421–434.
Kharmanda, G., Olhoff, N. & El-Hami, A. 2004d. Recent Developments in Reliability-Based
Design Optimization (Keynote Lecture). In: Computational Mechanics, Proc. Sixth World
Congress of Computational Mechanics (WCCM VI in conjunction with APCOM’04), Sept.
5–10, 2004, Beijing, China. Tsinghua University Press & Springer-Verlag.
Kharmanda, G. & Olhoff, N. 2007. Extension of optimum safety factor method to non-
linear reliability-based design optimization. Journal of Structural and Multidisciplinary
Optimization, to appear.
Kharmanda, G., Altonji, A. & El-Hami, A. 2006. Safest point method for reliability-based
design optimization of freely vibrating structures. 1st International Francophone Congress
for Advanced Mechanics, IFCAM01, Aleppo, Syria, 02–04 May 2006.
Kharmanda, G., Makhloufi, A. & El-Hami, A. 2007. Efficient computing time reduction for
reliability-based design optimization. Qualita2007, 20–22 March 2007, Tangier, Morocco.
Koch, P.N., Yang, R.J. & Gu, L. 2004. Design for six sigma through robust optimization. Struct.
and Multidisc. Optim. 26:235–248.
Mohsine, A., Kharmanda, G. & El-Hami, A. 2006. Improved hybrid method as a robust
tool for reliability-based design optimization. Structural and Multidisciplinary Optimization
32:203–213.
Mohsine, A. 2006. Contribution à l'optimisation fiabiliste en dynamique des structures
mécaniques. Thèse de doctorat, INSA de Rouen, France (in French).
Tu, J., Choi, K.K. & Park, Y.H. 1999. A new study on reliability-based design optimization.
Journal of Mechanical Design, ASME 121(4):557–564.
Youn, B.D. & Choi, K.K. 2004. Selecting Probabilistic Approaches for Reliability-Based Design
Optimization. AIAA Journal 42(1):124–131.
Yang, R.J., Chuang C., Gu, L. & Li, G. 2005. Experience with approximate reliability-
based optimization methods II: an exhaust system problem. Structural and Multidisciplinary
Optimization 29:488–497.
Chapter 9

Advances in solution methods for
reliability-based design optimization

Alaa Chateauneuf & Younes Aoues
University Blaise Pascal, France

ABSTRACT: The solution of Reliability-Based Design Optimization implies high
computational efforts due to the coupling of reliability and optimization problems. The prob-
abilistic constraint is the key constraint in RBDO, which requires considerable computational
effort and reveals the classical iterative problems of numerical efficiency, accuracy and stability.
To solve the RBDO systems, three approaches are commonly used: the two-level approach, the
one-level approach and the decoupled approach. A good algorithm should satisfy the conditions
of efficiency, precision, generality and robustness. This chapter describes the recent advances in
numerical methods for RBDO solution, in order to give a comprehensive overview of the basis
and characteristics of the different approaches. The numerical applications on simple structures
allow us to compare the efficiency of the RBDO approaches.

1 Introduction
Reliability-Based Design Optimization (RBDO) aims at searching for the best com-
promise between cost reduction and reliability assurance, by considering system
uncertainties. Although the basic RBDO ideas have been established more than thirty
years ago, the solution is not yet easy, even for simple structures. The difficulty
lies in the consideration of the reliability constraints, which require a large com-
putational effort and involves classical numerical problems, such as convergence,
accuracy and stability. The situation becomes worse when finite element and CAD
models are involved, especially when material and geometrical nonlinearities are
considered.
While the optimization process is carried out in the space of the design variables, the
reliability analysis is performed in the space of the random variables, where a lot of
numerical calculations are required to evaluate the failure probability. Consequently,
in order to search for the optimal structural configuration, the design variables are
repeatedly changed, and each set of design variables corresponds to a new random
variable space which then needs to be manipulated to evaluate the structural reliability
at that point (Murotsu et al. 1994). Because so many repeated searches are needed
in these two spaces, the computational time for such an optimization becomes the
main problem.
Figure 9.1 shows the main models involved in the RBDO modeling of engineering
structures. The nested optimization, reliability, CAD and finite element models involve
nonlinear iterative numerical procedures, where the problems of convergence, preci-
sion and computation time are omnipresent.

Figure 9.1 Nested models in the RBDO of engineering structures: optimization
problem (design space), reliability problem (random variable space), CAD model
(geometrical variables), finite element model (nodal variables), and nonlinearities
and transient behavior (mechanical variables).

For practical design, the cost of a single finite element analysis (already a time-
consuming procedure) is multiplied by a factor between ten and several thousand,
which cannot be afforded in the design process. The computational burden is thus a
major obstacle that researchers must overcome in order to allow for practical
applications.
Generally speaking, a good solver should satisfy the conditions of efficiency (compu-
tation time), precision (accuracy in finding the optimum), generality (capability to
deal with different kinds of problems, with or without a large number of variables)
and robustness (stability of the convergence for any admissible initial point, local or
global convergence criteria, etc.).
In the last decade, many advanced methods and techniques have been intensively
developed in both fields: optimization and reliability. This chapter aims to describe
the most common numerical methods to solve RBDO problems. After describing the
basic formulation, the two-level, the one-level and the decoupled approaches are pre-
sented and discussed. Numerical applications are then presented for illustration and
comparative purposes. For more details about the presented approaches, the reader is
encouraged to review the original works referenced at the end of this chapter.

2 Basic RBDO formulation


Basically, the RBDO problem is defined as the minimization of either the initial cost or
the expected total cost (i.e. initial and expected failure costs), subjected to the constraint
of an admissible failure probability Pft , in addition to the other structural constraints.
As mentioned above, the particularity in RBDO lies in the computation of the reli-
ability constraint, which involves additional computational effort and convergence
difficulties. This constraint can be evaluated by one of the reliability methods, such
as FORM/SORM, RSM or even Monte Carlo simulation (Ditlevsen and Madsen 1996,
Rackwitz 2001, Lemaire 2006). The RBDO problem is formulated as:
min f (d)
d
subject to Pf (d) ≤ Pft (1)
and gj (d) ≤ 0
Figure 9.2 Reliability index solution and probabilistic transformation: the failure
domain G(x, d) ≤ 0 in the physical space is mapped to Gu(u, d) ≤ 0 in the normalized
space, where the MPP P* lies at distance β from the origin.

where d is the vector of design variables, f (d) is the objective function, gj (d) are the
structural deterministic constraints, Pf (d) is the failure probability of the structure and
Pft is the admissible failure probability. In the First Order Reliability Method (FORM),
the failure probability Pf is given as a function of the reliability index β:

Pf(d) = Φ(−β(d)) ≈ Pr[G(X, d) ≤ 0]                             (2)

where X is the vector of random variables (whose realization is noted x), Pr[·] is the
probability operator and Φ(·) is the standard Gaussian cumulative distribution function.
It is to be noted that the design variables d may be either independent deterministic
variables or distribution parameters, especially the mean values, of some of the random variables.
These two cases should be carefully taken into account when computing the gradient
vectors. The reliability level is defined by an invariant reliability index β, as defined by
(Hasofer and Lind 1974), which is evaluated by solving the constrained optimization
problem:

β = min ‖u‖ = √( Σi (Ti(x))² )
                                                               (3)
under: G(T(x), d) ≤ 0

where ‖u‖ is the distance between the median point and the failure subspace in the
normalized space ui, and Ti(·) is an appropriate probabilistic transformation, i.e.
ui = Ti(x). The image of the performance function G(x, d) in the normalized space is
written Gu(u, d) = G(x, d) (Figure 9.2). The solution of this problem is called the Most
Probable Failure Point (MPP), the design point or the β-point, where β = ‖u*‖; it is
denoted P*, and written x* or u* depending on whether the physical or the normalized
space is considered. In fact, the term Most Probable Point is not rigorous from the
probabilistic point of view: P* is just the point corresponding to the maximum joint
density in the failure domain. However, in RBDO, the term MPP is preferred to the
term design point, as it avoids confusion between design optimization and design
for reliability.
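As an illustration, equation (3) can be solved directly with a general-purpose
optimizer. The Python sketch below assumes independent normal variables, so that
Ti(x) = (xi − mi)/σi, and a hypothetical linear limit state; it is a minimal demonstration
under those assumptions, not one of the algorithms discussed in this chapter:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: two independent normal variables and a linear
# limit state G(x, d) = d*x1 - x2 (failure when G <= 0).
m = np.array([500.0, 300.0])    # means (assumed values)
s = np.array([50.0, 45.0])      # standard deviations (assumed values)

def g(x, d=1.0):
    return d * x[0] - x[1]

def beta_form(d=1.0):
    """Solve eq. (3): minimize ||u|| subject to G(T^-1(u), d) <= 0."""
    cons = {"type": "ineq",                     # SLSQP convention: fun >= 0
            "fun": lambda u: -g(m + s * u, d)}  # enforces G <= 0
    res = minimize(lambda u: np.linalg.norm(u), x0=np.ones(2),
                   method="SLSQP", constraints=[cons])
    return res.fun, m + s * res.x               # beta and the MPP x*

beta, x_star = beta_form()
print(f"beta = {beta:.3f}, MPP = {x_star}")
```

For this linear limit state the FORM index is available in closed form,
β = (d m1 − m2)/√((d σ1)² + σ2²) ≈ 2.97, which the optimizer reproduces.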

For the reliability problem described in equation 3, the Kuhn-Tucker optimality
conditions are written:

∇u‖u‖ + λ ∇u Gu(u*, d) = 0
                                                               (4)
Gu(u*, d) = 0

where ∇u is the gradient operator in the normalized space and λ is the Lagrange multi-
plier. The solution of the above equations leads to λ = 1/‖∇u Gu(u*, d)‖, and hence
the reliability problem must satisfy the conditions:

Gu(u*, d) = 0
                                                               (5)
∇u Gu(u*, d) · u* + ‖∇u Gu(u*, d)‖ ‖u*‖ = 0

The optimization process is carried out in the space of the design parameters d, which
are deterministic. In parallel, the solution of the reliability problem is performed in
the space of the random variables by solving the optimization problem in equation 3.
Traditional reliability-based design optimization requires a double loop iteration pro-
cedure, where reliability analysis is carried out in the inner loop for each change in the
design parameters, in order to evaluate the reliability constraints. The computational
time for this procedure is extremely high due to the multiplication of the number of
iterations in both optimization and reliability problems, involving a very high number
of mechanical analyses.
Recent developments in the literature aim at solving these numerical difficulties
through three main approaches:

– Two-level approaches, which are based on the improvement of the traditional
  double-loop approach by increasing the efficiency of the reliability analysis.
– Mono-level approaches, which aim at solving simultaneously the optimization and
  the reliability problems within a single loop dealing with both design and random
  variables.
– Decoupled approaches, where the reliability constraint is replaced by an equiva-
  lent deterministic (or pseudo-deterministic) constraint, involving some additional
  simplifications.

In the following sections, the basic ideas behind these approaches will be briefly
described.

3 Two-level approaches
A straightforward approach to solve RBDO problems is a two-level approach, where
the outer loop aims to solve the optimization problem by improving the design variables
d and the inner loop aims to solve the reliability problem by dealing with the random
variables x. In order to reduce the computational effort in the two-level formulation,
two RBDO approaches have been proposed to deal with probabilistic constraints:

– Reliability Index Approach (RIA), which considers the cost reduction under the
  reliability index constraint.
– Performance Measure Approach (PMA), which involves an inverse reliability
  problem as an alternative constraint.

3.1 Reliability Index Approach (RIA)


Traditionally, the RBDO procedure is solved in the two spaces: the space of design
variables, corresponding to deterministic physical space and the space of Gaussian
random variables, obtained by probabilistic transformation of the random physi-
cal variables. In the classical approach, the RBDO is calculated by nesting the two
following problems:

• optimization problem under reliability constraints:

min  f(d)
 d
subject to  β(d) ≥ βt                                          (6)
and         gj(d) ≤ 0

where f (d) is the objective function, gj (d) are the associated deterministic con-
straints, β(d) is the reliability index of the structure and βt is the target
reliability.
• calculation of the reliability index β(d):

min  ‖u‖ = √( Σi [Ti(x, d)]² )
 x                                                             (7)
subject to  G(x, d) ≤ 0

where ‖u‖ is the distance between the origin and the considered point in the normal-
ized random space, G(x, d) is the limit state function and Ti(·) is the probabilistic
transformation to the normalized space.
The solution of this RBDO problem consists in solving the two nested optimization
problems. For each new set of design parameters, the reliability analysis is performed
in order to get the new MPP corresponding to the given reliability level. As illustrated
in Figure 9.3, this procedure leads to a slow convergence scheme and to zigzagging,
due to the sequential changes of the optimal point and the Most Probable Point. The
method is somehow similar to relaxation procedures, which are known for slow
convergence. Actually, it is well established that RIA converges slowly, or even fails
to converge, for a number of problems (Choi and Youn 2002).
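A bare-bones sketch of this nested (double-loop) RIA procedure is shown below,
reusing the placeholder data and FORM helper of the sketch in Section 2; the
objective f(d) = d is a stand-in, not one of the chapter's examples:

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder data and FORM helper, as in the sketch of Section 2.
m, s = np.array([500.0, 300.0]), np.array([50.0, 45.0])
g = lambda x, d: d * x[0] - x[1]

def beta_form(d):
    """Inner loop (eq. 7): full FORM analysis for the current design d."""
    cons = {"type": "ineq", "fun": lambda u: -g(m + s * u, d)}
    res = minimize(lambda u: np.linalg.norm(u), np.ones(2),
                   method="SLSQP", constraints=[cons])
    return res.fun

BETA_T = 3.0   # target reliability index (assumed value)

# Outer loop (eq. 6): every constraint evaluation triggers an inner
# reliability analysis, which is what makes RIA expensive.
res = minimize(lambda d: d[0], x0=[1.0], method="SLSQP",
               constraints=[{"type": "ineq",
                             "fun": lambda d: beta_form(d[0]) - BETA_T}],
               bounds=[(0.5, 3.0)])
print("optimal design:", res.x)
```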

3.2 Performance Measure Approach (PMA)


This method is based on an inverse reliability analysis, where the performance function
level is specified as a constraint, instead of the reliability index itself (Tu et al. 1999,
2000). The performance measure is written:

Gp(d) = FG⁻¹[Φ(−βt); x, d]                                     (8)
Figure 9.3 Illustration of (a) RIA and (b) PMA searches: RIA zigzags between
successive MPPs on the moving limit state G(d) = 0, while PMA iterates on the
hyper-sphere of radius βt.

where Φ(·) is the standard Gaussian cumulative distribution function, FG(·) is the CDF
of the performance function G(·) and FG⁻¹(·) its inverse. In the standard Gaussian
space, the performance measure is directly evaluated at the Most Probable Failure
Point P*, such that the target reliability can be satisfied:

Gp(d) = G(x*, d | ‖u*‖ = βt)                                   (9)

The RBDO problem is then formulated as:

min  f(d)
 d
subject to  Gp(d) ≤ 0                                          (10)
and         gj(d) ≤ 0

where Gp (·) is obtained by solving the problem defined in equation 9.


PMA has been shown to be efficient and robust, since it is easier to minimize a compli-
cated objective function subjected to a simple constraint than to minimize a simple
objective function subjected to a complicated constraint. However, several numerical
examples using PMA show inefficiency and instability in the assessment of proba-
bilistic constraints during the RBDO process, even with the Advanced Mean Value
(AMV) or Hybrid Mean Value (HMV) methods (Youn and Choi 2004a). For this
reason, Youn and Choi (2004b) proposed a coupling of HMV with the Response
Surface Method, specifically developed for reliability and optimization analyses.
In Figure 9.3, the search scheme of PMA is compared with RIA. While RIA is zigzag-
ging, PMA goes first to the hyper-sphere with a radius equal to the target reliability
index, then iterations are carried out on this hyper-sphere. This is the reason why
convergence is faster and more stable in the case of PMA.
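The inverse step behind equation (9) is often performed with an AMV-type recursion,
u(k+1) = −βt ∇uGu(u(k))/‖∇uGu(u(k))‖, which stays on the hyper-sphere of radius βt.
A minimal sketch under the same independent-normal placeholder assumptions as
before, with finite-difference gradients (the plain AMV variant, not HMV):

```python
import numpy as np

m, s = np.array([500.0, 300.0]), np.array([50.0, 45.0])
g = lambda x, d: d * x[0] - x[1]

def performance_measure(d, beta_t=3.0, n_iter=20, h=1e-6):
    """AMV-type search of min G on the sphere ||u|| = beta_t (eq. 9)."""
    gu = lambda u: g(m + s * u, d)                 # limit state in u-space
    u = np.zeros(2)
    for _ in range(n_iter):
        grad = np.array([(gu(u + h * e) - gu(u)) / h for e in np.eye(2)])
        u = -beta_t * grad / np.linalg.norm(grad)  # stay on the sphere
    return gu(u)    # Gp(d), used in the probabilistic constraint of eq. (10)

# Negative here: the design d = 1 is slightly infeasible at beta_t = 3,
# consistent with its FORM index of about 2.97 found earlier.
print(performance_measure(1.0))
```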

Although many applications, such as those given by Frangopol (1995) and
Nikolaidis and Burdisso (1988), are based on RIA algorithms, PMA is increasingly
used for large-scale problems. Lee et al. (2002) have conducted a comparative study
between RIA and PMA, where RIA was shown to be less efficient for high reliability
levels. They analyzed several examples and concluded that conventional RIA is not
computationally attractive compared with the recently introduced target-performance-
based approaches.

4 Mono-level approaches
The mono-level methods aim at improving the efficiency of RBDO procedures by
introducing the reliability analysis in the same loop as the optimization. The basis of
the one-level approaches consists in solving both optimization and reliability problems
without nesting the two problems. In this way, parallel convergence can be reached in
both design and random spaces, and the computational cost may be saved.
Among the mono-level approaches in the literature, one can cite the well-known
works of Madsen and Friis Hansen (1992) and Kuschel and Rackwitz (1997), which are
based on reformulating the RBDO problem. The solution can then be obtained by
traditional nonlinear optimization algorithms.

4.1 Total cost formulation


The work of Madsen and Friis Hansen (1992) belongs to the earlier efforts on this
topic. They proposed a combined method integrating the expected failure cost in the
objective function. The proposed mono-level formulation is written as:

min  CT(d) = CI(d) + Cf Φ(−‖u‖)
 d
subject to  Gu(u, d) = 0                                       (11)
and         u/‖u‖ = −∇u Gu(u, d)/‖∇u Gu(u, d)‖

The last condition can also be written:

∇u Gu(u, d) · u + ‖∇u Gu(u, d)‖ ‖u‖ = 0                        (12)

This formulation has the advantage of being solvable by standard optimization algo-
rithms, but it requires the explicit implementation of the probabilistic transformation,
as well as the computation of second-order derivatives. The numerical examples
carried out by the authors showed a very large number of mechanical calls compared
to two-level RBDO models. Despite the high computational cost, further improve-
ments of the combined method remain possible to make it an attractive alternative to
the classical nested RBDO.

4.2 Formulation with optimality conditions


Kuschel and Rackwitz (1997) have developed two formulations for RBDO: either by
minimizing the expected total cost, or by maximizing the structural reliability for a

given cost. In this mono-level approach, the reliability constraints are replaced by
Karush-Kuhn-Tucker conditions for the first order reliability problem. These optimal-
ity conditions are then introduced as new constraints in the mono-level optimization
problem.
The total cost formulation is written as:

min  f(d, u) = CI(d) + Cf(d) Φ(−‖u‖)
 d
subject to  Gu(u, d) = 0
            ∇u Gu(u, d) · u + ‖∇u Gu(u, d)‖ ‖u‖ = 0
            Φ(−‖u‖) ≤ Pft                                      (13)
            u = T(x, d)
and         gj(d) ≤ 0

The maximum reliability formulation is written as:

max  ‖u‖
 d
subject to  Gu(u, d) = 0
            ∇u Gu(u, d) · u + ‖∇u Gu(u, d)‖ ‖u‖ = 0
            CI(d) + Cf(d) Φ(−‖u‖) ≤ Ct                         (14)
            u = T(x, d)
and         gj(d) ≤ 0

To allow for efficient solution of both problems, the sensitivity operators are pro-
vided in the algorithm. The reliability index sensitivity is given by (Enevoldsen and
Sørensen 1994):

∂β/∂dk = [1/‖∇u Gu(u*, d)‖] · ∂Gu(u*, d)/∂dk                   (15)

and the expected cost sensitivities are computed as follows:

∂CT(u, d)/∂dk = ∂CI(d)/∂dk + [∂Cf(d)/∂dk] Φ(−‖u‖)
                                                               (16)
∂CT(u, d)/∂ui = −Cf(d) φ(‖u‖) ui/‖u‖

The authors applied this approach to several examples and demonstrated its
efficiency.
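A sketch of this single-loop idea is given below: the design variable d and the
standard-normal vector u are concatenated into one optimization vector, and the
limit state and collinearity conditions of equation (13) are imposed as equality
constraints. The cost model and data are placeholders carried over from the earlier
sketches, not the authors' formulation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

m, s = np.array([500.0, 300.0]), np.array([50.0, 45.0])
g = lambda x, d: d * x[0] - x[1]
gu = lambda u, d: g(m + s * u, d)          # limit state in standard space

def grad_gu(u, d, h=1e-6):
    return np.array([(gu(u + h * e, d) - gu(u, d)) / h for e in np.eye(2)])

PF_T = 1e-3               # admissible failure probability (assumed)
CI = lambda d: d          # placeholder initial cost
CF = 1.0e4                # placeholder failure cost

def total_cost(z):        # z = [d, u1, u2]
    return CI(z[0]) + CF * norm.cdf(-np.linalg.norm(z[1:]))

cons = [
    {"type": "eq", "fun": lambda z: gu(z[1:], z[0])},    # Gu(u, d) = 0
    {"type": "eq", "fun": lambda z:                      # collinearity (eq. 12)
        grad_gu(z[1:], z[0]) @ z[1:]
        + np.linalg.norm(grad_gu(z[1:], z[0])) * np.linalg.norm(z[1:])},
    {"type": "ineq", "fun": lambda z:                    # Pf <= Pft
        PF_T - norm.cdf(-np.linalg.norm(z[1:]))},
]
res = minimize(total_cost, x0=[1.2, -2.0, 2.0], method="SLSQP",
               constraints=cons)
print("d* =", res.x[0], "u* =", res.x[1:])
```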

4.3 Hybrid formulation


A mono-level approach has also been introduced via the hybrid formulation proposed
by Kharmanda et al. (2002), which combines deterministic and random variables.

The RBDO formulation is based on defining a new objective function F(x, d) which
integrates cost and reliability aspects as follows:

min  F(d, x) = f(d) · Tβ(x, d)
d,x
subject to  gj(d) ≤ 0
            Tβ(x, d) ≥ βt                                      (17)
and         G(x, d) ≤ 0

where Tβ(x, d) is the image of ‖u(x, d)‖ in the physical space (while ‖u(x, d)‖ is a
straight line, Tβ is generally a curve). The minimization of the function F(x, d) is
carried out in the hybrid space of deterministic and random variables. An example of
this hybrid design space is given in Figure 9.4, where the reliability levels Tβ are rep-
resented by ellipses (case of a normal joint distribution), the objective function levels
are given by solid curves and the limit state function is represented by dashed lines.
Two important points can be observed: the optimal solution Pd* and the reliability
solution Px* (i.e. the design point found at the intersection of the curves G(x, d) = 0
and Tβ = βt). This hybrid space contains all information about the RBDO model (e.g.
optimal points, sensitivities, reliability levels, objective function iso-values and
constraints).
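A compact sketch of the hybrid objective of equation (17), with dβ(x, d) taken as the
distance ‖u(x, d)‖ for independent normal variables; functions and data are the same
placeholders used in the previous sketches, so this only illustrates the structure of the
formulation:

```python
import numpy as np
from scipy.optimize import minimize

m, s = np.array([500.0, 300.0]), np.array([50.0, 45.0])
g = lambda x, d: d * x[0] - x[1]
BETA_T = 3.0
f = lambda d: d                            # placeholder cost function

def hybrid_rbdo():
    # z stacks the design variable d and a realization x of the random
    # variables; both are optimized simultaneously (eq. 17).
    d_beta = lambda z: np.linalg.norm((z[1:] - m) / s)
    F = lambda z: f(z[0]) * d_beta(z)      # hybrid objective f(d) * T_beta
    cons = [
        {"type": "ineq", "fun": lambda z: d_beta(z) - BETA_T},  # T_beta >= beta_t
        {"type": "ineq", "fun": lambda z: -g(z[1:], z[0])},     # G(x, d) <= 0
    ]
    z0 = np.array([1.2, 0.9 * m[0], 1.1 * m[1]])
    return minimize(F, z0, method="SLSQP", constraints=cons)

res = hybrid_rbdo()
print("d* =", res.x[0], "x* =", res.x[1:])
```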
The optimality conditions for this hybrid formulation are:

∇x F(d, x) − λ ∇x Tβ(x, d) + ∇x G(x, d) = 0
∇d F(d, x) + Σj λj ∇d gj(d) − λ ∇d Tβ(x, d) + ∇d G(x, d) = 0
λj gj(d) = 0                                                   (18)
λ (βt − Tβ(x, d)) = 0
G(x, d) = 0

Figure 9.4 Hybrid design space, showing the objective function levels f(x), the limit
state levels G(x, y) = 0, the reliability ellipses Tβ = βt, the optimal solution Pd* and
the reliability solution Px*.



It can be shown that these optimality conditions satisfy the initial two-level RBDO
formulation (Kharmanda et al. 2003). While the method is theoretically attractive,
numerical applications have shown that special care is needed in the implementation
of such a procedure, in order to ensure efficiency and convergence.
Kaymaz and Marti (2006) have developed a specific formulation to apply the two-
and one-level approaches to elastoplastic structural behavior. In this study, the one-
level approach required a more complex formulation and a larger number of opti-
mization variables, but no convergence difficulties were observed. According to
Royset et al. (2001), the mono-level approach may have several disadvantages: 1) even
with first-order optimization algorithms, the mono-level approach requires second-
order derivatives; 2) an explicit formulation of the probabilistic transformation is
required; 3) the mono-level approach is not suitable for system reliability constraints.
Nevertheless, the mono-level approach seems very attractive, but it still requires
specific developments.

5 Decoupled approaches
The idea of decoupling the optimization and reliability problems is very attractive, as
nested loops can be avoided and a lot of reliability analyses can be saved. This is gen-
erally carried out by defining a specific approximation and an equivalent determinis-
tic parameter. However, the main challenge lies in the specification of an equivalent
RBDO problem that preserves sufficient accuracy.
A basic idea consists in defining an equivalent deterministic constraint in terms
of the standard deviation of the performance function. The optimal design is then
searched for under approximated percentile of the performance function; this method is
known as the Approximate Moment Approach (AMA). The updating of the equivalent
deterministic constraint can be carried out by performing a reliability analysis after
each convergence to new optimal points.
Starting from the initial point, an alternative solution consists in performing a reli-
ability analysis to determine the Most Probable Failure Point (MPP) and hence the
reliability index and the safety factors. The new limit state equation at the MPP is then
used as a constraint in the deterministic optimization analysis, whose outputs are the
new design parameters. These two steps can be solved in sequence until convergence
(Torng and Yan 1993; Zou et al. 2004).
Der Kiureghian and Polak (1988), Kirjner-Neto et al. (1998) and Royset et al.
(2001) developed decoupled approaches by reformulating the RBDO problem as a
deterministic semi-infinite optimization problem, where an outer approximation
method allows the reliability problem to be solved independently of the optimization
scheme. In this approach, the reliability constraint is first transformed into an infinite
number of deterministic limit state constraints, and the application of an outer pro-
gramming algorithm then allows the RBDO problem to be solved.
In a recent work, Ching and Hsu (2006) proposed a method to transform the relia-
bility constraint into a deterministic constraint, by introducing a so-called limit state
factor multiplying the nominal limit state. When the equivalent deterministic con-
straint is defined, the RBDO can be solved as a classical deterministic optimization
problem. However, it remains to be proven whether this approach can be applied to
real engineering structures, with several limit states involving a large number of design
and random variables.

5.1 Approximate Moment Approach (AMA)


This approach transforms the probabilistic constraints to an approximated determin-
istic constraint, given by a percentile of the performance function (similar to the
characteristic value approach in classical design). The RBDO problem is written as
(Koch et al. 2007):

min  f(d)
 d                                                             (19)
subject to  mG(d) + k σG(x, d) ≤ 0

where mG and σG are respectively the mean and the standard deviation of the perfor-
mance function, and k is a coefficient to be specified for a given safety level. Unlike
other methods, the AMA does not require a reliability analysis, as the only required
information is the first and second moments of the performance function. While the
mean is approximately computed in terms of the mean values of the random variables,
the variance is based on a first-order development of the performance function, which
can be written for independent variables as:

σG² = Σi ( ∂G(x, d)/∂xi |x=mX )² σXi²                          (20)

The method is efficient, as practically no extra cost is required with respect to stan-
dard deterministic optimization. However, this approach implies many simplifications
and cannot lead to accurate reliability results. Consequently, the error in the reliability
estimation does not allow for a convenient RBDO procedure, and in many cases leads
to meaningless results. The main defect lies in the assumption that the random vari-
ables and the performance function are normally distributed, which is far from appro-
priate for most engineering structures. The other strong assumption lies in the
computation of the variance of G, which assumes a linear combination of random
variables, leading to a very limited field of application.
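The moment estimates behind equations (19)–(20) are straightforward to code; the
sketch below uses finite differences for the gradient and evaluates a k-sigma margin
written as mG − kσG for a limit state where failure corresponds to G ≤ 0 (placeholder
data as before, with k chosen arbitrarily):

```python
import numpy as np

m, s = np.array([500.0, 300.0]), np.array([50.0, 45.0])
g = lambda x, d: d * x[0] - x[1]

def moments_of_G(d, h=1e-6):
    """First-order mean and standard deviation of G (eq. 20)."""
    mG = g(m, d)                                   # G at the mean point
    grad = np.array([(g(m + h * e, d) - mG) / h for e in np.eye(len(m))])
    sG = np.sqrt(np.sum((grad * s) ** 2))          # linearized std-deviation
    return mG, sG

k = 3.0                 # coefficient for the chosen safety level (assumed)
mG, sG = moments_of_G(1.0)
print("mG - k*sG =", mG - k * sG)   # >= 0 means the percentile check is met
```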

5.2 Sequential Optimization with Reliability Assessment (SORA)


The Sequential Optimization with Reliability Assessment (SORA) method is based on
a single-loop strategy composed of a sequence of deterministic optimization and reli-
ability analyses. In each loop, the deterministic optimization is carried out, then the
performance measure is checked and updated. The new value of the performance
measure is then used in the next loop as a constraint limit.

Three major ideas are introduced in the SORA method (Du & Chen 2002):

– A reliability percentile formulation is used to evaluate the design feasibility at the
  desired reliability level.
– An equivalent deterministic optimization is applied in order to reduce the number
  of reliability analyses.
– An efficient MPP search algorithm (the Modified Advanced Mean Value method,
  MAMV) is used for the inverse reliability evaluation.

The use of the reliability percentile instead of a full reliability analysis leads to com-
putational time reduction. This percentile allows for the identification of the feasible
domain in design optimization. For a given reliability level R = 1 − Pf, the percentile
reliability performance is given by:

Gp = G(x*, d)   such that:   Pr[G(X, d) ≥ Gp] = R              (21)

The RBDO model can thus be written as:

min  f(d)
 d                                                             (22)
subject to  G(x*, d) ≥ 0

This formulation has the advantage of being fully deterministic, so it can be solved
by any classical optimization algorithm. However, the solution of equation 21
requires several calls to the structural model, which reduces the efficiency of the
approach.
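A schematic SORA cycle is sketched below: step 1 solves the equivalent deterministic
problem (eq. 22) with the limit state evaluated at the current MPP, and step 2 updates
the MPP by an AMV-type inverse reliability search (as in the sketch of Section 3.2).
Data and functions are the same placeholders; the update strategy follows Du &
Chen (2002) only loosely:

```python
import numpy as np
from scipy.optimize import minimize

m, s = np.array([500.0, 300.0]), np.array([50.0, 45.0])
g = lambda x, d: d * x[0] - x[1]

def inverse_mpp(d, beta_t, n_iter=20, h=1e-6):
    """AMV-type inverse reliability search on the sphere ||u|| = beta_t."""
    gu = lambda u: g(m + s * u, d)
    u = np.zeros(len(m))
    for _ in range(n_iter):
        grad = np.array([(gu(u + h * e) - gu(u)) / h
                         for e in np.eye(len(m))])
        u = -beta_t * grad / np.linalg.norm(grad)
    return u

def sora(beta_t=3.0, n_cycles=5):
    d, x_mpp = np.array([1.0]), m.copy()
    for _ in range(n_cycles):
        # 1. deterministic optimization at the current MPP (eq. 22)
        res = minimize(lambda dd: dd[0], d, method="SLSQP",
                       constraints=[{"type": "ineq",
                                     "fun": lambda dd: g(x_mpp, dd[0])}],
                       bounds=[(0.5, 3.0)])
        d = res.x
        # 2. reliability assessment: update the MPP for the new design
        x_mpp = m + s * inverse_mpp(d[0], beta_t)
    return d

print("SORA design:", sora())
```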

5.3 Sequential approximate programming (SAP)


In this method (Chen et al. 2006; Yi et al. 2006), a sequence of approximate program-
ming problems is solved until the optimum point is identified. In each sub-
programming problem, the reliability analysis is approximated at the current MPP.
By using suitable linearization, a recurrence formula derived from the optimality con-
ditions at the MPP is developed in order to approximate the reliability index and its
derivatives. At each step, the previously found MPP is taken as the linearization point.
The use of response sensitivities improves the efficiency of the proposed algorithm.
This procedure enables concurrent convergence of design optimization and reliability
calculations.
The optimization problem is written:

min  f(d)
 d
subject to  β̃(x, d) ≥ βt                                       (23)
and         gj(d) ≤ 0

where β̃ is the approximated reliability index, obtained through the recurrence formula:

β̃(x, d(r+1)) = β̃(x, d(r)) + Σl [∂β̃(x, d)/∂dl](r) · (dl(r+1) − dl(r))

β̃(x, d(r)) = [G(r) − Σi (∂G/∂ui)(r) ui(r)] / ‖∇u G(r)‖
           = G(r)/‖∇u G(r)‖ − Σi αi(r) ui(r)                   (24)

ui* = −αi(r) β̃(x, d(r))
where the superscripts (r) and (r + 1) indicate the iteration numbers and αi(r) is the
direction cosine for the variable ui.

5.4 Probabilistic sufficiency factor


Wu et al. (2001) and Qu and Haftka (2003) introduced the probabilistic sufficiency
factor in order to replace the RBDO with a series of deterministic optimizations,
by converting the reliability constraints into equivalent deterministic constraints. For
a prescribed failure probability Pft, the probabilistic sufficiency factor Psf is given by
solving:

Pr[γ ≤ Psf] = Pft                                              (25)

where γ is the global safety factor, defined as the random ratio between strength and
stress. This means that the probabilistic sufficiency factor is simply the percentile of
the safety factor that corresponds to the target failure probability. Qu and Haftka
(2003) proposed to compute Psf by Monte Carlo simulation. When Psf is determined,
the RBDO problem can be written as:

min  f(d)
 d
subject to  1 − Psf ≤ 0                                        (26)
and         gj(d) ≤ 0

It can be seen that the sufficiency factor constraint is equivalent to the target reliabil-
ity constraint. The drawback of the method lies in the use of Monte Carlo simulation,
which is generally very time consuming and presents significant numerical noise.
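A crude Monte Carlo estimate of Psf in equation (25) can be written directly as the
Pft-quantile of the sampled safety factor γ = R/S; the strength and stress distributions
below are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sufficiency_factor(pf_t=1e-3, n=200_000):
    """Psf such that Pr[gamma <= Psf] = Pf_t (eq. 25), by sampling."""
    R = rng.normal(300.0, 30.0, n)     # strength sample (assumed model)
    S = rng.normal(200.0, 40.0, n)     # stress sample (assumed model)
    gamma = R / S                      # global safety factor
    return np.quantile(gamma, pf_t)

psf = sufficiency_factor()
print("Psf =", psf)   # the RBDO constraint of eq. (26) requires 1 - Psf <= 0
```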

5.5 Single-loop double vector (SLDV)


In the Single-Loop Double-Vector (SLDV) method, there are two variable vectors: one
for the mean values (design parameters) and one for the MPP values. This method has
been improved by Chen et al. (1997), who proposed to work with only one vector,
leading to the Single-Loop Single-Vector (SLSV) approach, on the basis of a first-order
approximation of the limit state.

6 System reliability optimization


The progress in System Reliability-Based Design Optimization (SRBDO) is relatively
slow because it depends on system reliability analysis, where the computational time
and the numerical instability lead to many difficulties in the SRBDO formulation.
The identification of the relevant failure modes is a time-consuming process, mainly
because the design variables change at each iteration of the optimization procedure.
Consequently, the relevant failure modes also change during the optimization iterations
(e.g. a failure mode which is the most important in a given iteration may become
negligible in the following iterations, due to design variable changes). The redundancy
of the system must be taken into account in the SRBDO process (Murotsu et al. 1994),
but it was found that a non-redundant structure would need a higher safety margin
than a redundant one in order to achieve the same acceptable level of damage
tolerance.
Moses (1997) indicated that although many efforts have been made to compute the
system reliability, the fundamental idea of the system reliability problem is to extrap-
olate the analysis of component reliability and performance to an overall structural
risk assessment. Feng and Moses (1986) proposed an algorithm to identify the failure
modes through incremental loading models, so that they can be introduced in the
system reliability constraint of the reliability-based optimization formulation.
Different frameworks have been presented for SRBDO. The system reliability may be
considered as a single probabilistic constraint (Moses 1997); the system reliability may
be replaced by the reliability indexes of the significant failure modes (Rackwitz 2001;
Enevoldsen and Sørensen 1993), which is an alternative to the original formulation;
or the system reliability constraint and the component reliability constraints may be
taken into account simultaneously. An alternative approach to SRBDO can be based
on multi-criteria optimization (Frangopol 1995, Kuschel and Rackwitz 2000).
In an early work, Enevoldsen and Sørensen (1993) proposed a sequential strategy
to solve the RBDO of structural systems. The use of sensitivity operators for cost
and reliability index allowed the authors to ensure stable convergence of the RBDO
algorithm under system reliability constraints. More recently, Kuschel and Rackwitz
(2000) proposed a mono-level approach for the reliability-based optimization of series
systems.
From another point of view, Fu and Frangopol (1990) proposed to deal with the
RBDO of structural systems as a multi-objective optimization problem. This leads to
a consistent decision-making procedure for structural design and assessment.
Although system optimization usually considers the structure either as a macro-
component or as independently acting components whose safety constraints are spec-
ified separately, Aoues and Chateauneuf (2007) proposed a scheme for consistent
RBDO of structural systems. The basic idea consists in updating the component target
safety levels in order to meet the overall system target and to avoid over-designed
components. In the main optimization loop, the cost function is minimized under
the constraints that the component reliability indexes must satisfy the adapted tar-
get values. In the inner updating procedure, the target indexes are adjusted to

meet the system reliability requirement. The proposed formulation is written in the
form:

min  C(d)
 d
subject to  βj(d) ≥ βtjUpdated                                 (27)
            dL ≤ d ≤ dU

where βtjUpdated is the updated target reliability index for the jth failure mode and βj(d)
is the reliability index for the considered design configuration. The system reliability
depends on its component reliabilities, as well as on the correlations ρjk between the
different failure modes; it can be expressed as:

βsys = f(β, ρ)                                                 (28)

where β is the vector of reliability indexes and ρ is the matrix of correlations between
the failure modes. The embedded updating procedure is expressed as a least-squares
minimization of the difference between the updated targets and the actual indexes,
under the constraint of satisfying the required system safety. The procedure aims to
solve the system:

min  Σj=1..mp (βtjUpdated − βj)²
βtjUpdated                                                     (29)
subject to  βsys(βtjUpdated, ρjk) ≥ βt_sys

where the updated targets βtjUpdated are themselves the optimization parameters. The
optimal solution corresponds to the best quadratic fit between the component indexes
βj and the corresponding target indexes βtjUpdated, under the constraint of satisfying
the system target; this constraint is always active at the optimal solution:
βsys(βtjUpdated, ρjk) = βt_sys. The updating procedure plays a key role, as it searches
for the best values of the target indexes, which pull down the reliability indexes of the
structural components.
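A sketch of this target-updating step (eq. 29) is shown below for a series system; for
brevity, βsys is computed under an assumption of independent failure modes, which
replaces the correlation matrix ρjk of the original formulation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def beta_sys(bt):
    """Series system with independent modes (simplifying assumption)."""
    pf = 1.0 - np.prod(1.0 - norm.cdf(-bt))
    return -norm.ppf(pf)

def update_targets(beta_components, beta_t_sys):
    """Least-squares update of the component targets (eq. 29)."""
    obj = lambda bt: np.sum((bt - beta_components) ** 2)
    cons = [{"type": "ineq",
             "fun": lambda bt: beta_sys(bt) - beta_t_sys}]
    res = minimize(obj, x0=beta_components, method="SLSQP",
                   constraints=cons)
    return res.x

# three failure modes with current indexes 3.2, 3.5 and 4.0 (assumed values)
print(update_targets(np.array([3.2, 3.5, 4.0]), beta_t_sys=3.7))
```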

7 Numerical applications
In this section, three examples are presented in order to illustrate the application
of the RBDO methods. In the first example, a steel hook is optimized by a mono-level
approach. The second example concerns a bracket structure, where different methods
are compared for a highly nonlinear performance function. Finally, statically deter-
minate and redundant trusses are optimized to show the numerical efficiency of the
applied algorithms.

7.1 Steel hook


The RBDO is applied to the design of the steel hook shown in Figure 9.5 (Kharmanda
et al. 2002). The hook is loaded by a shaft in contact with the circular surface of radius
R1 and supported by an axis through the upper hole of radius R2. While the upper
part has a uniform thickness, a trapezoidal cross-section is chosen for the curved part
in order to better redistribute the stresses. The following dimensions are fixed: the
loading radius R1 = 190 mm, the hanging hole radius R2 = 100 mm, the fillet radius
R3 = 100 mm and the hook height L = 1200 mm. The material is construction steel
with Young's modulus E = 200 GPa and yield stress fY = 235 MPa. The hook is
modeled by 1602 solid 20-node elements, with 18,600 degrees of freedom; the applied
load F = 400 kN is distributed over 30 elements on the contact surface.

Figure 9.5 Hook configuration and finite element mesh.

Table 9.1 Random and design variables.

Variable   Mean   Std-deviation

a          ma     3
b          mb     2
c          mc     4
d          md     4
e          me     4
f          mf     4
t1         mt1    1
t2         mt2    1
t3         mt3    1
F          400    20
In this study, the optimal design is to be found under reliability considerations.
The mean values of dimensional properties (ma , mb , mc , md , me , mf , mt1 , mt2 , mt3 ) are
considered as design parameters d, while the applied force F and the geometric variables
(a, b, c, d, e, f , t1, t2, t3) are taken as random variables X, as given in Table 9.1. For
this problem, the target reliability index is set to βt = 3.35, corresponding to a failure
probability of 4 × 10−4 .

7.1.1 DDO solution
In Deterministic Design Optimization, the structural volume is minimized under an
allowable stress constraint, where the allowable stress corresponds to the yield stress
divided by a suitable safety factor:

min  V(d)
 d                                                             (30)
subject to  SF · σmax ≤ fY

where SF is the global safety factor, related to the loading force F, and set to 1.5
according to common practice. The optimal volume is found to be VDDO-1.5 =
0.2927 × 10⁸ mm³, and the optimal design is given in Table 9.2. For this solution, a
reliability analysis is carried out, leading to the reliability index β = 7.49, which is
much higher than the target value βt = 3.35. Following this result, a cost reduction
has been decided by decreasing the global safety factor to SF = 1.25. In this case, the
optimal volume is decreased to VDDO-1.25 = 0.2508 × 10⁸ mm³ and the corresponding
reliability index is β = 3.64 > βt.
Three disadvantages can be observed in the DDO approach: first, the safety factors
given in design recommendations are not always suitable for structural systems;
second, a reasonable choice of the safety factor is difficult because of its critical role
in manufacturing cost and structural reliability; and third, the safety margins are
poorly distributed among the different variables, due to the global scaling of the
safety level. For these reasons, it is very important to integrate the reliability analysis
in the optimization process.

7.1.2 RBDO solution


The Reliability-Based Design Optimization is formulated by introducing explicitly the
reliability constraint:

min  V(d)
 d                                                             (31)
subject to  β(d) ≥ βt

where the reliability index is calculated by the solution of:

min  ‖u(x, d)‖
 x                                                             (32)
subject to  σmax(x, d) ≥ fY

In this formulation, the stress becomes a random function. The hybrid formulation
is applied to solve the RBDO problem, leading to the optimal design parameters and
partial safety factors indicated in Table 9.2. While DDO is based on a global safety
factor on the loading F (SF = 1.25), RBDO shows that some parameters of the struc-
ture, such as dimensions, can also play a very important role in the structural safety
(γt2 = 1.306 and γF = 1.068). Therefore, RBDO satisfies the required reliability level
by adding or removing material where necessary, and hence improves the structural
performance by reducing the structural volume in uncritical regions.

Table 9.2 Optimal solutions obtained by DDO and RBDO.

Variable      DDO optimum                  RBDO          Safety
              SF = 1.5      SF = 1.25     optimum       factors

a             125.7         135.4         110.7         1.005
b             74.5          78.1          80.0          1.006
c             187.1         191.1         198.2         1.001
d             216.5         219.3         198.2         1.001
e             185.8         191.4         198.1         1.002
f             173.5         181.2         151.6         1.000
t1            39.4          30.8          27.8          1.007
t2            10.4          10.0          13.1          1.306
t3            13.2          10.5          10.1          1.006
F             –             –             –             1.068
Volume        0.2927 × 10⁸  0.2508 × 10⁸  0.235 × 10⁸
Reliability   7.49          3.64          3.36

This can be understood as a better distribution of the safety factors. Figure 9.6 shows
the stress distributions resulting from the DDO and RBDO procedures. It can be seen
that the stress field is more homogeneous for the Reliability-Based Design Optimiza-
tion than for the Deterministic Design Optimization.

Figure 9.6 Optimal solutions for DDO and RBDO: stress distributions.

7.1.3 Bracket structure
Figure 9.7 shows a two-member bracket supporting a vertical load P applied at a
distance L from the wall hinge. The member AB, with 60◦ of inclination, is linked to
member CD through a pin-joint at B. Both members have rectangular cross-sections:
wAB × t for AB and wCD × t for CD, where w stands for width and t for thickness. It
is aimed to optimally define the values of the parameters t, wAB and wCD, by consid-
ering the uncertainties in material and geometrical properties.

Figure 9.7 The parameterization of the bracket structure (cross-section E-E of the
members).
The two design constraints are:

• The maximum bending stress σb in member CD must be less than the yield stress
  fY, taken as 225 MPa for the steel used. The maximum bending stress σb is located
  at point B and is given by the following formulas:

  σb = 6 MB / (wCD t²)   with:   MB = PL/3 + ρ g wCD t L²/18              (33)

• The compression force FAB in member AB must be less than the buckling load Fb.
  The normal force in member AB is given by:

  FAB = ( 3P/2 + 3 ρ g wCD t L/4 ) · 1/cos θ                              (34)

  and the buckling load for member AB is written as:

  Fb = π² E I / L²AB = π² E t w³AB / [ 12 (2L/(3 sin θ))² ]               (35)

Therefore, it is aimed to minimize the structural weight under the two limit states:

G1 = fY − σb(wCD, t, L, P)
G2 = Fb(wAB, t, L) − FAB(wCD, t, L, P)                                    (36)
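For concreteness, the two limit states of equations (33)–(36) can be transcribed
directly; the sketch below uses θ = 60° and SI units, and evaluates G1 and G2 at the
mean values of Table 9.3 combined with the RBDO optimum of Table 9.4 (a plain
transcription, with g denoting gravitational acceleration):

```python
import numpy as np

G_ACC = 9.81                     # gravitational acceleration (m/s^2)
THETA = np.radians(60.0)         # inclination of member AB

def limit_states(P, E, fY, rho, L, wAB, wCD, t):
    """G1 and G2 of eq. (36), in SI units (N, m, Pa)."""
    MB = P * L / 3.0 + rho * G_ACC * wCD * t * L**2 / 18.0       # eq. (33)
    sigma_b = 6.0 * MB / (wCD * t**2)
    FAB = (3.0 * P / 2.0
           + 3.0 * rho * G_ACC * wCD * t * L / 4.0) / np.cos(THETA)  # eq. (34)
    LAB = 2.0 * L / (3.0 * np.sin(THETA))
    Fb = np.pi**2 * E * t * wAB**3 / (12.0 * LAB**2)             # eq. (35)
    return fY - sigma_b, Fb - FAB

# mean-value check with the data of Tables 9.3 and 9.4
print(limit_states(P=100e3, E=200e9, fY=225e6, rho=7860.0,
                   L=5.0, wAB=0.0608, wCD=0.1569, t=0.2091))
```

Both values come out positive at the mean point, as expected for a design calibrated
to β1 ≈ β2 ≈ 2.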

The deterministic optimization is performed by using partial safety factors corre-
sponding to the live and dead load factors γs = 1.5 and γp = 1.35, respectively. For the
bending stress, the partial factor is γr = 1.5; hence, in DDO, the admissible bending
stress is fY/γr. The random variables are given in Table 9.3, where the design variables
are considered as the distribution means of the geometrical properties.
Table 9.3 Statistical data of the random variables.

Parameter             Symbol      Mean   C.O.V  Distribution

Applied load          P (kN)      100    0.15   Gumbel
Young's modulus       E (GPa)     200    0.08   Gumbel
Yield stress          fY (MPa)    225    0.08   Lognormal
Unit mass             ρ (kg/m3)   7860   0.10   Weibull
Length                L (m)       5      0.05   Normal
Width of member AB    wAB (m)     wAB    0.05   Normal
Width of member CD    wCD (m)     wCD    0.05   Normal
Thickness             t (m)       t      0.05   Normal

Table 9.4 Summary of the numerical results in the design of the bracket structure.

Design   Weight   βG1    βG2    Iterations  CPU (s)  G-eval  wAB (cm)  wCD (cm)  t (cm)
method   (kg)

DDO      787.17   4.86   2.94   9           0.07     40      6.13      20.21     26.94
RIA      678.18   1.99   2.00   5           0.45     2340    6.08      15.68     20.91
PMA      678.88   2.00   2.01   7           0.57     2736    6.08      15.69     20.91
SORA     678.88   2.00   2.01   22          0.39     1340    6.08      15.69     20.91

For a target reliability index βt = 2 (corresponding to a failure probability of about
2%), the reliability-based optimization problem is written:

min  W = ρ g t L [ (4√3/9) wAB + wCD ]
wAB, wCD, t                                                    (37)
subject to  β1 ≥ 2  and  β2 ≥ 2

where β1 and β2 are the reliability indexes related to G1 and G2, respectively. Table 9.4
compares the optimization results and the computational effort of the different
methods. It can first be seen that deterministic optimization leads to higher cost and
reliability than required, while the RBDO approaches lead to a better fit of the target
safety. The Reliability Index Approach (RIA), the Performance Measure Approach
(PMA) and the Sequential Optimization with Reliability Assessment (SORA) lead to
the same design point, corresponding to a 12.7% weight reduction. However, the
computational cost of SORA is much lower than for the other RBDO methods: 1340
evaluations of the performance function instead of 2340 and 2736 evaluations for
RIA and PMA, respectively.
At the optimal RBDO point, Table 9.5 indicates the Most Probable Failure Point and
the corresponding partial safety factors. The load factor is lower for the buckling limit
state, as the reliability is more affected by the other random variables, especially by
the width wAB. Compared to DDO, these results show the advantage of RBDO in
adjusting the partial safety factors in terms of the reliability sensitivity with respect to
the uncertain variables, which have different influences on the different failure modes.
Table 9.5 The most probable point and partial safety factors for the RBDO solution.

       P* (kN)  E* (GPa)  fY* (MPa)  ρ* (kg/m3)  L* (m)  wAB* (cm)  wCD* (cm)  t* (cm)

G1     126.74   197       212.48     7739        5.10    6.08       15.37      20.06
G2     117.45   191       224.28     7754        5.17    5.67       15.70      20.56

Safety factors
γG1    1.26     1.04      1.06       1.01        1.03    1.00       1.02       1.04
γG2    1.17     1.05      1.00       1.01        1.03    1.07       1.00       1.02

Figure 9.8 Inclined bracket structure (member AB inclined at angle α).

The bracket structure is now considered by introducing the inclination of the bar
AB as an additional design parameter (Figure 9.8). This inclination is defined by the
angle α. The normal force FAB in member AB takes the form:

FAB = [ L/(h sin θ) ] · [ P + ρ g wCD t L/(2 cos α) ]          (38)

and the maximum bending moment is:

MB = PL/3 + ρ g wCD t L²/(18 cos α)                            (39)

The angle α introduces a high degree of nonlinearity in the limit state functions,
allowing the stability of the RBDO methods to be tested. Even with many initial trials,
the Reliability Index Approach (RIA) could not converge, because of the limit state
nonlinearity. The Performance Measure Approach (PMA) did not converge either
when the AMV algorithm (Advanced Mean Value method) was applied to perform
the inverse reliability analysis and to estimate the performance measure. However,
the use of the HMV algorithm (Hybrid Mean Value) allows PMA to converge. This
result confirms that the HMV algorithm is more suitable for highly nonlinear limit
states.
The optimal inclination of the bracket is 25.4° for DDO and 24.5° for RBDO.
Figure 9.9 shows the convergence of the performance measure and the reliability
index during the optimization iterations.
Figure 9.9 (a) Performance measures in PMA and SORA; (b) reliability indexes
during PMA iterations.

Table 9.6 Numerical results in the design of the bracket structure with inclination.

Design   Weight         βG1    βG2    Iterations  G-eval  CPU (s)  wAB (cm)  wCD (cm)  t (cm)  α (°)
method   (kg)

DDO      716.96         4.87   2.77   22          147     0.16     5.36      20.24     27.00   25.43
RIA      Not converged
PMA      556.44         2.07   2.00   13          6790    1.42     5.37      15.69     20.93   24.64
SORA     556.45         2.07   2.00   30          1744    0.51     5.37      15.69     20.93   24.53

For both limit states, the reliability indexes converge to
the target value βt = 2. It can generally be observed that PMA converges more slowly than
SORA for this kind of problem. Table 9.6 confirms these results, indicating 6790
mechanical model calls for PMA against only 1744 calls for SORA. Once again, SORA
proves to be more efficient and robust than RIA and PMA.

7.2 Timber truss
The design of timber trusses is usually carried out by checking the ultimate cross-
section capacities with respect to the ultimate limit state. However, as these structures
are assemblies of several members, the overall ultimate capacity is strongly
conditioned by the degree of redundancy. In many structures, several components can
reach their ultimate capacity well before the overall structural failure load is reached.
On the other hand, even a redundant structure may contain a number of critical
members whose individual failure produces overall failure.
In this context, the system reliability can differ greatly from the reliability of its
components.
The numerical applications are carried out for two roof trusses (Chateauneuf and
Noret 2005), where the depths and breadths of the horizontal, bracing and upper
roof members are considered as design variables.

Table 9.7 Model parameter data.

Parameter                    Value
Young's modulus              11 GPa
Poisson's ratio              0.25
Timber density               420 kg/m3
Distance between trusses     5 m
Truss span                   20 m
Truss height                 5.77 m
Beam breadth                 0.10 m

Table 9.8 Statistical data of the random variables.

Parameter                                  Characteristic value  Mean     C.O.V.  Distribution
Permanent load (roof load) (kN/m2)         479.2                 384.6    0.15    Normal
Variable load (concentrated load) (kN)     1422.3                1071.4   0.25    Gumbel
Snow (kN/m2)                               932.5                 625      0.30    Normal
Wind (kN/m2)                               400.8                 301.8    0.20    Weibull
Bending strength (MPa)                     14.16                 24       0.25    Lognormal
Tension strength (MPa)                     8.26                  14       0.25    Lognormal
Compression strength (MPa)                 12.39                 21       0.25    Lognormal
Young's modulus (MPa)                      8654.8                11 000   0.13    Lognormal

The two trusses correspond to statically determinate and indeterminate structures,
respectively. In the RBDO analysis, the uncertain strengths and applied loads are
modeled as random variables, as detailed in Table 9.8. The characteristic values
correspond to the 95% percentile for the loads and to the 5% percentile for the timber
strengths. The target failure probability is set to 10^-4, which corresponds to βt = 3.7.
The RBDO algorithms are implemented in the Matlab environment (Mathworks Inc.
2007), where the Optimization Toolbox is applied to solve the problem. The mechani-
cal computation is carried out by the finite element method, using the CALFEM library
(CALFEM 2007). The comparative study is performed for the different RBDO methods.
The limit state functions considered in this application are:

G:  (σc,d / fc,d)^2 + σm,d / fm,d ≤ 1    in compression
                                                                        (40)
G:  σt,d / ft,d + σm,d / fm,d ≤ 1        in tension

where σc,d , σt,d , σm,d , fc,d , ft,d and fm,d are respectively the design values of stress and
strength in compression, tension and bending.
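For illustration, the following minimal sketch (in Python; the stress values are hypothetical, while the strengths f_c = 21 MPa and f_m = 24 MPa are the mean values of Table 9.8) evaluates the two interaction checks of Eq. (40); a member is feasible when the returned utilization does not exceed one.

# A minimal sketch of the combined stress checks of Eq. (40).
# The stress inputs below are hypothetical design values in MPa.

def compression_utilization(sigma_c, sigma_m, f_c, f_m):
    """(sigma_c/f_c)**2 + sigma_m/f_m; the member is safe when this is <= 1."""
    return (sigma_c / f_c) ** 2 + sigma_m / f_m

def tension_utilization(sigma_t, sigma_m, f_t, f_m):
    """sigma_t/f_t + sigma_m/f_m; the member is safe when this is <= 1."""
    return sigma_t / f_t + sigma_m / f_m

u = compression_utilization(sigma_c=8.0, sigma_m=6.0, f_c=21.0, f_m=24.0)
print(round(u, 3), u <= 1.0)  # 0.395 True: the compression check is satisfied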

Figure 9.10 Truss with rigid joints.

Table 9.9 Initial design values and bounds.

Design variable                   Initial design  Lower bound  Upper bound
D1 (members 1,2,3) (cm)           30              2            100
H1 (members 1,2,3) (cm)           10              2            100
D2 (members 3,4,5,6) (cm)         40              2            100
H2 (members 3,4,5,6) (cm)         10              2            100
D3 (members 7,8,9,10,11) (cm)     10              2            100
H3 (members 7,8,9,10,11) (cm)     10              2            100

Table 9.10 Optimal results according to the different RBDO methods.

Design   Optimal      min{β1, β2,…,β11}  Optimization  Reliability  FEA      CPU (s)
method   weight (kg)                     iterations    iterations   calls
DDO      928.51       4.37               6             –            49       0.36
RIA      802.61       3.70               12            1095         103 521  749
PMA      802.61       3.70               12            807          70 323   510
SORA     802.49       3.70               15            171          2610     20

In the Deterministic Design Optimization, the partial safety factors are taken from
the Eurocodes; a strength modification factor is also introduced, in both DDO and
RBDO, to account for humidity and for the duration of loading.

7.2.1 Statically determinate truss
The truss, illustrated in Figure 9.10, is formed by 11 timber members. The
cross-sections are rectangular with breadth b and depth d, and the initial values of
the six design variables are given in Table 9.9. Note that this truss involves
11 performance functions, and hence 11 reliability constraints in the RBDO procedure.
The aim is thus to keep the lowest reliability index above the target level of 3.7.
Table 9.10 compares the results obtained by the different methods: DDO, RIA, PMA
and SORA. All the RBDO approaches converge to the optimal weight of 802.6 kg,
which is 14% lower than the deterministic result.

Table 9.11 Iterations of the reliability analysis.

Method  0   1   2   3   4   5   6   7   8   9   10  11  12  Total
RIA     88  71  89  89  89  89  89  90  89  78  78  78  78  1095
PMA     63  62  62  62  62  62  62  62  62  62  62  62  62  807
SORA    69  51  51                                           171

Table 9.12 Numerical comparison of the optimal designs.

Method  d1     b1     d2     b2     d3     b3
DDO     14.64  13.91  36.53  18.26  11.74  11.15
RIA     13.11  12.45  34.39  17.19  10.72  10.20
PMA     13.11  12.45  34.39  17.19  10.72  10.20
SORA    13.11  12.45  34.39  17.19  10.72  10.18

While the number of optimization iterations is comparable for the different RBDO
methods, the number of reliability iterations is much higher for RIA, and even for PMA.
The number of Finite Element Analyses (FEA) is huge for these methods: more than
100,000 runs for RIA and 70,000 runs for PMA. Note that these FEA counts include
the analyses required to compute the constraint gradients by finite difference techniques.
The fifth column in Table 9.10 gives the number of iterations performed either for
the reliability analyses in RIA or for the inverse reliability analyses in PMA and
SORA. Table 9.11 shows how these reliability iterations (inner loop) are distributed
over the optimization iterations (outer loop). In the decoupled method (i.e. SORA),
this number corresponds to the number of reliability iterations in each cycle of the
equivalent deterministic design.
The optimal designs obtained by these optimization methods are detailed in Table 9.12.
All the RBDO methods converge to the same optimal design, where all the dimen-
sions are lower than those from the deterministic optimization. The iteration history is
illustrated in Figure 9.11, where the decoupled approach (SORA) can easily be dis-
tinguished. Figure 9.12 compares the characteristics of the RBDO methods on the
basis of the evaluation criteria: cost, safety, number of iterations, FEA calls and
CPU time.

7.2.2 Braced truss


Let us consider the same truss layout with additional members forming an X-bracing
system; the truss now has 25 members. The cross-section depths and breadths are denoted
d1, b1 for members 1 to 6, d2, b2 for members 7 to 12, and d3, b3 for members 13 to 25. The
redundant truss configuration implies a large amount of internal force redistribution
during the optimization process. Among the 25 limit states, the critical failure modes
change along the optimization iterations. The structural response becomes strongly
nonlinear due to the interdependence of the design and random variables.

Figure 9.11 History of the structural weight.

Figure 9.12 Numerical performance of the design optimization methods.

The Reliability Index Approach could not converge in this example, because of the
limit state nonlinearity and the probabilistic transformations. In addition, the brac-
ing members in this example have large reliability indexes; their evaluation is very
time-consuming and leads to divergence of the optimization algorithm. The
low number of finite element calls in SORA explains why the CPU time is so low
for this method compared to PMA. This advantage is even larger for more complex
structural models.

Figure 9.13 Braced truss with 25 members.

Table 9.13 Optimization results for the truss with X bracing.

Design   Optimal      min{β1, β2,…,β25}  Optimization  Reliability  FEA     CPU (s)
method   weight (kg)                     iterations    iterations   calls
DDO      1080.54      5.234              5             –            42      1.67
RIA      No convergence
PMA      682.24       3.700              6             989          76 153  1175
SORA     677.87       3.699              17            327          5962    97

Table 9.14 Optimal designs of the truss with X bracing.

Method  d1     b1     d2     b2     d3     b3
DDO     30.23  15.11  30.18  15.09  10.43  9.90
PMA     23.02  11.51  24.45  12.22  8.51   8.08
SORA    16.51  15.69  24.49  12.24  8.49   8.07

Although PMA and SORA lead to almost the same structural weight and reliability
index, the optimal design parameters are quite different in the two approaches,
especially the dimensions b1 and d1, as indicated in Table 9.14.
Figure 9.14 shows the iteration history for the three methods: DDO, PMA and
SORA. Although SORA requires more iterations than PMA, it involves a smaller
number of reliability analyses, leading to an overall reduction of the computational cost.
It proves, once more, its capacity to deal with engineering structures by ensuring
stable and efficient convergence.

Figure 9.14 History of the optimal design.

8 Conclusions
As briefly described in the previous sections, very intensive research activity is being
carried out in the field of RBDO solution methods. Three approaches are usually adopted:
two-level, mono-level and decoupled approaches. Although significant progress has been
made in developing efficient numerical methods, the application to practical engineer-
ing structures is still a challenge, given the complexity of realistic industrial systems.
In order to select a method, the designer has to search for a reasonable compromise
between the accuracy, the efficiency and the robustness of the applied RBDO algo-
rithm. As a basic choice, the two-level approach requires the least development effort to
carry out RBDO; in this category, the performance measure approach leads to a more
robust and efficient scheme than the conventional reliability index approach. Glob-
ally, the decoupled approaches, such as the Sequential Optimization with Reliability
Assessment, are very attractive, as they are stable and highly efficient because many relia-
bility analyses can be avoided. In all cases, the RBDO algorithms should be applied
with special care and the results should be carefully validated by the designer, especially for
complex structural systems where several failure points and local optima often co-exist.
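To make the decoupled idea concrete, the following sketch (in Python with SciPy; the linear limit state, quadratic objective and statistical data are hypothetical, and the algorithm is deliberately reduced to a single constraint) alternates a deterministic design optimization, whose constraint is shifted by the vector s between the current means and the inverse most probable point, with a PMA-type inverse reliability analysis, in the spirit of SORA.

# A simplified SORA-style decoupled cycle: deterministic optimization with a
# shifted constraint, alternating with an inverse reliability analysis.
# All numerical data are hypothetical.
import numpy as np
from scipy.optimize import minimize

beta_t = 2.0                          # target reliability index
sigma = np.array([0.3, 0.3])          # standard deviations of the two variables

def g(x):                             # limit state, feasible when g >= 0
    return x[0] + x[1] - 4.0

def f(mu):                            # objective on the design means
    return mu[0] ** 2 + mu[1] ** 2

def inverse_mpp(mu):
    """PMA-type search: minimum of g on the beta_t sphere in standard space."""
    con = {"type": "eq", "fun": lambda u: u @ u - beta_t ** 2}
    res = minimize(lambda u: g(mu + sigma * u), np.ones(2), constraints=[con])
    return mu + sigma * res.x

mu = np.array([3.0, 3.0])
for cycle in range(10):
    s = mu - inverse_mpp(mu)          # constraint shift for this cycle
    res = minimize(f, mu, constraints=[{"type": "ineq",
                                        "fun": lambda m: g(m - s)}])
    if np.allclose(res.x, mu, atol=1e-6):
        break
    mu = res.x
print(mu)                             # converges to about [2.42, 2.42]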

References

Aoues, Y. & Chateauneuf, A. 2007. Reliability-based optimization of structural systems by
adaptive target safety – application to RC frames. Structural Safety. Article in Press.
CALFEM, A finite element toolbox to MATLAB, Version 3.3, Division of Structural Mechanics
and Division of Solid Mechanics, Lund University, Sweden, http://www.civeng.ucl.ac.uk/
Chateauneuf, A. & Noret, E. 2005. System reliability-based optimization of redundant timber
trusses. In: J.D. Sørensen (ed.). Reliability and optimization of structural systems, Proceedings
of the IFIP WG7.5 Working Conference on reliability and optimization of structural systems,
Aalborg, Denmark, May.
Chen, X., Hasselman, T.K. & Neill, D.J. 1997. Reliability based structural design optimization
for practical applications. Proceedings of the 38th AIAA/ASME/ASCE/AHS/ASC structures,
structural dynamics and material conference, Kissimmee, Florida, AIAA-97-1403.
Cheng, G., Xu, L. & Jiang, L. 2006. A sequential approximate programming strategy for
reliability-based structural optimization. Computers and Structures. Article in Press.

Ching, J. & Hsu, W.-C. 2006. Transforming reliability limit state constraints into deterministic
limit state constraints. Structural Safety. In Press.
Choi, K.K. & Youn, B.D. 2002. On probabilistic approaches for reliability-based design opti-
mization. In: 9th AIAA/NASA/USA/ISSMO symposium on Multidisciplinary Analysis and
Optimization, September 4–6, Atlanta, GA, USA.
Der Kiureghian, A. & Polak, E. 1988. Reliability-based optimal design: a decoupled approach.
In: A.S. Nowak (ed.). Reliability and optimization of structural systems, Proceedings of the
8th IFIP WG7.5 Working Conference on reliability and optimization of structural systems,
Chelsea, MI, USA: Book Crafters. pp. 197–205.
Ditlevsen, O. & Madsen, H.O. 1996. Structural reliability methods. John Wiley & Sons.
Du, X. & Chen, W. 2002. Sequential optimization and reliability assessment method for effi-
cient probabilistic design. ASME, design engineering technical conferences and computers and
information in engineering conference, DETC2002/DAC-34127, Montreal, Canada.
EN 1995-1-1, Eurocode 5: Design of timber structures; Part 1-1: General rules and rules for
buildings. Comité Européen de Normalisation, 2005.
Enevoldsen, I. & Sørensen, J.D. 1993. Reliability-based optimization of series systems of parallel
systems. Journal of Structural Engineering 119(4):1069–1084.
Enevoldsen, I. & Sørensen, J.D. 1994. Reliability-based optimization in structural engineering.
Structural Safety 15:169–196.
Feng, Y.S. & Moses, F. 1986. A method of structural optimization based on structural system
reliability. J. Struct. Mech. 14(4):437–453.
Fu, G. & Frangopol, D.M. 1990. Reliability-based Vector optimization of structural systems,
J. of Struct. Engrg. ASCE 116(8):2143–2161.
Hasofer, A.M. & Lind, N.C. 1974. An Exact and Invariant First Order Reliability Format.
J. Eng. Mech. ASCE 100(EM1):111–121.
Kaymaz, I. & Marti, K. 2006. Reliability-based design optimization for elastoplastic mechanical
structures. Computers and Structures. Article In Press.
Kharmanda, G., Mohamed-Chateauneuf, A. & Lemaire, M. 2002. Efficient reliability-based
optimization using a hybrid space with application to finite element analysis. Journal of
Structural and Multidisciplinary Optimization 24(3):233–245.
Kirjner-Neto, C., Polak, E. & Der Kiureghian, A. 1998. An outer approximations approach to
reliability-based optimal design of structures. Journal Optim. Theory Appl. 98(1):1–16.
Koch, P.N., Yang, R.J. & Gu, L. Design for six sigma through robust optimization. Structural
and Multidisciplinary optimization. In Press.
Kuschel, N. & Rackwitz, R. 1997. Two basic problems in reliability-based structural
optimization. Mathematical Methods of Operations Research 46:309–333.
Kuschel, N. & Rackwitz, R. 2000. A new approach for structural optimization of series sys-
tem. In: R.E. Melchers, M.G. Stewart (ed.). Proceedings of the 8th International conference
on applications of statistics and probability (ICASP) in Civil engineering reliability and risk
analysis, Sydney, Australia, December 1999, Vol. 2, pp. 987–994.
Lee, J.O., Yang, Y.S. & Ruy, W.S. 2002. A comparative study on reliability index and target
performance based probabilistic structural design optimization. Computers and Structures
80:257–269.
Lemaire, M., in collaboration with Chateauneuf, A. & Mitteau, J.C. 2006. Structural reliability.
ISTE, UK.
Madsen, H.O. & Friis Hansen, P. 1992. Comparison of some algorithms for reliability-based
structural optimization and sensitivity analysis. In: R. Rackwitz & P. Thoft-Christensen (eds):
Reliability and optimization of structural systems, Proceedings of the 4th IFIP WG7.5 Work-
ing conference on Reliability and Optimization of Structural Systems, Munich, Germany,
September 1991. Berlin: Springer. pp. 443–451.
Mathworks Inc. www.mathworks.com, 2007.

Moses, F. 1997. Problems and prospects of reliability based optimization. Engineering Structures
19(4):293–301.
Murotsu, Y., Shao, S. & Watanabe, A. 1994. An approach to reliability-based optimization of
redundant structures. Structural Safety 16:133–143.
Nikolaidis, E. & Burdisso, R. 1988. Reliability-based optimization: a safety index approach.
Computer and Structures 28(6):781–788.
Qu, X. & Haftka, R.T. 2003. Design under uncertainty using Monte Carlo simulation and
probabilistic sufficiency factor. In: Proceedings of DET’03 conference, Chicago, IL,USA.
Rackwitz, R. 2001. Reliability analysis, overview and some perspectives. Structural Safety
23:366–395.
Royset, J.O., Der Kiureghian, A. & Polak, E. 2001. Reliability-based optimal structural design
by the decoupling approach. Reliability Engineering and System Safety 73:213–221.
Torng, T.Y. & Yan, R.J. 1993. Robust structural system design using a system reliability-
based design optimization method. In: P.D. Spanos & Y.T. Wu (ed.), Probabilistic Mechanics:
Advances in structural reliability methods, Springer-Verlag, NY:534–549.
Tu, J., Choi, K.K. & Park, Y.H. 1999. A new study on reliability-based design optimization.
Journal of Mechanical Design 121:557–564.
Tu, J., Choi, K.K. & Park, Y.H. 2000. Design potential method for robust system parameter
design. AIAA Journal 39(4):667–677.
Youn, B.D. & Choi, K.K. 2004. Selecting probabilistic approaches for reliability-based design
optimization. AIAA Journal 42(1):124–131.
Youn, B.D. & Choi, K.K. 2004. A new response surface methodology for reliability-based design
optimization. Computers and Structures 82:241–256.
Yi, P., Cheng, G. & Jiang, L. 2006. A sequential approximate programming strategy for
performance-measure based probabilistic structural design optimization. Structural Safety.
Article in Press.
Wu, Y.T., Shin, Y., Sues, R. & Cesare, M. 2001. Safety factor based approach for probability-
based design optimization. In: Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC
Structures, Structural Dynamics, and Materials Conference, Seattle, WA, USA, Paper n◦ AIAA
2001-1522.
Zou, T., Mahadevan, S. & Sopory, A. 2004. A reliability-based design method using simu-
lation techniques and efficient optimization approach. ASME Design Engineering technical
conferences, Salt Lake City, Utah, DETC2004/DAC-57457.
Chapter 10

Non-probabilistic design optimization with insufficient data using possibility and evidence theories

Zissimos P. Mourelatos & Jun Zhou
Oakland University, Rochester, MI, USA

ABSTRACT: Early in the engineering design cycle, it is difficult to quantify product reliabil-
ity due to insufficient data or information for modeling the uncertainties. Design decisions are
therefore based on fuzzy information that is vague, imprecise, qualitative, linguistic or incom-
plete. The uncertain information is usually available as intervals with lower and upper limits. In
this chapter, the possibility and evidence theories are used to account for uncertainty in design
with incomplete information. Possibility-based and evidence-based design optimization methods
are presented which handle a combination of probabilistic and non-probabilistic design vari-
ables. Also, a computationally efficient sequential possibility-based design optimization (SPDO)
method is implemented, which decouples the design loop and the reliability assessment of each
constraint. Two numerical examples demonstrate the application of possibility and evidence
theories in design and highlight the trade-offs among reliability-based, possibility-based and
evidence-based designs.

1 Introduction
Engineering design under uncertainty has recently gained a lot of attention. Uncer-
tainties are usually modeled using probability theory. In Reliability-Based Design
Optimization (RBDO), variations are represented by standard deviations which are
typically assumed constant, and a mean performance is optimized subject to proba-
bilistic constraints (Lee et al. 2002, Liang et al. 2007, Tu et al. 1999, Wu et al. 2001,
Youn et al. 2001). In general, probability theory is very effective when sufficient data is
available to quantify uncertainty using probability distributions. However, when suffi-
cient data is not available or there is lack of information due to ignorance, the classical
probability methodology may not be appropriate. For example, during the early stages
of product development, quantification of the product’s reliability or compliance to
performance targets is practically very difficult due to insufficient data for modeling
the uncertainties. A similar problem exists when the reliability of a complex system is
assessed in the presence of incomplete information on the variability of certain design
variables, parameters, operating conditions, boundary conditions etc.
Uncertainties can be classified into two general types: aleatory (stochastic or random)
and epistemic (subjective) (Klir & Filger 1988, Klir & Yuan 1995, Oberkampf et al.
2001, Sentz & Ferson 2002, Yager et al. 1994). Aleatory or irreducible uncertainty is
related to inherent variability and is efficiently modeled using probability theory. How-
ever, when data is scarce or there is a lack of information, probability theory is not

useful because the needed probability distributions cannot be accurately constructed.


In this case, epistemic uncertainty, which describes subjectivity, ignorance or lack of
information, can be used. Epistemic uncertainty is also called reducible because it can
be reduced with increased state of knowledge or collection of more data.
Formal theories to handle uncertainty have been proposed in the literature, including
evidence theory or Dempster–Shafer theory (Klir & Filger 1988, Yager et al. 1994),
possibility theory (Dubois & Prade 1988) and interval analysis (Moore 1966). Two
large classes of fuzzy measures, called belief and plausibility measures,
characterize the mathematical theory of evidence. They are mutually dual, in the sense
that either one can be uniquely determined from the other.
sibility and belief (upper and lower bounds of probability) to measure the likelihood
of events. When the plausibility and belief measures are equal, the general evidence
theory reduces to the classical probability theory. Therefore, the classical probability
theory is a special case of evidence theory.
Possibility theory handles epistemic uncertainty if there is no conflicting evidence
among experts (Klir & Filger 1988). It uses a special subclass of dual plausibility and
belief measures, called possibility and necessity measures, respectively. In possibility
theory, a fuzzy set approach is common, where membership functions characterize
the input uncertainty (Zadeh 1965). Even if a probability distribution is not available
due to limited information, lower and upper bounds (intervals) on uncertain design
variables are usually known. In this case, interval analysis (Moore 1966, Muhanna &
Mullen 1999, Muhanna & Mullen 2001) and fuzzy set theory (Zadeh 1965) have been
extensively used to characterize and propagate input uncertainty in order to calculate
the interval of the uncertain output. An efficient method for reliability estimation with
a combination of random and interval variables is presented in (Penmetsa & Grandhi
2002). However, it is not implemented in a design optimization framework. A few
design optimization studies have been also reported, where some or all of the uncertain
design variables are in interval form (Du, Sudjianto & Huang 2005, Gu et al. 1998,
Rao & Cao 2002).
Optimization with input ranges has also been studied under the term anti-
optimization (Elishakoff et al. 1994, Lombardi & Haftka 1998). Anti-optimization
is used to describe the task of finding the “worst-case’’ scenario for a given problem.
It solves a two-level (usually nested) optimization problem. The outer level performs
the design optimization while the inner level performs the anti-optimization. The lat-
ter seeks the worst condition under the interval uncertainty. A decoupled approach
is suggested in (Lombardi & Haftka 1998) where the design optimization alternates
with the anti-optimization rather than nesting the two. It was mentioned that this
method takes longer to converge and may not even converge at all if there is strong
coupling between the interval design variables and the rest of the design variables. A
“worst-case’’ scenario approach using interval variables has also been considered in
multidisciplinary systems design (Du & Chen 2000, Gu et al. 1998).
Very recently, possibility-based design algorithms have been proposed (Choi et al.
2004, Mourelatos & Zhou 2005) where a mean performance is optimized subject to
possibilistic constraints. It was shown that more conservative results are obtained com-
pared with the probability-based RBDO. A comprehensive comparison of probability
and possibility theories is given in (Nikolaidis et al. 2004) for design under uncertainty.

Evidence theory is more general than probability and possibility theories, even
though the methodologies of uncertainty propagation are completely different (Bae
et al. 2004, Oberkampf & Helton 2002). It can be used in design under uncertainty
if limited, and even conflicting, information is provided by experts. Furthermore,
the basic axioms of evidence theory make it possible to combine aleatory (random) and epis-
temic uncertainty in a straightforward way without any assumptions (Bae et al. 2004).
Evidence theory, however, has barely been explored in engineering design. One of the
reasons may be its high computational cost, due mainly to the discontinuous nature
of uncertainty quantification. Evidence-based methods have only recently been used
of uncertainty quantification. Evidence-based methods have been only recently used
to propagate epistemic uncertainty (Bae et al. 2004) in large-scale engineering sys-
tems. Although a computationally efficient method is proposed in (Bae et al. 2004),
the design issue is not addressed. We are aware of only one study which propagates
epistemic uncertainty using evidence theory and also performs a design optimization
(Agarwal et al. 2004). The optimum design is calculated for multidisciplinary systems
under uncertainty using a trust region sequential approximate optimization method
with surrogate models representing the uncertain measures as continuous functions.
In this chapter, the possibility and evidence theories are used to account for uncer-
tainty in design with incomplete information. The formal theories to handle uncertainty
are first introduced using the theoretical fundamentals of fuzzy measures. The chap-
ter highlights how the possibility theory can be used in design. A computationally
efficient and accurate hybrid (global-local) optimization approach is presented for cal-
culating the confidence level of “fuzzy’’ response, combining the advantages of the
commonly used vertex and discretization methods. A possibility-based design opti-
mization method is subsequently described where all design constraints are expressed
possibilistically. The method gives a conservative solution compared with all con-
ventional reliability-based designs obtained with different probability distributions.
A general possibility-based design optimization method is also presented which han-
dles a combination of random and possibilistic design variables. Furthermore, a
sequential algorithm for possibility-based design optimization (SPDO) is presented.
It decouples a double-loop PBDO process into a sequence of cycles composed of a
deterministic design optimization followed by a set of worst-case possibility evaluation
loops. The series of deterministic and possibility loops is repeated until convergence is
achieved.
A computationally efficient design optimization method is also described based on
evidence theory, which can handle a mixture of epistemic and random uncertainties.
The method can be used when limited, and often conflicting, information is available
from “expert’’ opinions. The algorithm quickly identifies the vicinity of the optimal
point and the active constraints by moving a hyper-ellipse in the original design space,
using an RBDO algorithm. Subsequently, a derivative-free optimizer calculates the
evidence-based optimum, starting from the close-by RBDO optimum, considering only
the identified active constraints. The computational cost is kept low by first moving
to the vicinity of the optimum quickly and subsequently using local surrogate models
of the active constraints only.
The chapter is organized as follows. Section 2 gives an introduction to fuzzy mea-
sures. Section 3 describes the fundamentals of possibility theory based on fuzzy
measures as well as some numerical methods for propagating non-probabilistic

uncertainty, which are essential in possibility-based design. A detailed formulation of


Possibility-Based Design Optimization (PBDO) where design constraints are satisfied
possibilistically, is presented in section 4. Section 5 presents a detailed formulation of
an Evidence-Based Design Optimization (EBDO) method and its implementation. Sec-
tion 6 introduces the sequential algorithm for possibility-based design optimization.
All principles are demonstrated with two examples in section 7. Results are com-
pared among deterministic optimization, RBDO, PBDO, EBDO and SPDO. Finally, a
summary and conclusions are given in section 8.

2 Fuzzy measures
The evidence and possibility theories are based on the mathematical foundation of
fuzzy measures, which provide the foundation of fuzzy set theory. Before introducing
the basics of fuzzy measures, it is helpful to review the notation used for set
representation. A universe X represents the entire collection of elements having the
same characteristics. The individual elements in the universe X are denoted by x,
which are usually called singletons. A set A is a collection of some elements of X. All
possible sets of X constitute a special set called the power set ℘(X).
A fuzzy measure is defined by a function g: ℘(X) → [0, 1], which assigns to each crisp
subset of X (Ross 1995) a number in the unit interval [0, 1]. The assigned number
in the unit interval for a subset A ∈ ℘(X), denoted by g(A), represents the degree of
available evidence or belief that a given element of X belongs to the subset A.
In order to qualify as a fuzzy measure, the function g must obey the following three
axioms:

Axiom 1 (boundary conditions): g(Ø) = 0 and g(X) = 1.

Axiom 2 (monotonicity): For every A, B ∈ ℘(X), if A ⊆ B, then g(A) ≤ g(B).

Axiom 3 (continuity): For every monotonic sequence (Ai ∈ ℘(X), i = 1, 2, …) of subsets of
℘(X), i.e. either A1 ⊆ A2 ⊆ ⋯ or A1 ⊇ A2 ⊇ ⋯,
lim_{i→∞} g(Ai) = g(lim_{i→∞} Ai).

A belief measure is a function Bel: ℘(X) → [0, 1] which satisfies the three axioms of
fuzzy measures and the following additional axiom (Klir & Filger 1988):

Bel(A1 ∪ A2 ) ≥ Bel(A1 ) + Bel(A2 ) − Bel(A1 ∩ A2 ) (1)

The axiom (1) can be expanded for more than two sets. For A ∈ ℘(X), Bel(A) is
interpreted as the degree of belief, based on available evidence, that a given element of
X belongs to the set A.
A plausibility measure is a function

Pl: ℘(X) → [0, 1]                                                      (2)

which satisfies the three axioms of fuzzy measures and the following additional axiom
(Klir & Filger 1988)

Pl(A1 ∩ A2 ) ≤ Pl(A1 ) + Pl(A2 ) − Pl(A1 ∪ A2 ) (3)



Every belief measure and its dual plausibility measure can be expressed with respect
to the non-negative function

m: ℘(X) → [0, 1]                                                       (4)

such that m(Ø) = 0 and

Σ_{A ∈ ℘(X)} m(A) = 1                                                  (5)
A∈℘(X)

The function m is called Basic Probability Assignment (BPA) due to the resemblance
of Eq. (5) with a similar equation for probability distributions. The basic probability
assignment m(A) is interpreted either as the degree of evidence supporting the claim
that a specific element of X belongs to the set A or as the degree to which we believe
that such a claim is warranted. At this point, it should be noted that the BPA is very
different from the probability distribution function. Basic probability assignments are
defined on sets of the power set (i.e., on A ∈ ℘ (X)), whereas the probability distribution
functions are defined on the singletons x of the power set (i.e., on x ∈ ℘ (X)). Every set
A ∈ ℘ (X) for which m(A) > 0 is called a focal element of m. Focal elements are subsets
of X on which the available evidence focuses; i.e. available evidence exists.
Given a BPA m, a belief measure and a plausibility measure are uniquely
determined by

Bel(A) = Σ_{B ⊆ A} m(B)                                                (6)

Pl(A) = Σ_{B ∩ A ≠ Ø} m(B)                                             (7)

which are applicable for all A ∈ ℘(X).


In Eq. (6), Bel(A) represents the total evidence or belief that the element belongs
to A as well as to various subsets of A. The Pl(A) in Eq. (7) represents not only the
total evidence or belief that the element in question belongs to set A or to any of its
subsets but also the additional evidence or belief associated with sets that overlap with
A. Therefore, we have

Pl(A) ≥ Bel(A) (8)
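As a minimal illustration (a Python sketch; the universe, focal elements and BPA values are hypothetical), Bel(A) collects the BPAs of the focal elements contained in A, while Pl(A) collects those of the focal elements intersecting A, so that Eq. (8) holds by construction.

# Belief and plausibility from a basic probability assignment, Eqs (6)-(7).
# Focal elements are frozensets of singletons; the BPA values sum to one.

def belief(bpa, A):
    """Bel(A): total mass of focal elements B with B a subset of A (Eq. 6)."""
    return sum(m for B, m in bpa.items() if B <= A)

def plausibility(bpa, A):
    """Pl(A): total mass of focal elements B with B intersecting A (Eq. 7)."""
    return sum(m for B, m in bpa.items() if B & A)

bpa = {
    frozenset({1, 2}): 0.5,            # evidence that x lies in {1, 2}
    frozenset({2, 3, 4}): 0.3,         # evidence that x lies in {2, 3, 4}
    frozenset({1, 2, 3, 4, 5}): 0.2,   # ignorance: the whole universe
}
A = frozenset({1, 2})
print(belief(bpa, A), plausibility(bpa, A))  # 0.5 and 1.0 bracket P(A), Eq. (11)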

Probability theory is a subset of evidence theory. When the additional axiom of belief
measures (see Eq. (1)) is replaced with the stronger axiom

Bel(A ∪ B) = Bel(A) + Bel(B) where A ∩ B = Ø (9)

we obtain a special type of belief measures which are the classical probability measures.
In this case, the right hand sides of Eqs (6) and (7) become equal and therefore,
 
Bel(A) = Pl(A) = Σ_{x ∈ A} m(x) = Σ_{x ∈ A} p(x)                       (10)

for all A ∈ ℘(X), where p(x) is the classical probability distribution function (PDF).
Note that the BPA m(x) is equal to p(x). Therefore with evidence theory, we can simul-
taneously handle a mixture of input parameters. Some of the inputs can be described
probabilistically (random uncertainty) and some can be described through expert opin-
ions (epistemic uncertainty with incomplete data). In the second case, the range of each
input parameter will be discretized using a finite number of intervals. The BPA value
for each interval must be equal to the PDF area within the interval.
It should be noted that according to evidence theory, the Bel(A) and Pl(A) bracket
the true probability P(A) (Klir & Filger 1988), i.e.

Bel(A) ≤ P(A) ≤ Pl(A) (11)

Evidence obtained from independent sources or experts must be combined. If the
BPAs m1 and m2 express the evidence from two experts, the combined evidence m can be
calculated by the following Dempster's rule of combining (Sentz & Ferson 2002)

m(A) = [ Σ_{B ∩ C = A} m1(B) m2(C) ] / (1 − K)    for A ≠ Ø            (12)

where

K = Σ_{B ∩ C = Ø} m1(B) m2(C)                                          (13)

represents the conflict between the two independent experts. Dempster’s rule filters
out any conflict, or contradiction among the provided evidence, by normalizing with
the complementary degree of conflict. It is usually appropriate for relatively small
amounts of conflict where there is some consistency or sufficient agreement among
the opinions of the experts. Yager (Yager et al. 1994) has proposed an alternative rule
of combination where all degrees of contradiction are attributed to total ignorance.
Other rules of combining can be found in (Sentz & Ferson 2002).
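The rule can be implemented directly, as in the sketch below (in Python; the two expert BPAs are hypothetical): products of masses accumulate on the intersections of focal elements, the mass K assigned to empty intersections is discarded, and the remainder is renormalized by 1 − K according to Eqs (12)-(13).

# Dempster's rule of combining two BPAs, Eqs (12)-(13).
from collections import defaultdict

def dempster_combine(m1, m2):
    combined = defaultdict(float)
    conflict = 0.0                      # K of Eq. (13)
    for B, mB in m1.items():
        for C, mC in m2.items():
            inter = B & C
            if inter:
                combined[inter] += mB * mC
            else:
                conflict += mB * mC     # contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: the rule is undefined")
    return {A: m / (1.0 - conflict) for A, m in combined.items()}

# Two hypothetical experts on the universe {1, 2, 3}:
m1 = {frozenset({1, 2}): 0.6, frozenset({1, 2, 3}): 0.4}
m2 = {frozenset({2, 3}): 0.7, frozenset({1, 2, 3}): 0.3}
print(dempster_combine(m1, m2))         # the combined masses again sum to one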
The possibility theory is a subcase of the general evidence theory. It can be used to
characterize epistemic uncertainty, when incomplete data is available. It applies only
when there is no conflict in the provided body of evidence. In such a case, the focal
elements of the body of evidence are nested and the associated belief and plausibility
measures are called consonant. On the contrary, when there is conflicting evidence, the
belief and plausibility measures are dissonant. A family of subsets of the universal set
is nested if they can be ordered in such a way that each is contained within the next.
Thus, A1 ⊂ A2 ⊂ · · · ⊂ An are nested sets. Consonant belief and plausibility measures
are usually known as necessity measures n and possibility measures π, respectively.
Therefore, if there is no conflicting information, n(A) = Bel(A) and π(A) = Pl(A). The
necessity and possibility are dual measures, related by

n(A) = 1 − π(Ā)                                                        (14)

where Ā is the complement of set A.



3 Fundamentals of possibility theory


This section highlights the fundamentals of possibility theory as it was originally intro-
duced in the context of fuzzy set theory (Zadeh 1978). In the fuzzy set approach to
possibility theory, focal elements are represented by a-cuts of the associated fuzzy set.
Focal elements are subsets that are assigned nonzero degrees of evidence. The possibil-
ity theory can be used to bracket the true probability based on the fuzzy set approach
at various confidence intervals (a-cuts). The advantage of this is that as the design pro-
gresses and the confidence level on the input parameter bounds increases, the design
need not be reevaluated to obtain the new bounds of the response.
Similarly to the probability measures, which are represented by the probability
distribution functions, the possibility measures can be represented by the possibility
distribution function r: X → [0, 1] such that

π(A) = max_{x ∈ A} r(x)                                                (15)

It can be shown that possibility measures are formally equivalent to fuzzy sets. In this
equivalence, the membership grade of an element x corresponds to the plausibility of
the singleton consisting of that x. Therefore, a consonant belief structure is equivalent
to a fuzzy set of X.
A fuzzy set is an imprecisely defined set that does not have a crisp boundary. It
provides instead, a gradual transition from “belonging’’ to “not belonging’’ to the set.
A function can be defined such that the values assigned to the elements of the set are
within a specified range and indicate the membership grade of these elements in the
set. Larger values denote higher degrees of set membership. Such a function is called a
membership function and the set defined by it a fuzzy set.
The membership function µA by which a fuzzy set A is usually defined has the form
µA : X → [0, 1] where [0, 1] denotes the interval of real numbers from 0 to 1, inclusive.
Given a fuzzy subset A of X with membership function µA , Zadeh (Zadeh 1978) defines
a possibility distribution function r associated with A as numerically equal to µA , i.e.
r(x) = µA (x) for all x ∈ X. Then, he defines the corresponding possibility measure π as

π(A) = sup_{x ∈ A} r(x)    for each A ∈ ℘(X)                           (16)

Eq. (16) is equivalent to Eq. (15) when X is finite. In the fuzzy set approach to possi-
bility theory, focal elements are represented by a-cuts of the associated fuzzy set. For
the remaining of this discussion, we will follow the fuzzy set approach to possibility
theory.
Eq. (11) states that the true probability is bracketed by the belief and plausibility
measures. If we know the possibility distribution function µY (y) of the response Y,
then the true probability P(Y) can be also bracketed as

n(Y) ≤ P(Y) ≤ π(Y) (17)

where the necessity n(Y) and possibility π(Y) measures are calculated from Eqs (14)
and (16), respectively. The “extension principle’’ (Klir & Filger 1988, Ross 1995,
Yager et al. 1994) is used to calculate the possibility distribution function µY(y) of the
response.
Figure 10.1 Triangular possibility distribution for a fuzzy variable.


3.1 Fuzzification process and extension principle
The process of quantifying a fuzzy variable is known as fuzzification. If any of the input
variables is imprecise, it is considered fuzzy and must be therefore, fuzzified in order
for the uncertainty to be propagated using fuzzy calculus. The fuzzification is done
by constructing a possibility distribution, or membership function, for each imprecise
(fuzzy) variable. Details can be found in (Ross 1995). The membership function takes
values in the [0, 1] interval. Here, we use convex normal possibility distributions to
characterize the fuzzy variables. An example of a convex normal triangular possibility
distribution is shown in Fig. 10.1. The point for which the possibility is equal to one is
called normal point. The possibility distribution is convex since it is strictly decreasing
to the left and right of the normal point. At each confidence level, or a-cut, a set Xa is
defined as

Xa = {x : xaL ≤ x ≤ xaR , a ∈ [0, 1]} (18a)

which is a monotonically decreasing function of a; i.e.

a1 > a2 ⇒ Xa1 ⊂ Xa2 for every a1 , a2 ∈ [0, 1] (18b)

Due to the convexity of the possibility distribution function, all sets generated at dif-
ferent a-cuts are nested according to Eq. (18b). Therefore, the convexity and normality
of the possibility distribution function satisfies the basic requirement of nested sets (no
conflicting evidence) in possibility theory.
After the fuzzification of the imprecise input variables, the “extension principle’’
is used to propagate the epistemic uncertainty through the transfer function in order
to calculate the fuzzy response. The “extension principle’’ calculates the possibility
distribution of the fuzzy response from the possibility distributions of the fuzzy input
variables. In particular, given the transfer function y = f (x), where the output y depends

on the N independent fuzzy inputs x = {x1 , . . . , xN }, the “extension principle’’ states


that the possibility distribution µY of the output is given by

µY(y) = sup_{x: f(x) = y} { min_j [µXj(xj)] }                           (19)

where “sup’’ denotes the supremum operator, which gives the least upper bound. The
above equation can be interpreted as follows. For a crisp value of the output y, there
may exist more than one combination of crisp values of input variables x resulting in
the same output. The possibility of each combination is given by the smallest possi-
bility value for all fuzzy input variables. The possibility that y = f (x), is given by the
maximum possibility for all these combinations. Note that in probability theory, the
probability of an outcome is equal to the product of the probabilities of the constituent
events. In fuzzy set theory however, the possibility of an outcome is equal to the min-
imum possibility of the constituent events. If the outcome can be reached in many
ways, then the outcome probability, in probability theory, is given by the sum of the
probabilities of all the ways. In fuzzy theory, the possibility of the outcome is given by
the maximum of the possibilities of all the ways (Ross 1995).
The direct (“brute force’’) solution of Eq. (19) is practically intractable except for
simple cases involving one or two fuzzy variables. The computational effort increases
exponentially with increasing number of fuzzy input variables. For this reason, approx-
imate numerical techniques have been proposed, among which the discretization
method (Akpan et al. 2002) and the vertex method (Penmetsa & Grandhi 2002) are
the most popular ones.
In the discretization method, the domain of each fuzzy variable i, 1 ≤ i ≤ N, is dis-
cretized with Mi discrete values at each a-cut. Then the output y is evaluated at all
∏_{i=1}^{N} Mi possible combinations for each a-cut. Subsequently, Eq. (19) is used to calculate
the possibility distribution of the output. The range of the output is defined by the
minimum and maximum response from all combinations. Although this method can
be very accurate, the associated computational cost is practically prohibitive.
In the vertex method, all the binary combinations of only the extreme values of the
fuzzy variables at an a-cut are fed into the deterministic transfer function. The bounds
of the fuzzy response are then obtained at the a-cut, by choosing the maximum and
minimum responses. The procedure is repeated for all a-cuts of interest. The method
has the potential to give accurate bounds of the response based on the bounded input.
However, when the transfer function exhibits minima or maxima within the domain
defined by the extreme values of the input variables, the vertex method is inaccurate.
This is due to the fact that the function is evaluated only at the binary combinations of
the input variable bounds. For a problem with N fuzzy input variables, the required
number of function evaluations for the vertex method is A × 2^N, where A is the number
of a-cuts.
In general, the vertex method is computationally more efficient compared with the
discretization method. However, the required computational effort grows exponen-
tially with the number of input fuzzy variables (Ross 1995). For this reason, most of
the reported applications are restricted to very few fuzzy variables (Chen & Rao 1997,
Mullen & Muhanna 1999, Rao & Sawyer 1995).
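The vertex method reduces to a few lines, as in the sketch below (in Python; the one-variable function is a hypothetical example chosen to expose the weakness discussed above): the response is evaluated at the 2^N binary combinations of the bounds, which is exact for monotonic transfer functions but misses interior extrema.

# Vertex method at a single alpha-cut: evaluate the transfer function at all
# 2**N binary combinations of the variable bounds and keep the extremes.
from itertools import product

def vertex_bounds(f, intervals):
    """intervals: [(x1L, x1U), ..., (xNL, xNU)] at one alpha-cut."""
    values = [f(x) for x in product(*intervals)]
    return min(values), max(values)

# Exact for monotonic functions, but wrong when an extremum lies inside:
f = lambda x: x[0] ** 2
print(vertex_bounds(f, [(-1.0, 2.0)]))  # (1.0, 4.0), yet the true minimum is 0.0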

A hybrid (global-local) optimization method has been reported in (Mourelatos &
Zhou 2005), which ensures computational efficiency without loss of accuracy. An
optimization algorithm is used to calculate the minimum and maximum values of
the response at each a-cut. Because the global minimum and maximum values of the
response are needed, a derivative free, global optimizer called DIRECT (DIvisions of
RECTangles), is used in order to avoid being trapped at a local optimum and obtain
therefore, an inaccurate solution. DIRECT is a modification of the standard Lips-
chitzian approach that eliminates the need to specify a Lipschitz constant (Jones et al.
1993). Although global optimizers may get close to the global optimum quickly, it
takes them longer to achieve a high degree of accuracy because they usually have a
slow rate of convergence. This suggests that the best performance can be obtained by
combining DIRECT with a gradient-based local optimizer in a hybrid approach. In
this work, DIRECT is first used, followed by a local optimizer based on Sequential
Quadratic Programming (SQP). DIRECT provides a converged global optimum based
on “loose’’ convergence criteria. Subsequently, the DIRECT solution is used as starting
point for SQP, which identifies the optimum accurately and efficiently.

3.2 A mathematical example
The following two-variable, six-hump camel function (Wang 2003) is used

y(x1, x2) = 4x1^2 − 2.1x1^4 + (1/3)x1^6 + x1x2 − 4x2^2 + 4x2^4,    x1,2 ∈ [−2, 2]

to illustrate the accuracy and efficiency of the hybrid optimization method of the
previous section and compare it with the vertex and discretization methods. For
demonstration reasons, the following simple triangular membership functions are used
for the two input variables x1 and x2

µXi(xi) = −xi/2 + 1    for 0 ≤ xi ≤ 2
µXi(xi) =  xi/2 + 1    for −2 ≤ xi ≤ 0,        i = 1, 2

Fig. 10.2 shows the contour plot of the six hump camel function. The H’s indi-
cate all extreme points. Points H2 and H5 with coordinates (0.0898, −0.7127) and
(−0.0898, 0.7127) respectively, are two global optima with an equal function value
of ymin = −1.0316.
The calculated membership functions of the response y using the vertex, discretiza-
tion and hybrid optimization methods are plotted in Fig. 10.3. Ten a-cuts are used
for all three methods. For the discretization method, the range of each input fuzzy
variable, at each a-cut, is equally split in 15 divisions. It is known that if the input
membership functions are convex normal, the response membership function must
also be convex normal. The justification is that when the input uncertainty increases
(low a-cut values), the uncertainty of the response must remain the same or increase. As
shown in Fig. 10.3, the response membership function obtained by the vertex method
is not convex and therefore, it is wrong.


Figure 10.2 Contour plot for mathematical example.


Figure 10.3 Response membership function for mathematical example.

As explained in section 3.1, the discretization method evaluates the function not
only at the upper and lower limits of the input variables at each alpha cut but also
between the bounds. Thus, it can capture the extreme points that might be present
in between the upper and lower bounds. At each alpha cut, all combinations are
obtained and the minimum and maximum response values are calculated in order
to get the response membership function. It is clear that the response becomes more
accurate as the number of divisions per alpha cut increases. As shown in Fig. 10.3,
the response membership function calculated with the discretization method is convex
and normal.

Table 10.1 Accuracy and efficiency comparison of the vertex, discretization and hybrid optimization methods.

              Vertex   Discretization   Hybrid optimization
Lower bound   47.73    −1.01            −1.03
Upper bound   55.73    55.73            55.73
No. of F.E.   4        256              140

The uncertainty decreases as the level of confidence increases (increasing
a-cut values). The major disadvantage of this method is that, as the number of design
variables and the number of divisions per a-cut increase, the method
becomes computationally very expensive. In this example, the number of a-cuts is
10 and the number of divisions per a-cut is 15. Therefore, the number of function
evaluations is 10 × (15 + 1)^2 = 2560. The response membership function of the six-
hump camel function is also calculated using the proposed hybrid optimization method.
The result is identical to that obtained with the discretization method (see Fig. 10.3).
Table 10.1 summarizes the lower and upper bound values of the response at the zero
a-cut, as calculated by the vertex, discretization and hybrid optimization methods.
The vertex method is very efficient but inaccurate. The hybrid optimization method,
however, has the same accuracy as the “brute force’’ discretization method but is
much more efficient.
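The hybrid search can be sketched as follows (in Python, assuming SciPy version 1.9 or later, whose scipy.optimize.direct provides a DIRECT implementation; the iteration limit is illustrative). Applied to the six-hump camel function over the zero a-cut box, it recovers the bounds reported in Table 10.1.

# Hybrid global-local search for the response bounds at one alpha-cut:
# DIRECT for global exploration, then a gradient-based local refinement.
from scipy.optimize import direct, minimize

def camel(x):  # six-hump camel function of this section
    x1, x2 = x
    return 4*x1**2 - 2.1*x1**4 + x1**6/3 + x1*x2 - 4*x2**2 + 4*x2**4

def hybrid_min(f, bounds):
    rough = direct(f, bounds, maxiter=200)            # loose global search
    fine = minimize(f, rough.x, bounds=bounds,
                    method="SLSQP")                   # accurate local finish
    return fine.fun

bounds = [(-2.0, 2.0), (-2.0, 2.0)]                   # zero alpha-cut box
ymin = hybrid_min(camel, bounds)
ymax = -hybrid_min(lambda x: -camel(x), bounds)
print(ymin, ymax)  # about -1.0316 and 55.73, as in Table 10.1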

4 Possibility-based design optimization


In deterministic design optimization, an objective function is minimized subject to sat-
isfying a set of constraints. In Reliability-Based Design Optimization (RBDO), where
all design variables are characterized probabilistically, an objective function is usually
minimized subject to the probability of satisfying each constraint being greater than a
specified high reliability level.
In this section, a methodology is presented on how to use possibility theory in
design. We will show that the possibility-based design is conservative compared with
all RBDO designs obtained with different probability distributions. In RBDO, some
optimality is usually sacrificed in order to accommodate the random uncertainty.
The possibility-based design sacrifices a little more optimality in order to accommodate
the lack of probability distribution information. It therefore encompasses all RBDO
designs obtained with different distributions.
According to Eq. (11), the probability P(A) of event A is bracketed by the belief
Bel(A) and plausibility Pl(A); i.e. Bel(A) ≤ P(A) ≤ Pl(A). We have also mentioned that
for consonant (no conflicting evidence) belief structures, the plausibility measures are
equal to the possibility measures, resulting in η(A) ≤ P(A) ≤ π(A), where η and π are
the necessity and possibility measures, respectively (see Eq. 17). This means that the
possibility π(A) provides an upper bound to the probability P(A). From the design
point of view, we can thus conclude (Klir & Filger 1988, Ross 1995, Zadeh 1978) that
what is possible may not be probable, and what is impossible is also improbable.

Figure 10.4 Used notation in possibility-based design optimization.

Note that for an impossible event A, the possibility π(A) is zero. If we therefore
make sure that the possibility of violating a constraint is zero, then the probability
of violating the same constraint will also be zero. If feasibility of a constraint g is
expressed with the positive null form g ≥ 0, the constraint is always satisfied if

π(g ≤ 0) = 0 (20)

The possibility π in Eq. (20) is calculated using Eq. (16). Fig. 10.4 shows the
membership function µG(g) of constraint g. The possibility of the set
A = {g: gmin ≤ g ≤ g^α_min, α ∈ [0, 1]} is π(A) = α, and the possibility of the set
B = {g: g^α_min ≤ g ≤ g^α_max, α ∈ [0, 1]} is π(B) = 1. Similarly, the possibility
of constraint violation is π(g ≤ 0) = α1.
Eq. (20) can be relaxed as

π(g ≤ 0) ≤ α (21)

where the a-cut level is small, i.e. α ≪ 1. Based on Fig. 10.4, relation (21) is
satisfied if

g^α_min ≥ 0                                                            (22)

where g^α_min is the global minimum of g at the a-cut. Eq. (22) is analogous to the
R-percentile formulation (Tu et al. 1999) of a probabilistic constraint in RBDO. The
possibilistic constraint of Eqs (21) or (22) becomes active if g^α_min = 0.
Based on this discussion, a possibility-based design optimization (PBDO) problem
can be formulated as

min_{d, xN}  f(d, xN, pN)
subject to  π(gi(d, X, P) ≤ 0) ≤ α,    i = 1, …, n                     (23)
            dL ≤ d ≤ dU,    xL ≤ xN ≤ xU

where d ∈ Rk is the vector of deterministic design variables, X ∈ Rm is the vector of


possibilistic design variables, P ∈ Rq is the vector of possibilistic design parameters
and xN and pN are the normal point vectors for the possibilistic design variables and
parameters, respectively. According to the used notation, a bold letter indicates a
vector, an upper case letter indicates a possibilistic variable or parameter and a lower
case letter indicates a deterministic variable or a realization of a possibilistic variable or
parameter. Feasibility of the ith deterministic constraint is expressed with the positive
null form gi ≥ 0.
The possibilistic design variables are represented with convex normal possibility dis-
tributions (membership functions). Note that they may not be necessarily triangular.
The superscript N denotes the normal point of each distribution where the membership
function value is equal to one. Subscripts L and U denote lower and upper bounds,
respectively. In PBDO, we will assume that the membership functions of the possi-
bilistic design variables have a constant shape and that their normal points are design
variables moving within predetermined bounds. This is analogous to RBDO where the
PDF of each random design variable stays constant while its mean value is a design
variable.
Based on Eq. (22), the PBDO formulation (23) is equivalent to

min_{d, xN}  f(d, xN, pN)
subject to  g^α_{i,min} ≥ 0,    i = 1, …, n                            (24)
            dL ≤ d ≤ dU,    xL ≤ xN ≤ xU

The PBDO formulation (23) or (24) is a double-loop optimization problem, where an
inner-loop optimization is performed whenever the design optimization (outer loop) calls
for a possibilistic constraint evaluation. It should be noted that the PBDO optimum at
a = 1 coincides with the deterministic optimum.
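The double loop can be sketched as follows (in Python with SciPy; the quadratic objective, linear limit state, triangular memberships and single constraint are all hypothetical simplifications, and the gradient-based inner search stands in for the global optimizer recommended earlier).

# Double-loop PBDO, problem (24): the outer loop moves the normal points x_N,
# while the inner loop evaluates g_alpha_min over the alpha-cut box.
import numpy as np
from scipy.optimize import minimize

alpha = 0.1                               # small alpha-cut level, Eq. (21)
spread = 1.0                              # half-width of the triangular support

def g(x):                                 # limit state, feasible when g >= 0
    return x[0] + x[1] - 3.0

def f(xN):                                # objective on the normal points
    return xN[0] ** 2 + xN[1] ** 2

def g_alpha_min(xN):
    """Inner loop: minimum of g over the alpha-cut box (Eq. 22)."""
    half = (1.0 - alpha) * spread
    bounds = [(xn - half, xn + half) for xn in xN]
    return minimize(g, np.asarray(xN), bounds=bounds).fun

res = minimize(f, np.array([3.0, 3.0]),
               constraints=[{"type": "ineq", "fun": g_alpha_min}],
               bounds=[(0.0, 10.0), (0.0, 10.0)])
print(res.x)   # about [2.4, 2.4]: each normal point backs off by (1-alpha)*spread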

4.1 PBDO with a combination of random and possibilistic variables
Reliability-based design optimization (RBDO) provides optimum designs in the
presence of only random (or aleatory) uncertainty (Liang et al. 2007, Tu et al. 1999,
Wu et al. 2001). A typical RBDO problem is formulated as (Liang et al. 2007)

min f (d, µY , µZ )
d,µY

subject to P(gi (d, Y, Z) ≥ 0) ≥ Ri = 1 − pfi , i = 1, . . . , n (25)

dL ≤ d ≤ dU , µLY ≤ µY ≤ µU
Y

where Y is the vector of random design variables and Z ∈ Rr is the vector of
random design parameters.
For a variety of practical applications, however, there may not be enough information
to characterize all design variables and parameters probabilistically. A subset
of them can therefore be characterized possibilistically using membership functions.
A possibility-based design optimization problem with a combination of random and
possibilistic (or fuzzy) variables can be formulated as

min_{d, x^N, µ_Y} f(d, µ_Y, µ_Z, x^N, p^N)
subject to g^α_i,min ≥ 0, i = 1, . . . , n    (26)
d_L ≤ d ≤ d_U, µ_Y^L ≤ µ_Y ≤ µ_Y^U, x_L ≤ x^N ≤ x_U

with

g^α_i,min = min_{x,U,p} g_i(d, U, x, p), i = 1, . . . , n,
subject to β_i = β_ti
x_L^α(x^N) ≤ x ≤ x_U^α(x^N), p_L^α ≤ p ≤ p_U^α

where β_t is the target reliability index. Note that x_L^α and x_U^α are the lower and upper
limits of X at an α-cut.
Problem (26) represents a double-loop performance measure approach (PMA)
optimization sequence. The design optimization of the outer loop calls a series of
possibilistic constraint evaluations in the inner loop. Each possibilistic constraint
evaluation is, in general, a global optimization problem. The inner loop calculates the
minimum value of each limit state function considering that the realizations of the
possibilistic variables X vary between x_L^α and x_U^α, the realizations of the possibilistic
parameters P vary between p_L^α and p_U^α, and the reliability requirement β_i = β_ti is
satisfied. It therefore calculates the worst-case scenario.
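A minimal sketch of this inner loop is given below (Python with SciPy; the limit state, the data and the use of SLSQP are our own illustrative assumptions). The random variables enter through u on the sphere ||u|| = β_t, as in PMA, while the possibilistic variables are confined to their α-cut box.

import numpy as np
from scipy.optimize import minimize

def worst_case(g, u_dim, x_lo, x_hi, beta_t):
    # min over (u, x) of g(u, x) subject to ||u|| = beta_t and x inside
    # the alpha-cut box [x_lo, x_hi].
    x_lo, x_hi = np.asarray(x_lo), np.asarray(x_hi)
    z0 = np.concatenate([np.full(u_dim, beta_t / np.sqrt(u_dim)),
                         0.5 * (x_lo + x_hi)])
    bounds = [(None, None)] * u_dim + list(zip(x_lo, x_hi))
    cons = {"type": "eq",
            "fun": lambda z: np.dot(z[:u_dim], z[:u_dim]) - beta_t**2}
    res = minimize(lambda z: g(z[:u_dim], z[u_dim:]), z0, method="SLSQP",
                   bounds=bounds, constraints=[cons])
    return res.fun

# Illustrative limit state: a capacity with two random contributions
# (v = mu + sigma*u) minus a possibilistic demand x with alpha-cut [5, 7].
g = lambda u, x: (10.0 + 1.0 * u[0] + 0.5 * u[1]) - x[0]
print(worst_case(g, u_dim=2, x_lo=[5.0], x_hi=[7.0], beta_t=3.0))
# Worst case: u = -3*(1, 0.5)/||(1, 0.5)|| and x = 7, about -0.354, so
# this hypothetical design violates the constraint at beta_t = 3.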

5 Evidence-based design optimization (EBDO)


In this section, a methodology is presented on how to use evidence theory in design. We
will show that the evidence theory-based design is more conservative compared with all
RBDO designs obtained with different probability distributions and less conservative
compared with the PBDO design.
If feasibility of a constraint g is expressed with the non-negative null form g ≥ 0, we
have shown that Bel(g ≥ 0) ≤ P(g ≥ 0) ≤ Pl(g ≥ 0) where P(g ≥ 0) is the probability of
constraint satisfaction. Therefore,

P(g < 0) ≤ pf is satisfied if Pl(g < 0) ≤ pf (27)

where pf is the probability of failure which is usually a small prescribed value. The
above statement is equivalent to

P(g ≥ 0) ≥ R is satisfied if Bel(g ≥ 0) ≥ R (28)

where R = 1 − pf is the corresponding reliability level.



Hence, an evidence theory-based design optimization (EBDO) problem can be


formulated as

min_{d, x^N} f(d, x^N, p^N)
subject to Pl(g_i(d, X, P) < 0) ≤ p_fi, i = 1, . . . , n    (29)
d_L ≤ d ≤ d_U, x_L^N ≤ x^N ≤ x_U^N

where X ∈ Rm and P ∈ Rq are the vectors of uncertain design variables and parameters.
The superscript “N’’ indicates nominal value of uncertain variables or parameters. The
uncertainty is provided by expert opinions.
It should be noted that the plausibility measure is used instead of the equivalent
belief measure, in Problem (29). The reason is that at the optimum, the failure domain
for each active constraint is usually much smaller than the safe domain over the frame
of discernment (FD) (domain of all focal elements with nonzero combined BPA; see
next section). As a result, the computation of the plausibility of failure is much more
efficient than the computation of the belief of safe region.

5.1 Assessing Bel and Pl with Dempster-Shafer theory
Evidence theory can quantify epistemic uncertainty, even when the experts provide con-
flicting evidence. This section shows how to propagate epistemic uncertainty through
a given model (transfer function) which is necessary in calculating the plausibility of
constraint violation in Problem (29). The uncertainty propagation will be illustrated
using the following simple transfer function

y = f (a, b) (30)

where a ∈ A, b ∈ B are two independent input parameters and y is the output. The
combined BPA's for both a and b are obtained from Dempster's rule of combining of
Eq. (12) if multiple experts have provided evidence for either a or b. With combined
information for each input parameter, we define a vector c = [a_ci, b_cj], needed to
calculate the output y, as

C = A × B = {c = [a_ci, b_cj], a_ci ∈ A, b_cj ∈ B}    (31)

where subscript c stands for "combined" and i, j indicate focal elements.
Taking advantage of the assumed parameter independence, the BPA for c is

m_c(h_ij) = m(a_ci)·m(b_cj)    (32)

where h_ij = [a_ci, b_cj] and a_ci, b_cj denote intervals such that a ∈ a_ci and b ∈ b_cj.
Equation (32) can be used to calculate the combined BPA structure for the entire domain
C. For every (a, b) ∈ c, c ∈ C, needed to evaluate the output y, the combined BPA m_c is
used. A representative combined BPA structure is shown in Fig. 10.5.
The Cartesian product C of Eq. (31) is also called frame of discernment (FD) in
the literature. It consists of all focal elements (rectangles in Fig. 10.5 with nonzero
combined BPA) and can be viewed as the finite sample space in probability theory.
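The following short sketch (Python; the marginal BPA data are invented for illustration) constructs the joint focal elements and the combined BPA of Eq. (32):

from itertools import product

# Each marginal BPA is a list of (interval, mass) pairs whose masses sum to 1.
bpa_a = [((0.0, 0.2), 0.3), ((0.2, 0.6), 0.7)]
bpa_b = [((0.0, 0.4), 0.5), ((0.4, 0.6), 0.5)]

def combined_bpa(bpa_a, bpa_b):
    # Joint focal elements h_ij = [a_ci, b_cj] with m_c = m(a_ci)*m(b_cj),
    # valid under the independence assumption of Eq. (32).
    return [((ia, ib), ma * mb)
            for (ia, ma), (ib, mb) in product(bpa_a, bpa_b)]

for (ia, ib), m in combined_bpa(bpa_a, bpa_b):
    print(f"a in {ia}, b in {ib}: m_c = {m:.2f}")

Each joint focal element is a rectangle of the frame of discernment C, and the joint masses sum to one because each marginal BPA does.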
Figure 10.5 Representative BPA structure for two parameters a and b.

Figure 10.6 Schematic illustration of focal element contribution to belief and plausibility measures.

Let a domain F be defined as

F = {g : g = f (a, b) − y0 > 0, (a, b) ∈ c, c = [ac , bc ] ⊂ C} (33)

where y0 is a specified value. According to evidence theory,

Bel(F) ≤ pf ≤ Pl(F) (34)

where pf = P(g > 0) is the true probability.


Bel(F) and Pl(F) are calculated using Eqs (6) and (7) where set A is equal to
set F of Eq. (33) and B is a rectangular domain (focal element) such that B ⊆ A for
Eq. (6) and B ∩ A ≠ ∅ for Eq. (7). B ⊆ A means that the focal element must be entirely
within the domain g > 0 and B ∩ A ≠ ∅ means that the focal element must be entirely or
partially within the domain g > 0 (see Fig. 10.6). In order to identify whether a focal
element B satisfies B ⊆ A or B ∩ A ≠ ∅, the following minimum and maximum values of g
must be calculated

[g_min, g_max] = [min_x g(x), max_x g(x)]    (35)

for xL ≤ x ≤ xU where (xL , xU ) defines the focal element domain. For monotonic
functions, the vertex method (Penmetsa & Grandhi 2002) can be used to calculate the
minimum and maximum values in Eq. (35) by simply identifying the minimum and

maximum values among all vertices of the focal element domain. For non-monotonic
functions, a global optimizer is needed. If, for a focal element, g_min and g_max are both
positive, the focal element will contribute to the calculation of both belief and plausibility.
On the other hand, if g_min and g_max are both negative, the focal element will not
contribute to the calculation of belief or plausibility. If, however, g_min is negative and
g_max is positive, the focal element will not contribute to the belief but it will contribute
to the plausibility calculation. This is shown schematically in Fig. 10.6.

Figure 10.7 Geometrical interpretation of the EBDO algorithm.
In summary, the following tasks are performed in order to calculate the belief and
plausibility of the failure region:

1. For each input parameter, combine the evidence from the experts by aggregating
the individual BPA's from each expert using Dempster's rule of combining (Eq. (12)).
2. Construct the BPA structure for the m-dimensional frame of discernment, where
m is the number of input parameters. Assuming independent input parameters,
Eq. (32) is used.
3. Identify the failure region space (set F of Eq. (33)).
4. Use Eqs (6) and (7) to calculate the belief and plausibility measures of the failure
region; the failure region must be identified only within the frame of discernment.
The true probability of failure is then bracketed according to Eq. (34) (see the
sketch following this list).
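A minimal sketch of tasks 2-4 is given below (Python; the transfer function f(a, b) = a + b, the threshold y0 and the joint BPA data are our own illustrative assumptions). It relies on the vertex method, so it presumes that f is monotonic in each argument over every focal element.

from itertools import product

def bel_pl_failure(f, y0, joint_bpa):
    bel = pl = 0.0
    for ((a_lo, a_hi), (b_lo, b_hi)), m in joint_bpa:
        # Vertex method: for monotonic f, the extremes of g = f - y0 over
        # a rectangular focal element occur at its corners.
        corners = [f(a, b) - y0
                   for a, b in product((a_lo, a_hi), (b_lo, b_hi))]
        g_min, g_max = min(corners), max(corners)
        if g_min > 0:        # entirely inside F: contributes to Bel and Pl
            bel += m
            pl += m
        elif g_max > 0:      # partially inside F: contributes to Pl only
            pl += m
    return bel, pl           # Bel <= true pf <= Pl, as in Eq. (34)

# Invented joint BPA over [0, 0.8] x [0, 0.9]; failure when a + b > 1.
joint_bpa = [(((0.0, 0.4), (0.0, 0.5)), 0.25),
             (((0.0, 0.4), (0.5, 0.9)), 0.25),
             (((0.4, 0.8), (0.0, 0.5)), 0.25),
             (((0.4, 0.8), (0.5, 0.9)), 0.25)]
print(bel_pl_failure(lambda a, b: a + b, 1.0, joint_bpa))

For this data set the function returns Bel = 0 and Pl = 0.75, bracketing the unknown probability of failure as in Eq. (34).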

5.2 Implementation of the EBDO algorithm
A computationally efficient solution of Problem (29) is presented here. As a geometrical
interpretation of it, we can view the design point (d, x) moving within the feasible
domain so that the objective f is minimized (see Fig. 10.7). If the entire FD is in the
feasible domain, the constraints are satisfied and are inactive. A constraint becomes
active if part of the FD is in the “failure’’ region so that the plausibility of constraint
violation is equal to pf . In general, Problem (29) represents movement of a hyper-cube
(FD) within the feasible domain.
In order to save computational effort, the bulk of the FD movement, from the initial
design point to the vicinity of the optimal point (point B of Fig. 10.7), can be achieved
by moving a hyper-ellipse which contains the FD. The center of the hyper-ellipse is
the “approximate’’ design point and each axis is arbitrarily taken equal to three times
the standard deviation of a hypothetical normal distribution. This assumes that each
dimension of the FD hyper-cube is equal to six times the standard deviation of the
hypothetical normal distribution. The hyper-ellipse can be easily moved in the design
space by solving a RBDO problem. The RBDO optimum (point B of Fig. 10.7) is in the
vicinity of the solution of Problem (29) (EBDO optimum). The RBDO solution also
identifies all active constraints and their corresponding most probable points (MPP’s).
The maximal possibility search algorithm (Choi et al. 2004) can also be used to move
the FD hyper-cube in the feasible domain. It should be noted that the 3-sigma-axes
hyper-ellipse is arbitrary. The size of the hyper-ellipse is not, however, crucial because it
is only used to calculate the initial point (point B of Fig. 10.7) of the EBDO algorithm.
The latter calculates the true EBDO optimum accurately. From our experience, a 3- to
4-σ size works fine.
At this point, we generate a local response surface of each active constraint around its
MPP. In this work, the Cross-Validated Moving Least Squares (CVMLS) (Tu & Jones
2003) method is used based on an Optimum Symmetric Latin Hypercube (OSLH)
(Ye et al. 2000) “space-filling’’ sampling.
A derivative-free optimizer calculates the EBDO optimum. It uses as initial point
the previously calculated RBDO optimum which is close to the EBDO optimum.
Problem (29) is solved, considering only the identified active constraints. For the
calculation of the plausibility of failure Pl(g < 0) of each active constraint, an algo-
rithm presented in (Mourelatos & Zhou 2005) is used. It identifies all focal elements
which contribute to the plausibility of failure. The computational effort is significantly
reduced because accurate local response surfaces are used for the active constraints.
The cost can be much higher if the optimization algorithm evaluates the actual active
constraints instead of their efficient surrogates (response surfaces). It should be noted
that a derivative-free optimizer is needed due to the discontinuous nature of the com-
bined BPA structure. The DIRECT derivative-free, global optimizer is used (Jones
et al. 1993).

6 A sequential algorithm for possibility-based


design optimization (SPDO)
The computational effort of the double-loop approach of Problem (26) may be pro-
hibitive especially for large-scale applications. For this reason, a Sequential algorithm
for Possibility-based Design Optimization (SPDO) method is proposed in this section.
It decouples the double-loop PBDO process of Problem (26) by using successive cycles
composed of a deterministic design optimization followed by a set of possibilistic
evaluation loops. In each cycle, the deterministic optimization and the possibilistic

evaluations are decoupled. The latter are conducted after the deterministic optimization.
If, at the deterministic optimum of a cycle, a particular possibilistic constraint
is violated, a "shifting" vector is determined which moves the constraint boundary
within the deterministic feasible domain. The "shifted" constraints are then used to
perform a new deterministic design optimization. The series of deterministic and
possibilistic evaluation loops continues until convergence is achieved, i.e. the objective
function is minimized without violating any possibilistic constraint. At convergence,
the magnitude of the "shifting" vector is zero. The idea of using a "shifting" vector
was originally proposed in (Du & Chen 2004).

Figure 10.8 Shifting of feasible domain for only uncertain variables.

6.1 SPDO with only possibilistic variables
Before we present the proposed SPDO algorithm for a combination of possibilistic
and random variables, we introduce the approach when there are no random
variables. For clarity, we initially assume, without loss of generality,
that there are no deterministic design variables or possibilistic design parameters.
In this case, Problem (24) reduces to

min_{x^N} f(x^N)
subject to g^α_i,min ≥ 0, i = 1, . . . , n    (36)
x_L ≤ x^N ≤ x_U

For illustration purposes, Fig. 10.8 gives a geometrical interpretation using only two
possibilistic variables X1 and X2 .

The normal points of the two possibilistic variables are denoted by x^N_1 and x^N_2,
respectively. The deterministic and possibilistic constraint boundaries are denoted
by g(x^N_1, x^N_2) = 0 and π[g(x_1, x_2) ≤ 0] ≤ α, respectively. Because the possibility-based
design is more conservative than the deterministic design, its feasible region is reduced
compared with that of the deterministic design.
Problem (36) is solved using a sequence of cycles. Each cycle is composed of a
deterministic optimization followed by a possibilistic evaluation. During the first cycle,
the following deterministic problem is solved

min_{x^N} f(x^N)
subject to g_i(x^N) ≥ 0, i = 1, . . . , n
x_L ≤ x^N ≤ x_U

The optimal point x^N = (x^N_1, x^N_2) is on the boundary of the active deterministic
constraints. A possibilistic evaluation at the desired α-cut is then implemented
for each constraint at x^N = (x^N_1, x^N_2) in order to determine the worst-case point
x_worst = (x_1,worst, x_2,worst). The following problem is solved for the ith constraint,

min_x g_i(x)
subject to x^N − δ_L(α) ≤ x ≤ x^N + δ_U(α)

where δ_L(α) and δ_U(α) are the lower and upper bounds of x at the desired α-cut (see
Fig. 10.1).
If the solution xi,worst (worst-case point) of the above problem is deterministically
infeasible, we must force it at least onto the deterministic constraint in order to ensure
feasibility of the ith possibilistic constraint. This can be achieved by using a “shifting’’
vector SP = (SP1 , SP2 ) similarly to (Du & Chen 2004). In this case, the deterministic
optimization of the next cycle is formulated as

min_{x^N} f(x^N)
subject to g_i(x^N_1 − SP_1, x^N_2 − SP_2) ≥ 0, i = 1, . . . , n
x_L ≤ x^N ≤ x_U

For multiple possibilistic constraints, the boundary of each constraint is shifted inside
the deterministic feasible region by the distance between the deterministic optimal
point and the worst-case point. The new feasible region is smaller in comparison with
that of the previous cycle.
In general, the deterministic optimization problem for the kth cycle is

min_{^k d, ^k x^N} f(^k d, ^k x^N, p^N)
subject to g_i(^k d, ^k x^N − ^k SP, ^{k−1} p_worst) ≥ 0, i = 1, . . . , n    (37)
d_L ≤ ^k d ≤ d_U, x_L ≤ ^k x^N ≤ x_U

where the left superscript indicates the cycle number. The "shifting" vector for the
possibilistic design variables is ^k SP = ^{k−1} x^N − ^{k−1} x_worst.
Figure 10.9 Shifting of feasible domain for a combination of uncertain and random variables.

Because the "shifting" vector idea cannot be used for the possibilistic design
parameters P, the worst-case vector ^{k−1} p_worst from the previous cycle is used. For the
first cycle, the worst-case vector p_worst is assumed equal to the nominal point p^N.
Subsequently, n possibilistic evaluation problems are solved (one for each possibilis-
tic constraint). The following problem is solved for the ith constraint
min_{x,p} g_i(^k d, x, p)
subject to ^k x^N − δ_L(α) ≤ x ≤ ^k x^N + δ_U(α)    (38)
p_L(α) ≤ p ≤ p_U(α)

and its solution determines ^k x_worst and ^k p_worst, which are used in the next cycle.
Problems (37) and (38) are repeated for a few cycles until convergence is achieved.
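The following one-variable sketch (Python with SciPy; the objective, the constraint and the α-cut half-width are invented for illustration) runs these cycles and shows the "shifting" vector at work:

import numpy as np
from scipy.optimize import minimize

f = lambda x: x                  # objective to minimize
g = lambda x: x - 2.0            # deterministic constraint g(x) >= 0
delta = 0.5                      # half-width of the alpha-cut around x^N

sp = 0.0                         # shifting vector, zero in the first cycle
x_n = 5.0                        # initial normal point
for cycle in range(10):
    # Deterministic optimization with the shifted constraint g(x^N - SP) >= 0
    res = minimize(lambda z: f(z[0]), [x_n], method="SLSQP",
                   constraints=[{"type": "ineq",
                                 "fun": lambda z: g(z[0] - sp)}])
    x_n = res.x[0]
    # Possibilistic evaluation: worst case over the alpha-cut of x
    # (the endpoints suffice here because g is monotonic)
    x_worst = min((x_n - delta, x_n + delta), key=g)
    if g(x_worst) >= -1e-9:      # possibilistic constraint satisfied
        break
    sp = x_n - x_worst           # shifting vector for the next cycle
print(cycle + 1, x_n)            # converges to x^N = 2.5 in two cycles

The first cycle finds the deterministic optimum x^N = 2, the possibilistic evaluation detects a worst-case violation at x = 1.5, and the shifted constraint of the second cycle moves the optimum to x^N = 2.5, whose entire α-cut is feasible.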

6.2 SPDO with both possibilistic and random variables
A sequential approach is described in this section for a mixture of possibilistic and
random variables. For demonstration purposes, Fig. 10.9 shows a geometric interpretation
for a hypothetical problem with one random design variable (Y_1), one
possibilistic design variable (X_2), one deterministic constraint g(µ_Y1, x^N_2) = 0 and
one possibilistic constraint π[g(µ_Y1, x_2) ≤ 0] ≤ α.
For this general case, Problem (26) is solved. In the first cycle, the following
deterministic optimization is performed

min_{d, x^N, µ_Y} f(d, µ_Y, µ_Z, x^N, p^N)
subject to g_i(d, µ_Y, µ_Z, x^N, p^N) ≥ 0, i = 1, . . . , n    (39)
d_L ≤ d ≤ d_U, µ_Y^L ≤ µ_Y ≤ µ_Y^U
x_L ≤ x^N ≤ x_U
N o n-p r o b a b i l i s t i c o p t i m i z a t i o n u s i n g p o s s i b i l i t y a n d e v i d e n c e t h e o r i e s 269

A possibilistic evaluation is then implemented for each constraint at the optimal point
(d, µ_Y, x^N) of Problem (39) in order to identify the worst-case point (x_worst, p_worst) at
the desired α-cut. The following problem is solved for the ith constraint,

min_{U,x,p} g_i(d, U, x, p)
subject to ||U|| = β_ti    (40)
x^N − δ_L(α) ≤ x ≤ x^N + δ_U(α)
p_L(α) ≤ p ≤ p_U(α)

The solution of the above problem is the worst-case point (d, Y_i,MPP, Z_i,MPP, x_i,worst,
p_i,worst), where (Y_i,MPP, Z_i,MPP) is the worst-case most probable point (MPP) for the ith
constraint. If the point (d, Y_i,MPP, Z_i,MPP, x_i,worst, p_i,worst) is deterministically infeasible, we
must force it at least onto the deterministic constraint in order to ensure feasibility. This
is achieved by using a "shifting" vector SS = {SS_1, . . . , SS_ℓ} for the ℓ random variables
and a "shifting" vector SP = {SP_1, . . . , SP_m} for the m possibilistic variables. In this
case, the deterministic optimization of the next cycle is

min_{d, x^N, µ_Y} f(d, µ_Y, µ_Z, x^N, p^N)
subject to g_i(d, µ_Y − SS, ^1 Z_MPP, x^N − SP, ^1 p_worst) ≥ 0, i = 1, . . . , n    (41)
x_L ≤ x^N ≤ x_U

In summary, the deterministic optimization problem for the kth cycle is

min_{^k d, ^k x^N, ^k µ_Y} f(^k d, ^k µ_Y, µ_Z, ^k x^N, p^N)
subject to g_i(^k d, ^k µ_Y − ^k SS, ^{k−1} Z_MPP, ^k x^N − ^k SP, ^{k−1} p_worst) ≥ 0, i = 1, . . . , n    (42)
d_L ≤ ^k d ≤ d_U, x_L ≤ ^k x^N ≤ x_U, µ_Y^L ≤ ^k µ_Y ≤ µ_Y^U

where the left superscript indicates the cycle number and the "shifting" vectors
for the random design variables and the possibilistic design variables are
^k SS = ^{k−1} µ_Y − ^{k−1} Y_MPP and ^k SP = ^{k−1} x^N − ^{k−1} x_worst, respectively. Each constraint has
its own "shifting" vectors.
After the deterministic optimization, n possibilistic evaluation problems are solved
(one for each constraint). The possibilistic assessment problem for the ith constraint is,

min_{U,x,p} g_i(^k d, U, x, p)
subject to ||U|| = β_ti    (43)
^k x^N − δ_L(α) ≤ x ≤ ^k x^N + δ_U(α)
p_L(α) ≤ p ≤ p_U(α)

The solution of Problem (43) determines the worst-case point (^k d, Y_i,MPP, Z_i,MPP,
x_i,worst, p_i,worst), which is used in the next cycle. The sequence of Problems (42) and (43)
is repeated until convergence.

Figure 10.10 Flowchart for the SPDO algorithm.

Figure 10.10 shows the flowchart of the proposed SPDO algorithm for a combination
of possibilistic and random variables. The details of the algorithm have already
been provided in this section. More information is provided in (Zhou & Mourelatos
2007).

7 Examples
In this section, the possibility-based and evidence-based design algorithms, as well
as SPDO, are demonstrated with a cantilever beam example and a pressure vessel
example. In both examples, comparisons are made with deterministic design and
reliability-based design results. It should be noted that, theoretically, the possibility-
and reliability-based results cannot be compared because the possibility and reliability
theories are based on different axioms. However, for practical purposes, we attempt
to compare them by arbitrarily using membership functions which "resemble" the
probability density functions used in the reliability-based results.
Figure 10.11 Cantilever beam under vertical and lateral bending.

7.1 A cantilever beam example
In this example, a cantilever beam in vertical and lateral bending (Wu et al. 2001) is
used (see Fig. 10.11). The beam is loaded at its tip by the vertical and lateral loads
Y and Z, respectively. Its length L is equal to 100 in. The width w and thickness t
of the cross-section are deterministic design variables. The objective is to minimize
the weight of the beam. This is equivalent to minimizing f = w·t, assuming that the
material density and the beam length are constant.
Two non-linear failure modes are used. The first failure mode is yielding at the fixed
end of the cantilever; the other failure mode is that the tip displacement exceeds the
allowable value of D_0 = 2.5 in. The PBDO problem is formulated as,

min_{w,t} f = w·t
subject to g^α_j,min ≥ 0, j = 1, 2

g_1(y, Z, Y, w, t) = y − [600/(w t²)·Y + 600/(w² t)·Z]
g_2(E, Z, Y, w, t) = D_0 − (4L³/(E w t))·√[(Y/t²)² + (Z/w²)²]    (44)
0 ≤ w, t ≤ 5

where g_1 and g_2 are the limit states corresponding to the two failure modes. The design
variables w and t are deterministic. In the RBDO study of (Liang et al. 2007), Y, Z, y
and E are normally distributed random parameters with Y ∼ N(1000, 100) lb, Z ∼ N(500, 100) lb,
y ∼ N(40 000, 2000) psi and E ∼ N(29·10⁶, 1.45·10⁶) psi; y is the
random yield strength, Z and Y are mutually independent random loads in the vertical
and lateral directions, respectively, and E is the Young's modulus. A reliability index
β = 3 has been used in (Liang et al. 2007) for both constraints.
For the PBDO case, Y, Z, y and E are possibilistic parameters described with the
triangular membership functions (x^N − 3σ, x^N, x^N + 3σ), where x^N is the normal
point of each variable and σ is the standard deviation used in the RBDO study.
The frame of discernment defined by the (x^N − 3σ, x^N + 3σ) coordinates is also
used in EBDO.
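Because both limit states in Eq. (44) are monotonic in Y, Z, y and E, g^α_min can be found by vertex enumeration over the α-cut box. The sketch below (Python; our own illustration, with the design (w, t) taken from the PBDO α = 0 column of Table 10.2 below) reproduces the reported constraint activity:

from itertools import product
from math import sqrt

L, D0 = 100.0, 2.5
normal = {"Y": 1000.0, "Z": 500.0, "y": 40000.0, "E": 29.0e6}
sigma  = {"Y": 100.0,  "Z": 100.0, "y": 2000.0,  "E": 1.45e6}

def g1(Y, Z, y, E, w, t):
    return y - (600.0 / (w * t**2) * Y + 600.0 / (w**2 * t) * Z)

def g2(Y, Z, y, E, w, t):
    return D0 - 4.0 * L**3 / (E * w * t) * sqrt((Y / t**2)**2 + (Z / w**2)**2)

def g_alpha_min(g, w, t, alpha):
    # Vertex enumeration over the alpha-cut box of the triangular
    # memberships (x^N - 3*sigma, x^N, x^N + 3*sigma); valid because
    # g1 and g2 are monotonic in each parameter.
    cuts = [(normal[k] - (1.0 - alpha) * 3.0 * sigma[k],
             normal[k] + (1.0 - alpha) * 3.0 * sigma[k])
            for k in ("Y", "Z", "y", "E")]
    return min(g(Y, Z, y, E, w, t) for Y, Z, y, E in product(*cuts))

w, t = 2.5901, 4.210                # PBDO optimum at alpha = 0 (Table 10.2)
print(g_alpha_min(g1, w, t, 0.0))   # ~0: the first constraint is active
print(g_alpha_min(g2, w, t, 0.0))   # ~0.42, i.e. 0.168*D0 (Table 10.2)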
Table 10.2 compares the deterministic optimization, RBDO, PBDO and EBDO
results. The PBDO optimum (objective function) with α = 0 is higher than the RBDO
optimum. Because it represents the worst-case design, it provides an upper bound of

Table 10.2 Comparison of PBDO, EBDO and RBDO optima for the cantilever beam example.

                   Determ.    Reliability   Possibility optimum     Evidence optimum
                   optimum    optimum       α = 0.1     α = 0       pf = 0.1    pf = 0.0013
Design variables
w                  2.0470     2.4781        2.5298      2.5901      2.4534      2.5028
t                  3.7459     3.8421        4.1726      4.210       3.6162      3.9902
Objective
f(w, t)            7.6679     9.5212        10.556      10.901      8.8721      9.9868
Constraints
g_1(x)/y           0          0             0           0           0           0.0032
g_2(x)/D_0         0          0.1436        0.15        0.168       0.00428     0.0835

all RBDO optima obtained with different distributions, as long as these distributions
have similar variability ranges (e.g. different beta distributions defined over the same
range). For a higher α-cut (α = 0.1), the PBDO optimum reduces. It should be noted
that the PBDO optimum at α = 1 coincides with the deterministic optimum. The last
two rows of Table 10.2 show the normalized values of the two constraints at the optimum.
The first constraint is normalized by the mean yield strength y = 40 000 and the
second constraint is normalized by the allowable tip displacement D_0 = 2.5. Although
both constraints are active at the deterministic optimum, only the first constraint is
active for both the RBDO and PBDO optima.
The EBDO problem formulation is the same as Problem (44) but with different
constraints. The new constraints are Pl(g_i < 0) ≤ p_f, i = 1, 2. The uncertain parameters
P = [Y, Z, y, E] have the BPA structure of Table 10.3. The BPA for each interval of an
uncertain parameter is assumed to be equal to the area under the PDF used in RBDO, in
order to compare the EBDO design with the corresponding RBDO design. This is not
how the BPA is obtained in general. As has been mentioned, expert opinions are used
to construct the BPA structure. If, however, a random variable or parameter is described
probabilistically, equivalent BPA values within specified intervals are calculated as
equal to the area under the PDF. In doing so, the evidence theory can be used to
handle a mixture of probabilistic and non-probabilistic variables.
The last two columns of Table 10.2 show the EBDO results for p_f = 0.1 and 0.0013
(β = 3). As expected, the deterministic optimum of 7.6679 is less than the RBDO
optimum of 9.5212 which, in turn, is less than the EBDO optimum of 9.9868 at
p_f = 0.0013 (β = 3). For p_f = 0.1, the EBDO optimum reduces. Furthermore, the
EBDO optimum of 9.9868 at p_f = 0.0013 is better than the worst-case PBDO optimum
of 10.901 (α = 0). Although only the first constraint is active for the RBDO
and PBDO optima, both constraints are active for the EBDO optima, similarly to the
deterministic case.

7.2 A pressure vessel example
This example considers the design of a thin-walled pressure vessel (Lewis & Mistree
1997) which has hemispherical ends as shown in Fig. 10.12. The design objective is to

Table 10.3 BPA structure for y, Y, Z and E.

Z                          y (×10³)
Interval       BPA (%)     Interval      BPA (%)
[200 300]      2.2         [35 37]       6.1
[300 400]      13.6        [37 38]       9.2
[400 450]      15          [38 39]       15
[450 500]      19.2        [39 40]       19.2
[500 550]      19.2        [40 41]       19.2
[550 600]      15          [41 42]       15
[600 700]      13.6        [42 43]       9.2
[700 800]      2.2         [43 45]       7.1

Y                          E (×10⁶)
Interval       BPA (%)     Interval      BPA (%)
[700 800]      2.2         [26.5 27.5]   10
[800 900]      13.6        [27.5 28.5]   21
[900 1000]     34.1        [28.5 29]     13.5
[1000 1100]    34.1        [29 29.5]     13.5
[1100 1200]    13.6        [29.5 30.5]   21
[1200 1300]    2.4         [30.5 31.3]   21


Figure 10.12 Thin-walled pressure vessel.

calculate the radius R, mid-section length L and wall thickness t in order to maximize
the volume while avoiding yielding of the material in both the circumferential and radial
directions under an internal pressure P. Geometric constraints are also considered. The
material yield strength is Y. A safety factor SF = 2 is used.

Table 10.4 BPA structure for R, L, t, P and Y.

R                     L                   t                     BPA (%)
[R^N−6.0  R^N−4.5]    [L^N−12  L^N−9]     [t^N−0.4  t^N−0.3]    0.13
[R^N−4.5  R^N−3.0]    [L^N−9   L^N−6]     [t^N−0.3  t^N−0.2]    2.15
[R^N−3.0  R^N]        [L^N−6   L^N]       [t^N−0.2  t^N]        47.72
[R^N  R^N+3.0]        [L^N  L^N+6]        [t^N  t^N+0.2]        47.72
[R^N+3.0  R^N+4.5]    [L^N+6  L^N+9]      [t^N+0.2  t^N+0.3]    2.15
[R^N+4.5  R^N+6.0]    [L^N+9  L^N+12]     [t^N+0.3  t^N+0.4]    0.13

P                     Y                   BPA (%)
[800 850]             [208000 221000]     0.13
[850 900]             [221000 234000]     2.15
[900 1000]            [234000 260000]     47.72
[1000 1100]           [260000 286000]     47.72
[1100 1150]           [286000 299000]     2.15
[1150 1200]           [299000 312000]     0.13

The PBDO problem is stated as

max_{R^N, L^N, t^N} f = (4/3)π(R^N)³ + π(R^N)² L^N
subject to g^α_j,min ≥ 0, j = 1, . . . , 5

where

g_1(X) = 1.0 − P(R + 0.5t)·SF/(2tY)
g_2(X) = 1.0 − P(2R² + 2Rt + t²)·SF/[(2Rt + t²)Y]
g_3(X) = 1.0 − (L + 2R + 2t)/60
g_4(X) = 1.0 − (R + t)/12
g_5(X) = 1.0 − 5t/R

0.25 ≤ t^N ≤ 2.0
6.0 ≤ R^N ≤ 24
10 ≤ L^N ≤ 48
The EBDO problem formulation is the same but with constraints Pl(g_j(X) < 0) ≤ p_f,
j = 1, . . . , 5. For the EBDO case, the uncertainty in the design variables R, L and t
and the design parameters P and Y is represented with the combined BPA structure of
Table 10.4. To compare results with RBDO, the BPA values of R, L, t, P and Y are
taken equal to the area under the PDF of a normal distribution for the intervals shown
in Table 10.4. The normal distributions for R, L, t, P and Y have standard deviations

Table 10.5 Comparison of deterministic, RBDO, and EBDO optima for the vessel example.

                   Determ.     Reliability    Evidence optimum
Design variables   optimum     optimum        pf = 0.2    pf = 0.0228
R^N                11.750      8.7244         8.333       8.1111
L^N                36.000      33.5186        30.407      26.1852
t^N                0.250       0.269          0.347       0.3472
Objective
−f(R^N, L^N)       22 400      10 791         9053        7644

Table 10.6 Convergence history for the pressure vessel example.

Cycle #   Design variables (R^N, µ_L, µ_t)   Obj.     g_1(X)    g_2(X)    g_3(X)    g_4(X)    g_5(X)

α = 0
1         (11.75, 36.0, 0.25)                22 398   −0.2551   −1.5101   −0.2502   −0.3917   0.6897
2         (7.0108, 30.3867, 0.2892)          6132     0.4996    0.0       0.0       0.0       0.0258
3         (7.0107, 30.3867, 0.2893)          6132     0.5       0.0       0.0       0.0       0.0256

α = 0.2
1         (11.75, 36.0, 0.25)                22 398   −0.1857   −1.3713   −0.2202   −0.3167   0.7239
2         (7.9108, 30.3867, 0.2892)          8044     0.4996    0.0       0.0       0.0       0.4326
3         (7.9107, 30.3867, 0.2893)          8044     0.5       0.0       0.0       0.0       0.4325

equal to 1.5, 3, 0.1, 50 and 13 000, respectively. The mean values for parameters
P and Y are taken equal to 1000 and 260 000. The intervals for R, L, t, P and Y
extend four standard deviations from each side of the normal point, in an attempt to
use a variation similar to that of the RBDO study. Finally, EBDO and PBDO use the same
frame of discernment.
Table 10.5 compares the deterministic optimization, RBDO and EBDO results. Similar
conclusions to those of the previous example are drawn. A reliability index β = 2.0
(p_f = 0.0228) has been used in the RBDO study for all constraints. The EBDO maximum
volume for p_f = 0.0228 is lower than the corresponding RBDO volume. For
comparison purposes, the EBDO optima for both p_f = 0.2 and p_f = 0.0228 have been
calculated. As shown in Table 10.5, the EBDO maximum volume increases with
increasing p_f, as expected. In this example, the third and fourth constraints are active
for the deterministic, RBDO and EBDO optima.
Table 10.6 gives the convergence history of the SPDO method for α = 0 and α = 0.2.
It lists the values of the design variables, the objective function and the five constraints
for each cycle. For both α-cuts, the algorithm converges in three cycles.

Table 10.7 SPDO results and comparisons for the pressure vessel example.

                   Determ.   Double-loop   Double-loop PBDO       SPDO
Design variables   opt.      RBDO          α = 0.2     α = 0      α = 0.2    α = 0
R^N                11.750    8.7244        7.9108      7.000      7.9107     7.0107
µ_L                36.000    33.5186       30.483      30.660     30.3867    30.3867
µ_t                0.250     0.269         0.2894      0.2997     0.2893     0.2893
Objective
f(R^N, µ_L)        22 400    10 791        8062        6150       8044       6132
Constraints
g_1(X)             0.8173    0.5003        0.5         0.55       0.5        0.5
g_2(X)             0.6346    0             0           0.1        0          0
g_3(X)             0         0             0           0          0          0
g_4(X)             0         0             0           0          0          0
g_5(X)             0.8936    0.6891        0.4323      0          0.4325     0.0256
No. of F.E.        96        5904          9470        10 534     1832       2121

Table 10.7 compares the deterministic optimization, RBDO, double-loop PBDO and
SPDO results. Two α-cuts (α = 0 and α = 0.2) are used for the possibilistic design. Similarly
to the previous example, the proposed SPDO approach gives the same results
as the double-loop PBDO with much better efficiency. For example, the number
of function evaluations for α = 0 is 2121 for SPDO and 10 534 for double-loop
PBDO. As expected, the deterministic optimum of 22 400 is higher than
the RBDO optimum of 10 791 which is, in turn, higher than the worst-case (α = 0)
PBDO optimum of 6150. Note that in this example the objective is maximized. Also,
the PBDO optimum of 8062 (α = 0.2) is higher than the worst-case optimum of 6150
(α = 0).
At the deterministic optimum, only the third and fourth constraints are active. However,
at the RBDO and PBDO optima, the second constraint is also active. It should
also be noted that the computational cost of the double-loop PBDO is usually higher
than that of the double-loop RBDO (see Table 10.7) due to the different problem
formulations of the two.

8 Conclusions
In this chapter, the possibility and evidence theories were used to assess design reliability
with incomplete information. The possibility theory was viewed as a variant
of fuzzy set theory. The different types of uncertainty and formal uncertainty theories
were first introduced using the fundamentals of fuzzy measures. Subsequently, the
commonly used vertex and discretization methods for propagating non-probabilistic
uncertainty were reviewed and compared with a hybrid (global-local)
optimization method. It was shown that the hybrid optimization method is
very efficient and has the same accuracy as the "brute force" discretization method.
The possibility theory was also used in design. A possibility-based design optimization
method was proposed where all design constraints are expressed possibilistically.

It was shown that the method gives a conservative solution compared with all
conventional reliability-based designs obtained with different probability distributions.
A general possibility-based design optimization method was also presented which
handles a combination of random and possibilistic design variables. Furthermore, a
sequential algorithm for possibility-based design optimization (SPDO) was introduced.
It decouples a double-loop PBDO process into a sequence of cycles composed of a
deterministic design optimization followed by a set of worst-case reliability evaluation loops.
The computational cost is kept low, first by using the performance measure approach
in the reliability analysis and second by decoupling the deterministic design optimization
from the worst-case reliability evaluation.
A computationally efficient evidence-based design optimization method was also
described, which can handle a mixture of epistemic and random uncertainties. A mean
performance is optimized subject to the plausibility of constraint violation being small.
Uncertainty is quantified using "expert" opinions. Two examples demonstrated the
proposed possibility-based and evidence-based design optimization methods. It was
shown that both the PBDO and EBDO designs are more conservative compared with
the RBDO design. However, the EBDO design is usually less conservative compared
with the PBDO design.

References

Agarwal, H., Renaud, J.E., Preston, E.L. & Padmanabhan, D. 2004. Uncertainty Quantification
Using Evidence Theory in Multidisciplinary Design Optimization. Reliability Engineering and
System Safety 85:281–294.
Akpan, U.O., Rushton, P.A. & Koko, T.S. 2002. Fuzzy Probabilistic Assessment of the Impact
of Corrosion on Fatigue of Aircraft Structures. Paper AIAA-2002-1640.
Bae, H.-R., Grandhi, R.V. & Canfield, R.A. 2004. An Approximation Approach for Uncer-
tainty Quantification Using Evidence Theory. Reliability Engineering and System Safety 86:
215–225.
Bae, H.-R., Grandhi, R.V. & Canfield, R.A. 2004. Epistemic Uncertainty Quantification Tech-
niques Including Evidence Theory for Large-Scale Structures. Computers and Structures 82:
1101–1112.
Chen, L. & Rao, S.S. 1997. Fuzzy Finite Element Approach for the Vibration Analysis of
Imprecisely Defined Systems. Finite Elements in Analysis and Design 27:69–83.
Choi, K.K., Du, L. & Youn, B.D. 2004. A New Fuzzy Analysis Method for Possibility-
Based Design Optimization. 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization
Conference, AIAA 2004-4585, Albany, NY.
Du, X. & Chen, W. 2000. An Integrated Methodology for Uncertainty Propagation and
Management in Simulation-Based Systems Design. AIAA Journal 38(8):1471–1478.
Du, X. & Chen, W. 2004. Sequential Optimization and Reliability Assessment Method for
Efficient Probabilistic Design. ASME Journal of Mechanical Design 126:225–233.
Du, X., Sudjianto, A. & Huang, B. 2005. Reliability-Based Design with a Mixture of Random
and Interval Variables. ASME Journal of Mechanical Design 127:1068–1076.
Dubois, D. & Prade, H. 1988. Possibility Theory. New York: Plenum Press.
Elishakoff, I.E., Haftka, R.T. & Fang, J. 1994. Structural Design under Bounded Uncertainty –
Optimization with Anti-Optimization. Computers and Structures 53:1401–1405.
Gu, X., Renaud, J.E. & Batill, S.M. 1998. An Investigation of Multidisciplinary Design
Subject to Uncertainties. 7th AIAA/USAF/NASA/ISSMO Multidisciplinary Analysis and
Optimization Symposium, St. Louis, Missouri.

Jones, D.R., Perttunen, C.D. & Stuckman, B.E. 1993. Lipschitzian Optimization Without the
Lipschitz Constant. Journal of Optimization Theory and Applications 73(1):157–181.
Klir, G.J. & Folger, T.A. 1988. Fuzzy Sets, Uncertainty, and Information. Prentice Hall.
Klir, G.J. & Yuan, B. 1995. Fuzzy Sets and Fuzzy Logic: Theory and Applications.
Prentice Hall.
Lee, J.O., Yang, Y.O. & Ruy, W.S. 2002. A Comparative Study on Reliability Index and Target
Performance Based Probabilistic Structural Design Optimization. Computers and Structures
80:257–269.
Lewis, K. & Mistree, F. 1997. Collaborative, Sequential and Isolated Decisions in Design.
Proceedings of ASME Design Engineering Technical Conferences, Paper# DETC1997/
DTM-3883.
Liang, J., Mourelatos, Z.P. & Tu, J. 2007. A Single-Loop Method for Reliability-Based Design
Optimization. In press International Journal of Product Development. Also, Proceedings of
ASME Design Engineering Technical Conferences, 2004, Paper# DETC2004/DAC-57255.
Lombardi, M. & Haftka, R.T. 1998. Anti-Optimization Technique for Structural Design under
Load Uncertainties. Computer Methods in Applied Mechanics and Engineering 157:19–31.
Moore, R.E. 1966. Interval Analysis. Prentice-Hall.
Mourelatos, Z.P. & Zhou, J. 2006. A Design Optimization Method using Evidence Theory.
ASME Journal of Mechanical Design 128(4):901–908.
Mourelatos, Z.P. & Zhou, J. 2005. Reliability Estimation with Insufficient Data Based on
Possibility theory. AIAA Journal 43(8):1696–1705.
Muhanna, R.L. & Mullen, R.L. 2001. Uncertainty in Mechanics Problems – Interval-Based
Approach. Journal of Engineering Mechanics 127(6):557–566.
Mullen, R.L. & Muhanna, R.L. 1999. Bounds of Structural Response for all Possible Loadings.
ASCE Journal of Structural Engineering 125(1):98–106.
Nikolaidis, E., Chen, S., Cudney, H., Haftka, R.T. & Rosca, R. 2004. Comparison of Probability
and Possibility for Design Against Catastrophic Failure Under Uncertainty. ASME Journal of
Mechanical Design 126:386–394.
Oberkampf, W.L. & Helton, J. 2002. Investigation of Evidence Theory for Engineering
Applications. AIAA Non-Deterministic Approaches Forum, AIAA 2002-1569, Denver, CO.
Oberkampf, W., Helton, J. & Sentz, K. 2001. Mathematical Representations of Uncertainty.
AIAA Non-Deterministic Approaches Forum, AIAA 2001-1645, Seattle, WA, April 16–19.
Penmetsa, R.C. & Grandhi, R.V. 2002. Efficient Estimation of Structural Reliability for
Problems with Uncertain Intervals. Computers and Structures 80:1103–1112.
Penmetsa, R.C. & Grandhi, R.V. 2002. Estimating Membership Response Function using
Surrogate Models. Paper AIAA 2002-1234.
Rao, S.S. & Cao, L. 2002. Optimum Design of Mechanical Systems Involving Interval
Parameters. ASME Journal of Mechanical Design 124:465–472.
Rao, S.S. & Sawyer, J.P. 1995. A Fuzzy Finite Element Approach for the Analysis of Imprecisely
Defined Systems. AIAA Journal 33:2264–2370.
Ross, T.J. 1995. Fuzzy Logic with Engineering Applications. McGraw Hill.
Sentz, K. & Ferson, S. 2002. Combination of Evidence in Dempster – Shafer Theory. Sandia
National Laboratories Report SAND2002-0835.
Tu, J., Choi, K.K. & Park, Y.H. 1999. A New Study on Reliability-Based Design Optimization.
ASME Journal of Mechanical Design 121:557–564.
Tu, J. & Jones, D.R. 2003. Variable Screening in Metamodel Design by Cross-Validated Moving
Least Squares Method. Proceedings 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural
Dynamics and Materials Conference, AIAA-2003-1669, Norfolk, VA.
Wang, G. 2003. Adaptive Response Surface Method Using Inherited Latin Hypercube Design
Points. ASME Journal of Mechanical Design 125:1–11.

Wu, Y.-T., Shin, Y., Sues, R. & Cesare, M. 2001. Safety – Factor Based Approach for
Probabilistic – Based Design Optimization. 42nd AIAA/ASME/ASCE/AHS/ASC Structures,
Structural Dynamics and Materials Conference. Seattle, WA.
Yager, R.R., Fedrizzi, M. & Kacprzyk, J. (eds) 1994. Advances in the Dempster – Shafer Theory
of Evidence. John Wiley & Sons, Inc.
Ye, K.Q., Li, W. & Sudjianto, A. 2000. Algorithmic Construction of Optimal Symmetric Latin
Hypercube Designs. Journal of Statistical Planning and Inference 90:145–159.
Youn, B.D., Choi, K.K. & Park, Y.H. 2001. Hybrid Analysis Method for Reliability-Based
Design Optimization. ASME Journal of Mechanical Design 125(2):221–232.
Zadeh, L.A. 1965. Fuzzy Sets. Information and Control 8:338–353.
Zadeh, L.A. 1978. Fuzzy Sets as a Basis for a Theory of Possibility. Fuzzy Sets and Systems
1:3–28.
Zhou, J. & Mourelatos, Z.P. 2007. A Sequential Algorithm for Possibility-Based Design Opti-
mization. In press ASME Journal of Mechanical Design. Also, Proceedings of ASME Design
Engineering Technical Conferences, 2006, Paper# DETC2006-99232.
Chapter 11

A decoupled approach to reliability-based topology optimization
for structural synthesis
Neal M. Patel, John E. Renaud & Donald Tillotson
University of Notre Dame, Notre Dame, IN, USA

Harish Agarwal
General Electric Global Research, Niskayuna, NY, USA

Andrés Tovar
National University of Colombia, Bogota, Colombia

ABSTRACT: Conceptual designs of structures have been generated using topology opti-
mization over the past two decades. However, traditional topology optimization techniques
neglect uncertainties that exist in the real-world. In this chapter, this problem is addressed by
including the notion of reliability into the design process. A reliability-based topology optimiza-
tion (RBTO) framework for structural synthesis is proposed using a decoupled reliability-based
design optimization (RBDO) approach, so that the topology synthesis is separate from the reli-
ability analysis. In the algorithm presented, a maximum allowable displacement failure mode
is considered. Starting from a continuum design space of uniform material distribution and ini-
tial uncertain variable values, a deterministic topology optimization is followed by a reliability
analysis of the resulting structure to determine the most probable point of failure (MPP) for
the current structure. The MPP is determined with respect to the maximum allowable deflec-
tion of the structure for a given applied loading. The non-gradient Hybrid Cellular Automaton
(HCA) method used for topology optimization is combined with the decoupled approach for
RBDO to develop a new continuum-based approach to RBTO. The objective of this chapter is
to present the background behind the methods employed and demonstrate capabilities of the
RBTO framework using examples.

1 Introduction
Concept designs for minimum compliance structures can be synthesized using topology
optimization (Bendsoe and Kikuchi 1988). Traditional techniques neglect variabilities
that occur over the life and use of a structure. For example, the structure of a bridge
can incur vastly different loading depending on the traffic pattern for a given time of
the day. Uncertainties may exist in certain material properties as well. Reliability-based
design optimization (RBDO) is a probabilistic optimization method that has been used
in design problems to account for variation and uncertainty. The objective of RBDO
is to mediate between cost and safety. In deterministic optimization, designs are often
driven to the limits of the design constraints, neglecting tolerances in modeling and
simulation uncertainties. Therefore, the resulting optimized designs can be unreliable
with a high probability of failure when in use. Factor of safety techniques have been

employed as a popular method for accounting for uncertainties and off-design oper-
ation, but these designs are typically over-engineered, leading to higher cost since the
uncertainties are not necessarily quantified. The probabilistic RBDO approach facili-
tates the design to a specific risk and target reliability level accounting for the various
sources of uncertainty. In probabilistic optimization methods, these variational uncer-
tainties are modeled as random variables. In this respect, the deterministic analysis
can be viewed as an extension of the probabilistic analysis, where the deterministic
quantities are a trivial instance of the random variables.
Reliability-based topology optimization (RBTO) extends the notion of reliability to
the area of topology optimization. In this chapter, we consider a discretized continuum
design domain, where the density of each element is used as a design variable. Tradi-
tional topology optimization methods drive the topology of a structure to an optimum
design based on a single constraint on mass. However, nothing can be said about the
reliability of the resulting topology since it does not account for uncertainties and
modes of failure that the structure realistically would require. Because of the large
number of design variables associated with topology optimization problems, the
inclusion of RBDO methods could be computationally prohibitive for large-scale
problems because of the gradient calculations required in the sensitivity analysis.
Therefore, research in this area is concentrated on developing efficient reliability-based
topology optimization techniques. Kharmanda et al. (Kharmanda, Olhoff,
Mohamed, and Lemaire 2004) proposed a reliability-based methodology for topol-
ogy optimization using a heuristic strategy that aims to reduce mass while improving
the reliability level of the structure without increasing its weight. However, in this
approach, the failure mode is purely a linear combination of the random variables
and does not have any physical meaning. Mogami et al. (Mogami, Nishiwaki, Izui,
Yoshimura, and Kogiso 2006) incorporated reliability-based constraints in the topol-
ogy optimization method using discrete frame elements and the traditional double-loop
approach. Maute and Frangopol (Maute and Frangopol 1998) extended the notion
of reliability to Micro-Electro-Mechanical Systems (MEMS) design using topology
optimization.
In the RBTO framework presented here, a decoupled approach is employed such
that the topology optimization is separate from the reliability analysis (Agarwal and
Renaud 2006). The decoupled reliability-based design optimization methodology is
an approximate technique to obtain consistent reliable designs at a lower computa-
tional expense. Starting from an initial design domain of full material and uncertain
parameters, such as loads, a complete topology optimization is followed by a reliabil-
ity analysis of the structure; because the main optimization and the reliability analysis
phases are detached, we refer to this as a decoupled approach.
Although the RBTO framework can be generalized for use with any topology opti-
mization method, in this work the Hybrid Cellular Automaton (HCA) method is
utilized for deterministic continuum structural synthesis of minimum compliance struc-
tures (Tovar, Quevedo, Patel, and Renaud 2006). It is assumed that the structural
deformation is elastic and loading is static. The change in density is evaluated locally
using a CA rule, while the compliance is evaluated using a global structural analy-
sis via the finite element method (FEM). In the presented methodology, RBTO has
the same objective as the deterministic topology optimization: minimize compliance.
Typically, maximum deflection and stress are of concern when designing a structure for
maximum stiffness. Here we consider the mode of failure to be the maximum deflection
of the structure when loads are applied. Therefore, a constraint on the maximum
allowable displacement of the structure is implemented, as well as a similar displacement
constraint formulation for the limit-state function. The utilization of the gradient-free
HCA method in the RBTO framework adds efficiency to the methodology.
In the topology optimization problem, the design variables are the densities of the
material elements that make up the design domain. Characteristics of the problem
that may have some associated uncertainty are identified as uncertain parameters. The
reliability subproblem is applied to the topology generated. A new topology optimiza-
tion is executed using the uncertain parameter values at the most probable point of
failure (MPP), as determined in this subproblem. This process is repeated until conver-
gence. The RBTO framework is applied to two design problems and the final designs
are validated using the Monte Carlo simulation. In these problems, the elastic modu-
lus and applied loading are considered as the uncertain parameters, characterized by
a normal distribution, and a first-order estimate is used to approximate the failure
surface.
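For the validation step, a crude Monte Carlo estimator of the probability of failure can be used. The sketch below (Python with NumPy) is our own illustration: the displacement-type limit state and all numbers are invented stand-ins for the FEM-based limit state of the framework.

import numpy as np

rng = np.random.default_rng(0)

def mc_failure_probability(g, mean, std, n_samples=100_000):
    # Estimate P(g(V) < 0) for independent normal V by direct sampling.
    v = rng.normal(mean, std, size=(n_samples, len(mean)))
    return float(np.mean(g(v) < 0.0))

# Illustrative limit state: allowable tip displacement minus a
# load/stiffness ratio, with uncertain load F ~ N(1000, 100) and
# elastic modulus E ~ N(29e6, 1.45e6).
g = lambda v: 2.5 - v[:, 0] / (1.5e-5 * v[:, 1])
print(mc_failure_probability(g, mean=[1000.0, 29.0e6], std=[100.0, 1.45e6]))
# With these invented numbers the estimate is roughly 0.2.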

2 Reliability-based design optimization


Optimized designs based on a deterministic formulation are usually associated with a
high probability of failure due to inherent uncertainties associated with the imposed
design constraints. In today’s competitive marketplace, it is very important that the
resulting designs are both optimum and at the same time reliable. Optimized designs
without considering the variability of design variables and parameters can be prone
to failure in service. In order to achieve the objective of obtaining reliable optimum
designs, a designer must replace a deterministic optimization with a reliability-based
design optimization (RBDO), where the critical probabilistic constraints are replaced
with reliability constraints, as shown below

min_x f(x, p)
subject to g_D(x, p) ≥ 0
g_R(x, p) ≥ 0    (1)
x_l ≤ x ≤ x_u

where x and p represent the design variables and fixed parameters, respectively, and
gR and gD denote reliability and deterministic constraints. The reliability constraints
are either constraints on probabilities of failure corresponding to each probabilistic
constraint, or a single constraint on the overall system probability of failure. The
reliability constraints (gR ) can be formulated as

g_i^R = P_allow,i − P_i, i = 1, . . . , k    (2)

g^R = P_allow,sys − P_sys    (3)

for k constraints where Pi is the failure probability of the probabilistic constraint giR at
a given design and Pallowi is the allowable probability of failure for that failure mode.

The parameter Psys is the system failure probability at a given design and Pallowsys is
the allowable system probability of failure. These probabilities of failure are usually
estimated by employing standard reliability techniques.
The reliability analysis is a tool used to compute the reliability index or the probabil-
ity of failure corresponding to a given failure mode or for the entire system (Haldar and
Mahadevan 2001). The reliability analysis involves a probability distribution trans-
formation, the search for the MPP, and the evaluation of the cumulative Gaussian
distribution function. The uncertainties are modeled as continuous random variables
V = (V1 , V2 , . . . , Vn )T , with known (or assumed) continuously differentiable distribu-
tion functions, FV (v). The ith random probabilistic constraint is denoted as giR (V, η),
where η refers to deterministic parameters, also called limit state parameters. In the
following, v denotes a realization of the random variables V. Letting giR (V, η) ≤ 0
represent the failure domain and giR (V, η) = 0 be the so-called limit state function,
then the time-invariant probability of failure for the ith probabilistic constraint is
given by


P_i(η) = ∫_{g_i^R(v,η) ≤ 0} f_V(v) dv    (4)

where f_V(v) is the joint probability density of V. It is almost impossible to find an
analytical solution to the above integral. In standard reliability techniques, a probability
distribution transformation T is usually employed, as illustrated in Fig. 11.1. An
arbitrary n-dimensional random vector V = (V_1, V_2, . . . , V_n)^T is mapped into an
independent standard normal vector U = (U_1, U_2, . . . , U_n)^T. The standard normal random
variables are characterized by zero mean and unit variance. The limit state function in
U-space can be obtained as g_i^R(v, η) = g_i^R(T(u), η) = G_i^R(u, η) = 0. The failure domain
in U-space is G_i^R(u, η) ≤ 0.


Figure 11.1 Transformation from the original space to the standard space.

Equation (4) thus transforms to

P_i(η) = ∫_{G_i^R(u,η) ≤ 0} φ_U(u) du    (5)

where φ_U(u) is the standard normal density of u. If the limit state function in U-space
is affine, i.e. if G^R(u, η) = α^T u + β, then an exact result for the probability of failure
is P_f = Φ(−β), where Φ(·) is the cumulative Gaussian distribution function. If the limit
state function is close to being linear, i.e. if G^R(u, η) ≈ α^T u + β with β = −α^T u*,
where u* is the solution of the following optimization problem

min_u ||u||
subject to G^R(u, η) = 0    (6)

then the first-order estimate of the probability of failure is P_f = Φ(−β_p), where α
represents the vector of direction cosines at the solution point. The solution u* of
the above optimization problem, the so-called design point, β-point or MPP of
failure, defines the reliability index β_p = ||u*||. This method of estimating the probability
of failure is known as the First-Order Reliability Method (FORM) (Haldar
and Mahadevan 2001). In the second-order reliability method (SORM), the limit state
function is approximated as a quadratic surface (Breitung 1984). However, first-order
approximations, P_f(η) ≈ Φ(−β_p), are usually sufficient for most practical cases and,
therefore, are used in this chapter. Using the FORM estimate, the reliability constraints
in Eq. (2) can be written in terms of reliability indices as follows
in Eq. (2) can be written in terms of reliability indices as follows

g_i^rc = β_i − β_reqd,i    (7)

where β_i is the calculated reliability index and β_reqd,i = −Φ^{−1}(P_allow,i) is the desired
reliability index for the ith probabilistic constraint. This is referred to as the reliability
index approach (RIA).
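As an illustration of the RIA computation, the sketch below (Python with SciPy; the two-variable linear limit state and all data are our own example) maps independent normal variables to U-space through u = (v − µ)/σ, solves the MPP problem of Eq. (6) with a generic constrained optimizer, and applies the FORM estimate:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

mu = np.array([500.0, 400.0])
sigma = np.array([50.0, 60.0])
g = lambda v: v[0] - v[1]                     # failure when g(v) < 0
G = lambda u: g(mu + sigma * np.asarray(u))   # limit state in U-space

# MPP search of Eq. (6): the point of the limit state surface G = 0
# closest to the origin of U-space.
res = minimize(lambda u: float(np.dot(u, u)), x0=[0.1, 0.1], method="SLSQP",
               constraints=[{"type": "eq", "fun": G}])
beta = float(np.linalg.norm(res.x))
print(beta, norm.cdf(-beta))                  # FORM estimate Phi(-beta)

For this linear limit state the exact index is 100/√(50² + 60²) ≈ 1.28, so P_f ≈ Φ(−1.28) ≈ 0.10.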
RIA can be solved as an optimization problem to evaluate the constraint in Eq. (2).
The reliability index corresponding to a failure mode requires the solution of the
optimization problem in (6). Various algorithms have been reported in the literature
(P. Lui 1991) to solve this problem, which typically requires many system analysis
evaluations. Moreover, RIA may fail to provide a solution to the FORM problem,
especially when the limit state surface is far away from the origin in U-space or when
the case G^R(u, η) = 0 never occurs at a particular design variable setting. Thus, the
most challenging task is the search for the MPP.
To overcome these difficulties in RIA, Choi et al. (Choi, Youn, and Yang 2001)
provide an improved formulation to solve the RBDO problem. In this method, known
as the performance measure approach (PMA), the reliability constraints are stated by
an inverse formulation

g_i^rc = G_i^R,*(u_i*, η), i = 1, . . . , k    (8)

where G_i^{R,*} is the solution to an inverse reliability analysis (IRA). This optimization
problem is stated as

\min_u G_i^R(u, \eta)  subject to  \|u\| = \beta_{reqd_i}    (9)

where u_i^* is the optimum (the corresponding MPP in IRA) of the ith reliability constraint.
Solving RBDO by the PMA formulation is usually more efficient and robust than the
RIA formulation, where the reliability is evaluated directly. PMA is, therefore, used
in the proposed methodology. The efficiency lies in the fact that the search for the
MPP of an inverse reliability problem is easier than the search for the MPP
corresponding to an actual reliability level.
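As an illustration, the following minimal sketch solves the PMA subproblem (9) for a hypothetical affine limit state using a general-purpose SQP solver; the function g_affine, its coefficients, and the target index beta_reqd = 3 are assumptions made purely for this example, not quantities from the chapter.

# A minimal sketch of the PMA subproblem (9): minimize G(u) on the sphere
# ||u|| = beta_reqd. The limit-state function g_affine is a hypothetical
# placeholder; in practice each evaluation may require a structural analysis.
import numpy as np
from scipy.optimize import minimize

beta_reqd = 3.0                        # assumed target reliability index

def g_affine(u):
    # hypothetical affine limit state in standard normal space
    return 2.0 + 0.8 * u[0] - 0.6 * u[1]

res = minimize(
    g_affine,
    x0=np.array([0.1, 0.1]),           # start away from u = 0, where ||u|| is non-smooth
    method="SLSQP",
    constraints=[{"type": "eq",
                  "fun": lambda u: np.linalg.norm(u) - beta_reqd}],
)
print("inverse-reliability MPP:", res.x)   # approximately (-2.4, 1.8) for this g
print("G at the MPP:", res.fun)            # negative value => target beta is not met

For this affine case the solution is u* = −β_reqd · α, so the sketch also serves as a quick check of the direction-cosine interpretation given above.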

2.1 Reliability in structural optimization


Reliability in structural design has developed considerably since the 1970’s (Moses
1973). Haldar and Mahadevan (Haldar and Mahadevan 2001), Haftka et al. (Haftka,
Gürdal, and Kamat 1990), among others (Frangopol 1998), present a comprehen-
sive background in structural reliability. Murotsu and Shao (Murotsu and Shao 1989)
applied the notion of reliability to shape optimization of truss structures, where nodal
coordinates are used as shape design variables along with sizing design variables, such as
the cross-sectional areas of the truss members. Papadrakakis and Lagaros utilized neu-
ral networks and Monte Carlo simulation to perform reliability-based structural
optimization of large-scale structural systems. Royset et al. (Royset, Kiureghian, and
Polak 2001) developed a decoupled technique for reliability-based structural optimiza-
tion where the structural optimization and reliability analysis were separated. In that
methodology, a semi-definite optimization algorithm was utilized for the structural
optimization. Frangopol and Maute (Frangopol and Maute 2003) reviewed the state
of the art in reliability-based design in both civil and aerospace structures. The inclu-
sion of reliability was then extended to the design of aeroelastic structures by Allen
and Maute (Allen and Maute 2004). In the current work, reliability is explored in the
area of topology optimization.

3 Topology optimization
The roots of topology optimization date back to the late 1980’s. This computational
technique for the optimal distribution of material within continuum structures was first
introduced by Bendsøe and Kikuchi (Bendsoe and Kikuchi 1988). Topology optimiza-
tion can be viewed as a method for developing an initial, or concept, design. The
optimization process systematically eliminates and redistributes material throughout
the domain to minimize or maximize a specified objective. Early work in topology
optimization generally dealt with simple problems that used the assumptions of elastic
material properties, linear deformations, and static loading conditions. A compre-
hensive review of topology optimization can be found in literature by Bendsøe and
Sigmund (Bendsøe and Sigmund 1989), Rozvany (Rozvany 1997), and Eschenauer
and Olhoff (Eschenauer and Olhoff 2001).

The objectives and constraints considered are typically global structural responses, such as
mean compliance, von Mises stresses, or eigenfrequencies, or geometrical parameters, such
as volume (or mass) or perimeter. The formulation can also be extended to multiple loading conditions.
Traditionally, using the static-elastic assumption, the objective of a structural optimiza-
tion problem is to achieve minimum compliance or strain energy with a constraint on
the mass, or volume V. This can be expressed formally as

min f (ρ)
ρi
N
subject to i=1 ρi v i ≤ V (10)
ρmin ≤ ρi ≤ ρmax

where ρ are the elemental densities. The compliance of a structure due to loading can
be expressed as

c(x) = F^T d = d^T K(x)\,d    (11)

where K is the global stiffness matrix, d is the vector of global displacements, and F is
the vector of external global forces. The vector x is the set of design variables related
to the material state of the elements in the design domain, such as ρ.

3.1 Material parametrization


Ultimately, the goal of topology optimization is to determine a material distribu-
tion within the design domain to achieve a specified objective. One can utilize
discrete structural elements, known as the ground structure approach, to describe
a structure. In this work, continuum elements are used. For continuum structures,
the homogenization and density approaches are the two most popular material
parameterizations.

3.1.1 The homogenization approach


The initial work in topology optimization of continuum structures was based on com-
posite material models to describe the material properties in all dimensions. This
technique, presented in the seminal work of Bendsøe and Kikuchi (Bendsoe and Kikuchi
1988), is referred to as the homogenization approach, which uses composite materi-
als as the basis for describing varying material properties where each element is a
microstructure. The homogenization can be viewed as an interpolation model for void
and full material.
In the homogenization approach, the design domain consists of square cells. Each cell
has a rectangular hole at the centroid defined by lengths a and b, as shown in Fig. 11.2.
The rectangle is oriented at an angle θ. Ultimately, the density of an element is a
function of the variables ai , bi , and θi . Thus, each element has three variables associated
with it. The relationship between the size of the cavity and the material properties is
obtained using the homogenization method. This method is typically employed in two
stages (Duysinx 1997). In the first stage, the microstructure orientation θ is varied,

Figure 11.2 A unit cell of a microstructure parameterized using the homogenization method.

Figure 11.3 An illustration of the density approach to material parametrization in topology
optimization.

based on the principal strains. In the second stage, the microstructure parameters a
and b are updated. This approach to material parametrization is typically utilized for
linear-elastic material assumptions (Bendsoe and Kikuchi 1988), but has been applied
to problems with both material and geometric nonlinearities as well (Yuge and Kikuchi
1995; Yuge, Iwai, and Kikuchi 1999).

3.1.2 The density approach
A second technique for material parametrization deals with the more direct approach of
associating just one design variable with each individual material element, as illustrated
in Fig. 11.3. This is called the density approach. The material model is defined to
allow the material to assume intermediate property values by utilizing an interpolation
function. The design variables are the relative densities (xi ) of the elements where

0 represents a void and 1 is full density. The density of a material element can be
expressed as

ρi (xi ) = xi ρ0 (0 < xi ≤ 1) (12)

where ρ0 is the density of the base material. In utilizing the finite element method
(FEM), the design variable is mapped to the global stiffness matrix by relating the
relative density of an element to its elastic modulus. The solid isotropic material with
penalization (SIMP) model (Bendsoe 1989; Zhou and Rozvany 1991) is a commonly
utilized interpolation scheme that heuristically relates the relative density to the elastic
modulus of each element using the following expression
E_i(x_i) = x_i^p E_0    (13)

where p is the penalization parameter (p \ge 1) and E_0 is the elastic modulus of the
base isotropic material. Therefore, we can view the elements of differing relative den-
sities in the design domain as unique isotropic material elements. The power p is
used to penalize intermediate densities and drive the elemental densities within the
design domain to either full density (x = 1) or no density (x = 0). Most optimizers
require this penalization to generate 0–1 topologies. Although the density approach is
typically used with gradient-based optimization methods because the interpolation is
continuous, this material parametrization can also be utilized with non-gradient methods
so that material is distributed in a continuous manner from one iteration to the next. This allows the
topology to evolve in a smooth, efficient manner. In this RBTO framework, a linear
interpolation model (p = 1) is utilized to relate the design variable xi to the elastic
modulus of a material element with an intermediate density, as expressed by

Ei (xi ) = xi E0 (14)
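A small numerical sketch of the interpolations (13) and (14) follows; the base modulus E0 and the sample densities are assumed values chosen only to show how the penalization exponent p affects intermediate densities.

# A small sketch of the SIMP interpolation, Eq. (13): E_i = x_i^p * E0.
# Setting p = 1 recovers the linear model of Eq. (14) used in this RBTO
# framework. E0 and the sample densities below are assumed values.
import numpy as np

E0 = 200e9                             # base elastic modulus, Pa (assumed)
x = np.array([0.2, 0.5, 1.0])          # relative densities of three elements

def simp_modulus(x, p):
    """Map relative density to elastic modulus; p >= 1 penalizes grays."""
    return x**p * E0

print(simp_modulus(x, p=3.0))          # penalized: intermediate densities are weak
print(simp_modulus(x, p=1.0))          # linear interpolation, Eq. (14)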

3.2 Optimization techniques


Various methodologies have been developed for topology optimization over the past
two decades. Topology optimization algorithms fall into the categories of mathe-
matical programming (MP), optimality criteria (OC), and evolutionary programming
methods (Bendsøe and Sigmund 1989). MP techniques apply general-purpose,
gradient-based nonlinear optimization algorithms. OC methods are derived from
the Karush-Kuhn-Tucker (KKT) optimality conditions. Evolutionary methods are
heuristic, or intuition-based, approaches that use mechanisms inspired by biologi-
cal evolution, such as reproduction, mutation, and survival of the fittest, to find an
optimal solution to a problem. An important distinction between classes of methods
is that MP and OC methods utilize continuous design variables whereas evolutionary
methods use discrete representations as design variables.
Sequential Convex Programming (SCP) is an example of a MP approach used
for solving topology design problems. The Method of Moving Asymptotes (MMA),
developed by Svanberg (Svanberg 1987), is the most popular SCP algorithm used
for structural optimization because of its efficiency. In this method, a strictly con-
vex subproblem is approximated at each iteration based on sensitivity information

at the current design and then solved. The roots of OC methods in continuum-based
topology optimization date back to the pioneering work of Bendsøe and Kikuchi. OC
methods are primarily suited for problems containing a small number of constraints as
compared to the number of design variables. In general, the OC methods are more
computationally efficient than conventional MP methods (Rozvany, Bendsøe, and
Kirsch 1995). Since the material volume constraint is the only active constraint here, an OC
method can be used to provide more rapid convergence compared to other optimization
schemes.
The aforementioned algorithms require gradient information in obtaining the final
solution. In contrast, numerous topology optimization methodologies have been devel-
oped using evolutionary strategies that do not require gradients. An often used but
inefficient approach is to utilize genetic algorithms (GAs) or semi-stochastic tech-
niques. These methods may be more likely to find global solutions, but they require
thousands of function calls. Another non-gradient based methodology developed by
Xie and Stevens (Xie and Stevens 1997) is called Evolutionary Structural Optimization
(ESO). It is based on the concept of progressively removing inefficient material from
a structure so that it evolves into an optimal design. Another approach that requires
no gradient information and utilizes cellular automata (CA) is the Hybrid Cellular
Automaton (HCA) method. Since there is no randomness in the HCA formulation,
this method is considered to be a MP method.

3.2.1 Topology synthesis using hybrid cellular automata


A cellular automaton (CA) is a discrete model studied in computability theory and
mathematics (Wolfram 2002). It consists of a regular grid of cells, or lattice, where
each cell is characterized by a finite number of states. The state of each cell at a given
time, or generation, is a function of the states of a finite number of neighboring cells,
called the neighborhood. Every cell has the same set of rules, which are applied based
on information in its neighborhood. These rules are applied to the entire CA lattice
each generation.
The notion of cellular automata was initially conceived by John von Neumann in the
late 1940’s. According to Burks (Burks 1970), the first CA proposed by von Neumann
was a two-dimensional square lattice comprising several thousand cells. Each of
these cells had up to 29 possible states. The CA rule required the state of each cell
plus its four nearest neighbors, located directly north, south, east, and west. This CA
model was so complex that it has only been partially implemented on a computer.
The von Neumann rule has the so-called property of universal computation, meaning
that there exists an initial configuration of the CA which leads to the solution of any
computer algorithm. Accordingly, any universal computer circuit (i.e., logical gate) can
be simulated by the rule of the automaton. This illustrates that complex and unexpected
behavior can emerge from a CA rule.
Cellular automata rules are applied over a number of discrete time steps on each
CA element based on information collected in its neighborhood. The rules operate
on the set of the states of neighboring cells. The rules are applied iteratively for as
many time steps as required. Therefore, the global behavior of the CAs is governed
by the set of local rules. These rules operate according to local information collected
in the neighborhood of each cellular automaton. The final state of a CA is defined by

the state of itself and states of the CAs within the neighborhood. For example, the
information collected from a neighborhood can be expressed as

\bar{S}_i = \frac{1}{\hat{N} + 1} \sum_{j=0}^{\hat{N}} S_j    (15)

where S0 is the field state of the ith CA and N̂ is the number of elements in its neighbor-
hood. This can be viewed as a filtering technique that prevents numerical instabilities
of checkerboarding and mesh dependency. In practice, the size of the neighborhood is
often limited to the adjacent cells but can also be extended. Figure 11.4 depicts some
common two-dimensional neighborhood layouts. In the cellular automata paradigm,
the same neighborhood is applied for all CA in the lattice. In the context of structural
optimization, no state information exists outside of the design domain. Therefore, the
neighborhood is modified for the boundary elements to only include neighbors within
the design region.
One of the first applications of cellular automata to structural design was presented
by Inou et al. (Inou, Shimotai, and Uesugi 1994; Inou, Uesugi, Iwasaki, and Ujihashi
1998). CAs have been applied for both discrete and continuous structures. Gürdal
and Tatting (Gürdal and Tatting 2000) and Slotta et al. (Slotta, Tatting, Watson,
Gürdal, and Missoum 2002) applied cellular automata to truss structures. In that
application, a rectangular design domain was composed of an array of truss elements.
Each cell was composed of a node and the eight trusses from neighboring nodes in
a forty-five degree arrangement. Kita and Toyoda (Kita and Toyoda 2000) presented
a methodology that is similar to the HCA method developed by Tovar for structural
synthesis in that it utilizes the finite element method for structural analysis. The local
update rule was based on the minimization of both the weight of the structure and the
deviation between the yield stress and the von Mises equivalent stress for each cell.
Furthermore, a two-dimensional isotropic material was considered where the thickness
of each CA was the design variable. Hajela and Kim (Hajela and Kim 2001) used a
genetic algorithm (GA) based on energy minimization to determine an appropriate CA
rule for a two-dimensional continuum.
The Hybrid Cellular Automaton (HCA) method is a computational technique that
has demonstrated the ability to act as an optimization tool for the synthesis of optimal
topologies. This approach is inspired by the biological process of bone remodeling and
was first presented by Tovar (Tovar 2004). As done in other topology optimization
methods, the design domain is discretized into material elements. To use the finite
element method for structural analysis, the design domain is represented using a finite

Figure 11.4 Typical 2-D neighborhoods for CAs: (a) empty (N̂ = 0); (b) von Neumann (N̂ = 4);
(c) Moore (N̂ = 8); (d) extended (N̂ = 24). N̂ is the number of neighboring CAs.

element model that is discretized using continuum finite elements (FE). The states of the
material elements in the design domain are represented using a lattice of CAs, where a
one-to-one correspondence between CA and FE generally exists, although this is not
a requirement. However, uniformity in the CA discretization is required. A set of local
rules is used to determine material distribution. These rules are applied to the local
information collected in the neighborhood of each CA. At a discrete position i and
time/iteration k, a CA is defined by a set of states that are operated on by a set of rules
belonging to a given neighborhood of the CA.
The state of each CA αi , is defined by the associated design variables xi (e.g., density,
thickness) and field variables Si (e.g., compliance). The field variables are computed
by a finite element analysis; hence this is a hybrid approach since each CA is provided
global information. The complete state of each cell is expressed by

\alpha_i^{(k)} = \begin{Bmatrix} S_i^{(k)} \\ x_i^{(k)} \end{Bmatrix}    (16)

where the superscript (k) denotes that the state applies to a specific iteration. The HCA method has been
shown to be an efficient non-gradient based technique for the design of stiff, or min-
imum compliance, structures. For the traditional linear-static problem, the algorithm
synthesizes or evolves a structure that is equivalent to solving the following problem
\min_x \sum_{i=1}^{N} |S_i(x_i) - S_i^*|
subject to Kd = F    (17)
0 < x \le 1

where the field variable state S being operated on is compliance. The idea is to drive the
state of each CA to a specified target. In the HCA method for minimum compliance
design, the density state of each cellular automaton is modified so that the elements in
the design domain have uniform compliance. The rules used in this chapter that govern
material distribution are control-based. In the case of the design for maximum stiffness,
a monotonically decreasing relationship occurs between mass and compliance (Tovar
2004). An inversely proportional relationship exists between elastic modulus and com-
pliance, i.e., when a load is applied to an elastic structure, as its modulus decreases, the
compliance increases. Therefore, in the design of stiff structures, mass must be added
to reduce the compliance of an element; to increase compliance, mass is removed. The
setpoint directly controls the total mass distributed within the design domain, as there
is a one-to-one correspondence between the compliance of the structure under a given
load and the total mass of the structure.
Adapting the principles of fully stressed design (Haftka, Gürdal, and Kamat 1990),
HCA is utilized to allocate material based on the compliance of each element. Although
numerous distribution rules can be used (Tovar, Patel, Kaushik, and Renaud 2007),
a simple proportional error material update is used here. The change in relative density
of element i at the kth iteration can be expressed as


\Delta x_i^{(k)} = K_P \left( \bar{S}_i^{(k)} - S_i^{*(k)} \right)    (18)

where K_P is a scaling parameter and \bar{S}_i^{(k)} is the effective field state of a CA, which
reflects the average field state of itself and its neighborhood, as expressed in Eq. (15).
When designing for minimum compliance, Si = ci . Note that the setpoint is not neces-
sarily static and can change from one iteration to the next as explained in the following
section.

3.2.1.1 MULTIPLE LOADING CONDITIONS
When a design problem is posed such that loading can exist in multiple, independent
scenarios, an analysis must be performed for each loading condition, or load case. In
traditional topology optimization of static-elastic problems, a weighted sum of the
compliance or strain energy from each load case for each element is often used to
represent the final value (Bendsøe and Sigmund 1989). Thus, the final compliance
state for the ith element in the design domain is represented by a weighted sum of the
compliance for each load case


c_i = \sum_{L=1}^{N_L} \alpha_L c_{iL}    (19)

where ciL is the compliance of the element for load case L and NL is the total number
of load cases. A smaller load can have more influence on the final structure by
giving more weight to that load case through the weight parameter α_L.

3.2.1.2 MASS CONTROL
A mass control scheme is utilized to generate topologies of a specified mass. To accom-
modate mass control, the appropriate setpoint must be determined. It has been shown
by Tovar (Tovar 2004) that for structural optimization, there exists a direct relation-
ship between the compliance in a structure when loaded and the final mass of the
structure. The higher the setpoint, the lower the mass and vice versa. The error in the
current field state of a CA and setpoint directly affects the material distribution within
the design domain. Therefore, to design for a specific mass, the corresponding setpoint
must be determined.
A scheme for finding this target can be accomplished by simply iterating on the HCA
update rules and updating the setpoint, as shown in Fig. 11.5, until the correct mass
results after applying the design rule expressed in Eq. (18). The setpoint for the k + 1
HCA iteration is found by iterating on the update
S^{*(j+1)} = S^{*(j)} \left( \frac{M_f^{(k+1)}}{M_f^*} \right)    (20)

where j is an iterator for the sub-loop on the HCA rules in Eq. (18) and Mf∗ is the mass
fraction target. The mass fraction of a design domain is defined as
M_f = \frac{\sum_{i=1}^{N} x_i}{N}    (21)

Figure 11.5 Illustration of HCA material update for mass control using a setpoint update strategy.

where N is the number of elements in the design domain. When the mass of the structure
at the kth iteration satisfies the mass target, the material update control
loop is terminated and the structural analysis is performed on the resulting material
distribution for the k + 1 iteration, unless the topology has converged based on the
stopping criterion. Thus, the mass constraint is enforced at each HCA iteration.
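A minimal sketch of this mass-controlled material update, combining Eqs. (18), (20) and (21), is given below; the effective field states, the mass fraction target, and the tolerance are assumed values, and in the actual method the field states would come from a finite element analysis plus the filtering of Eq. (15).

# A minimal sketch of the HCA material update with mass control. The field
# states S_bar are a fixed hypothetical array so that the setpoint sub-loop
# of Eqs. (18), (20) and (21) can be shown in isolation.
import numpy as np

KP = 0.2                # design rule scale parameter (value used in this chapter)
Mf_target = 0.6         # assumed mass fraction target
eps = 1e-4              # assumed convergence tolerance on the mass fraction

x = np.full(100, 1.0)                      # current densities x^(k), fully dense start
S_bar = np.linspace(0.5, 2.0, x.size)      # placeholder effective field states
S_star = S_bar.mean()                      # initial setpoint guess

for j in range(200):                       # sub-loop on the HCA rules, iterator j
    x_new = np.clip(x + KP * (S_bar - S_star), 1e-3, 1.0)   # Eq. (18)
    Mf = x_new.sum() / x_new.size                           # Eq. (21)
    if abs(Mf - Mf_target) <= eps:
        break
    S_star *= Mf / Mf_target               # Eq. (20): too heavy -> raise the setpoint

x = x_new                                  # densities passed to the next HCA iteration
print(f"mass fraction {x.mean():.4f} at setpoint {S_star:.4f}")

Note that the bare multiplicative rule (20) converges only when the setpoint scale is compatible with the field-state range; for this assumed data the iteration settles within a few passes.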

3.2.1.3 DISPLACEMENT CONSTRAINT
Using the ability to control mass, we can include constraints that are related to mass.
For use in the RBTO framework, the HCA method must incorporate a constraint that
can serve as a mode of failure, i.e., a limit state function. Here, we will con-
sider the design of structures that are reliable with respect to the maximum allowable
displacement. Kočvara (Kočvara 1997) developed a linear constraint on displacement
using a minimum compliance formulation for the optimization of a truss structure.
A bi-level approach was proposed, where the primal goal was to satisfy a displacement
constraint and the secondary goal was to minimize compliance. A displacement con-
straint is formulated for continuum-based topology optimization problems by Deqing
et al. (Deqing, Yunkang, Zhengxing, and Huanchun 2000) using a dual programming
approach.
The maximum displacement of a structure is a global behavior. The total mass of the
structure, a global property, is used to control the displacement of the structure. First,
the relationship between displacement and mass must be quantified. Since HCA oper-
ates on local information, a single element is studied. Assuming linearly elastic
material behavior, static loading, and constant (time-independent) boundary conditions,
we can solve for the nodal displacements using the finite element method. The term linear
refers to the linear relationship between the stresses and strains. To find the solution

for the displacements, a system must be in equilibrium where the potential energy is
at an extremum, as stated by the principle of stationary potential energy. The total
potential energy (Π) for a structure that has been discretized into finite elements can
be expressed in terms of the internal or strain energy (U) and the work done by the
external forces (W)

\Pi = U - W = \frac{1}{2}\, d^T K d - F^T d    (22)

where K is the global stiffness matrix, d is the global vector of nodal displacements,
and F is the vector of external global forces applied at each node. The extremum of the
total potential energy of the deformable body is expressed by

\frac{\partial \Pi}{\partial d} = 0    (23)

From this condition, the resulting equilibrium equation to be solved is,

Kd − F = 0 or Kd = F (24)

The static loading assumption requires that the relationship among the global stiffness
matrix, K, the force vector, F, and the displacements, d, be independent of time. Con-
structing K using the material parametrization described by Eq. (14), the displacements
can be solved for in Eq. (24). The relationship for the maximum displacement of all
nodal degrees of freedom for a two-dimensional four-node element, as a function of the
relative density (or elastic modulus), is shown in Fig. 11.6. For a linear-elastic analysis,


Figure 11.6 The uniaxial relationship between the displacement (d) of an element and its relative
density (x), obtained by plotting the compression of a single element with unit height
and width (E = F = 1).

the relationship has the form

\delta_{max}(x) = C \, \frac{1}{x}    (25)

where C is a constant. This inversely proportional relationship is the same as that for the
“compliance-elastic modulus’’ relationship mentioned previously.
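The inverse relationship (25) can be checked numerically on a unit element; the sketch below uses a one-dimensional axial element with E0 = F = L = A = 1 as a stand-in for the four-node element of Fig. 11.6.

# A numerical check of Eq. (25): for a linear-elastic element the displacement
# scales as delta(x) = C / x. A unit axial bar (E0 = F = L = A = 1, as assumed
# in Fig. 11.6) stands in for the two-dimensional four-node element.
import numpy as np

E0, F, L, A = 1.0, 1.0, 1.0, 1.0

def delta(x):
    # element stiffness k = E(x) A / L with the linear model E(x) = x * E0
    return F * L / (x * E0 * A)

x = np.array([0.1, 0.2, 0.5, 1.0])
print(delta(x))        # 10, 5, 2, 1: displacement grows as density drops
print(delta(x) * x)    # constant C = F L / (E0 A) = 1, confirming Eq. (25)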
Characterizing this local relationship, it is observed that displacement can be con-
trolled through mass. However, since the maximum displacement of a structure is a
global behavior, the displacement constraint developed in this work is applied globally
by penalizing, or reducing, the mass constraint, which is described above, until the
displacement equality constraint is satisfied. The HCA method is used to drive the
topology to the minimum mass that satisfies the displacement constraint.

4 Decoupled RBTO formulation


In the reliability-based topology optimization (RBTO) method developed in this work,
the structural synthesis is performed separately from the gradient-based reliability anal-
ysis. The HCA topology optimization occurs in sequence with the reliability analysis.
Following a topology optimization using the HCA method, a reliability analysis is
performed to find the most probable point of failure (MPP), u∗ , using the performance
measure approach (PMA) described in Eq. (9). In the optimization subproblem, the
design variables of the topology optimization, i.e., the relative densities of the elements, are
fixed and the uncertain parameters, or random variables, become the design variables. The
MPP is returned and used as fixed input parameters in the subsequent topology optimization.
Convergence is achieved when a target reliability index is reached.
The general form of the RBTO, a minimum compliance problem, can be expressed as

\min_x c(x)
subject to  P_f(V) = P(G(x, v) < 0) \le P_t
            Kd = F    (26)
            0 \le x_i \le 1,  i = 1, \ldots, n  and  j = 1, \ldots, m

where G = −Ψ + Ψ_{max}

for the i density elements and j uncertain variables, and where P_t is a tolerance on the probability
of failure. The limit state function G states that if the performance parameter Ψ is larger
than the limit value Ψ_{max} then the system fails. To find the reliability index, the limit
state function is approximated at each iteration. In this chapter, PMA is employed with
a first-order approximation (FORM) of the limit state function. The optimization
algorithm is described in Fig. 11.7, where a deterministic HCA update is executed
based on the uncertain variables calculated from a reliability analysis performed for
each iteration. This process continues until convergence.

4.1 RBTO methodology
In this methodology, a single limit state constraint on the maximum allowable dis-
placement of the structure is considered. For the example problems considered, the


Figure 11.7 A flowchart of the decoupled reliability-based topology optimization algorithm. For
the example problems considered, the HCA method is used for the topology
optimization to synthesize the structure and then a reliability analysis is performed
using a maximum displacement constraint (Ψ ≡ δ_{max}).

performance parameter Ψ and the limit value Ψ_{max} in the formulation (26) are
the displacements δ_{max} and δ*_{max}, respectively. Two sets of random variables are
considered in this work: the elastic modulus of the material E_0 and the loads F_i
on the structure. These uncertainties account for material/manufacturing uncertain-
ties and operational uncertainties. Other uncertain parameters may be included.
The specific RBTO formulation solved for the design problems presented can be
expressed as

\min_x c(x)
subject to  g^D: Kd = F    (27)
            g^R: \delta_{max} - \delta^*_{max} \le 0
            0 < x \le 1

Starting with a fully dense material in the design domain, i.e., all density variables x are
at their upper bound and initial values of the uncertain variables are set to their mean
values, a topology optimization is performed subject to a maximum allowable displace-
ment, followed by a reliability analysis. In this analysis, the optimization subproblem in
(9) is solved with the density design parameters held fixed and the uncertain variables
treated as the unknowns, to determine the MPP with respect to the constraint imposed. In this implementation,
a sequential quadratic programming (SQP) algorithm is utilized to solve for the values
of random variables v at the MPP. For the optimization subproblem in the reliability
analysis, a warm-start approach is included as an improvement over previous inves-
tigations (Patel, Agarwal, Tovar, and Renaud 2005). Therefore, the sub-optimization
starts from the MPP of the previous iteration. Fixing the resulting set of uncertain
variables, a new topology optimization is executed and the process is repeated until
convergence. The algorithm is described below. Let t denote the iteration counter for
the global RBTO loop and let k, used in the previous sections, denote a local counter for the
topology optimization.

Step 1. Define the design domain, deterministic material properties, constraint g^R,
        and initial design, x^{(0)} (full density). Define the random material properties
        and random loading conditions.
Step 2. Initialize the design domain densities and set the random variable values V as
        the fixed design parameters.
Step 3. Perform topology optimization using the HCA method.
Step 4. Perform the reliability analysis to obtain the random variable values at the MPP.
Step 5. Check for convergence, |g^{*(t+1)} - g^{*(t)}| / |g^{*(t)}| \le \varepsilon_1 and
        |(u^{(t+1)} - u^{(t)})^T (u^{(t+1)} - u^{(t)})| \le \varepsilon_2, for the tolerance
        parameters ε_1 and ε_2. If the convergence criteria are satisfied, the final
        topology is obtained; otherwise, go to Step 2.
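A schematic sketch of Steps 1-5 follows; both subproblem solvers are placeholders (the HCA synthesis of Section 3 and the SQP solution of the PMA subproblem (9), warm-started from the previous MPP), and the random-variable statistics are assumed values in the spirit of the examples below.

# A schematic sketch of the decoupled RBTO loop, Steps 1-5. The two
# subproblem solvers are placeholders for the HCA synthesis and the PMA
# subproblem (9); the statistics of (E, F) are assumed values.
import numpy as np

MEANS = np.array([200e9, -100.0])        # assumed means of (E, F)
STDS = 0.05 * np.abs(MEANS)              # 5% of the mean, as in Section 5

def to_physical(u):
    """Inverse of the transformation in Eq. (28): standard space -> v."""
    return MEANS + STDS * u

def topology_optimization(x, v):
    raise NotImplementedError            # HCA synthesis with v held fixed

def reliability_analysis(x, u_start):
    raise NotImplementedError            # returns (MPP u*, limit-state value g*)

def decoupled_rbto(x0, u0, eps1=1e-3, eps2=1e-6, max_cycles=20):
    x, u, g_prev = x0, u0, None
    for t in range(max_cycles):
        v = to_physical(u)                      # fix random variables (Step 2)
        x = topology_optimization(x, v)         # Step 3
        u_new, g = reliability_analysis(x, u)   # Step 4, warm start at u
        # Step 5: relative change of g* and movement of the MPP
        if (g_prev is not None
                and abs((g - g_prev) / g_prev) <= eps1
                and float(np.dot(u_new - u, u_new - u)) <= eps2):
            return x, u_new
        u, g_prev = u_new, g
    return x, u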

5 Numerical examples
The RBTO framework presented is applied to two example problems. The first prob-
lem is a Michell-type structure that considers a single concentrated load. The second
example considers the loading conditions of a three-bar truss. A normal distribution
is assumed for each random variable v_j in the reliability analysis; the transformation to
the standard space is expressed by

u_j = \frac{v_j - m_{v_j}}{\sigma_{v_j}}    (28)

where m_{v_j} and σ_{v_j} denote the mean value and the standard deviation of the
jth random variable, respectively. The parameter uj represents the random variable
transformed into the standard space.
For these examples, the Poisson’s ratio is ν = 0.33. The mean values of the random
parameters are used to generate the structural topology for the first RBTO cycle. Fur-
thermore, for these problems the range of uncertainty about the mean is assumed to be
5% of the mean. This may be an unrealistically large uncertainty for the elastic modu-
lus of the material, but the large uncertainty is used to demonstrate the methodology.
The design rule scale parameter in Eq. (18) is KP = 0.2 for the topology synthesis.

5.1 Michell-type structure problem


This example considers a two-dimensional 2 m × 1 m beam structure that is fully con-
strained at one end and loaded at the free end. The elastic modulus of the material
used is E = 200 GPa and a concentrated load of F = 100 N is applied at the midpoint
of the lower boundary of the beam as shown in Fig. 11.8. These values represent the
mean values that are varied in the reliability analysis component of the methodology to
find the MPP. The design domain is discretized into 5000 elements. For this example,
a constraint on maximum allowable displacement δmax = 1 × 10−5 m is imposed. The
resulting topologies for each algorithm cycle are shown in Fig. 11.9. Around 30 FEA
analyses are required for the topology synthesis and 23 analyses are required for the reli-
ability analysis during each RBTO cycle. A summary of the HCA performance with the
deflection constraint and the subsequent reliability analyses is tabulated in Table 11.1.
As expected, the elastic modulus E is driven to a lower value and the load F is
increased to a higher value. In this example, the resulting change in structural char-
acteristics is quite significant. A mass increase of 33% is required to obtain a reliable
design as compared to the structure synthesized using the mean values of the random

Figure 11.8 The 2 m × 1 m design domain with a single load.

(a) Cycle 1: M_f = 0.359; (b) Cycle 2: M_f = 0.478

Figure 11.9 The resulting topologies for the Michell-type structure after each RBTO cycle for
β = 3.

Table 11.1 Summary of FEA evaluations and uncertain parameter values for the
Michell-type structure.

RBTO cycle   HCA iters   Reliability FEA evals   E (GPa)   F (kN)
1            33          23                      178.1     −110.2
2            29          23                      178.1     −110.2

β                 Mass fraction M_f
β = 0 (50%)†      0.359
β = 0.5 (69.15%)  0.388
β = 1 (84.13%)    0.392
β = 2 (97.72%)    0.431
β = 3 (99.87%)    0.478
† Deterministic (no uncertainty)

Figure 11.10 Comparison of Michell-type structures for varying prescribed levels of reliability with
respect to a constraint on the maximum allowable displacement δmax = 1 × 10−5 m.
The baseline run (β = 0) is 50% reliable.

Table 11.2 Validation of the Michell topologies using a 10,000-sample Monte Carlo simulation.

Reliability   Expected   Actual
β = 0.5       69.15%     69.11%
β = 1         84.13%     82.94%
β = 2         97.72%     97.43%
β = 3         99.87%     99.87%

variables. Figure 11.10 shows an increase in mass required to satisfy an increase in pre-
scribed reliability. An excellent correlation between the prescribed level of reliability
and the actual reliability of the structures is shown in Table 11.2. This confirms that
FORM accurately predicted the failure surface in each case.
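A sketch of such a Monte Carlo check is shown below; the closed-form displacement response and the flexibility constant C are hypothetical stand-ins for the finite element model, so the printed reliability will not reproduce the values in Table 11.2.

# A sketch of the 10,000-sample Monte Carlo validation behind Table 11.2.
# Each sample draws (E, F) from their normal distributions and evaluates
# the displacement limit state. The closed-form response delta = C * F / E
# is a hypothetical stand-in for the finite element analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
mE, mF = 200e9, 100.0                 # mean modulus (Pa) and load magnitude (N)
sE, sF = 0.05 * mE, 0.05 * mF         # standard deviations: 5% of the mean
delta_allow = 1e-5                    # maximum allowable displacement (m)

E = mE + sE * rng.standard_normal(n)  # Eq. (28) applied in reverse
F = mF + sF * rng.standard_normal(n)

C = delta_allow * mE / (1.10 * mF)    # assumed flexibility constant of the design
delta = C * F / E                     # stand-in structural response
reliability = np.mean(delta <= delta_allow)
print(f"estimated reliability: {reliability:.4f}")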

5.2 Truss structure problem


In this example, we will consider the three-bar truss problem subject to three loading
conditions presented by Duysinx (Duysinx 1997), shown in Fig. 11.11. The elastic


Figure 11.11 The 2 m × 1 m design domain with three load cases.

(a) Cycle 1: M_f = 0.269; (b) Cycle 2: M_f = 0.338

Figure 11.12 The resulting topologies for the “three bar’’ structure after each RBTO cycle for
β = 3.

Table 11.3 Summary of FEA evaluations and uncertain parameter values for the three-bar problem.

RBTO cycle   HCA iters   Reliability FEA evals   E (GPa)   F1 (kN)   F2 (kN)   F3 (kN)
1            28          47                      177.0     −100.0    −299.9    −438.4
2            27          47                      177.0     −200.0    −300.0    −438.5

modulus of the material used is E = 200 GPa. The mean values for the three load
cases are as follows: −10 kN, −30 kN, and −40 kN. The maximum displacement
constraint is considered with respect to all load cases, i.e., the constraint is violated if
the maximum displacement exceeds the allowable value for any loading. The design
domain is discretized into 5000 elements. For this example, a constraint on maximum
allowable displacement δmax = 1 × 10−5 m is imposed.
The resulting topologies generated at each algorithm cycle are shown in Fig. 11.12,
where we see that the RBTO algorithm converges after 2 cycles. Fewer than 30 FEA
analyses are required for the topology synthesis and fewer than 50 analyses are required
for the reliability analysis during each RBTO cycle. A summary of the HCA per-
formance with the deflection constraint and the reliability analyses is tabulated in
Table 11.3. It is observed that the load cases F1 and F2 do not change after each

Reliability       Mass fraction M_f
β = 0 (50%)†      0.269
β = 0.5 (69.15%)  0.278
β = 1 (84.13%)    0.288
β = 2 (97.72%)    0.309
β = 3 (99.87%)    0.338
† Deterministic (no uncertainty)

Figure 11.13 Comparison of “three bar’’ structures for varying levels of reliability with respect
to a constraint on the maximum allowable displacement δmax = 1 × 10−5 m. The
baseline run ( β = 0) is 50% reliable.

reliability analysis since the third load case F3 dominates. Although the evolution of
the structure is more subtle in this problem, the topologies following the initial reliabil-
ity analyses distribute approximately 26% more mass compared to the initial structure
synthesized using the mean values of the random variables. Again, mass is the driver to
generating a more reliable design, as illustrated in Fig. 11.13. Using the Monte Carlo
simulation to validate the reliability of each structure, it was determined that the first-order
approximation used in the reliability analysis accurately predicts the failure surface in
this problem. These results are tabulated in Table 11.4.

6 Conclusions
In this chapter, a new methodology for reliability-based topology optimization (RBTO)
is presented. A decoupled approach for reliability-based design optimization is com-
bined with the HCA method for structural synthesis. The objective could be generalized

Table 11.4 Validation of the “three bar’’ topologies using a 10,000-sample Monte Carlo simulation.

Reliability   Expected   Actual
β = 0.5       69.15%     69.11%
β = 1         84.13%     82.94%
β = 2         97.72%     97.43%
β = 3         99.87%     99.87%

and applied to different types of problems. The reliability formulation used in this
investigation is known as the performance measure approach (PMA), where the relia-
bility index β is included as a constraint in the reliability subproblem and the random variables
are driven to the most probable point of failure (MPP) for the current structural design
with respect to a displacement constraint. The MPP is required to satisfy the specified
reliability index β.
This RBTO methodology facilitates structural designs that are reliable with respect
to a specified performance parameter. Using a maximum allowable displacement con-
straint as a failure mode, we observe that the reliable topology requires more mass
for each example problem, compared to the initial deterministic topology. For both
example cases, where uncertainties in the elastic modulus and applied loads were
considered, only two algorithm cycles are required for convergence to a design. The
excellent correlation shown between the prescribed and actual reliabilities demon-
strates that the First-Order Reliability Method (FORM) is sufficient for use with problems
that use the static-elastic assumptions.
The inclusion of the Hybrid Cellular Automata (HCA) method adds to the effi-
ciency of the proposed methodology for the design of structures using the static-elastic
assumptions, as in previous investigations. The use of the HCA method in the RBTO
framework could be of great benefit in other design problems, such as aeroelastic design,
where gradients are not easily computed for the topology synthesis. However, the
FORM approximation must be investigated for use with other design problems.

References

Agarwal, H. & Renaud, J.E. 2006. A new decoupled framework for reliability based design
optimization. AIAA Journal 44(7):1524–1531.
Allen, M. & Maute, K. 2004. Reliability-based optimization of aeroelastic structures. Struct.
Multidisc. Optim. 27:228–242.
Bendsøe, M.P. 1989. Optimal shape design as a material distribution problem. Struct. Optim.
1:193–202.
Bendsøe, M.P. & Kikuchi, N. 1988. Generating optimal topologies in structural design using a
homogenization method. Comp. Meth. Appl. Mech. Engrg. 71:197–224.
Bendsøe, M.P. & Sigmund, O. 1989. Topology Optimization: Theory, Methods and Appli-
cations. Springer-Verlag, Berlin.
Breitung, K. 1984. Asymptotic approximations for multinormal integral. Journal of Engineering
Mechanics 110(3):357–366.

Burks, A. 1970. Essays on Cellular Automata, Chapter: Von Neumann’s self-reproducing
automata, pp. 3–64. University of Illinois Press.
Choi, K.K., Youn, B.D. & Yang, R. 2001. Moving least square method for reliability-based
design optimization. In The Fourth World Congress of Structural and Multidisciplinary
Optimization (WCSMO-4).
Deqing, Y., Yunkang, S., Zhengxing, L. & Huanchun, S. 2000. Topology optimization design
of continuum structures under stress and displacement constraints. Applied Mathematics and
Mechanics 21:1–26.
Duysinx, P. 1997. Layout optimization: A mathematical programming approach. Technical
Report DCAMM report No. 540, University of Liege.
Eschenauer, H.A. & Olhoff, N. 2001. Topology optimization of continuum structures:
A review. Applied Mechanics Reviews 54:331–390.
Frangopol, D.M. 1998. Probabilistic structural optimization. Progress in Structural Engineering
and Materials 1(2):223–230.
Frangopol, D.M. & Maute, K. 2003. Life-cycle reliability-based optimization of civil and
aerospace structures. Computers and Structures 81:397–410.
Gürdal, Z. & Tatting, B. 2000. Cellular automata for design of truss structures with linear and
nonlinear response. In Proceedings of the 41st AIAA Structures, Structural Dynamics, and
Materials Conference, Number 2000-1580, 2000, April 3–6, Atlanta, Georgia.
Haftka, R.T., Gürdal, Z. & Kamat, M.P. 1990. Elements of Structural Optimization. Kluwer
Academic Publishers, Dordrecht, The Netherlands, 2nd ed.
Hajela, P. & Kim, B. 2001. On the use of energy minimization for CA based analysis in elasticity.
Struct. Multidisc. Optim. 23:24–33.
Haldar, A. & Mahadevan, S. 2001. Probability, Reliability and Statistical Methods in
Engineering Design. New York: Wiley.
Inou, N., Shimotai, N. & Uesugi, T. 1994. Cellular automaton generating topological structures.
In 2nd European Conference on Smart Structures and Materials, Number 2361-08, 1994,
October Glasgow, United Kingdom, pp. 47–50.
Inou, N., Uesugi, T., Iwasaki, A. & Ujihashi, S. 1998. Self-organization of mechanical structure
by cellular automata. Fracture and Strength of Solids 145(9):1115–1120.
Kharmanda, G., Olhoff, N., Mohamed, A. & Lemaire, M. 2004. Reliability-based topology
optimization. Struct. Multidisc. Optim. 26:295–307.
Kita, E. & Toyoda, T. 2000. Structural design using cellular automata. Struct. Multidisc. Optim.
19:64–73.
Kočvara, M. 1997. Topology optimization with displacement constraints: a bilevel programming
approach. Struct. Optim. 4:256–263.
Liu, P.-L. & Kiureghian, A.D. 1991. Optimization algorithms for structural reliability. Structural
Safety 9:161–177.
Maute, K. & Frangopol, D.M. 1998. Reliability-based design of mems mechanisms by topology
optimization. Computers and Structures 81:813–824.
Mogami, K., Nishiwaki, S., Izui, K., Yoshimura, M. & Kogiso, N. 2006. Reliability-based struc-
tural optimization of frame structures for multiple failure criteria using topology optimization
techniques. Struct. Multidisc. Optim. 32(4):299–311.
Moses, F. 1973. Design for reliability: concepts and applications. Wiley.
Murotsu, Y. & Shao, S. 1989. Optimum shape design of truss structures based on reliability.
Struct. Multidisc. Optim. 2(2):65–76.
Patel, N.M., Agarwal, H., Tovar, A. & Renaud, J.E. 2005. Reliability based topology opti-
mization using the hybrid cellular automaton method. In 1st AIAA Multidisciplinary Design
Optimization Specialist Conference, 2005, April 18–21, Austin, Texas.
Royset, J.O., Kiureghian, A.D. & Polak, E. 2001. Reliability-based optimal structural design
by the decoupled approach. Reliability Engineering and System Safety 73:213–221.

Rozvany, G.I.N. 1997. Topology Optimization in Structural Mechanics. Springer.


Rozvany, G.I.N., Bendsøe, M.P. & Kirsch, U. 1995. Optimality criteria: a basis for multidisci-
plinary optimization. Appl. Mech. Rev. 48:41–119.
Slotta, D., Tatting, B., Watson, L., Gürdal, Z. & Missoum, S. 2002. Convergence analysis for
cellular automata applied to truss design. Engineering Computations 19(8):953–969.
Svanberg, K. 1987. The method of moving asymptotes: a new method for structural optimization.
Int. J. Numer. Meth. Engrg. 24:359–373.
Tovar, A. 2004. Bone Remodeling as a Hybrid Cellular Automaton Optimization Process. Ph.D.
thesis, University of Notre Dame.
Tovar, A., Patel, N.M., Kaushik, A.K. & Renaud, J.E. 2007. Optimality conditions of the hybrid
cellular automata for structural optimization. AIAA Journal.
Tovar, A., Quevedo, W., Patel, N. & Renaud, J.E. 2006. Topology optimization with stress and
displacement constraints using the hybrid cellular automaton method. In Proceedings of the
3rd European Conference on Computational Mechanics, 2006, June 5–8, Lisbon, Portugal.
Wolfram, S. 2002. A New Kind of Science. Wolfram Media.
Xie, Y.M. & Stevens, G. 1997. Evolutionary Structural Optimization. Springer-Verlag, London.
Yuge, K. & Kikuchi, N. 1995. Optimization of a frame structure subjected to a plastic
deformation. Struct. Optim. 10:197–208.
Yuge, K., Iwai, N. & Kikuchi, N. 1999. Optimization of 2-d structures subjected to nonlinear
deformations using the homogenization method. Struct. Optim. 17(4):286–299.
Zhou, M. & Rozvany, G.I.N. 1991. The COC algorithm, part II: Topological, geometrical and
generalized shape optimization. Comp. Meth. Appl. Mech. Engrg. 89:309–336.
Chapter 12

Sample average approximations in reliability-based
structural optimization: Theory and applications
Johannes O. Royset
Naval Postgraduate School, Monterey, CA, USA

Elijah Polak
University of California, Berkeley, CA, USA

ABSTRACT: This chapter describes recent advances in combining Monte Carlo sampling
and nonlinear programming algorithms for reliability-based structural optimization. Specifi-
cally, we present an approach where the reliability term in the problem formulation is replaced
by a statistical estimate of the reliability obtained by means of Monte Carlo sampling. This
replacement introduces a sampling error and gives rise to sample average approximations. The
chapter presents rules for adjusting the sample size effectively.

1 Introduction
Cost efficient bridges, building frames, aircraft wings, and other mechanical structures
can be achieved by formulating and solving nonlinear optimization problems. How-
ever, such problems become significantly harder to solve when a structure’s reliability
is accounted for in the problem formulation. This difficulty is caused by the fact that
the failure probability of most structures, as well as the corresponding gradient with
respect to design variables, cannot be computed exactly, but must be approximated.
The difficulty is further aggravated by the challenge of deriving suitable expressions
for the gradient of the failure probability. One possible approach to overcome these
difficulties is to estimate the failure probability and its gradient using Monte Carlo
sampling.
This chapter describes recent advances in combining Monte Carlo sampling and
nonlinear programming algorithms for reliability-based structural optimization. Other
approaches for such optimization include (successive) first-order approximations
(Madsen and Friis Hansen 1992; Enevoldsen and Sørensen 1994; Kuschel and
Rackwitz 2000; Royset et al. 2006), gradient-free heuristics, (Itoh and Liu 1999;
Nakamura et al. 2000; Beck et al. 1999), response surfaces (Gasser and Schuëller 1998;
Igusa and Wan 2003), and surrogate functions (Torczon and Trosset 1998; Eldred et al.
2002). However, a review of these approaches is beyond the scope of this chapter.
Here, we present an approach where the failure probability in the problem formula-
tion is replaced by a statistical estimate obtained by means of Monte Carlo sampling.
This replacement introduces a sampling error and gives rise to approximate opti-
mization problems. Such approximate problems are referred to as sample average
approximations.

Even if a sample average approximation is solvable by some nonlinear programming
algorithm to sufficient accuracy, the design obtained might be far from the optimal
design of the original problem due to the induced sampling error. This deficiency can
be overcome by constructing a sample average approximation with a large sample size,
which tends to have a small sample error and hence tends to have optimal designs near
the optimal designs of the original problem. Unfortunately, a large sample size implies
high computational cost. For example, each sample point may involve a finite element
analysis of the structure at hand. Hence, applying a nonlinear programming algorithm
to a sample average approximation with a large sample size is usually impractical. On
the other hand, as we have already mentioned, a small sample size is computationally
less expensive, but may lead to designs far from an optimal one.
Intuition and empirical evidence indicate that the following adaptive approach is
efficient. Initially, consider a sample average approximation with a small sample size,
i.e., a coarse, but inexpensive approximation, and apply some optimization algorithm
to achieve a certain amount of design improvement. When this design improvement
is achieved, refine the approximation by increasing the sample size and apply the
optimization algorithm to this refined approximation. Initiate the optimization from
the improved design achieved at the coarser approximation level, i.e., the calculations
on the refined approximation are “warm started.’’ Repeat the process until an acceptable
design is achieved. This adaptive approach avoids spending excessive computational
effort on estimating the failure probability of the relatively poor designs produced
by the early iterations of the optimization algorithm. Increasingly large efforts are
expended only as better and better designs are achieved and accurate estimates of the
failure probability are needed to ensure further design improvements. This approach
tends to be efficient because coarse estimates of the failure probability (and its gradient)
are usually sufficient to steer an optimization algorithm towards better designs in the
early stages of the calculations.
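A minimal sketch of this adaptive strategy follows; the solver and objective are placeholders, and the threshold, growth factor, and sample sizes are illustrative assumptions rather than the precise rules developed in Section 5.

# A minimal sketch of the adaptive strategy described above: optimize a
# sample average approximation (SAA), and when the per-stage improvement
# stalls, enlarge the sample and warm-start from the current design.
import numpy as np

def optimize_saa(x, sample, n_steps):
    raise NotImplementedError     # any NLP algorithm applied to the SAA

def saa_objective(x, sample):
    raise NotImplementedError     # cost plus sampled reliability terms

def adaptive_saa(x0, m, n0=100, growth=4, tol=1e-3, n_max=100_000, seed=0):
    """x0: initial design; m: dimension of the standard normal vector U."""
    rng = np.random.default_rng(seed)
    x, n = x0, n0
    sample = rng.standard_normal((n, m))
    while True:
        f_old = saa_objective(x, sample)
        x = optimize_saa(x, sample, n_steps=10)       # warm-started iterations
        if abs(f_old - saa_objective(x, sample)) < tol:
            if n >= n_max:                            # finest level reached
                return x
            n *= growth                               # refine the approximation
            sample = rng.standard_normal((n, m))      # larger Monte Carlo sample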
We have derived a theory for the described adaptive increase in sample size (Royset
and Polak 2007; Polak and Royset 2007). In this chapter, we review some of these
theoretical results and show their application to reliability-based structural optimiza-
tion. Section 2 formally defines the reliability-based structural optimization problem.
Section 3 discusses the properties of the failure probability as a function of the design
variables and derives an expression for its gradient. The gradient is derived for general
structural systems consisting of an arbitrary number of unions and intersections of fail-
ure events. A Monte Carlo estimate of this gradient is used to direct the calculations
towards better designs.
Section 4 describes the basic algorithmic approach, which involves approximately
solving a sequence of sample average approximations with increasing sample size.
Section 5 presents sample-adjustment rules that ensure computational efficiency and
theoretical convergence. Clearly, a rapid increase in sample size may result in many
algorithmic iterations on computationally costly sample average approximations. In
fact, too rapid an increase in sample size may prevent convergence to a solution. By con-
trast, a slow increase in sample size may lead to unnecessarily many iterations with
coarse approximations. Hence, it is important to balance the increase of sample size
with the progress of the optimization algorithm towards an optimal design. We present
two different sample-adjustment rules: (i) a feedback rule specifying an increase in sam-
ple size whenever the optimization algorithm’s progress falls below a threshold value

and (ii) the solution of an auxiliary optimization problem that determines the “opti-
mal’’ sample size at every iteration using estimated values of rate of convergence,
computational cost, distance to optimal design, and sampling error.
In Section 6, we illustrate the theoretical results with three numerical examples
arising in design of various mechanical structures. Structures with both a single and
with multiple limit-state functions are considered and reliability terms are included
in both objective and constraint functions. We also present an example with a non-
traditional objective: determine several “good’’ designs that are significantly different.
This objective is useful when qualitative factors such as practical, esthetic, social,
and political requirements are especially important. In such situations, the designer
may seek to generate several “good’’ designs with respect to quantitative factors (e.g.,
cost and reliability) using some optimization algorithm and then select among these
designs using his or her judgment regarding the other, qualitative factors. Finally, the
concluding remarks of this study are presented in Section 7.

2 Problem formulation
Consider the design of a mechanical structure such as a bridge, a building frame, or
an aircraft wing. Let x be an n-dimensional vector of design variables, for example
related to the size and form of the structure, and let c0 (x) and c(x) be the initial and
failure costs, respectively, of the structure given design x. Furthermore, let p(x) be the
failure probability of the structure, given design x, to be defined precisely below. Then,
the reliability-based design optimization problem takes the form

\min_x \{ c_0(x) + c(x)\,p(x) \mid p(x) \le q,\ x \in X \}    (1)

where q is a bound on the failure probability, X is a constraint set for x defined in
terms of J constraint functions f_j(x), j \in J = \{1, 2, \ldots, J\}, i.e.,

X = \{ x \mid f_j(x) \le 0,\ j \in J \}    (2)

The objective function in (1) consists of the initial cost plus the expected cost of fail-
ure. The constraint functions represent restrictions on shape and form of the structure,
amount and location of steel reinforcement, as well as other factors. We assume that
there are no integer restrictions on the design variables x. We also assume that the con-
straint and cost functions are fairly simple functions, e.g., analytic expressions, that can
easily be evaluated. Hence, the challenge is associated with the failure probability p(x).
When the failure cost is positive, i.e., c(x) > 0 for all x ∈ X, (1) is equivalent to the
following problem

\min_{x_0, x} \{ c_0(x) + c(x)\,x_0 \mid p(x) \le x_0,\ 0 \le x_0 \le q,\ x \in X \}    (3)

where x0 is an auxiliary design variable (Royset et al., 2006). The transformation from
(1) to (3) is beneficial for numerical reasons; the multiplication of a presumably large
failure cost c(x) with a presumably small, inaccurately estimated failure probability
p(x) in (1) may cause numerical difficulties. Hence, we always recommend solving
(3) instead of (1). Consequently, we focus primarily on problems with a deterministic
objective function and a failure probability constraint as in (3). To simplify the notation
and without loss of generality, we consequently consider the problem:
P: min_x {c(x) | p(x) ≤ q, x ∈ X} (4)

where c(x) is some deterministic objective (cost) function.


Mechanical structures are assessed using one or more performance measures, e.g.,
displacement and stress levels at various locations in the structure. In this chapter, we
consider the general case of “system failure’’ where the (system) failure probability
is defined by a collection of performance measures. Specifically, failure occurs when
certain combinations of the performance measures are unsatisfactory.
Let gk (x, u), k ∈ K = {1, 2, . . . , K}, be a collection of K limit-state functions describing
the relevant performance measures. The functions gk (x, u) depend on the design x and
the realization u of a standard normal random m-vector U. This random vector incor-
porates the uncertainty in the structure and its environment. Note that a limit-state
function given in terms of multivariate normal (possibly with correlation) and log-
normal random vectors can be transformed into one defined in terms of a standard
normal vector using a smooth bijective mapping. A limit-state function given in terms
of random vectors governed by other distributions can also be transformed, possibly by
introducing an approximation. Hence, the limitation to a multivariate standard nor-
mal distribution is in many applications not restrictive (see e.g. Chapter 7 of (Ditlevsen
and Madsen 1996) and (Liu and Kuo 2003; Akgul and Frangopol 2003; Holicky and
Markova 2003)).
By convention, gk (x, u) ≤ 0 represents unsatisfactory performance of the k-th mea-
sure. Hence, we define the failure probability of the structure as p(x) = P[F(x)], where
the failure domain
F(x) = ∪_{i∈I} ∩_{k∈Ci} {gk(x, U) ≤ 0} (5)

with Ci ⊂ K and I = {1, 2, . . . , I} defining the combinations of performance measures


that lead to structural failure. For example, suppose that a structure is defined with
three limit-state functions, i.e., K = 3, representing stress level (g1 (x, u)), displacement
at location A (g2(x, u)), and displacement at location B (g3(x, u)). Also, suppose that
the structure is defined to have failed if the first performance measure (stress level)
is unsatisfactory, regardless of the displacement levels, and it is also defined to have
failed if both of the displacement measures are unsatisfactory, regardless of the stress
level. In this case, I = {1, 2}, C1 = {1}, and C2 = {2, 3}, i.e.,
F(x) = {g1 (x, U) ≤ 0} ∪ ({g2 (x, U) ≤ 0} ∩ {g3 (x, U) ≤ 0}) (6)
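
To make the set algebra in (5)–(6) concrete, the following Python fragment (an illustration only; g1, g2, g3 are hypothetical callables returning limit-state values) evaluates the failure indicator for this three-measure system.

import numpy as np

def in_failure_domain(g1, g2, g3, x, u):
    # I = {1, 2}, C1 = {1}, C2 = {2, 3}: the structure fails if the
    # stress measure fails alone, or if both displacement measures
    # fail jointly. Equivalently, cf. (7) below:
    # min over i of (max over k in Ci of g_k) <= 0.
    return min(g1(x, u), max(g2(x, u), g3(x, u))) <= 0.0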

3 Failure probability and gradient


Since P, see (4), is a nonlinear optimization problem it would be natural to apply a stan-
dard nonlinear programming algorithm to this problem. However, such an approach
requires two assumptions to be satisfied. First, all the functions in P must be at least
once differentiable (with respect to the design variables x) with continuous gradients.
We refer to this assumption as the smoothness assumption. Second, we must be able to
compute, relatively easily, all the functions and their gradients for any given design x.
We refer to this assumption as the computability assumption. Since we assume that
the cost function c(x) and constraint functions fj (x) are all analytic functions or in
some other form satisfying our two assumptions (smoothness and computability), the
challenge is associated with the failure probability p(x). Due to the complicated form
of p(x) it is not clear whether the assumptions are satisfied. In fact, it appears unlikely
that the computability assumption is satisfied due to the m-dimensional integral in the
definition of p(x). It is also difficult to perceive situations under which the smoothness
assumption is satisfies when the limit-state functions are not differentiable. Hence, we
assume throughout this chapter that the limit-state functions gk (x, u) are differentiable
with respect to both arguments and have continuous gradients. If the limit-state func-
tions are not differentiable and/or the design variables are restricted to integers, then
the theory and algorithms derived in this chapter are not applicable.
This section rewrites the expression for the failure probability in a form that is equiv-
alent, under weak assumptions, to the original definition. As seen in the following, this
effort results in an expression that satisfies the smoothness assumption and that lends
itself to estimation of both the failure probability and its gradient. This effort would
have been unnecessary if we were only interested in computing the failure probability
and not in optimization. Standard Monte Carlo sampling (possibly with importance
sampling) would have sufficed in such a situation (see, e.g., (Ditlevsen and Madsen
1996)). However, within an optimization algorithm we also need the gradient of the
failure probability and the gradient is not easily available from the definition of the
failure probability.
In (Uryasev 1995), we find a theoretical expression for the gradient of the failure
probability. However, this expression may involve surface integrals, which are difficult
to estimate in practice. In (Marti 1996) (see also (Marti 2005)), an integral transfor-
mation is presented, which, when it exists, leads to a simple expression for the gradient
of the failure probability. However, it is not clear under what conditions this transfor-
mation exists. As in (Uryasev 1995), (Tretiakov 2002) assumes that the failure domain
F(x) is bounded and given by a union of events. With this restriction, an expression for
the gradient of the failure probability involving integration over a simplex is derived.
In principle, this integral can be evaluated by Monte Carlo sampling. However, to the
authors’ knowledge, there is no computational experience with estimation of failure
probabilities for highly reliable mechanical structures using this expression. In (Royset
and Polak, 2004a; Royset and Polak, 2004b) we find expressions for the failure proba-
bility and its gradient that can be estimated by Monte Carlo and importance sampling.
However, the expressions are limited to the case with one performance measure (i.e.,
K = 1). In Section 9.2 of (Ditlevsen and Madsen 1996), an expression for the gradient
of the failure probability is suggested, without a complete proof, for the case with
one performance measure. This expression is based on a form of p(x) that has been
found computationally efficient in applications. In (Royset and Polak 2007) a gener-
alization of this special-case formula was derived and formally proven. We proceed by
describing the expression for the failure probability given in (Royset and Polak 2007).
It can be shown that the failure domain is equivalently expressed as
 
F(x) = {min_{i∈I} max_{k∈Ci} gk(x, U) ≤ 0} (7)

As in (Deak 1980) (see alternatively (Ditlevsen et al., 1987; Bjerager 1988), and Section
9.2 of (Ditlevsen and Madsen 1996)), we observe the following fact: If the standard
normal random vector U = RW and R2 is Chi-square distributed with m degrees of
freedom, then W is a random vector, independent of R, uniformly distributed over the
surface of the m-dimensional unit hypersphere. Note that W represents a direction and
R a positive length. Hence, we obtain from the total probability rule that
   

p(x) = E[ P[{min_{i∈I} max_{k∈Ci} gk(x, RW) ≤ 0} | W] ] (8)

Here, P[{min_{i∈I} max_{k∈Ci} gk(x, RW) ≤ 0} | W] is the conditional probability of a
failure event in the random direction W for a given x. This conditional probability takes a
particularly simple form if the safe domain, i.e., the complement F(x)^c of the failure
domain, is “star-shaped.’’ A safe domain is star-shaped if in any direction w one passes
from the safe to the failure region only once when moving from the origin in the
u-space¹ in the direction of w; see (Royset and Polak 2007) for a mathematically precise
definition. When the safe domain is star-shaped, the expression inside the expectation
in (8) equals 1 − χ²_m(r(x, W)²), where χ²_m(·) is the Chi-square cumulative distribution
function with m degrees of freedom and r(x, W) is the minimum distance in direction
W from the origin of the u-space to the surface of F(x). This distance can be expressed
in terms of the minimum distances in direction W from the origin to the surface of
{gk(x, RW) ≤ 0}, k ∈ K. Let rk(x, W) denote this distance. Then,

r(x, W) = min_{i∈I} max_{k∈Ci} rk(x, W) (9)

Since χ²_m(·) and the square function (on the positive domain) are strictly increasing, we
find that

p(x) = E[φ(x, W)] (10)

where

φ(x, W) = max_{i∈I} min_{k∈Ci} {1 − χ²_m(rk(x, W)²)} (11)

This is the new expression for the failure probability we will use in the following.
As noted earlier, this expression is equivalent to the original definition of the failure
probability under the assumption of a star-shaped safe domain, see (Royset and Polak
2007) for a proof.
We observe that when the safe domain is not star-shaped, (10) may overestimate the
failure probability. Hence, it is conservative to assume a star-shaped safe domain. For a
given design x, it is possible to obtain an indication whether the star-shape assumption
is satisfied by computing an estimate Σ_{j=1}^N I_{F(x)}(uj)/N of p(x), where u1, u2, . . . , uN
are realizations of independent, identically distributed standard normal vectors and
I_{F(x)}(uj) = 1 if uj ∈ F(x), and zero otherwise. If this estimate is significantly smaller than
the one of (10), then the star-shape assumption is violated. We also note that equivalent
assumptions were adopted by (Tretiakov 2002; Ditlevsen et al., 1987; Bjerager 1988)
and Section 9.2 of (Ditlevsen and Madsen 1996).

¹ We refer to the m-dimensional space of realizations of U as the “u-space.’’
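
As a rough diagnostic, this crude Monte Carlo estimate can be coded in a few lines; the Python sketch below is illustrative only and assumes a hypothetical callable in_failure(x, u) implementing the indicator I_{F(x)}.

import numpy as np

def crude_mc_estimate(in_failure, x, m, N, seed=None):
    # Estimate p(x) by sum_j I_{F(x)}(u_j)/N with u_j i.i.d. standard
    # normal m-vectors; if this value is significantly smaller than
    # the directional-sampling estimate (10) for the same design x,
    # the star-shape assumption is suspect.
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((N, m))
    return np.mean([1.0 if in_failure(x, uj) else 0.0 for uj in u])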
The main advantage of the new expression for the failure probability (10) over the
original expression p(x) = P[F(x)] is that a useful expression for the gradient of the
failure probability can be derived. At first glance, it appears that the gradient of p(x)
in (10) is simply the expectation of the gradient of φ(x, W) with respect to x. However,
closer examination shows that φ(x, W) is not differentiable with respect to x due to
its max-min form. Hence, we define the set of active limit-state functions K̂(x, W) as
those limit-state functions that define the surface of the failure domain F(x) in the
direction W. More precisely,

K̂(x, W) = {k ∈ K|k ∈ Ĉi (x, W), i ∈ Î(x, W)} (12)

where
  

Î(x, W) = {i ∈ I | min_{i′∈I} max_{k∈Ci′} rk(x, W) = max_{k∈Ci} rk(x, W)} (13)

Ĉi(x, W) = {k ∈ Ci | max_{k′∈Ci} rk′(x, W) = rk(x, W)} (14)

Using the definition of the set of active limit-state functions K̂(x, W), we derive the
subgradient of φ(x, W) as
 
∂φ(x, W) = conv_{k∈K̂(x,W)} { 2 f_{χ²_m}(rk(x, W)²) rk(x, W) ∇x gk(x, rk(x, W)W) / (∇u gk(x, rk(x, W)W)^T W) } (15)

where conv{·} denotes the convex hull, f_{χ²_m}(·) is the Chi-square probability density
function with m degrees of freedom, and ∇x gk(x, u) and ∇u gk(x, u) are the gradients of
gk(x, u) with respect to x and u, respectively. Informally, the expression in the brackets
of (15) is the gradient with respect to x of 1 − χ²_m(rk(x, W)²) obtained using implicit
differentiation. This leads to the following expression for the gradient of the failure
probability (see (Royset and Polak 2007) for a proof)

∇p(x) = E[dφ (x, W)] (16)

where dφ (x, W) is any element of the subgradient ∂φ(x, W). We note that (16) is only
valid if the safe domain is bounded, i.e., the minimum distance in every direction
w from the origin of the u-space to the surface of F(x) is bounded from above by
some (large) number. This may not always be the case in applications. However, it is
always possible to define an artificial limit-state function gK+1(x, U) = ρ − ‖U‖, with a
sufficiently large ρ > 0, augment I with the additional index I + 1, and set C_{I+1} = {K + 1}. Then, F(x) satisfies the
assumption about a bounded safe domain. This is equivalent to enlarging the failure
domain. The probability associated with the enlarged failure domain is slightly larger
than the one associated with the original failure domain. The difference, however, is no
greater than 1 − χ²_m(ρ²) and therefore negligible for sufficiently large ρ. Consequently,
this boundedness assumption is not restrictive in practice.
From the above derivation we see that the failure probability is differentiable with
a continuous gradient given by (16), i.e., the failure probability satisfies the required
smoothness assumption for nonlinear optimization. However, for this to have any
practical value, we also need to be able to compute the failure probability and its
gradient, i.e., we need the computability assumption to be satisfied. Clearly, (10) and
(16) cannot, in general, be evaluated analytically, but must be estimated by Monte
Carlo sampling.
Let w1 , w2 , . . . , wN be a set of N sample points, each generated by independent
sampling from the uniform distribution on the m-dimensional unit hypersphere. Given
this sample, we define the estimate of (10):


pN(x) = Σ_{j=1}^N φ(x, wj)/N (17)

Since W corresponds to a direction, this type of Monte Carlo simulation is referred to


as directional sampling (Bjerager 1988). It is well-known (see, e.g., (Rubinstein and
Shapiro 1993) for a proof) that pN (x) converges to p(x) uniformly over any closed
and bounded set, as N → ∞. Hence, at least in principle, we can obtain an accurate
estimate of the failure probability by computing (17) with a large N. (Of course, a
large sample size may be prohibitive computationally.)
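
The following Python sketch indicates how (17) can be computed; it is an illustration, not the authors' implementation (the study used Matlab). The limit-state functions are assumed to be available as a dict g mapping each index k to a callable g[k](x, u), and I_sets as a list of the index sets Ci; the one-dimensional root finding for rk(x, w) uses a bracketing solver and simply returns the bracket endpoint r_max when no sign change is detected, consistent with the bounded safe domain introduced in Section 3.

import numpy as np
from scipy.stats import chi2
from scipy.optimize import brentq

def sample_directions(N, m, seed=None):
    # N points uniformly distributed on the m-dimensional unit hypersphere.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((N, m))
    return w / np.linalg.norm(w, axis=1, keepdims=True)

def ray_root(gk, x, w, r_max=20.0):
    # Distance r_k(x, w): root of g_k(x, r*w) = 0 along the ray r >= 0.
    # For a star-shaped safe domain there is only one crossing, so the
    # bracketed root is the distance sought; the origin is assumed safe.
    f = lambda r: gk(x, r * w)
    if f(r_max) > 0.0:
        return r_max          # no crossing detected: ray treated as safe
    return brentq(f, 0.0, r_max)

def p_N(x, g, I_sets, m, W):
    # Sample average (17) of phi(x, w) defined in (11).
    vals = []
    for w in W:
        r = {k: ray_root(g[k], x, w) for k in g}
        phi = max(min(1.0 - chi2.cdf(r[k] ** 2, df=m) for k in Ci)
                  for Ci in I_sets)
        vals.append(phi)
    return float(np.mean(vals))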
We now consider an estimate of the gradient (16). Since φ(x, w) is not differentiable
with respect to x, we see that pN (x) is generally not differentiable either. However, since
φ(x, w) has a subgradient, see (15), it can be shown that pN (x) also has a subgradient
denoted by ∂pN (x), see (Royset and Polak 2007). This subgradient is given by


∂pN(x) = Σ_{j=1}^N ∂φ(x, wj)/N (18)

see (15) for the expression for ∂φ(x, wj ). It is shown in (Royset and Polak 2007) that
the subgradient ∂pN (x) converges (shrinks) to ∇p(x) uniformly over any closed and
bounded set, as N → ∞. We note that there is typically no need to estimate the sub-
gradient ∂pN (x), but only one of its elements. To generate such an element, proceed as
follows: (i) obtain N sample points w1 , w2 , . . . , wN , (ii) for each sample point wj deter-
mine one active limit-state function, i.e., find one element in K̂(x, wj ), and compute
the numerical value of the vector within the brackets of (15) for the active limit-state
function, and (iii) average the numerical values over all the sample points.
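
A corresponding Python sketch of steps (i)–(iii), reusing ray_root and the sampled directions from the previous sketch and assuming hypothetical dicts grad_x and grad_u of callables for ∇x gk and ∇u gk, might read:

from scipy.stats import chi2

def subgradient_element(x, g, grad_x, grad_u, I_sets, m, W):
    # Average, over the sample, one bracketed vector of (15) per
    # direction, using one active limit-state function per w_j.
    d = 0.0
    for w in W:
        r = {k: ray_root(g[k], x, w) for k in g}
        # one index set attaining the min in (9), then one k attaining the max:
        Ci_hat = min(I_sets, key=lambda Ci: max(r[k] for k in Ci))
        k_hat = max(Ci_hat, key=lambda k: r[k])
        rk = r[k_hat]
        u = rk * w
        denom = grad_u[k_hat](x, u) @ w          # (grad_u g_k)^T w
        d = d + 2.0 * chi2.pdf(rk ** 2, df=m) * rk * grad_x[k_hat](x, u) / denom
    return d / len(W)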

4 Algorithm based on sample average approximations


In this section, we follow (Royset and Polak 2007; Polak and Royset 2007) and present
an algorithm that utilizes the sample average estimates of the failure probability and its
gradient derived in the previous section, see (17) and (18). The algorithm carries out
nonlinear optimization iterations on a sequence of sample average approximations for
the original problem P. Given the sample points w1 , w2 , . . . , wN , we define the sample
average approximation of P as the following optimization problem:

PN: min_x {c(x) | pN(x) ≤ q, x ∈ X} (19)

It is noted that the only difference between P and PN is that p(x) has been replaced by its
sample average. Intuitively, PN becomes a better approximation to P as N increases.
In fact, under weak assumptions, a global minimum of PN converges to a global
minimum of P, as N → ∞, see (Royset and Polak 2007) and more generally Chapter
6 of (Ruszczynski and Shapiro 2003) and references therein.
Since we can evaluate pN (x) for a given sample, PN satisfies our computability
assumption. However, PN does not satisfy our smoothness assumption since (17) is
generally not differentiable – it only has a subgradient (18). Hence, standard non-
linear programming algorithms may perform poorly when applied to PN . As seen in
Subsection 5.1 below, we are able to overcome this difficulty by utilizing the fact that P
satisfies the smoothness assumption. In this section, we proceed under the assumption
that there is some optimization algorithm that can effectively be applied to PN .
As discussed in Section 1, the simplest scheme for approximately solving P would
be to select some sample size N and apply some optimization algorithm to PN for a
number of iterations. The obtained design would be an estimate of the optimal design
of P. However, this may be a poor estimate if the sample size is small, and if the sample
size is large, the computational cost may be prohibitive. In (Royset and Polak 2007;
Polak and Royset 2007), the following adaptive scheme is proposed.
Conceptual Algorithm for Solving P.

Step 0. Select an initial design x0, an initial sample size N, and a sample
w1, w2, . . . , wN. Set iteration counter j = 0.
Step 1. Consider the sample average approximation PN and compute a new design
xj+1 by carrying out one iteration of some optimization algorithm applied to PN .
This iteration is initialized by the current design xj .
Step 2. Use some sample-adjustment rule and determine if the sample size should
be augmented. If the sample size should be augmented, replace N by some larger N
and generate additional sample points to complement the existing sample points.
Step 3. Replace j by j + 1, and go to Step 1.
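
In outline, an implementation could look as follows; this is a Python skeleton under the assumption that step and adjust implement Steps 1 and 2 (both hypothetical callables), with a pre-generated pool W_pool of directions so that existing sample points are reused when N grows.

def conceptual_algorithm(x0, N0, W_pool, step, adjust, max_iter=100):
    x, N = x0, N0
    for j in range(max_iter):
        x_next = step(x, W_pool[:N])   # Step 1: one iteration on P_N
        N = adjust(x, x_next, N)       # Step 2: nondecreasing sample size
        x = x_next                     # Step 3
    return x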

The conceptual algorithm describes an adaptive scheme, but does not specify how
Steps 1 and 2 can be implemented. What optimization algorithm can be used in Step 1?
What sample-adjustment rule should be used in Step 2? At first glance, the first ques-
tion appears easier. However, as discussed above, PN may not satisfy the smoothness
assumption and standard nonlinear programming algorithms may perform poorly. In
fact, as we will see in Subsection 5.1 below, care must be taken when selecting the opti-
mization algorithm in Step 1 to ensure convergence of the overall algorithm. The second
question appears to be difficult and embodies the following fundamental trade-off.
A rapid increase in sample size may result in many iterations with large N and hence
high computational cost. As we see in Subsection 5.1 below, there is also a theoretical
concern; a rapid increase in sample size may prevent convergence to an optimal design.
On the contrary, a slow increase in sample size may lead to unnecessarily many iter-
ations on coarse sample average approximations. The next section discusses two
approaches for implementing Step 2. We also briefly discuss the implementation of
Step 1.
There is also a third question that is not addressed in the conceptual algorithm: when
to stop the calculations? As in all nonlinear programming, this is a fundamentally
difficult question that is substantially aggravated by the presence of sample averages.
A simple approach would be to augment the sample size until it reaches a “sufficiently
large’’ number, e.g., an N that results in a coefficient of variation for pN (x) of less
than 5%. Then, keep that sample size for a number of iterations until the optimization
algorithm in Step 1 ceases to make substantial progress from iteration to iteration.
Another approach is to simply run the algorithm until the dedicated time is consumed.
Techniques for checking whether a given design is close to optimal includes statistical
testing, see e.g. Section 6.4 of (Ruszczynski and Shapiro 2003). A further discussion
of stopping criteria and solution quality is beyond the scope of this chapter.

5 Selection of sample sizes


The conceptual algorithm presented in the previous section needs a sample-adjustment
rule (see Step 2). There are two main concerns when constructing a sample-adjustment
rule: (i) theoretical convergence and (ii) computational efficiency. This section presents
two different rules. The first rule satisfies (i), but its efficiency is sensitive to input
parameters. The second rule has weaker convergence properties, but allocates samples
optimally in some sense.

5.1 Feedback rule


The first sample-adjustment rule for Step 2 of the conceptual algorithm augments the
sample size when the progress of the optimization algorithm in Step 1 is sufficiently
small. This rule is motivated by the following observation: when the optimization
algorithm in Step 1 is making small progress towards an optimal design of the current
sample average approximation PN , the current design is probably near that optimal
design. Hence, there is little to be gained from computing even better designs for PN ;
it is better to increase the sample size N and start to calculate with a more accurate
sample average approximation.
In (Royset and Polak 2007), the progress of the optimization algorithm in Step 1 is
measured in terms of a function FN(x′, x″) defined by

FN(x′, x″) = max{c(x″) − c(x′) − γψN(x′)+, ψN(x″) − ψN(x′)+} (20)

where

ψN(x) = max{pN(x) − q, max_{j∈J} fj(x)} (21)

ψN(x)+ = max{0, ψN(x)}, and the parameter γ > 0. The function FN(x′, x″) measures
how much “better’’ the design x″ is compared to the design x′. Suppose that x′ is a feasible
design for PN. Then, ψN(x′) ≤ 0 and ψN(x′)+ = 0 and, hence,

FN(x′, x″) = max{c(x″) − c(x′), ψN(x″)} (22)


We see that if FN(x′, x″) ≤ −ω, with ω being some positive number, then the objective
function in PN at the design x″ is reduced by at least the amount ω compared to the
value at the design x′. Additionally, x″ is feasible for PN because ψN(x″) ≤ −ω. Suppose
instead that x′ is not a feasible design for PN. Then, ψN(x′) > 0. When FN(x′, x″) ≤ −ω, the
constraint violation for PN at x″ is reduced by at least the amount ω compared to
the value at x′ because ψN(x″) − ψN(x′) ≤ −ω.
The above observation leads to the following sample-adjustment rule: If FN (xj , xj+1 )
is no larger than a threshold, then the progress is sufficient and the current sample size
is kept. (Note that FN (xj , xj+1 ) is a negative number and that it measures the decrease
in cost or constraint violation. Hence, a large negative number corresponds to a large
progress towards an optimal design.) If FN (xj , xj+1 ) is larger than the threshold, then
the progress is too small and the sample size is increased. The challenge with this
rule is to determine an appropriate threshold. In (Royset and Polak 2007), we find a
sample-size dependent threshold that results in the following sample-adjustment rule: If

FN(xj, xj+1) > −η (√((log log N)/N))^τ (23)

then augment the sample size. Otherwise, keep the current sample size in the next
iteration. Here, η is a positive parameter and τ is a parameter strictly between 0 and 1.
Since the threshold is increasing (approaches zero from below) with increasing sample
size, the rule becomes successively more stringent. For large N, the sample size is only
increased if the optimization algorithm in Step 1 of the conceptual algorithm makes a
tiny progress (FN (xj , xj+1 ) is close to zero). This means that for large N, it is necessary
to solve the sample average approximation to near optimality before the sample size
is increased. On the other hand, for small N, the sample size is increased even if the
optimization algorithm in Step 1 is making a relatively large progress. Hence, the rule
avoids having to solve low-precision sample average approximations to high accuracy
before switching to a larger sample size. But, the rule eventually forces the algorithm
to solve high-precision sample average approximations to high accuracy.
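
In code, the rule is essentially a one-liner; the Python sketch below is illustrative, with the increase factor xi a user choice, since the rule itself only specifies when to increase the sample size, not by how much.

import math

def feedback_rule(F_val, N, eta, tau=0.9999, xi=2.0):
    # Rule (23): if the progress F_N(x_j, x_{j+1}) exceeds the
    # sample-size dependent threshold, augment N (here by factor xi);
    # otherwise keep the current sample size.
    threshold = -eta * math.sqrt(math.log(math.log(N)) / N) ** tau
    return int(xi * N) if F_val > threshold else N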
The double logarithmic form of the threshold in (23) relates to the Law of the Iterated
Logarithm (see (Royset and Polak 2007) and references therein). It is shown in (Royset
and Polak 2007) that this exact form of the sample-adjustment rule guarantees con-
vergence of the conceptual algorithm when implemented with a specific optimization
algorithm in Step 1. This specific optimization algorithm is motivated by the Polak-He
algorithm (see Section 2.6 of (Polak 1997)) and takes the following form.
For any current design xj and current sample size N, the next design is

xj+1 = xj + λN (xj , d)hN (xj , d) (24)

where d is any element in the subgradient ∂pN (xj ), see (18) and its subsequent
paragraph, and the stepsize λN (xj , d) is given by Armijo’s rule:

λN(xj, d) = max_{k∈{0,1,2,...}} {β^k | FN(xj, xj + β^k hN(xj, d)) ≤ β^k α θN(xj, d)} (25)

Here, α ∈ (0, 1] and β ∈ (0, 1) are parameters, and

θN(x, d) = −min_{z∈Z} {z^T bN(x) + z^T BN(x, d)^T BN(x, d) z/(2δ)} (26)

with parameter δ > 0, the (J + 2)-dimensional unit simplex Z given by

Z = {z | Σ_{j=1}^{J+2} zj = 1, zj ≥ 0, ∀j} (27)

the (J + 2)-dimensional vector (γ as in (20))

bN(x) = (γψN(x)+, ψN(x)+ − pN(x) + q, ψN(x)+ − f1(x), . . . , ψN(x)+ − fJ(x))^T (28)

and the n × (J + 2)-matrix

BN (x, d) = (∇c(x), d, ∇f1 (x), . . . , ∇fJ (x)) (29)

Finally, the search direction

hN (xj , d) = −BN (xj , d)ẑ/δ (30)

where ẑ is any optimal solution of (26). The problem in (26) is quadratic and can
be solved in a finite number of iterations by a standard QP-solver (e.g. Quadprog
(Mathworks, Inc. 2004)).
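
The subproblem (26) and the direction (30) can be reproduced with any QP solver; the Python sketch below uses scipy's SLSQP for self-containment (quadprog, as in the cited Matlab setting, would serve equally well). Inputs are the vector b = bN(x) of (28), the matrix B = BN(x, d) of (29), and δ > 0.

import numpy as np
from scipy.optimize import minimize

def direction_finding(b, B, delta):
    # Solve min over the unit simplex Z of z^T b + z^T B^T B z/(2 delta),
    # cf. (26); return theta_N(x, d) and the search direction (30).
    J2 = len(b)
    Q = B.T @ B
    obj = lambda z: z @ b + z @ Q @ z / (2.0 * delta)
    jac = lambda z: b + Q @ z / delta
    cons = ({'type': 'eq', 'fun': lambda z: z.sum() - 1.0},)
    res = minimize(obj, np.full(J2, 1.0 / J2), jac=jac, method='SLSQP',
                   bounds=[(0.0, 1.0)] * J2, constraints=cons)
    z_hat = res.x
    return -res.fun, -B @ z_hat / delta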
Usually, the one-dimensional root finding problems in the evaluation of rk (x, w),
needed in (15), cannot be solved exactly in finite computing time. One possibility is
to introduce a precision parameter that ensures a gradually better accuracy in the root
finding as the algorithm progresses. Alternatively, we can prescribe a rule saying that
the root finding algorithm should terminate after CN iterations, with C being some
constant. For simplicity, we have not discussed the issue of root finding. In fact, this
issue is not problematic in practice. The root finding problems can be solved in a
few iterations with high accuracy using standard algorithms. Hence, the root finding
problems are solved with fixed precision for all iterations in the algorithm giving a
negligible error.
The feedback rule (23) requires the user to determine the parameters η and τ as well
as the amount of sample size increase. To avoid a quick increase in sample size and
corresponding high computational costs, the parameter τ is typically set to 0.9999.
However, it is nontrivial to determine an efficient value for the parameter η. If η is
large, then the sample size tends to be augmented frequently. Hence, η should be
small to avoid costly sample average approximations in the early iterations. However,
too small η may result in an excessive number of iterations for each sample average
approximation. Overall, in Section 6 we see empirically that the numerical value of
η may influence computing times significantly. Furthermore, neither the conceptual
algorithm nor the feedback rule specify how much the sample size should be increased –
only when to increase it. Typically, the user specifies a rule of the form: replace N
by ξN, with ξ > 1, whenever the sample size needs to be increased. Naturally, the
computational efficiency may vary with the amount of increase each time. We note
that (Royset and Polak 2007) proves that the conceptual algorithm with the sample-
adjustment rule (23) and the optimization algorithm (24) is guaranteed to converge to a
solution for any τ ∈ (0, 1), η > 0, and sample size increase. Hence, the above discussion
only relates to how fast the algorithm will converge.
As indicated in the previous paragraph, it can be difficult to select efficient values for
the parameter η as well as an efficient sample size increase every time the algorithm is
prompted by the sample-adjustment rule. Typically, some numerical experimentation
and parameter tuning for the problem at hand is needed. In the next subsection, we
describe an alternative, more complex sample-adjustment rule that avoids such tuning.

5.2 Efficient scheme


In this subsection, we present the sample-adjustment scheme given in (Polak and Royset
2007), which modifies a methodology originally developed in (He and Polak 1990).
Instead of having a simple sample-adjustment rule as in Subsection 5.1 for Step 2 of
the conceptual algorithm, the scheme in (Polak and Royset 2007) consists of a pre-
calculation step that determines the “optimized’’ sample size for subsequent iterations.
In the pre-calculation step, the user selects a required accuracy of the final design (e.g.,
a feasible design with cost within 5% of the minimum cost) and solves an auxiliary
optimization problem that determines the sample size for each iteration (e.g., 100,
100, 100, 200, 200, 300, etc., sample points, for iterations 1, 2, 3, 4, 5, 6, etc.,
respectively). Hence, whenever the conceptual algorithm reaches Step 2, it simply
looks up the prescribed sample size from the output of the auxiliary optimization
problem.
The objective function of the auxiliary optimization problem, to be derived below,
is the total computational work needed to obtain a solution of required accuracy, and
the constraint is that the required cost reduction be achieved. Let a stage be a number
of iterations carried out by the conceptual algorithm for a constant sample size. The
decision variables in the auxiliary problem are (i) the number of stages, s, to be used,
(ii) the sample size Ni to be used in stage i, i = 1, 2, . . . , s, and (iii) the number of
iterations ni to be carried out in stage i. For example, 100, 100, 100, 200, 200, and 300
sample points, for iterations 1, 2, 3, 4, 5, and 6 respectively, correspond to three stages,
with stage 1 consisting of three iterations (n1 = 3) and sample size 100 (N1 = 100),
stage 2 consisting of two iterations (n2 = 2) and sample size 200 (N2 = 200), and stage
3 consisting of one iteration (n3 = 1) and sample size 300 (N3 = 300).
While the number of stages s has to be treated as an integer variable, the variables
Ni and ni can be treated as continuous variables and rounded at the end of their
optimization. In practice, it turns out that the optimal number of stages s∗ hardly
ever exceeds 10, with 3–7 being the most likely range for s∗. Incidentally, if one assigns
the number of stages to be s > s∗ , and then solves the reduced auxiliary optimization
problem for the Ni and ni , the optimal solution will consist of several Ni being equal,
so that the total number of distinct stages is s∗ .
The auxiliary problem depends on a sampling-error bound, on the initial distance
to the optimal value, and on the rate of convergence of the optimization algorithm
applied in Step 1 of the conceptual algorithm. All of these may have to be estimated.
As a result, it may be presumptuous to call the solution of the auxiliary optimization
problem an “optimal strategy,’’ and hence we will call it an “efficient strategy.’’ As we
will see from our numerical results, despite the use of estimated quantities, the efficient
strategy is considerably more effective than the obvious alternatives.

5.2.1 Auxiliary optimization problem
We begin by deriving the auxiliary optimization problem. First we penalize the con-
straint in P to convert it into an equivalent, unconstrained min-max problem. This
simplifies the derivation since it avoids distinguishing between feasible and infeasible
designs. For a given parameter π > 0, we define

c̃(x) = c(x) + π max{0, p(x) − q, f1 (x), f2 (x), . . . , fJ (x)} (31)


c̃N (x) = c(x) + π max{0, pN (x) − q, f1 (x), f2 (x), . . . , fJ (x)} (32)

and the unconstrained problem

P̃: min_x c̃(x) (33)

We refer to π as a penalty since it adds a positive number to the objective functions
c̃(x) and c̃N(x) for any infeasible design x. If P is calm (see, e.g., (Burke 1991; Clarke
1983)) and π is sufficiently large, then the design x is a local minimizer of P̃ if and only
if it is a local minimizer of P. Similarly, the unconstrained problem

P̃N: min_x c̃N(x) (34)

is equivalent to PN for sufficiently large π. An appropriate penalty π can be selected
using well-known techniques such as the one in Section 2.7.3 of (Polak 1997). The
implementation of such techniques is beyond the scope of this chapter and we assume
in the following that a sufficiently large penalty π > 0 has been determined so that
optimal solutions of P̃ and P̃N are feasible for P and PN , respectively.
As above, we assume that each sample point is independently generated and that
sample points are reused at later stages, i.e., for all stages i = 2, 3, . . . , s, the sample
at stage i consists of the Ni−1 sample points at stage i − 1 and of Ni − Ni−1 new,
independent sample points.
To construct an auxiliary optimization model for determining the number of stages,
the sample size at each stage, and the number of iterations to be performed at each stage,
we introduce the following assumptions. Suppose that the optimization algorithm in
Step 1 of the conceptual algorithm is linearly convergent with a rate of convergence
coefficient independent of the sample size in the sample average approximations. That
i
is, for any stage i and iteration j, the costs of the design at the next iteration, xj+1 , and

the current design, xji , relate to the cost of the optimal design xN i
of P̃Ni as follows:

∗ ∗
i
c̃Ni (xj+1 ) − c̃Ni (xN i
) ≤ θ(c̃Ni (xji ) − c̃Ni (xN i
)) (35)

where θ ∈ (0, 1) is the rate of convergence coefficient. Hence, every iteration of the opti-
mization algorithm reduces the remaining distance to the optimal value by a factor θ.

Many optimization algorithms, including the Pshenichnyi-Pironneau-Polak Min-Max
Algorithm (see Section 2.4.1 of (Polak 1997)), are linearly convergent.
Next, we assume that for any design x the sampling error is given by

|c̃N(x) − c̃(x)| ≤ ε(N) (36)

where ε(N) is a strictly decreasing positive function with ε(N) → 0 as N → ∞. We
return to the form of ε(N) below, but for now we only assume that such a function
exists.
To simplify the notation, we deviate from the numbering scheme of the conceptual
algorithm and let j denote the iteration number within the current stage (and not from the
beginning). Then, x^i_j is the design at iteration j of the i-th stage. Hence, we plan
to compute the designs x^1_0, x^1_1, . . . , x^1_{n1} on stage 1, x^2_0, x^2_1, . . . , x^2_{n2} on stage 2, . . . , and
designs x^s_0, x^s_1, . . . , x^s_{ns} on stage s. To make use of “warm’’ starts, we set x^i_0 = x^{i−1}_{n_{i−1}}, i.e.,
the last design of the current stage is taken as the initial design of the next stage.
the last design of the current stage is taken as the initial design of the next stage.

Let x∗ and xN be optimal designs for P̃ and P̃N , respectively. Then, in view of (36)
we have that

c̃(x∗ ) ≤ c̃(xN
∗ ∗
) ≤ c̃N (xN ) +
(N) (37)
∗ ∗ ∗
c̃N (xN ) ≤ c̃N (x ) ≤ c̃(x ) +
(N) (38)

We refer to the distance between the cost c̃(x) of some design x and the cost c̃(x∗ ) of
an optimal design x∗ for P̃ as the cost error of design x. Here, the term “error’’ refers
to the discrepancy between x and x∗ . For any stage i = 1, 2, . . . , s, we define the cost
error after the last iteration of the i-th stage by

ei = c̃(x^i_{ni}) − c̃(x∗) (39)

Also let e0 = c̃(x^1_0) − c̃(x∗). Using (36)–(38) and (35), we obtain that for all
i = 1, 2, . . . , s,

ei ≤ c̃_{Ni}(x^i_{ni}) − c̃_{Ni}(x∗_{Ni}) + 2ε(Ni) (40)
   ≤ θ^{ni} [c̃_{Ni}(x^i_0) − c̃_{Ni}(x∗_{Ni})] + 2ε(Ni) (41)
   ≤ θ^{ni} [c̃(x^{i−1}_{n_{i−1}}) − c̃(x∗)] + 4ε(Ni) (42)
   ≤ θ^{ni} e_{i−1} + 4ε(Ni) (43)

Hence,

es ≤ e0 θ^{k0(s)} + 4 Σ_{i=1}^s θ^{ki(s)} ε(Ni) (44)

where ki(s) = Σ_{l=i+1}^s nl if i < s and ki(s) = 0 if i = s. We observe that (44) gives an upper
bound on the cost error after completing s stages with ni iterations and Ni sample points
at stage i. As shown in (Polak and Royset 2007), the cost error is guaranteed to vanish
as the number of stages s increases to infinity. This shows that such gradual sample
size increase can lead to asymptotic convergence. This is a valuable result, but in this
subsection we aim to determine efficient sample-adjustment schemes, i.e., schemes that
minimize the computing time to reach a specific reduction in cost error from an initial
value.
To be able to construct efficient sample-adjustment schemes we need to quantify the
computational effort associated with one iteration of the optimization algorithm used
in Step 1 of the conceptual algorithm as a function of the sample size N. Suppose that
this computational effort is given by the positive function w(N) for any design x. We
are now ready to present the auxiliary optimization problem.
Given an initial cost error e0 > 0 and a required fractional reduction in cost error
Δ ∈ (0, 1), we seek to determine the number of stages s as well as sample sizes Ni and
numbers of iterations ni at each stage i, i = 1, 2, . . . , s, such that the computational
effort to reach a cost error of Δe0 is minimized. We note that the cost error is the
discrepancy between the cost of the current design and the cost of the optimal design
of P̃. In view of (44), this optimization problem takes the following form
D(e0, Δ):  min_{s, ni, Ni} Σ_{i=1}^s ni w(Ni)
           s.t. e0 θ^{k0(s)} + 4 Σ_{i=1}^s θ^{ki(s)} ε(Ni) ≤ Δ e0
                Ni+1 ≥ Ni, i = 1, 2, . . . , s − 1 (45)
                s, ni, Ni integer, i = 1, 2, . . . , s

The objective function in D(e0, Δ) represents the total computational effort needed to
carry out the planned iterations. The first constraint ensures that the cost error is
reduced at least to the required level Δe0 and the second set of constraints ensures that
the sample size is nondecreasing. The estimation of the parameters defining problem
D(e0, Δ) is discussed in the next section.

5.2.2 Implementation of auxiliary optimization problem


The auxiliary optimization problem D(e0, Δ) involves the work and sampling-error
functions w(N) and ε(N) as well as the rate of convergence parameter θ and the initial
cost error e0 = c̃(x^1_0) − c̃(x∗). All these quantities must be determined before D(e0, Δ)
can be solved. We deal with these issues one at a time.
In view of (17) and (18), the computing effort required to evaluate pN (x) and an
element of the subgradient grows linearly in N. Hence, the work associated with one
iteration of the optimization algorithm used in Step 1 of the conceptual algorithm is
proportional to N and we set the work function w(N) = N.
The (almost sure) sampling error ε(N) can be determined using the Law of the
Iterated Logarithm, see (Royset and Polak 2007). However, √((log log N)/N) is a pes-
simistic estimate of the sampling error “typically’’ experienced. Since our goal is to
determine efficient numbers of stages, sample sizes, and numbers of iterations, it appears
to be more reasonable to assume that the sampling error is proportional to 1/√N as
proposed by classical estimation theory: For a given design x, it follows under weak
assumptions from the Central Limit Theorem that pN(x) is approximately normally
distributed with mean p(x) and variance σ(x)²/N for large N, where σ(x)² =
Var[φ(x, W)]. Hence, for sufficiently large N,

P[|pN(x) − p(x)| ≤ 1.96σ(x)/√N] ≥ 0.95 (46)

However, we are primarily interested in the difference between c̃N(x) and c̃(x). Since
the max-function in (32) can only reduce the variance, it follows that

P[|c̃N(x) − c̃(x)| ≤ ε(N)] ≥ 0.95 (47)

when ε(N) = 1.96πσ(x)/√N. This error expression appears to be appropriate for our
auxiliary optimization problem, and we set

ε(N) = 1.96π max σ(x)/√N (48)

where the maximization is over all designs examined in a preliminary calculation
described below.
We determine σ(x), θ, and e0 in an estimation phase consisting of n0 iterations of
the optimization algorithm in Step 1 of the conceptual algorithm applied to P̃N0 , with
N0 being a small sample size. Let {x^0_j}_{j=0}^{n0} be the iterates computed in this estimation
phase. Each time p_{N0}(x) is computed, the corresponding variance σ(x)² is estimated by

σ(x)² = Σ_{j=1}^{N0} (φ(x, wj) − p_{N0}(x))²/(N0 − 1) (49)

We always retain the largest σ(x)-value computed and use that in the calculation of
ε(N), see (48).
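
In code, (49) is the usual unbiased sample variance of the already-computed values φ(x, wj); for instance (Python, illustrative):

import numpy as np

def variance_estimate(phi_values):
    # sigma(x)^2 of (49): unbiased sample variance of phi(x, w_j),
    # j = 1, ..., N0, with the divisor N0 - 1.
    return float(np.var(phi_values, ddof=1))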


The rate of convergence parameter θ is estimated by the solution of the following
least-squares problem, where the optimal value c̃_{N0}(x∗_{N0}) of P̃_{N0} is also estimated:

min_{θ̂, ĉ} Σ_{j=0}^{n0} [(ĉ + (c̃_{N0}(x^0_0) − ĉ)θ̂^j) − c̃_{N0}(x^0_j)]² (50)

This least-squares problem minimizes the squared error between the calculated cost at
each iteration, c̃_{N0}(x^0_j), and the nonlinear model ĉ + (c̃_{N0}(x^0_0) − ĉ)θ̂^j. The model states
that the cost of the design at iteration j is the optimal cost ĉ plus the initial
cost error c̃_{N0}(x^0_0) − ĉ reduced by a factor, namely the rate of convergence
coefficient raised to the power of the number of iterations. Using the results of the
least-squares calculation, we estimate θ by θ̂ and c̃_{N0}(x∗_{N0}) by ĉ. Finally, we (coarsely)
estimate the initial cost error e0 = c̃(x^1_0) − c̃(x∗) by ê0 = c̃_{N0}(x^0_0) − ĉ.
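
The fit (50) is an ordinary nonlinear least-squares problem in the two unknowns θ̂ and ĉ; a Python sketch (illustrative, with a heuristic starting point) is:

import numpy as np
from scipy.optimize import least_squares

def estimate_rate(costs):
    # costs[j] = c~_{N0}(x_j^0), j = 0, ..., n0, recorded in the
    # estimation phase. Fit c_hat + (costs[0] - c_hat) * theta^j as
    # in (50) and return (theta_hat, c_hat, e0_hat).
    costs = np.asarray(costs, dtype=float)
    j = np.arange(len(costs))
    resid = lambda p: (p[1] + (costs[0] - p[1]) * p[0] ** j) - costs
    sol = least_squares(resid, x0=[0.8, costs[-1]],
                        bounds=([1e-6, -np.inf], [1.0 - 1e-6, np.inf]))
    theta_hat, c_hat = sol.x
    return theta_hat, c_hat, costs[0] - c_hat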

We have now established procedures for estimating all the unknown quantities in
D(e0, Δ). D(e0, Δ) is a nonlinear integer program that appears difficult to solve directly,
but this fact can be circumvented by the following observations. First, the restriction
of D(e0, Δ) obtained by fixing s to a number in the range 5–10 tends to be insignificant
since more than 5–10 stages is rarely advantageous and fewer than 5–10 stages is
still effectively allowed in the model by setting Ni = Ni+1 for some i. Second, Ni,
and to some extent also ni, tend to be large integers. Hence, a continuous relaxation
with rounding of the optimal solutions to the nearest integers is justified. In view of
these observations, D(e0, Δ) can be solved approximately using a standard nonlinear
programming algorithm.
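
One possible realization of this continuous relaxation for a fixed number of stages s is sketched below in Python; eps is the sampling-error function (48) as a callable accepting arrays (e.g. lambda N: 1.96 * pi_pen * sigma_hat / np.sqrt(N)), and the monotonicity of the Ni is enforced only at the rounding step in this sketch.

import numpy as np
from scipy.optimize import minimize

def solve_auxiliary(e0, Delta, theta, eps, s=5):
    # Continuous relaxation of (45)/(51) with w(N) = N: minimize the
    # total work sum_i n_i * N_i subject to the cost-error bound (44)
    # being at most Delta * e0. Variables v = (n_1..n_s, N_1..N_s).
    def err_slack(v):
        n, N = v[:s], v[s:]
        # k_i(s) = sum_{l=i+1}^s n_l for i < s, and k_s(s) = 0
        k = np.append(np.cumsum(n[::-1])[::-1][1:], 0.0)
        bound = e0 * theta ** n.sum() + 4.0 * np.sum(theta ** k * eps(N))
        return Delta * e0 - bound          # feasible when >= 0
    v0 = np.concatenate((np.full(s, 5.0), np.geomspace(50.0, 5000.0, s)))
    res = minimize(lambda v: v[:s] @ v[s:], v0, method='SLSQP',
                   constraints=[{'type': 'ineq', 'fun': err_slack}],
                   bounds=[(1.0, None)] * (2 * s))
    n = np.rint(res.x[:s]).astype(int)
    N = np.rint(np.maximum.accumulate(res.x[s:])).astype(int)
    return n, N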

5.2.3 Overall algorithm with efficient sample-adjustment scheme


We now summarize our approach and discuss how the auxiliary optimization problem
can be integrated in an algorithm for solving P. As indicated above, the process of
solving the auxiliary optimization problem must be preceded by an estimation phase
where parameters are determined. This leads to the following overall algorithm for
solving P approximately.
Algorithm with Efficient Sample-Adjustment Scheme.

Parameters. Number of iterations in estimation phase n0, sample size in estimation
phase N0, maximum number of stages s, and constraint penalty π > 0.
Data. Required fractional reduction in cost error Δ ∈ (0, 1), initial design x^0_0, and
independent sample points w1, w2, . . . .
Step 0. Compute the variance estimate σ(x^0_0)² using (49).
Step 1. For j = 0 to n0 − 1, perform:
Sub-step 1.1. Compute the next design x^0_{j+1} by starting from x^0_j and carrying out
one iteration of some optimization algorithm applied to P_{N0}.
Sub-step 1.2. Compute the variance estimate σ(x^0_{j+1})² using (49).
Step 2. Set σ̂ equal to the largest variance estimate encountered in Steps 0 and 1.
Step 3. Determine θ̂ and ĉ as the optimal solution of (50).
Step 4. Set ε̂(N) = 1.96πσ̂/√N, and determine ni and Ni by solving

min_{ni, Ni} Σ_{i=1}^s ni Ni
s.t. ê0 θ̂^{k0(s)} + 4 Σ_{i=1}^s θ̂^{ki(s)} ε̂(Ni) ≤ Δ ê0
     Ni+1 ≥ Ni, i = 1, 2, . . . , s − 1 (51)
     ni, Ni ≥ 1, i = 1, 2, . . . , s

Step 5. For i = 1 to s, perform:


Sub-step 5.1. Set the first design of the current stage equal to the last design of the
previous stage, i.e., x^i_0 = x^{i−1}_{n_{i−1}}.
Sub-step 5.2. For j = 0 to ni − 1, compute the next design x^i_{j+1} by starting from x^i_j
and carrying out one iteration of some optimization algorithm applied to P_{Ni}.

We note that the optimization algorithm used in Sub-Step 1.1 should be identical to
the one used in Sub-Step 5.2 since the former sub-step is used to estimate the behavior
of the latter. However, any nonlinear programming algorithm can be used in Steps
3 and 4. The proposed algorithm consists of three phases: estimation of parameters
(Steps 0–3), solution of auxiliary optimization problem (Step 4), and main iterations
(Step 5). This represents the simplest implementation of our idea. Alternatively, we
can adopt a moving-horizon approach, where Step 5 is completed only for i = 1,
followed by Step 4, then by Step 5 for i = 1 again, followed by Step 4, etc. Hence,
the sample-adjustment plan is re-optimized after each stage, which may lead to an
improved plan. With re-optimization, it is also possible to re-compute σ̂, using all
previous iterates, as well as θ̂ and ĉ. Other implementations can also be imagined. In
the following numerical study, we adopt the simple implementation described above.

6 Numerical examples
We illustrate our sample-adjustment approaches using three numerical examples. The
examples are implemented in Matlab 7.0 (Mathworks, Inc. 2004) on a 2.8 GHz PC
running Microsoft Windows 2000.

6.1 Feedback rule and efficient scheme


This subsection presents a comparative study of the two sample-adjustment approaches
given in Section 5. The numerical results of this subsection were reported in (Polak
and Royset 2007).

Example 1
The first example arises in the optimal design of a short structural column with a
rectangular cross section of dimensions x1 × x2 . Hence, x = (x1 , x2 ) is the design vector.
The column is subjected to bi-axial bending moments V1 and V2 , which, together with
the yield strength V3 of the material, are considered to be independent, lognormally
distributed random variables. The column is also subject to a deterministic axial force
af . This gives rise to a failure probability

p(x) = P[{G(x, V) ≤ 0}] (52)

where the random vector V = (V1 , V2 , V3 ) and G(x, V) is a limit-state function


defined by

G(x, V) = 1 − 4V1/(x1 x2² V3) − 4V2/(x1² x2 V3) − (af/(x1 x2 V3))² (53)

As discussed in Section 2, this limit-state function can be transformed into one given
in terms of a standard normal vector U. Let g1(x, U) be this transformed limit-state
function. Since the resulting safe domain is not bounded, we introduce an auxiliary
limit-state function g2(x, U) = ρ − ‖U‖, where ρ = 6.5 in this example. (This introduces
negligible error.) Then, we redefine the failure probability of the structure as

p(x) = P[{g1 (x, U) ≤ 0} ∪ {g2 (x, U) ≤ 0}] (54)

which is in the form considered in this chapter.
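
For illustration, a Python sketch of the transformed limit-state function g1 could read as follows; mu and sigma are placeholder means and standard deviations for (V1, V2, V3), since the actual distribution parameters of the example are given in the cited references.

import numpy as np

def make_g1(x1, x2, af, mu, sigma):
    # Independent lognormals: V_i = exp(lam_i + zeta_i * U_i), where
    # zeta_i^2 = ln(1 + (sigma_i/mu_i)^2), lam_i = ln(mu_i) - zeta_i^2/2.
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    zeta = np.sqrt(np.log(1.0 + (sigma / mu) ** 2))
    lam = np.log(mu) - 0.5 * zeta ** 2
    def g1(u):
        V = np.exp(lam + zeta * np.asarray(u, float))
        # limit-state function (53) evaluated at V(u)
        return (1.0 - 4.0 * V[0] / (x1 * x2 ** 2 * V[2])
                    - 4.0 * V[1] / (x1 ** 2 * x2 * V[2])
                    - (af / (x1 * x2 * V[2])) ** 2)
    return g1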


We seek a design of the column which satisfies the constraints defined by f1 (x) = −x1 ,
f2(x) = −x2, f3(x) = x1/x2 − 2, f4(x) = 0.5 − x1/x2, f5(x) = x1 x2 − 0.175, and minimizes
p(x). This is problem (1) with c0(x) = 0, c(x) = 1, and J = 5. As discussed above, pN(x)
does not satisfy the smoothness assumption. Hence, care must be taken when selecting
an optimization algorithm for Step 1 in the conceptual algorithm or in Sub-Steps 1.1
and 5.2 in the algorithm with efficient sample-adjustment scheme. For simplicity in
these numerical tests, we ignore the fact that the smoothness assumption may be vio-
lated and use the Pshenichnyi-Pironneau-Polak Min-Max Algorithm (see Section 2.4.1
of (Polak 1997)) as the optimization algorithm for solving PN . No detrimental behav-
ior of the Pshenichnyi-Pironneau-Polak Min-Max Algorithm was observed because of
this simplification. (Note that since p(x) is smooth, pN (x) is, for practical purposes,
effectively smooth for large N.)
The parameters for the algorithm with efficient sample-adjustment scheme were
selected to be n0 = 25, N0 = 50, s = 5, and π = 2. We note that π = 2 suffices to ensure
feasibility. Finally, the required fractional reduction in cost error was Δ = 0.01 and the
initial point was chosen to be x^0_0 = (√0.175, √0.175).
The auxiliary optimization problem yielded a sample-adjustment strategy of three
stages with 25, 8, and 8 iterations, with sample sizes 50, 251, and 1621, respectively,
which was executed in 458 seconds. Note that this computing time includes the esti-
mation phase (30 seconds) and the solution time of the auxiliary optimization problem
(3 seconds).
For comparison, we also solve the problem using the feedback rule of Subsection 5.1
to adjust the sample size. We experiment with the thresholds
−η (√((log log N)/N))^τ (55)

and

−η/√N (56)

for determining if the progress is “small’’ in Step 1 of the conceptual algorithm. We


note that (55) is the same as in (23). This threshold formula guarantees convergence as
proven in (Royset and Polak 2007). The threshold in (56) leads to a heuristic algorithm,
but offers the advantage that the threshold tends to zero faster for increasing N as
compared to (55). In the numerical tests, we set τ = 0.9999.
As mentioned above, it is difficult to select an effective value of η, so we experiment
with a range of values. Furthermore, we must determine how much the sample size
should increase when prompted by the sample-adjustment rule. In this example, we
selected five stages with sample sizes equally spaced between the minimum and max-
imum sample sizes given by the auxiliary optimization problem, i.e., 50, 443, 836,
1228, and 1621. We used the same random seed in both algorithms. We ran the algo-
rithm with the feedback rule until c̃1621 ( · ) was equal to the cost achieved in the last
iteration of the algorithm with the efficient scheme. We did not augment the sample
size beyond 1621, but continued computing iterates at that stage until the target cost-
value was achieved. This is a somewhat favorable stopping criterion for the algorithm
with the feedback rule because this algorithm might augment, prematurely, the sam-
ple size beyond 1621 resulting in long computing times. The computing times for the
algorithm with the feedback rule are summarized in Table 12.1 for various values of
the parameter η and for the two threshold formulae (55) and (56). In Table 12.1, the
row with η = ∞ gives the computing time for a fixed sample size equal to the largest
sample size 1621 for all iterations.

Table 12.1 Computing times [seconds] for the algorithm with feedback rule for sample
adjustment as applied to Example 1. The algorithm with efficient sample-adjustment
scheme computes the same design in 458 seconds.

η          Threshold (55)    Threshold (56)

∞ 980 980
10−1 1044 1036
10−2 1084 654
10−3 678 675
10−4 675 677
10−5 682 676
10−6 476 477
10−7 574 554
10−8 603 601
10−9 898 901

As seen from Table 12.1, a fixed sample size can result in poor computing times com-
pared to an adaptive scheme using a feedback rule. However, in the adaptive schemes
there is a trade-off between solving the approximating problems accurately at an early
stage (i.e., using small η), potentially wasting time, and solving the early approxima-
tions too coarsely (i.e., using large η), leading to many iterations at stages with high
computational cost. In the efficient sample-adjustment scheme of Sub-Section 5.2, the
trade-off is balanced by solving the auxiliary optimization problem. In the feedback
rule, the user needs to consider the trade-off manually by selecting a value for the
parameter η. If the right balance is found, i.e., a good η, then the feedback rule can be
efficient. In fact, the feedback rule with η = 10−6 is only marginally slower than the
efficient scheme. Of course, it is difficult to select η a priori. To illustrate this diffi-
culty, we repeated the example for the higher accuracy  = 0.005. Then, the efficient
scheme increased the sample size up to 6473 and solved the problem in 1461 seconds.
From Table 12.1 it appears that η = 10−6 is a good choice. We selected this value and
re-solved the problem using the feedback rule with five stages equally spaced in the
range [50, 6473] as above. The computing time turned out to be 4729 seconds. Hence,
η = 10−6 was not efficient in this case.

Example 2
The second example considers the design of a simply supported reinforced concrete
T-girder for minimum cost according to the specifications in (American Association
of State Highway and Transportation Officials 1992), using the nine design variables
x = (As , b, hf , bw , hw , Av , S1 , S2 , S3 ), where As is the area of the tension steel rein-
forcement, b is the width of the flange, hf is the thickness of the flange, bw is the
width of the web, hw is the height of the web, Av is the area of the shear reinforce-
ment (twice the cross-section area of a stirrup), and S1 , S2 and S3 are the spacings of

Table 12.2 Computing times [seconds] for the algorithm with feedback rule for sample
adjustment as applied to Example 2. The algorithm with the efficient sample-adjustment
scheme computes the same design in 1001 seconds.

η          Threshold (55)    Threshold (56)

∞ >36000 >36000
10−2 >12600 7416
10−3 2004 1990
10−4 2256 2342
10−5 6721 2327
10−6 1209 1608
10−7 11108 >7200

shear reinforcements in the high, medium, and low shear force zones of the girder,
respectively.
We model uncertainty using eight independent random variables collected in a vector
V. We assumed that the girder can fail in four different modes corresponding to bending
stress in mid-span and shear stress in the high, medium, and low shear force zones.
Structural failure occurs if any of the four failure modes occur. This gives rise to
four nonlinear, smooth limit-state functions Gk (x, V), k = 1, 2, 3, 4, whose exact form
is rather complicated and is given in (Royset et al. 2006). This results in a failure
probability p(x) = P[∪_{k=1}^4 {Gk(x, V) ≤ 0}].
As above, these limit-state functions can be transformed into ones given in terms of
a standard normal vector U. Let gk (x, U) be these transformed limit-state functions.
Since the resulting safe domain is not bounded, we introduce an auxiliary limit-state
function g5(x, U) = ρ − ‖U‖, where ρ = 10 in this example. (This introduces negligible
error.) Then, we redefine the failure probability of the structure as

p(x) = P[∪_{k=1}^5 {gk(x, U) ≤ 0}] (57)

which is in the form considered in this chapter. We also imposed 24 deterministic,
nonlinear constraints as described in (Royset et al. 2006).
Algorithm parameters were selected to be n0 = 50, N0 = 50, s = 5, and π = 1.
Finally, the required fractional reduction in cost error Δ = 0.0001 and the initial point
x^0_0 = (0.01, 0.5, 0.5, 0.5, 0.5, 0.0005, 0.5, 0.5, 0.5) were chosen.
The algorithm with the efficient sample-adjustment scheme gave three stages with
65, 20, and 20 iterations, with sample sizes 50, 373, and 2545, respectively. The total
computing time was 1001 seconds.
Again we compared this result with that obtained using the algorithm with the
feedback rule. Here, we use five stages of equally spaced sample sizes between 50
and 2545. Using the same stopping criterion as for the first example, we obtained the
computing times in Table 12.2. We observe that the computing times using the feedback
rule can be significantly longer than those achieved using the efficient scheme. We also

Figure 12.1 Truss for Example 3 (simply supported; height 8.66 m, two 10 m spans; members numbered 1–7).

note that an approach with a fixed sample size of 2545 for all iterations takes more
than 10 hours (see the first row in Table 12.2).

6.2 Alternative objective functions


We conclude this chapter by demonstrating how our solution methodology can also
solve other problems than P (and (1) and (3)). Typically, engineers need to account for
not only quantitative factors such as cost and reliability, but also esthetic, social, and
political requirements. Most esthetic, social, and political requirements are qualitative
in nature and cannot easily be incorporated into numerical models. Even quantitative
factors may not fully represent reality due to imprecise models and lack of data. In this
subsection, we show how multiple optimization models can be formulated and solved
to account for this situation.
We adopt an approach originally proposed in (Brill Jr. 1979) for public sector plan-
ning: determine a small set of design alternatives that satisfy the stated requirements,
are “good’’ with respect to the stated objective, and are also dispersed in the design
space. Instead of searching for one optimal design or an efficient frontier, as in single-
and multi-objective optimization, respectively, this approach seeks several
design alternatives (e.g., 3–12) that the engineer and the decision maker can further
assess using qualitative objectives. As pointed out in (Brill Jr. 1979), the best design
from the perspective of the decision maker may not be located on the efficient frontier,
as assumed by a multi-objective optimization formulation, due to the fact that not all
objectives are included in the multi-objective formulation. Furthermore, by seeking a
dispersed set of design alternatives, the engineer and decision maker are presented with
a wide range of alternatives which may stimulate new considerations and ideas about
designs, objectives, and constraints. See also (White 1996; Drezner and Erkut 1995)
for similar approaches. We illustrate this approach with an example.

Example 3
Consider the simply supported truss in Figure 12.1. The truss is subject to a random
load L in its mid-span. L is lognormally distributed with mean 1000 kN and standard
deviation 400 kN. Let Sk be the yield stress of member k. Members 1 and 2 have
lognormally distributed yield stresses with mean 100 N/mm2 and standard deviation
20 N/mm2 . The other members have lognormally distributed yield stresses with mean
200 N/mm2 and standard deviation 40 N/mm2 . The yield stresses of members 1 and 2
are correlated with correlation coefficients 0.8. However, their correlation coefficients
with the other yield stresses are 0.5. Similarly, the yield stresses of members 3–7 are
correlated with correlation coefficients 0.8, but their correlation coefficients with the
yield stresses of members 1 and 2 are 0.5. The load L is independent of the yield
stresses. Let V = (S1 , S2 , . . . , S7 , L).
The design vector x = (x1 , x2 , . . . , x7 ), where xk is the cross-section area (in
1000 mm2 ) of member k. The truss fails if any of the members exceed their yield stress.
(We ignore the possibility of buckling.) This gives rise to seven limit state functions:

G_k(x, V) = S_k x_k − ζ_k L,   k = 1, 2, …, 7   (58)

where ζ_k is a load factor given by the geometry and loading of the truss. From Figure 12.1,
we determine that ζ_k = 1/(2√3) for k = 1, 2, and ζ_k = 1/√3 for k = 3, 4, …, 7. Using a
Nataf distribution (see (Ditlevsen and Madsen 1996), Section 7.2), we transform these
limit-state functions into limit-state functions given in terms of a standard normal
random vector U. Let gk (x, U) denote these transformed limit-state functions. Since
the resulting safe domain is not bounded, we introduce an auxiliary limit state function
g_8(x, U) = ρ − ‖U‖, where ρ = 20 in this example. (This introduces negligible error.)
Then, we redefine the failure probability of the structure as

 
p(x) = P[∪_{k=1}^{8} {g_k(x, U) ≤ 0}]   (59)

which is in the form considered in this chapter. We impose the constraint that the
failure probability should be no larger than 0.001350, i.e., p(x) ≤ q = 0.001350. We
also impose the 14 deterministic constraints 0.5 ≤ xk ≤ 2, k = 1, 2, . . . , 7, that limit the
allowable area of each member to be between 500 mm2 and 2000 mm2 .
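To make the failure-probability constraint concrete, the sketch below estimates p(x) of Equation (59) by plain Monte Carlo for a given design x. It is a minimal sketch, illustrative only: the Nataf model is approximated by applying the target correlation coefficients directly to the underlying Gaussian variables, the auxiliary limit-state function g_8 is omitted, and the seed and sample size are arbitrary choices.

import numpy as np

# Monte Carlo sketch of p(x) for the truss of Example 3.  The target
# correlation coefficients are applied directly to the underlying Gaussian
# variables, which only approximates the Nataf model used in the text.
rng = np.random.default_rng(1)

mean = np.array([100.0, 100.0, 200.0, 200.0, 200.0, 200.0, 200.0, 1000.0])  # S1..S7, L
std = np.array([20.0, 20.0, 40.0, 40.0, 40.0, 40.0, 40.0, 400.0])

R = np.eye(8)                        # correlation: 0.8 within a group, 0.5 across;
for i in range(7):                   # the load L is independent of the yield stresses
    for j in range(7):
        if i != j:
            R[i, j] = 0.8 if (i < 2) == (j < 2) else 0.5

zeta2 = np.log1p((std / mean) ** 2)             # lognormal parameters from mean/std
mu_ln = np.log(mean) - 0.5 * zeta2
chol = np.linalg.cholesky(R)

def failure_probability(x, n=200_000):
    """Estimate P[any member yields] for cross-section areas x (in 1000 mm^2)."""
    u = rng.standard_normal((n, 8)) @ chol.T
    v = np.exp(mu_ln + np.sqrt(zeta2) * u)      # correlated lognormal realizations
    s, load = v[:, :7], v[:, 7]
    zeta = np.array([1 / (2 * np.sqrt(3))] * 2 + [1 / np.sqrt(3)] * 5)
    g = s * np.asarray(x) - np.outer(load, zeta)    # G_k = S_k x_k - zeta_k L
    return float(np.mean(np.any(g <= 0, axis=1)))

print(failure_probability([1.138, 1.156, 1.118, 1.107, 1.119, 1.113, 1.108]))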
We initially seek a design that minimizes the cost of the truss, i.e., we
aim to solve P. Since all members are equally long, the cost c(x) = Σ_{k=1}^{7} x_k. We use the
conceptual algorithm implemented with the feedback rule (23) for sample-adjustment,
with parameters η = 0.002 and τ = 0.9999, and optimization algorithm (24) for Step 1,
with parameters α = 0.5, β = 0.8, γ = 2, and δ = 1. The sample size is initially 375 and
is increased by a factor of 4 every time it is prompted by the sample-adjustment rule.
However, the sample size is not increased beyond 24000. We start the calculations
with initial design x0 = (1.000, 1.000, . . . , 1.000) and stop when a feasible solution for
P24000 is found. The resulting design is given in the first row of Table 12.3.
With the motivation that a decision maker may want to be presented with a small set
of good designs, from which he or she may select, we formulate an optimization model
that generates substantially different designs. Specifically, suppose that we have a set of
existing design alternatives xd , d ∈ D. Let ĉ be the smallest cost over all existing design
alternatives, i.e., ĉ = min_{d∈D} c(x_d). Then, the following optimization model provides a
design that is no more costly than aĉ, with a > 1, and that is as “different’’ compared to the existing designs x_d as possible:

max_{x0, x} {x0 | p(x) ≤ q, x ∈ X, c(x) ≤ aĉ, ‖x − x_d‖ ≥ x0, d ∈ D}   (60)

Here, x0 is an auxiliary design variable that we seek to maximize. The last set of
constraints in (60) ensures that the distances (measured in the Euclidean norm)
between the new design x and the existing designs x_d are all no smaller than x0.
Hence, (60) maximizes the smallest distance between a new design and the existing
designs, while ensuring that the new design is feasible and no more costly than aĉ.

Table 12.3 Alternative designs for Example 3. The first row gives the optimal design, but the subsequent rows are at most 10% more costly.

x1      x2      x3      x4      x5      x6      x7      Dispersion
1.138   1.156   1.118   1.107   1.119   1.113   1.108   –
1.169   2.000   1.089   1.096   1.096   1.103   1.091   0.8451
1.982   1.164   1.100   1.100   1.102   1.104   1.092   0.8449
1.124   1.146   1.110   1.946   1.109   1.111   1.100   0.8393
1.121   1.145   1.113   1.108   1.109   1.949   1.100   0.8367
1.123   1.147   1.107   1.109   1.947   1.111   1.101   0.8286
1.122   1.146   1.944   1.109   1.109   1.110   1.104   0.8269
1.123   1.146   1.106   1.108   1.110   1.110   1.941   0.8331
1.087   1.595   1.536   1.104   1.107   1.119   1.098   0.6085
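As an illustration of how (60) can be attacked with standard nonlinear programming, the sketch below maximizes x0 over z = (x0, x) with scipy's SLSQP. This is a sketch under stated assumptions: the failure-probability constraint is a user-supplied function p, and the smooth surrogate used in the demonstration is a hypothetical stand-in, not the sample average approximation of this chapter.

import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Sketch of the dispersion model (60): maximize the auxiliary variable x0.
def most_different_design(p, q, c, a, c_hat, designs, x_start):
    n = len(x_start)
    z0 = np.concatenate(([0.0], x_start))                                  # z = (x0, x)
    cons = [NonlinearConstraint(lambda z: p(z[1:]), -np.inf, q),           # p(x) <= q
            NonlinearConstraint(lambda z: c(z[1:]), -np.inf, a * c_hat)]   # c(x) <= a c_hat
    for xd in designs:                                                     # ||x - xd|| >= x0
        cons.append(NonlinearConstraint(
            lambda z, xd=xd: np.linalg.norm(z[1:] - xd) - z[0], 0.0, np.inf))
    bounds = [(0.0, None)] + [(0.5, 2.0)] * n                              # x in X
    res = minimize(lambda z: -z[0], z0, bounds=bounds,
                   constraints=cons, method="SLSQP")
    return res.x[1:], res.x[0]

p_demo = lambda x: float(np.exp(-8.0 * np.mean(x)))    # hypothetical smooth surrogate
c_demo = lambda x: float(np.sum(x))                    # cost of the truss
x_opt = np.full(7, 1.12)                               # roughly the row-1 design
x_new, dist = most_different_design(p_demo, 1.35e-3, c_demo, 1.1,
                                    c_demo(x_opt), [x_opt], x_opt + 0.1)
print(x_new, dist)

Each subsequent call would append the newly found design to the list passed as `designs`, mirroring the growing set D of the text.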
We note that (60) is in the form P (after redefining the cost and constraint functions)
and, hence, it can be solved by the conceptual algorithm described in Section 4. Using
the same algorithm parameters as in the beginning of this example, we obtain the
designs reported in Table 12.3. In this table, the first row reports the optimal design.
The second row is obtained by solving (60) with a = 1.1 and D consisting only of the
design in the first row. We observe that the design in the second row is substantially
different than the one in the first row, even though it is no more than 10% more costly.
The last column of Table 12.3 shows that the second design lies 0.8451 “away’’ from
the first design measured in the Euclidean distance.
The remaining rows in Table 12.3 are computed in a similar manner, but with D now
consisting of all the designs in the rows above. We note that all the designs cost no more
than 10% more than the minimum cost. It is seen from Table 12.3 that the minimum
cost design (row 1) distributes the material evenly between the different members.
However, good designs can also be achieved by selecting one of the members to have
cross-section area close to 2 (rows 2–8). Moreover, good designs can be found by setting
two members to approximately 1.5 (last row). Naturally, it becomes harder and harder
to find a “different’’ design as the set of existing designs D grows, i.e., the last column
of Table 12.3 tends to decrease for later designs. Hence, after some solutions of (60)
with steadily increasing D, the designs we generate will not be substantially different

compared to the ones already computed. This is an interactive process, which should
be ended whenever a useful set of designs has been generated and further calculations
will provide only limited insight.

7 Conclusions
We have presented an approach for solving reliability-based optimal structural design
problems using Monte Carlo sampling and nonlinear programming. The approach
replaces failure probabilities in the problems by Monte Carlo estimates with increasing
sample sizes, and solves the resulting approximate problems with increasing precision.
We have also described rules for adjusting the sample sizes, which ensure theoretical
convergence and computational efficiency. The numerical examples show empirically
that the sample-adjustment rules can reduce computing times substantially compared
with an implementation using a fixed sample size.
The approach in this chapter is directed towards reliability-based structural opti-
mization problems where the design variables are not restricted to be integers and
the relevant limit-state functions are differentiable with continuous gradients. Further-
more, the approach requires many limit-state function evaluations, which (currently)
prevent its application to problems involving, e.g., computationally intensive finite
element analysis. We note, however, that the sample-adjustment rules described in this
chapter dramatically reduce the number of limit-state function evaluations compared to
an approach with a fixed sample size. Consequently, the results of this chapter open the
possibility for solving, to high accuracy, many previously intractable reliability-based
structural optimization problems.

References

Akgul, F. & Frangopol, D.M. 2003. Probabilistic analysis of bridge networks based on
system reliability and Monte Carlo simulation. In A. Der Kiureghian, S. Madanat &
J.M. Pestana (eds), Applications of Statistics and Probability in Civil Engineering, Rotterdam,
Netherlands, pp. 1633–1637. Millpress.
American Association of State Highway and Transportation Officials (1992). Standard specifi-
cations for highway bridges. Washington, D.C.: American Association of State Highway and
Transportation Officials. 15th edition.
Beck, J.L., Chan, E., Irfanoglu, A. & Papadimitriou, C. 1999. Multi-criteria optimal structural
design under uncertainty. Earthquake Engineering & Structural Dynamics 28(7):741–761.
Bjerager, P. 1988. Probability integration by directional simulation. Journal of Engineering
Mechanics 114(8):1288–1302.
Brill Jr., E.D. 1979. The use of optimization models in public-sector planning. Management
Science 25(5):413–422.
Burke, J.V. 1991. Calmness and exact penalization. SIAM J. Control and Optimization
29(2):493–497.
Clarke, F. 1983. Optimization and nonsmooth analysis. New York, New York: Wiley.
Deak, I. 1980. Three digit accurate multiple normal probabilities. Numerische Mathematik
35:369–380.
Ditlevsen, O. & Madsen, H.O. 1996. Structural reliability methods. New York, New York:
Wiley.

Ditlevsen, O., Oleson, R. & Mohr, G. 1987. Solution of a class of load combination problems
by directional simulation. Structural Safety 4:95–109.
Drezner, Z. & Erkut, E. 1995. Solving the continuous p-dispersion problem using nonlinear
programming. The Journal of the Operational Research Society 46(4):516–520.
Eldred, M.S., Giunta, A.A., Wojtkiewicz, S.F. & Trucano, T.G. 2002. Formulations for
surrogate-based optimization under uncertainty. In Proceedings of the 9th AIAA/ISSMO Sym-
posium on Multidisciplinary Analysis and Optimization, Paper AIAA-2002-5585, Atlanta,
Georgia.
Enevoldsen, I. & Sørensen, J.D. 1994. Reliability-based optimization in structural engineering.
Structural Safety 15(3):169–196.
Gasser, M. & Schuëller, G.I. 1998. Some basic principles in reliability-based optimization (RBO)
of structures and mechanical components. In Stochastic programming methods and technical
applications, K. Marti & P. Kall (eds), Lecture Notes in Economics and Mathematical Systems
458, Springer-Verlag, Berlin, Germany.
He, L. & Polak, E. 1990. Effective diagonalization strategies for the solution of a class of optimal
design problems. IEEE Transactions on Automatic Control 35(3):258–267.
Holicky, M. & Markova, J. 2003. Reliability analysis of impacts due to road vehicles. In A. Der
Kiureghian, S. Madanat & J.M. Pestana (eds), Applications of Statistics and Probability in
Civil Engineering, Rotterdam, Netherlands, pp. 1645–1650. Millpress.
Igusa, T. & Wan, Z. 2003. Response surface methods for optimization under uncertainty. In
Proceedings of the 9th International Conference on Application of Statistics and Probability,
A. Der Kiureghian, S. Madanat & J. Pestana (eds), San Francisco, California.
Itoh, Y. & Liu, C. 1999. Multiobjective optimization of bridge deck maintenance. In Case Studies
in Optimal Design and Maintenance Planning of Civil Infrastructure Systems, D.M. Frangopol
(ed.), ASCE, Reston, Virginia.
Kuschel, N. & Rackwitz, R. 2000. Optimal design under time-variant reliability constraints.
Structural Safety 22(2):113–127.
Liu, P.-L. & Kuo, C.-Y. 2003. Safety evaluation of the upper structure of bridge based on concrete
nondestructive tests. In A. Der Kiureghian, S. Madanat & J.M. Pestana (eds), Applications
of Statistics and Probability in Civil Engineering, Rotterdam, Netherlands, pp. 1683–1688.
Millpress.
Madsen, H.O. & Friis Hansen, P. 1992. A comparison of some algorithms for reliability-based
structural optimization and sensitivity analysis. In Reliability and Optimization of Structural
Systems, Proceedings IFIP WG 7.5, R. Rackwitz & P. Thoft-Christensen (eds), Springer-
Verlag, Berlin, Germany.
Marti, K. 1996. Differentiation formulas for probability functions: the transformation method.
Mathematical Programming 75:201–220.
Marti, K. 2005. Stochastic Optimization Methods. Berlin: Springer.
Mathworks, Inc. 2004. Matlab reference manual, Version 7.0. Natick, Massachusetts:
Mathworks, Inc.
Nakamura, H., Miyamoto, A. & Kawamura, K. 2000. Optimization of bridge maintenance
strategies using GA and IA techniques. In Reliability and Optimization of Structural Systems,
Proceedings IFIP WG 7.5, A.S. Nowak & M.M. Szerszen (eds), Ann Arbor, Michigan.
Polak, E. 1997. Optimization. Algorithms and consistent approximations. New York, New
York: Springer-Verlag.
Polak, E. & Royset, J.O. 2007. Efficient sample sizes in stochastic nonlinear programming.
J. Computational and Applied Mathematics. To appear.
Royset, J.O., Der Kiureghian, A. & Polak, E. 2006. Optimal design with probabilistic objective
and constraints. J. Engineering Mechanics 132(1):107–118.
Royset, J.O. & Polak, E. 2004a. Implementable algorithm for stochastic programs using sample
average approximations. J. Optimization Theory and Applications 122(1):157–184.

Royset, J.O. & Polak, E. 2004b. Reliability-based optimal design using sample average
approximations. J. Probabilistic Engineering Mechanics 19(4):331–343.
Royset, J.O. & Polak, E. 2007. Extensions of stochastic optimization results from problems
with simple to problems with complex failure probability functions. J. Optimization Theory
and Applications 133(1):1–18.
Rubinstein, R. & Shapiro, A. 1993. Discrete Event Systems: Sensitivity Analysis and Stochastic
Optimization by the Score Function Method. New York, NY: Wiley.
Ruszczynski, A. & Shapiro, A. 2003. Stochastic Programming. New York, New York: Elsevier.
Torczon, V. & Trosset, M.W. 1998. Using approximations to accelerate engineering design opti-
mization. In Proceedings of the 7th AIAA/USAF/NASA/ISSMO Symp. on Multidisciplinary
Analysis and Optimization, AIAA Paper 98-4800, St. Louis, Missouri.
Tretiakov, G. 2002. Stochastic quasi-gradient algorithms for maximization of the probabil-
ity function. A new formula for the gradient of the probability function. In Stochastic
Optimization Techniques, New York, pp. 117–142. Springer.
Uryasev, S. 1995. Derivatives of probability functions and some applications. Annals of
Operations Research 56:287–311.
White, D.J. 1996. A heuristic approach to a weighted maxmin dispersion problem. IMA Journal
of Mathematics Applied in Business and Industry 7:219–231.
Chapter 13

Cost-benefit optimization for maintained structures

Rüdiger Rackwitz & Andreas E. Joanni
Technical University of Munich, Munich, Germany

ABSTRACT: In this chapter the theoretical and practical issues in setting up effective cost-benefit optimization formulations for existing aging structures are presented. These formulations include deterioration and failure models as well as inspection and repair models. An elaborate optimization methodology based on renewal theory, which uses systematic reconstruction or repair schemes after suitable inspection and adopts a life-cycle cost perspective, is formulated and implemented for maintained concrete structures.

1 Introduction
Many civil engineering structures are exposed not only to loads but also to the technical
or natural environment. They are aging because of wear, corrosion, fatigue and other
phenomena. At a certain age they need to be inspected and, possibly, repaired or
replaced. Many aging phenomena are rather complex and far from fully understood in
their physical and chemical context. For concrete structures the most important aging
phenomena in temperate climates are corrosion due to carbonation and/or chloride
attack, for steel structures it is rusting and fatigue. Moreover, the concepts for cost-
benefit optimization of such structures are not very well developed, although it is
known that the cost for maintenance can be considerable and, in the long term, can
even exceed the cost of the initial investment. It should be clear that only a rigorous life-
cycle consideration can fully account for all cost involved, and that design rules and
maintenance strongly interact. While the techniques for design optimization appear
sufficiently developed, no clear concepts exist for optimizing maintenance.
In this contribution suitable failure models for physically based deterioration phe-
nomena are first reviewed. Their computation is essentially based on FORM/SORM
(see, for example, (Rackwitz 2001)) which can be shown to be accurate enough
for the purpose under discussion. Several schemes for computing first passage time
distributions are discussed. Failure time models for series systems are also given.
This is followed by some remarks about classical renewal theory, Bayesian updating,
inspection and repair models.
Then, the well-known renewal theory (Rosenblueth and Mendoza 1971; Rackwitz
2000) for cost-benefit optimization of structures is outlined. It is extended and general-
ized to optimal and integrated inspection and maintenance strategies. When setting up
suitable maintenance strategies we follow closely the concepts developed in classical
reliability theory as described, for example, in (Barlow and Proschan 1965; Barlow
and Proschan 1975) which we find still very valid and which, to our knowledge,
have not been applied to structures so far (see, however, (Van Noortwijk 2001)). In
particular, we study minimal, age-dependent and block repairs and maintenance by
inspection and repair. The models are generalized for maintenance optimization of
series systems. Some special optimization techniques are briefly reviewed. An example
illustrates aspects of the theory. Clearly, the considerations are no longer valid if
reasons other than economic ones exist to repair and/or retrofit an existing structure.

2 Preliminaries

2.1 Failure models without deterioration


As a matter of fact, there are very few exact, time-variant failure models available which
are amenable to practicable computation. In some cases consideration of (stationary
or non-stationary) time-variant actions and time-variant structural state function is
necessary. Let G(X(t), t) be the structural state function such that G(X(t), t) ≤ 0 denotes
failure states and X(t) a random process. Examples of such processes are the Gaussian
and related processes and the rectangular wave renewal processes. But X(t) can also
include simple random variables. Then, the failure time distribution can be computed
numerically by the outcrossing approach. A well-known upper bound is
F(t) ≤ ∫_0^t ν(τ)dτ ≤ 1   (1)

with the outcrossing rate (more specifically, the downcrossing rate)

ν(t) = lim_{Δ→0} (1/Δ) P({G(X(t), t) > 0} ∩ {G(X(t + Δ), t + Δ) ≤ 0})   (2)

This upper bound is only tight for small probabilities. Frequently, an asymptotic result
is used (Cramér and Leadbetter 1967)
F(t) ≈ 1 − exp[−∫_0^t ν(τ)dτ]   (3)

with
f(t) ≈ ν(t) exp[−∫_0^t ν(τ)dτ]   (4)

Equation (3) implies a non-homogeneous Poisson process of failure events with


intensity ν(t). For stationary failure processes Equation (3) reduces to a homo-
geneous Poisson process and simplifies somewhat. In general, computations are
done by first transforming the original process and/or random variables into the
so-called standard space of uncorrelated standard normal variates (Hohenbichler and
Rackwitz 1981) which enables to use FORM/SORM (see, for example, (Rackwitz
2001)) provided that the dependence structure of the two events {G(X(t), t) > 0} and
{G(X(t + Δ), t + Δ) ≤ 0} can be determined in terms of correlation coefficients. Some
computational details are given in (Streicher and Rackwitz 2004). However, the rele-
computational details are given in (Streicher and Rackwitz 2004). However, the rele-
vant conditions must be fulfilled, i.e. the outcrossing events must become independent
and rare asymptotically. For example, the independence property is lost if X(t) contains
not only (mixing) random processes but also simple random variables. Therefore, in
many cases this approach yields only crude approximations. An alternative approach
will be discussed in the next subsection.
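As a small numerical illustration of Equations (1) and (3), the sketch below integrates a given outcrossing rate ν(t) with the trapezoidal rule; the linearly increasing rate is a made-up placeholder, not a model from this chapter.

import numpy as np

# Numerical sketch of Equations (1) and (3): the integrated outcrossing
# rate gives both the upper bound and the asymptotic (Poisson) result for
# the first-passage distribution.  The increasing rate is a made-up input.
def first_passage_from_outcrossing(nu, t_grid):
    rates = nu(t_grid)
    integral = np.concatenate(([0.0],
        np.cumsum(0.5 * (rates[1:] + rates[:-1]) * np.diff(t_grid))))  # trapezoid rule
    return np.minimum(integral, 1.0), 1.0 - np.exp(-integral)

t = np.linspace(0.0, 50.0, 501)
bound, approx = first_passage_from_outcrossing(lambda t: 1e-3 * (1.0 + 0.05 * t), t)
print(bound[-1], approx[-1])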

2.2 Failure models for deterioration


Obviously, the outcrossing approach can also be applied if there is deterioration. It
appears as if it performs better if the outcrossing rate is increasing with time. For
aging structures a closed-form failure time (first passage time) distribution is hardly
available except for some special, usually oversimplifying cases. The log-normal,
inverse Gaussian or Weibull distribution function with a suitable deterioration mech-
anism for the mean (or other parameters) has been used. They, at most, can serve as
approximations. Realistic failure models must be derived from physical multi-variable
deterioration models (cumulative wear, corrosion, fatigue, etc.). For (monotonically
and continuously) deteriorating structures a widely used failure model is as follows.
Let G(X, t) = g(U, t) be the (differentiable) structural state function of a structural
component with G(X, t) = g(U, t) ≤ 0 the failure domain. X is a vector of random vari-
ables and time t is a parameter. Transition from X to U denotes the usual probability
transformation from the original into the standard space of variables (Hohenbichler
and Rackwitz 1981). Within FORM/SORM the probability of the time to first
failure is

F(t) = P(T ≤ t) = P(g(U, t) ≤ 0) ≈ Φ(−β(t))C(t)   (5)

for t ≥ 0 and the failure density is

f(t) = ∂F(t)/∂t ≈ −ϕ(β(t))(∂β(t)/∂t)C(t) + Φ(−β(t))(∂C(t)/∂t)
     = −ϕ(β(t))[−(∂/∂t)g(u*, t)/‖∇_u g(u*, t)‖]C(t) + Φ(−β(t))(∂C(t)/∂t)   (6)

T is the time to first entrance into a failure state. Φ(·) and ϕ(·) denote the univariate
standard normal distribution function and corresponding density, respectively. β(t) is
the (geometrical) reliability index. C(t) is a correction factor evaluated according to
SORM and/or importance sampling which can be neglected in many cases. In Equa-
tion (6) it frequently can be assumed that C(t) does not vary with t. Clearly, this
model does not take account of the randomness in the deterioration process caused
by a (large) number of small disturbances which, however, is small to negligible for
cumulative deterioration phenomena, at least for larger t.
A numerical computation scheme for first-passage time distributions under less
restrictive conditions than the outcrossing approach can also be given. It is based
on the following lower bound formula
F(t) = P(T ≤ t) ≥ P[∪_{i=1}^{n} {G(X(t_i), t_i) ≤ 0}]   (7)

with t = tn and ti < t denoting a narrow not necessarily regular time spacing of the
interval [0, t]. As demonstrated by examples in (Au and Beck 2001), the lower
bound

F(t) = P(T ≤ t) = 1 − P(G(X(θ), θ) > 0 for all θ in [0, t])
≥ P[∪_{i=0}^{n} {g(U(θ_i), θ_i) ≤ 0}] ≈ P[∪_{i=0}^{n} {α(θ_i)^T U(θ_i) + β(θ_i) ≤ 0}]
= 1 − P[∩_{i=0}^{n} {Z_i ≤ β(θ_i)}] = 1 − Φ_{n+1}(β; R)   (8)

to the first-passage time distribution turns out to be surprisingly accurate for all values
of F(t), if the time-spacing τ = θi − θi−1 is chosen sufficiently close and where θi = iτ
and t = θn . Here again, a probability distribution transformation from the original
space into the standard space is performed and the boundaries of each failure domain
are linearized. The last line represents a first order approximation (Hohenbichler and
Rackwitz 1983) where n (·; ·) is the n-dimensional standard normal integral with
β = {β(θi )} the vector of reliability indices of the various components in the union
and the dependence structure of the events is determined in terms of correlation
coefficients R = {ρ_ij = α(θ_i)^T α(θ_j)}. Suitable computation schemes for the multinormal
integral even for high dimensions and arbitrary probability levels have been proposed,
for example in (Hohenbichler and Rackwitz 1983; Gollwitzer and Rackwitz 1988;
Pandey 1998; Ambartzumian et al. 1998; Genz 1992). It would appear that slight
improvements can be achieved if the probabilities for the individual events are deter-
mined by SORM (or any other suitable improvement) and an equivalent value of
β_e(θ) is computed from β_e(θ) = −Φ^{−1}(Φ(−β(θ))C_SORM). This computation scheme is
approximate but quite general if the correlation structure of the state functions in
the different points in time can be established. In (Au and Beck 2001) a Monte
Carlo method is used to compute Equation (7) which can be recommended if high
accuracy requirements are imposed – at the expense of in part considerable numerical
effort.
The special case of equi-dependent (equi-correlated) components is worth mention-
ing. In this case we simply have (see, for example (Dunnett and Sobel 1955))

F_e(t) = 1 − ∫_{−∞}^{∞} ϕ(τ) ∏_{i=1}^{n} Φ((β_i − √ρ τ)/√(1 − ρ)) dτ   (9)

For equi-reliable components (no variation of resistance quantities with time) this
result simplifies further. The corresponding values of the density function needed
when taking Laplace transforms as required later are most easily calculated by f (θi ) =
(F(θi ) − F(θi−1 ))/τ or a higher order differentiation rule. For equi-reliable components
Equation (9) has a decreasing risk function.
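Equation (9) reduces the multinormal probability to a one-dimensional integral, which the following sketch evaluates with Gauss–Hermite quadrature; the reliability indices, the correlation value and the number of time points are arbitrary illustrative inputs.

import numpy as np
from scipy.stats import norm

# Sketch of Equation (9) for equi-correlated components: a one-dimensional
# quadrature over the common standard normal factor.
def F_equicorrelated(betas, rho, n_quad=200):
    tau, w = np.polynomial.hermite_e.hermegauss(n_quad)   # weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)                          # renormalize to the N(0,1) density
    prod = np.prod(norm.cdf((np.asarray(betas)[:, None] - np.sqrt(rho) * tau)
                            / np.sqrt(1.0 - rho)), axis=0)
    return 1.0 - float(np.sum(w * prod))

betas = np.linspace(3.5, 3.0, 10)    # beta(theta_i) at ten points in time
print(F_equicorrelated(betas, rho=0.6))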

The results obtained so far carry over to systems without any further conceptual
difficulty. Only the numerical computations become more involved. Any system can
be reduced to a minimal cut set system so that its failure probability is represented as
P_f(t) = P(T ≤ t) = F(t) = P[∪_{i=1}^{s} ∩_{j=1}^{m_i} {T_ij ≤ t}]   (10)

Assume that the failure times of the parallel systems can be determined which, in
general, can involve quite some numerical effort. The remaining series system then is
computed as
P_f(t) = P(T ≤ t) = F(t) = P[∪_{i=1}^{s} {T_i ≤ t}] = 1 − P[∩_{i=1}^{s} {T_i > t}] ≤ Σ_{i=1}^{s} P(T_i ≤ t)   (11)

where usually the failure and survival events are dependent. The upper bound in Equa-
tion (11) is less useful for larger, low reliability systems. Equation (8) can be combined
with Equation (11), especially if the parallel systems can be represented sufficiently
well by equivalent, linearly bounded failure domains of the components (Gollwitzer
and Rackwitz 1983). Some specific results for the computation of series systems are
given in (Streicher and Rackwitz 2004). The failure densities are obtained by differen-
tiation. Note that, by definition, a series system fails if any of its components fails. In
passing it is also noted that the formulation in Equation (11) also includes failure due
to extreme disturbances. And it should be clear that the series system model must be
applied if several hazards are present.
Deterioration of structural resistance is frequently preceded by an initiation phase. In
this phase failure is dominated by normal (extreme-value) failure. Structural resistance
is virtually unaffected. Only in the succeeding phase resistances degrade. Examples
are crack initiation and crack propagation or chloride penetration into concrete up
to the reinforcement and subsequent reduction of the reinforcement cross-section by
corrosion and, similarly, for initial carbonation and subsequent corrosion. In many
cases the initiation phase is much longer than the actual degradation phase. Let Ti
denote the random time of initiation, Te the random time to normal (first-passage
extreme-value) failure and Td the random time from the end of the initiation phase to
deterioration failure with degraded resistance. Then,

F(t) = P(T ≤ t) = P[({T_i > t} ∩ {T_e ≤ t}) ∪ ({T_i ≤ t} ∩ {T_e < T_i}) ∪ ({T_i ≤ t} ∩ {T_e > T_i} ∩ {T_i + T_d ≤ t})]
= P[{T_i > t} ∩ {T_e ≤ t}] + P[{T_i ≤ t} ∩ {T_e < T_i}] + P[{T_e > T_i} ∩ {T_i + T_d ≤ t}]   (12)

Note, extreme-value failure during the initiation phase and failure in the deterioration
phase are mutually exclusive. Assume that Ti is independent of the other two variables.

If the variables Te and Td can also be assumed independent, the following formula can
be used
F(t) = F_e(t)F̄_i(t) + ∫_0^t f_i(τ)[F_e(τ) + (1 − F_e(τ))F_d(t − τ)]dτ   (13)
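Equation (13) can be evaluated by discretizing the convolution integral, as in the sketch below; the Weibull and exponential models chosen for T_i, T_e and T_d are hypothetical placeholders, not calibrated deterioration models.

import numpy as np
from scipy.stats import weibull_min, expon
from scipy.integrate import trapezoid

# Numerical sketch of Equation (13): combine the initiation time Ti, the
# extreme-value failure time Te and the deterioration failure time Td.
def F_total(t_grid, Fi, fi, Fe, Fd):
    """F(t) = Fe(t)(1 - Fi(t)) + int_0^t fi(u)[Fe(u) + (1 - Fe(u)) Fd(t - u)] du."""
    F = np.empty_like(t_grid)
    for j, t in enumerate(t_grid):
        u = t_grid[: j + 1]
        integrand = fi(u) * (Fe(u) + (1.0 - Fe(u)) * Fd(t - u))
        F[j] = Fe(t) * (1.0 - Fi(t)) + trapezoid(integrand, u)
    return F

t = np.linspace(0.0, 80.0, 401)
Ti = weibull_min(2.0, scale=40.0)     # initiation phase, e.g. chloride ingress
Te = expon(scale=2000.0)              # rare extreme-value failures
Td = weibull_min(2.5, scale=15.0)     # deterioration failure after initiation
print(F_total(t, Ti.cdf, Ti.pdf, Te.cdf, Td.cdf)[-1])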

2.3 The renewal model
A sufficiently general setting is to assume that the structure fails at a random time in
the future. After failure or serious deterioration it is systematically renewed by recon-
struction or retrofit/repair. Reconstruction, repair or retrofit reestablish all (stochastic)
structural properties. The times between failure (renewal) events have identical distri-
bution functions F(t), t ≥ 0 with probability densities f (t) and are independent. The
sequence of failures and renewals then forms an ordinary renewal process. Renewal
theory allows for a useful refinement which will be found to be important for the
problem under discussion, namely the distribution of the time to the first event can
have distribution function F_1(t) ≠ F(t), t ≥ 0 (see (Cox 1962) for details). The process
of renewals is then denoted by modified or delayed renewal process. The independence
assumption between failure times needs to be verified carefully. In particular, one has
to assume that loads and resistances in the system are independent for consecutive
renewal periods and there is no change in the design rules after the first and all subse-
quent failures (renewals). Even if designs change failure time distributions must remain
the same. But the model allows for a different design rule for the initial design which
can be one of the reasons for F1 (t) = F(t). Throughout the chapter the point process
of renewals is an orderly point process, that is multiple occurrences of renewals in a
small time interval are excluded (Cox and Isham 1980).
The renewal function for a modified renewal process which will be used extensively
later on is (Cox 1962)

E[N(t)] = M_1(t) = Σ_{n=1}^{∞} n P(N(t) = n) = Σ_{n=1}^{∞} n(F_n(t) − F_{n+1}(t)) = Σ_{n=1}^{∞} F_n(t)
= F_1(t) + Σ_{n=1}^{∞} ∫_0^t F_n(t − u)dF(u) = F_1(t) + ∫_0^t M_1(t − u)dF(u)   (14)

with N(t) the random number of renewals and Fn (t) = P(N(t) ≥ n) = P(Tn ≤ t) the dis-
tribution function of the time to the n-th renewal. The renewal intensity (or, if applied
to failure processes, the unconditional failure rate) is obtained upon differentiation

m_1(t) = lim_{dt→0} P(one renewal in [t, t + dt])/dt = dM_1(t)/dt = Σ_{n=1}^{∞} f_n(t)   (15)

For ordinary processes the index ‘1’ is omitted. The last expression in Equation (14)
is called ‘renewal equation’. As pointed out in (Cox 1962), m(t) (or m1 (t)) has a limit

m(t → ∞) = lim_{t→∞} m(t) = 1/E[T_f]   (16)

for f(t) → 0 if t → ∞. In approaching the limit m(t) can be strictly increasing,
strictly decreasing or oscillate in a damped manner around 1/E[T_f]. For ordinary
renewal processes m(t) then tends to be large around t = E[T_f], 2E[T_f], … and small around
t = 0, (3/2)E[T_f], (5/2)E[T_f], …. For a Poisson process with parameter λ it is constant, i.e.
m(t) = λ. If there are oscillations they die out more rapidly for larger dispersions of
the failure time distribution. In many examples oscillations have been found when
the risk function is increasing. Also, in many cases the failure rate is increasing for
small t. Only for some special models, especially those with very large coefficient of
variation of failure times, m(t) is decreasing. The transient behavior of m(t) will later
be of interest. Unfortunately, Equation (14) has closed-form solutions for only very
few special mathematical failure models (see (Streicher et al. 2006) for a list of rele-
vant references) and otherwise can be computed directly only with extreme numerical
effort. In general, Equation (14) or Equation (15) have to be determined numerically.
A particularly suitable numerical method is proposed in (Ayhan et al. 1999). It makes
use of the upper and lower sum in Riemann-Stieltjes integration for the discrete version
of Equation (14). Because M(t) is non-decreasing, we have the following bounds for
M(t) = F(t) + ∫_0^t M(t − s)dF(s)


M_LB(kτ) = F(kτ) + Σ_{i=1}^{k} M_LB((k − i)τ)ΔF(iτ)
≤ M(kτ) ≤ F(kτ) + Σ_{i=1}^{k} M_UB((k − i + 1)τ)ΔF(iτ) = M_UB(kτ)   (17)

for equal partitions of length τ in [0, t] with ΔF(iτ) = F(iτ) − F((i − 1)τ) and nτ = t. The
resulting system of linear equations is solved easily. If the first failure time distribution
is different from the others one obtains by one additional convolution

M_1(t) = F_1(t) + ∫_0^t F_1(t − s)dM(s)   (18)

which, in turn, is bounded by


M_{1,LB} = F_1(t) + Σ_{i=1}^{k} inf_{(i−1)τ≤x≤iτ} F_1(t − x)(M_LB(iτ) − M_LB((i − 1)τ)) ≤ M_1(kτ)
≤ M_{1,UB} = F_1(t) + Σ_{i=1}^{k} sup_{(i−1)τ≤x≤iτ} F_1(t − x)(M_UB(iτ) − M_UB((i − 1)τ))   (19)

m_1(t) is obtained by numerical differentiation. The computation methods in Equations (17) and (19) are useful whenever interest lies in the (unconditional) failure rate or in risk acceptance questions. Other approximation methods have also been proposed.
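The bounding scheme (17) translates directly into a short recursion, sketched below for an illustrative Weibull lifetime; note that the i = 1 term of the upper sum contains M_UB(kτ) itself and is therefore moved to the left-hand side.

import numpy as np
from scipy.stats import weibull_min

# Sketch of the discretization bounds (17) on the renewal function M(t).
def renewal_bounds(F, tau, n):
    grid = tau * np.arange(n + 1)
    dF = np.diff(F(grid))                          # Delta F(i tau), i = 1..n
    M_lb, M_ub = np.zeros(n + 1), np.zeros(n + 1)
    for k in range(1, n + 1):
        i = np.arange(1, k + 1)
        M_lb[k] = F(grid[k]) + np.sum(M_lb[k - i] * dF[i - 1])
        rest = np.sum(M_ub[k - i[1:] + 1] * dF[i[1:] - 1])
        M_ub[k] = (F(grid[k]) + rest) / (1.0 - dF[0])   # solve for M_ub[k]
    return M_lb, M_ub

T = weibull_min(2.0, scale=10.0)                   # arbitrary aging lifetime
lb, ub = renewal_bounds(T.cdf, tau=0.25, n=120)
print(lb[-1], ub[-1])                              # brackets M(30)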

For aging components with increasing risk function the following bounds on the
renewal function are given in (Barlow and Proschan 1965, p. 54)

t/E[T_f] − 1 ≤ t/∫_0^t (1 − F(τ))dτ − 1 ≤ M(t) ≤ tF(t)/∫_0^t (1 − F(τ))dτ ≤ t/E[T_f]   (20)

The sharper upper bound in Equation (20) turns out to be remarkably close to the
exact result for small t. Under suitable conditions one also has

m(t) = dM(t)/dt ≤ (d/dt)[tF(t)/∫_0^t (1 − F(τ))dτ]   (21)

Again, the upper bound for Equation (21) is found to be very close to the exact result
up to approximately E[T]. It approaches the limit 1/E[T] for large t. The lower bound
obtainable from Equation (20) by differentiation is generally less useful. Equation (21)
can be used with advantage in Sections 4.4 and 4.5.

2.4 Updating the probabilistic model
There are many types of updating of a probabilistic model depending on the type of
information collected during the experimental and numerical investigations. In gen-
eral, one can distinguish between variable updating and event updating. In a Bayesian
context information is collected about a variable by taking (independent) samples and
testing them. This leads to an improved estimate of the parameters of the distribution
of a variable. Let xn be values of a sample of size n and θ a parameter (vector), then
an improved posterior distribution is

f(θ | x_n) = L(x_n | θ)f(θ) / ∫_θ L(x_n | θ)f(θ)dθ   (22)


where L(xn | θ) is the likelihood function and f (θ) the prior density. The Bayesian or
predictive density function is


f(x | x_n) = ∫_θ f(x | θ)f(θ | x_n)dθ   (23)

For many important distributions analytical results are available (Aitchison and
Dunsmore 1975).
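For a normal variable with known scatter and a normal prior on its mean, Equations (22) and (23) take the familiar closed form sketched below (cf. Aitchison and Dunsmore 1975); all numbers are illustrative.

import numpy as np
from scipy.stats import norm

# Conjugate-updating sketch of (22)-(23): normal mean, known sigma.
sigma = 20.0                          # known observation scatter, e.g. a yield stress
prior_mean, prior_std = 400.0, 50.0   # prior on the distribution parameter theta
x_n = np.array([371.0, 412.0, 395.0, 388.0])   # test results (sample of size n)

n = len(x_n)
post_var = 1.0 / (1.0 / prior_std**2 + n / sigma**2)
post_mean = post_var * (prior_mean / prior_std**2 + x_n.sum() / sigma**2)

# Predictive density (23): normal with inflated variance sigma^2 + post_var.
predictive = norm(post_mean, np.sqrt(sigma**2 + post_var))
print(post_mean, np.sqrt(post_var), predictive.ppf(0.05))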
Updating by events is generally more difficult. We show this for the model from
Equation (5) and previous informative events B = ∩_i B_i. For example, such events
could be the knowledge about the maximum load in the past, some measured damage
indicator or just the knowledge that the structure has survived up to the present time.
Then, we have two types of observations, namely equalities and inequalities, which
require different treatment. For B = ∩_i {b_i(X, t_0) ≤ 0} it is

F(t | B) = P({g(X, t) ≤ 0} ∩ ∩_i {b_i(X, t_0) ≤ 0}) / P(∩_i {b_i(X, t_0) ≤ 0})   (24)

It is assumed that the observation events B can always be written in the form given. In
most cases the observation and decision point is t0 = 0. Within FORM one can write
for one observation event
F(t | B) = Φ_2(−β_g(t), −β_b(t_0); ρ) / Φ(−β_b(t_0))   (25)

where Φ_2(x, y; ρ) is the two-dimensional normal integral and ρ = α_g^T α_b with α_g, α_b the
two normalized gradients of the limit state functions. This scheme applies analogously
if more than one event has to be considered. For B = {b(X, t0 ) = 0} we have


F(t | B) = [∂/∂β_b ∫_{−∞}^{β_b} P(Z_g ≤ β_g | Z_b(t_0) = z)ϕ(z)dz] / ϕ(β_b(t_0)) = Φ((−β_g(t) + ρ(t, t_0)β_b(t_0)) / √(1 − ρ(t, t_0)^2))   (26)

3 Cost-benefit optimization

3.1 General
It is generally accepted that the ultimate target to be achieved in structural design
including proper maintenance is to maximize the net benefit derived from the structure
over its lifetime, subject to constraints related to safety and serviceability. For technical
facilities the following objective has been proposed by (Rosenblueth and Mendoza
1971) based on earlier proposals in economics for cost benefit analysis:

Z(p) = B(p) − C(p) − D(p) (27)

A facility is financially optimal if Equation (27) is maximized. It is assumed that all


quantities in Equation (27) can be measured in monetary units. p is the vector of all
safety relevant parameters. B(p) is the (expected) benefit derived from the existence
of the facility, C(p) is the cost of design and construction and D(p) is the (expected)
cost in case of failure. Later we will also include all expenses for maintenance in D(p).
Statistical decision theory dictates that expected values are to be taken. In the following
it is assumed that C(p) and D(p) are differentiable in each component of p.
The facility has to be optimized during design and construction at the decision point
which is taken as t = 0. Now it is a well-established principle of cost-benefit analysis
that future costs and benefits must be discounted, using a compound interest formula.
A continuous discounting function is assumed for analytical convenience which is
accurate enough for all practical purposes.

δ(t) = exp [−γt] (28)

γ is a time-independent, time-averaged interest rate. In most cost-benefit analyses a tax- and inflation-free discount rate should be taken. If a discrete discount rate γ′ is given, one converts with γ = ln(1 + γ′). The principles of choosing appropriate discount rates
are thoroughly discussed in (Rackwitz et al. 2005).

Cost and benefits may differ for the different parties involved having different eco-
nomic objectives, e.g. the owner, the builder, the user and society. Also, the discount
rate may vary among the different parties in their cost-benefit analysis. A facility makes
sense only if Z(p) is positive within certain parameter ranges for all parties involved.

3.2 Derivations


A complete cost-benefit analysis must include not only the direct and indirect cost for
possible failure and for maintenance of the structure to be built, but also the cost for all
future realizations if the concepts of sustainability are applied (Rackwitz et al. 2005).
But this is just the situation for the application of renewal theory. It is assumed that
structures will be systematically reconstructed after failure and/or maintained. This
rebuilding strategy is in agreement with the principles of life cycle engineering and also
fulfills the demand for sustainability (Rackwitz et al. 2005). Clearly, it rests on the
assumption that future preferences are the same as the present preferences.
For regular renewal processes some objective functions based on the renewal model
are already derived in (Rosenblueth and Mendoza 1971; Rackwitz 2000; Streicher and
Rackwitz 2004) and elsewhere. For existing structures the time to first failure is gener-
ally different from the other failure times due to additional experimental and numerical
investigations and subsequent updating of the structural state and/or due to repair or
retrofit of the existing structure. But there can also be other reasons for assuming
f_1(t, p) ≠ f(t, p). Therefore, we derive our model for cost-benefit optimization in full
generality. The objective function is given by Equation (27). The expected damage
cost D(p) are derived as follows. The discrete cost associated with failure including the
reconstruction or repair cost are denoted as CV,1 at the first renewal and CV = CV,n
at subsequent renewals. Let θi = ti − ti−1 be the times between renewals with density
f(t, p) whereas θ_1 = t_1 has density f_1(t, p). The time to the n-th renewal is T_n = Σ_{i=1}^{n} θ_i.
Systematic reconstruction is assumed. The discounted expected damage cost are then
D(p) = E[Σ_{n=1}^{∞} C_{V,n} exp(−γ Σ_{k=1}^{n} θ_k)]
= E[C_{V,1} exp(−γθ_1)] + Σ_{n=2}^{∞} E[C_{V,n} exp(−γ Σ_{k=1}^{n} θ_k)]
= E[C_{V,1} exp(−γθ_1)] + Σ_{n=2}^{∞} E[C_{V,n} exp(−γθ_1) ∏_{k=2}^{n−1} exp(−γθ_k) exp(−γθ_n)]
= E[C_{V,1} exp(−γθ_1)] + Σ_{n=2}^{∞} E[exp(−γθ_1)] E[exp(−γθ)]^{n−2} E[C_{V,n} exp(−γθ_n)]
= E[C_{V,1} exp(−γθ_1)] + E[exp(−γθ_1)] E[C_V exp(−γθ)] / (1 − E[exp(−γθ)])
= C_{V,1} E[exp(−γθ_1)] / (1 − E[exp(−γθ)])
  + (−C_{V,1} E[exp(−γθ_1)]E[exp(−γθ)] + E[exp(−γθ_1)] C_V E[exp(−γθ)]) / (1 − E[exp(−γθ)])   (29)

where we have made use of the relation s = Σ_{n=k}^{∞} a q^{n−k} = a/(1 − q) for k < ∞.
E[exp[−γθ_1]] = ∫_0^∞ exp[−γt]f_1(t, p)dt = f_1*(γ, p) and E[exp[−γθ]] = ∫_0^∞ exp[−γt]f(t, p)dt = f*(γ, p) are also denoted as the Laplace transforms of f_1(t, p) and f(t, p), respectively. If f(t, p) is a probability density it is f*(0, p) = 1 and 0 < f*(γ, p) ≤ 1 for all γ ≥ 0.
Equation (27) can be rewritten in case of systematic reconstruction after failure with
CV,1 = (C1 (p) + L) as well as CV = (C(p) + L) as

Z(p) = B(p) − C(p) − (C_1(p) + L)f_1*(γ, p)/(1 − f*(γ, p))
     + [(C_1(p) + L)f*(γ, p)f_1*(γ, p) − (C(p) + L)f_1*(γ, p)f*(γ, p)]/(1 − f*(γ, p))   (30)

for the modified renewal process. L is the monetary loss in case of failure including
direct failure cost, loss of business and, possibly, the cost to reduce the risk to human
life and health (or, better, the compensation cost). If C_1(p) = C(p) the two terms
in the numerator of the fourth term cancel. This is usually the case for existing and
systematically renewed structures and, therefore

Z(p) = B(p) − C_ini(p) − (C(p) + L) f_1*(γ, p)/(1 − f*(γ, p))   (31)

It has to be mentioned that the design parameters p can be different after the first
renewal compared to the initial design. Also, the cost for the initial design Cini (p) can
be different from the reconstruction cost C(p). The term

m_1*(γ, p) = f_1*(γ, p)/(1 − f*(γ, p))   (32)

is also denoted as the Laplace transform of the renewal intensity. If f_1(t, p) = f(t, p),
f_1*(γ, p) in Equation (31) must be replaced by f*(γ, p).
The benefit B(p) is also discounted down to the decision point. For a benefit
rate b(t) unaffected by possible renewals and negligibly short times of reconstruction
(retrofitting) one finds
B = ∫_0^∞ b(t) exp[−γt]dt   (33)

Clearly, the integral must converge imposing some restriction on the form of b(t). If
the benefit rate b = b(t) is constant one can integrate to obtain
B = ∫_0^∞ b exp[−γt]dt = b/γ   (34)

The upper integration limit is extended to infinity because the full sequence of life cycle
benefits is considered.
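Since the failure densities are typically known only pointwise, the Laplace transforms in (31) and (32) must be taken numerically. The sketch below does this with trapezoidal integration for illustrative Weibull failure densities, using the constant-benefit result B = b/γ of Equation (34); all cost figures are placeholders.

import numpy as np
from scipy.stats import weibull_min
from scipy.integrate import trapezoid

# Sketch of the objective (31) with numerically taken Laplace transforms.
def laplace(f, gamma, t_max=500.0, n=20_000):
    t = np.linspace(0.0, t_max, n)
    return trapezoid(np.exp(-gamma * t) * f(t), t)

def Z(b, gamma, C_ini, C, L, f1, f=None):
    f = f1 if f is None else f                  # ordinary process if f1 = f
    return b / gamma - C_ini - (C + L) * laplace(f1, gamma) / (1.0 - laplace(f, gamma))

f1 = weibull_min(3.0, scale=70.0).pdf           # updated first failure time (illustrative)
f = weibull_min(3.0, scale=60.0).pdf            # subsequent failure times (illustrative)
print(Z(b=0.05, gamma=0.03, C_ini=1.0, C=1.0, L=10.0, f1=f1, f=f))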
A model which represents realistically the observation that with increasing age of a
component its suitability for use diminishes according to b(t) has been established in
(Hasofer and Rackwitz 2000). Decreasing benefit was associated with obsolescence in
this reference. But b(t) can have any form. At each renewal the benefit rate starts again
at b(0) for systematic reconstruction. The total benefit is already given in (Streicher
2004) and is repeated here in full generality.
B(p) = E[Σ_{i=1}^{∞} exp(−γ Σ_{k=1}^{i−1} θ_k) ∫_0^{θ_i} exp[−γτ]b(τ)dτ]
= E[∫_0^{θ_1} exp[−γτ]b(τ)dτ] + Σ_{i=2}^{∞} E[exp(−γθ_1) ∏_{k=2}^{i−1} exp(−γθ_k) ∫_0^{θ_i} exp[−γτ]b(τ)dτ]
= E[∫_0^{θ_1} exp[−γτ]b(τ)dτ] + E[exp(−γθ_1)] Σ_{i=2}^{∞} E[exp(−γθ)]^{i−2} E[∫_0^{θ} exp[−γτ]b(τ)dτ]
= E[∫_0^{θ_1} exp[−γτ]b(τ)dτ] + E[exp(−γθ_1)] E[∫_0^{θ} exp[−γτ]b(τ)dτ] / (1 − E[exp(−γθ)])   (35)

Equation (35) can be simplified for the case of systematic reconstruction after failure to

B(p) = ∫_0^∞ (∫_0^t exp[−γτ]b(τ)dτ) f_1(t, p)dt + [f_1*(γ, p)/(1 − f*(γ, p))] ∫_0^∞ (∫_0^t exp[−γτ]b(τ)dτ) f(t, p)dt
= ∫_0^∞ B_D(t)f_1(t, p)dt + [f_1*(γ, p)/(1 − f*(γ, p))] ∫_0^∞ B_D(t)f(t, p)dt   (36)

with

B_D(t) = ∫_0^t exp[−γτ]b(τ)dτ   (37)

For f_1(t, p) = f(t, p) Equation (36) simplifies to

B(p) = [1/(1 − f*(γ, p))] ∫_0^∞ B_D(t)f(t, p)dt   (38)

For completeness, the objective function is also given for the case where the
component is given up after failure or a finite service time ts
Z(p) = ∫_0^{t_s} B_D(t)f_1(t, p)dt − C(p) − L ∫_0^{t_s} exp[−γt]f_1(t, p)dt   (39)

Because the failure densities, in general, are known only numerically and pointwise,
the corresponding Laplace transforms have to be taken numerically. Suitable tech-
niques are presented in (Streicher and Rackwitz 2004) and Section 5. The formulae
are easily extended for systems with several components and/or multiple failure modes
in series as demonstrated by Equation (11) (see also (Streicher and Rackwitz 2004)).
In particular, one component of the system can model replacement due to obsoles-
cence. Non-constant discounting is discussed in (Rackwitz et al. 2005). Optimization
of Equation (31) with respect to the design parameter p can be performed by one of
the available algorithms (see Section 5).
Application to existing, aging but maintained structures requires a few more
remarks. It is assumed that the structure is already in use for some time. At a special
point in time it will be decided to inspect and possibly repair or retrofit the structure.
The cost which occur at this decision point are CR (p). Clearly, all cost incurred before
that point are irrelevant if the decision is to keep the structure rather than demolish-
ing and rebuilding it. The value of CR (p) can be zero if the structure is left as is but
the probabilistic model for the time to first failure f1 (t, p) possibly is updated. Then,
renewal of the structure is a question as to when the possibly updated failure rate is
no more acceptable. The modified density f1 (t, p) of the time to first failure has to be
determined depending on the repair/retrofitting actions and the information collected
about the actual state of the structure. CR (p) generally differs from C(p), the recon-
struction cost after failure, or even exceeds it if retrofitting is more expensive than
reconstruction. A maintenance plan for the existing structure has to be designed. After
the first renewal due to future failure the regular failure time density f (t, p) is valid.

3.3 Application to stationary Poissonian disturbances


Unfortunately, analytic Laplace transforms are available only for a few analytic failure
models, for instance the exponential, uniform, gamma, normal and inverse normal
distribution. The important exponential distribution with parameter λ corresponding
to a Poisson process has f_1*(γ) = f*(γ) = λ/(γ + λ) and, therefore, m*(γ) = λ/γ. A very
useful generalization is when a modified renewal process models disturbance (loading)
events (Hasofer 1974; Rosenblueth 1976). Such disturbances generally are extreme
events like shocks, explosions, earthquakes, storms or floods. The distribution func-
tions between events are G1 (t) and G(t), respectively. If such an event occurs the failure
probability is Pf (p). By definition, the occurrence of disturbance events and the failure
events are independent. The density function of the time to the first failure event then is



f_1(t, p) = Σ_{n=1}^{∞} g_n(t)P_f(p)R_f(p)^{n−1}   (40)

i.e. the first failure event can occur after the first, second, third, . . . disturbance event
and where Rf (p) = 1 − Pf (p). The density of the n-th event can be obtained by recursive
convolution so that in terms of Laplace transforms

g_n*(γ, p) = g_{n−1}*(γ, p)g*(γ, p) = g_1*(γ, p)[g*(γ, p)]^{n−1}   (41)

Application to the renewal intensity yields




m_1*(γ, p) = Σ_{n=1}^{∞} g_n*(γ)P_f(p)R_f(p)^{n−1} = Σ_{n=1}^{∞} g_1*(γ)[g*(γ)]^{n−1} P_f(p)R_f(p)^{n−1} = P_f(p)g_1*(γ)/(1 − R_f(p)g*(γ))   (42)

For the regular renewal process m_1*(γ, p) has to be replaced by m*(γ, p). Let reconstruction and damage cost be C(p) and L, respectively. Also, as a special case, let the times between failures be exponentially distributed with (failure) rate λP_f(p). Therefore, E[e^{−γt}] = ∫_0^∞ e^{−γt}(λP_f(p))e^{−λP_f(p)t}dt = λP_f(p)/(γ + λP_f(p)). Then, if only failures due to such disturbances are considered it is (Rackwitz 2000)

Z(p) = B − C_ini(p) − (C(p) + L) λP_f(p)/γ   (43)
 
For a series system it is P_f(p) = P(∪_{k=1}^{s} F_k(p)) = 1 − P(∩_{j=1}^{s} F̄_j(p)) in Equation (11), where F_k(p) is the failure event in the k-th mode and F̄_k(p) its complement. Then, the following generalization is possible

Z(p) = B − C_ini(p) − (C(p) + L)(λ/γ)[1 − P(∩_{j=1}^{s} F̄_j(p))]   (44)

The benefit B and the initial cost Cini (p) as well as the damage cost are related to
the whole system. If there are n different, independent hazards each with rate λi one
derives
Z(p) = B − C_ini(p) − Σ_{i=1}^{n} (λ_i/γ)(C_i(p) + L_i)[1 − P(∩_{j=1}^{s} F̄_ij(p))]   (45)

These generalizations also apply analogously for the more complicated cases discussed
below.
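For a single design parameter p, Equation (43) can be maximized with a one-dimensional search, as sketched below; the linear cost model and the stand-in P_f(p) = Φ(−p), in which p plays the role of a reliability index, are purely illustrative assumptions.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Sketch of optimizing Equation (43) over one design parameter p.
b, gamma, lam, L = 0.06, 0.02, 0.1, 100.0   # benefit rate, discount rate, hazard rate, loss

def Z(p):
    C = 1.0 + 0.1 * p                       # hypothetical construction cost model
    Pf = norm.cdf(-p)                       # failure probability per disturbance
    return b / gamma - C - (C + L) * lam * Pf / gamma

res = minimize_scalar(lambda p: -Z(p), bounds=(1.0, 6.0), method="bounded")
print(res.x, Z(res.x))                      # optimal parameter and maximal net benefit

The search reproduces the basic trade-off of (43): increasing p raises the construction cost linearly but decreases the expected discounted failure cost, so the optimum lies where the marginal costs balance.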

4 Preventive maintenance

4.1 Maintenance strategies
Repair after failure is but the simplest maintenance strategy. For aging components, i.e.
components with increasing risk function (conditional failure rate) r(t) = f(t)/(1 − F(t)),
i.e. r′(t) > 0, the risk of failure with potentially large consequences increases with age
and alternative maintenance strategies have been proposed in order to reduce expected
failure consequences. The most important alternative is called preventive maintenance
at random or fixed times. Preventive maintenance actions can be replacements or
(perfect) repairs. Preventive repairs occur only if corrective renewals have not occurred
before due to failure or obsolescence. Note that preventive maintenance is usually
suboptimal for non-aging components, i.e. with constant or decreasing risk function.
A first strategy repairs a system (component) at age a or after failure, whichever comes
first. In (Barlow and Proschan 1965) this strategy is denoted by age replacement. It
requires knowledge of the age a of a component. (Barlow and Proschan 1965) also
investigate so-called block repairs. In this maintenance strategy the components in a
system are repaired either after failure or all at once at a given time d irrespective
of their actual age. It is clear for increasing risk functions and, in fact, is shown in
(Barlow and Proschan 1965) that the total number of repairs is smaller for age repairs
than for block repairs. However, the number of failures (with large consequences) is
larger in the first strategy and so, possibly, the total cost. Block repairs also may be
organizationally easier. Sometimes they are necessary, i.e. whenever a single repair of a
component prevents the whole system from functioning. While knowledge about the
actual deterioration state of a component is irrelevant for the block repair strategy,
this may be vital for the age repair strategy. An improvement is when repairs are only
performed if inspections indicate that they are necessary. Otherwise further inspections
and possible repairs are postponed to a later time. A strategy where repairs are preceded
by inspections is also denoted as condition-based strategy. In practice, mixtures of these
maintenance strategies will also be found.

4.2 Inspections
Inspections should determine the actual state of a component in order to decide on
repair or leave it as is. But inspections can rarely be perfect. A decision about repair
can only be reached with certain probability depending on the inspection method used.
The repair probability depends on the magnitude of one or more suitable damage
indicators (chloride penetration depth, crack length, abrasion depth, etc.) measured
during inspection. For cumulative damage phenomena the damage indicators increase
with time and so does the repair probability PR (t). The parameter t is the time elapsed
since the beginning of the deterioration process. For example, the repair probability
may be presented as

PR (t) = P(S(t, X) > sc ) = P(sc − S(t, X) ≤ 0) (46)

with S(t, X) a suitable, monotonically increasing damage indicator, X a random vector


taking into account of all uncertainties during inspection and sc a given threshold
level. If this is exceeded a decision for repair is taken. The vector X usually also
includes a random variable modeling the measurement error. Frequently, the damage
indicator function S(t, X) reflects the damage progression and has a similar form as
the failure function. It involves, at least in part, the same random variables. In this
case failure and no repair/repair events become dependent events. It is, of course,
possible to consider multidimensional damage indicators and derive repair decisions
from an arbitrary combination thereof. A discussion of the details of the efficiency of
various inspection methods and the corresponding repair probabilities is beyond the
scope of this chapter. They depend on the particular deterioration phenomenon under
consideration.
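A simple Monte Carlo sketch of the repair probability (46) is given below for a cumulative damage indicator S(t, X) = A t^m plus a measurement error; all distributions, parameters and the threshold s_c are hypothetical.

import numpy as np

# Monte Carlo sketch of the repair probability (46).
rng = np.random.default_rng(0)

def repair_probability(t, s_c, n=100_000):
    A = rng.lognormal(mean=np.log(0.2), sigma=0.3, size=n)   # damage growth rate
    m = 0.5                                                  # e.g. diffusion-type growth
    eps = rng.normal(0.0, 0.5, size=n)                       # measurement error
    S = A * t**m + eps                                       # measured damage indicator
    return float(np.mean(S > s_c))

for t in (10.0, 20.0, 40.0):
    print(t, repair_probability(t, s_c=2.0))

As expected for a cumulative phenomenon, the estimated repair probability increases with the time t elapsed since the beginning of the deterioration process.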

4.3 Repair model
After failure of a system or component it is repaired unless it is given up after failure or
it is repaired systematically in the age-dependent maintenance strategy or it is repaired
after an indicative inspection in the condition-based maintenance strategy. The name
repair is used synonymously for renewal, replacement or reconstruction. Repairs, if
undertaken, restore the properties of a component to its original (stochastic) state, i.e.
repairs are equivalent to renewals (AGAN = As Good As New) so that the life time
distribution of the repaired component is again F(t). The repair times can either be
assumed negligibly short or have finite length.
The model is a somewhat idealized model. It rests on a number of assumptions the
most important of which is probably that repairs fully restore the (stochastic) properties
of the component. Imperfect repairs cannot be handled because the renewal argument
repeatedly used in the following breaks down. In the literature several models for
imperfect repairs are discussed which only partially reflect the situations met in the
structures area. An important case is when minimal repairs not essentially changing
the initial lifetime are done right after an inspection. If one generalizes this model to a
model where a renewal (perfect repair) occurs with probability π but minimal repair
with probability 1−π, one has essentially the model proposed in (Brown and Proschan
1983). This model, in fact, resembles the one studied herein with π = PR (t).
Negligibly short times of inspection and repair are most often only a more or less
good approximation. Consideration of finite, random renewal times in the age-repair
strategy appears possible but is complicated because inspections and probably also
failures cannot occur during repairs. No benefit can be earned during repair times.
Another important case is when repairs are delayed, for example due to budget restric-
tions. It appears possible to handle this case by adding a random delay time to the
random repair time. During a delay time the component can still degrade or fail while
this is unlikely to happen during repair. Finite renewal times are not considered in this
chapter. Some more but still first results are given in (Joanni and Rackwitz 2006). It
turns out that for realistic repair times their influence is very small.
Inspection/repair at strictly regular time intervals as assumed below is also not very
realistic. However, as will be shown in the examples, the objective function is rather
flat in the vicinity of the optimal value so that small variations will not noticeably
change the results.
Repair operations necessarily lead to discontinuities (drops) in the risk function, and
similarly in the renewal intensity. They can substantially reduce the number of failures
and, thus, corrective renewals. In an effective maintenance scheme the majority of
renewals will, in fact, be preventive renewals.

4.4 Age-dependent repairs
It is convenient to start with the general case of replacements (repairs, renewals) at
random times Tr with distribution Fr (t) or after failure at random times Tf with distri-
bution Ff (t, p). The renewal time then is the minimum of these times with distribution
function

$$F(t,p) = 1 - (1 - F_f(t,p))(1 - F_r(t)) = 1 - \bar F_f(t,p)\,\bar F_r(t) \qquad (47)$$



for independent times Tf and Tr with density

$$f(t,p) = f_f(t,p)\,\bar F_r(t) + f_r(t)\,\bar F_f(t,p) \qquad (48)$$

and where the notation $\bar F(x) = 1 - F(x)$ is used. Application of Equation (29) then gives
for the damage term of an ordinary renewal process

$$D(p) = \frac{(C(p)+L)\,f^{*}_{\bar F_r}(\gamma,p) + R(p)\,f^{*}_{\bar F_f}(\gamma,p)}{1 - \left(f^{*}_{\bar F_f}(\gamma,p) + f^{*}_{\bar F_r}(\gamma,p)\right)} \qquad (49)$$

and, similarly, for the benefit term with the model in Equation (35)
$$B(p) = \frac{\int_0^{\infty} B_D(t)\,f_f(t,p)\,\bar F_r(t)\,dt + \int_0^{\infty} B_D(t)\,f_r(t)\,\bar F_f(t,p)\,dt}{1 - \left(f^{*}_{\bar F_f}(\gamma,p) + f^{*}_{\bar F_r}(\gamma,p)\right)} \qquad (50)$$

where $f^{*}_{\bar F_r}(\gamma,p) = \int_0^{\infty} e^{-\gamma t} f_f(t,p)\,\bar F_r(t)\,dt$ and $f^{*}_{\bar F_f}(\gamma,p) = \int_0^{\infty} e^{-\gamma t} f_r(t)\,\bar F_f(t,p)\,dt$
are the modified complete Laplace transforms of $f_f(t,p)\bar F_r(t)$ and $f_r(t)\bar F_f(t,p)$, respectively.
R(p) is the cost of repair and $B_D(t)$ is as in Equation (37). The case of random
maintenance actions has hardly any practical application except if there is continuous
monitoring of the system state. Then, the time until intervention by repair is random
and can be defined as the first passage time of a given threshold by the continuous
observation process.
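The modified complete Laplace transforms and the damage term of Equation (49) lend themselves to direct numerical quadrature. The sketch below is a minimal illustration under assumed Weibull models for the failure and repair times and arbitrary cost figures; it is not the chapter's numerical example.

import numpy as np
from scipy import integrate, stats

gamma = 0.03                                  # assumed discount rate
Tf = stats.weibull_min(2.5, scale=50.0)       # assumed failure-time model
Tr = stats.weibull_min(4.0, scale=30.0)       # assumed repair-time model
C, L, R = 1.0, 10.0, 0.6                      # assumed cost figures

def mod_laplace(dens, surv, tmax=500.0):
    """Modified complete Laplace transform int_0^inf e^{-gamma t} dens(t) surv(t) dt."""
    val, _ = integrate.quad(lambda t: np.exp(-gamma * t) * dens(t) * surv(t),
                            0.0, tmax)
    return val

lt_r = mod_laplace(Tf.pdf, Tr.sf)             # transform of f_f(t) * barF_r(t)
lt_f = mod_laplace(Tr.pdf, Tf.sf)             # transform of f_r(t) * barF_f(t)
D = ((C + L) * lt_r + R * lt_f) / (1.0 - (lt_r + lt_f))   # Equation (49)
print(f"damage term D = {D:.4f}")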
Alternatively, assume maintenance actions at (almost) fixed intervals a, 2a, 3a, . . . so
that $f_r(t) = \delta_e(a)$ and $F_r(t) = H_e(a)$ ($\delta_e(x)$ = Dirac's delta function, $H_e(a)$ = Heaviside's
unit step function). Equation (49) then specializes to

$$D_M(p,a) = \frac{(C(p)+L)\,f^{**}(\gamma,p,a) + R(p)\,e^{-\gamma a}\,\bar F(p,a)}{1 - \left(f^{**}(\gamma,p,a) + e^{-\gamma a}\,\bar F(p,a)\right)} \qquad (51)$$

and similarly Equation (50) to


$$B_M(p,a) = \frac{\int_0^{a} B_D(t)\,f(t,p)\,dt + B_D(a)\,\bar F(p,a)}{1 - \left(f^{**}(\gamma,p,a) + e^{-\gamma a}\,\bar F(p,a)\right)} \qquad (52)$$
with $f^{**}(\gamma,p,a) = \int_0^{a} e^{-\gamma t} f(t,p)\,dt$ the incomplete Laplace transform of f(t, p)
and $\bar F(p,a)$ the probability of survival up to a. The quantity $B_D(t)$ is given in Equation (37).
Note that the Laplace transform of a deterministic repair time $f_r(t) = \delta_e(a)$ is
$f^{*}(\gamma) = e^{-\gamma a}$. The repair cost R(p) should be substantially smaller than C(p) + L
so that it is worth making preventive repairs and, thus, avoiding the large failure and
reconstruction cost in case of failure. Equation (51) goes back to some early work in
(Cox 1962; Barlow and Proschan 1965; Fox 1966). In (Van Noortwijk 2001) parallel
results are developed for discrete failure models and discrete discounting schemes.
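A minimal sketch of Equation (51): for an assumed aging (Weibull) failure-time model, whose risk function is increasing, and assumed costs with R substantially smaller than C + L, the damage term D_M(p, a) is computed from the incomplete Laplace transform and scanned over the maintenance interval a.

import numpy as np
from scipy import integrate, stats

gamma = 0.03
T = stats.weibull_min(2.5, scale=50.0)        # assumed aging failure-time model
C, L, R = 1.0, 10.0, 0.3                      # assumed costs, R << C + L

def D_M(a):
    """Expected discounted damage cost of Equation (51) for replacement age a."""
    lt, _ = integrate.quad(lambda t: np.exp(-gamma * t) * T.pdf(t), 0.0, a)
    surv = np.exp(-gamma * a) * T.sf(a)       # e^{-gamma a} * barF(p, a)
    return ((C + L) * lt + R * surv) / (1.0 - (lt + surv))

ages = np.arange(5.0, 61.0, 1.0)
a_opt = min(ages, key=D_M)
print(f"optimal replacement age a* ~ {a_opt:.0f} years, D_M(a*) = {D_M(a_opt):.4f}")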
Next, assume that the structure is already in use. At a special decision point in time
inspection, retrofit or repair at cost CR (p) takes place. Depending on the state of the
structure and the action which is done, an updated failure time density f1 (t, p) for

the time to the first renewal is calculated. Therefore, a new cost benefit optimization
is necessary in order to find optimal replacement intervals and design variables. The
first replacement interval a1 with f1 (t, p) is different from the subsequent intervals a
with ordinary failure time density f (t, p). It will further be assumed that for the first
renewal the optimized parameter p, which is also valid for all subsequent renewals,
is calculated without having regard to the special parameters realized in the existing
structure. If the structure undergoes a complete renewal at the decision point it is even
possible to introduce the design variables p already in that structure. Then, the existing
design variables p have to be augmented by the additional variables.
The expected damage costs are then determined according to Equation (49) as

$$D_{M_{a_1-a}}(p,a_1,a) = (C(p)+L)\,f_1^{**}(\gamma,p,a_1) + R(p)\,e^{-\gamma a_1}\,\bar F_1(p,a_1) + \frac{f_1^{**}(\gamma,p,a_1) + e^{-\gamma a_1}\,\bar F_1(p,a_1)}{1 - \left(f^{**}(\gamma,p,a) + e^{-\gamma a}\,\bar F(p,a)\right)}\left[(C(p)+L)\,f^{**}(\gamma,p,a) + R(p)\,e^{-\gamma a}\,\bar F(p,a)\right] \qquad (53)$$

For constant benefit rate b(t) = b the benefit is as in Equation (34). The expected
benefit for a non-constant rate b(t) as in Equation (35) is (Streicher 2004)
$$B_{M_{a_1-a}}(p,a_1,a) = \int_0^{a_1} B_D(t)\,f_1(t,p)\,dt + B_D(a_1)\,\bar F_1(p,a_1) + \frac{f_1^{**}(\gamma,p,a_1) + e^{-\gamma a_1}\,\bar F_1(p,a_1)}{1 - \left(f^{**}(\gamma,p,a) + e^{-\gamma a}\,\bar F(p,a)\right)}\left[\int_0^{a} B_D(t)\,f(t,p)\,dt + B_D(a)\,\bar F(p,a)\right] \qquad (54)$$

with BD (t) from Equation (37). The cost for continuous monitoring and/or mainten-
ance could alternatively also be taken into account in the benefit term by replacing b(t)
with b(t) − c(t). The objective function then is

$$Z_{M_{a_1-a}}(p,a_1,a) = B_{M_{a_1-a}}(p,a_1,a) - C_R(p) - D_{M_{a_1-a}}(p,a_1,a) \qquad (55)$$

Repair is interpreted as preventive renewal (replacement of an aging component after
a finite time of use a). Renewal after failure is called corrective renewal. Equation (55)
can be subject to optimization not only with respect to the design parameter p but also
with respect to the inspection/repair intervals a1 and a, respectively. Optimal inspection/
repair intervals do not always exist, as pointed out already in (Fox 1966). They exist
for failure models with increasing risk function (Fox 1966). If they do not exist, it is
preferable to postpone renewal until failure, unless the failure rate exceeds a given
value. When optimizing Equation (55) it is, of course, important that the owner, builder
or other party not only enjoys the benefits but also carries the cost of construction,
the cost of failures and the cost of preventive maintenance. Only then does a joint
optimization of design and maintenance make sense. If one is only interested in optimal
maintenance, it is still possible to optimize the cost of preventive and corrective repairs
with respect to the repair intervals while keeping the design parameter p constant.
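The objective for an existing structure can be optimized numerically over the first interval a1 and the subsequent interval a. The sketch below evaluates the damage term of Equation (53) under an assumed updated first-cycle distribution and assumed costs, and minimizes it with a general-purpose optimizer; benefit and retrofit cost terms are omitted for brevity.

import numpy as np
from scipy import integrate, optimize, stats

gamma = 0.03
T = stats.weibull_min(2.5, scale=50.0)        # ordinary failure-time model (assumed)
T1 = stats.weibull_min(2.5, scale=40.0)       # updated first-cycle model (assumed)
C, L, R = 1.0, 10.0, 0.3                      # assumed cost figures

def parts(dist, a):
    """Incomplete transform f**(gamma, a) and the discounted survival term."""
    lt, _ = integrate.quad(lambda t: np.exp(-gamma * t) * dist.pdf(t), 0.0, a)
    return lt, np.exp(-gamma * a) * dist.sf(a)

def D_Ma1a(x):
    """Damage term for an existing structure, Equation (53)."""
    (lt1, s1), (lt, s) = parts(T1, x[0]), parts(T, x[1])
    recur = ((C + L) * lt + R * s) / (1.0 - (lt + s))   # recurring-cycle cost
    return (C + L) * lt1 + R * s1 + (lt1 + s1) * recur

res = optimize.minimize(D_Ma1a, x0=[15.0, 20.0], bounds=[(2.0, 80.0)] * 2)
print(f"a1* ~ {res.x[0]:.1f}, a* ~ {res.x[1]:.1f}, D = {res.fun:.4f}")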

4.5 Block repairs
The damage costs for block repairs are composed of the (discounted) cost of planned
systematic renewals at time d (or d1 for the first interval, where the time to the first
failure has the updated failure time density f1(t, p)) plus the (discounted) cost of
failure(s) before d (or d1). Therefore,

$$D_B(p,d_1,d) = R(p)\,e^{-\gamma d_1} + (C(p)+L)\left[f_1^{**}(\gamma,p,d_1) + m_1^{**}(\gamma,p,d_1)\right] + \frac{e^{-\gamma d_1}}{1-e^{-\gamma d}}\left[R(p)\,e^{-\gamma d} + (C(p)+L)\,f^{**}(\gamma,p,d)\left[1 + m^{**}(\gamma,p,d)\right]\right] \qquad (56)$$

where $f^{**}_{(1)}(\gamma,p,d_{(1)}) = \int_0^{d_{(1)}} e^{-\gamma t}\,f_{(1)}(t,p)\,dt$ and $m^{**}_{(1)}(\gamma,p,d_{(1)}) = \int_0^{d_{(1)}} e^{-\gamma t}\,m_{(1)}(t,p)\,dt$, with
$m_1(t,p)$ the updated failure rate up to the first renewal until $d_1$ and $m(t,p)$ in subsequent
intervals until d as the renewal intensities in Equation (15) (Cox 1962). $m_1(t,p)$
and $m(t,p)$ are given by $m_1(t,p) = \sum_{n=1}^{\infty} f_{1,n}(t,p)$ and $m(t,p) = \sum_{n=1}^{\infty} f_n(t,p)$, respectively
(see Equation (15)). Here and in the following, the notation $x_{(1)}$ means either x or
$x_1$, whichever is relevant. Remember, integration of $m_{(1)}(t)$ is simply the mean number
of renewals in $[0, d_{(1)}]$, but here discounting is introduced additionally. The computation
of $m_{(1)}(t)$ is the numerically expensive part (see Equation (17), Equation (19) or
Equation (21)). Note that all components are repaired at time $d_{(1)}$ with certainty and
cost R(p), but some components are already renewed earlier because they failed.
For f1 (t, p) = f (t, p) and d1 = d Equation (56) simplifies to

$$D_B(p,d) = \frac{R(p)\,e^{-\gamma d} + (C(p)+L)\,f^{**}(\gamma,p,d)\left[1 + m^{**}(\gamma,p,d)\right]}{1-e^{-\gamma d}} \qquad (57)$$

For benefit rates unaffected by renewals one simply has the results in Equation (34)
or (33). The benefit term for the case in Equation (35) is for finite integration intervals
[0, d].
$$B_B(p,d_1,d) = \int_0^{d_1} B_D(t)\,f_1(t,p)\,dt + m_1^{**}(\gamma,p,d_1)\int_0^{d_1} B_D(t)\,f(t,p)\,dt + \frac{e^{-\gamma d_1}}{1-e^{-\gamma d}}\left[\int_0^{d} B_D(t)\,f(t,p)\,dt + m^{**}(\gamma,p,d)\int_0^{d} B_D(t)\,f(t,p)\,dt\right] \qquad (58)$$

with BD (t) in Equation (35). For f1 (t, p) = f (t, p) and d1 = d Equation (58) simplifies to
$$B_B(p,d) = \frac{\int_0^{d} B_D(t)\,f(t,p)\,dt + m^{**}(\gamma,p,d)\int_0^{d} B_D(t)\,f(t,p)\,dt}{1-e^{-\gamma d}} \qquad (59)$$

The length d (and/or d1) of a replacement interval can also be subject to optimization
with respect to benefits and costs. In general, there is little difference between age-
dependent and block repairs unless the failure costs are very large.
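As noted above, the numerically expensive part is the renewal intensity m(t). One simple approach is to discretize the renewal equation m(t) = f(t) + ∫ f(τ)m(t − τ)dτ on a grid; the discounted quantities f** and m** and the block-repair damage term of Equation (57) then follow by summation. The failure model and costs below are assumptions for illustration.

import numpy as np
from scipy import stats

gamma, d = 0.03, 25.0
T = stats.weibull_min(2.5, scale=50.0)        # assumed aging failure-time model
C, L, R = 1.0, 10.0, 0.3                      # assumed cost figures

# Renewal intensity m(t) from the renewal equation
# m(t) = f(t) + int_0^t f(tau) m(t - tau) dtau, rectangle-rule discretization.
dt = 0.05
t = np.arange(0.0, d + dt, dt)
f = T.pdf(t)
m = np.zeros_like(t)
for i in range(len(t)):
    conv = dt * np.sum(f[1:i + 1] * m[i - 1::-1]) if i > 0 else 0.0
    m[i] = f[i] + conv

# Discounted (incomplete) transforms f** and m**, then Equation (57)
f_ss = dt * np.sum(np.exp(-gamma * t) * f)
m_ss = dt * np.sum(np.exp(-gamma * t) * m)
D_B = (R * np.exp(-gamma * d) + (C + L) * f_ss * (1.0 + m_ss)) / (1.0 - np.exp(-gamma * d))
print(f"block-repair damage term D_B(d = {d}) = {D_B:.4f}")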

4.6 Inspection and repair
In the structures area, as in many other areas, any expensive maintenance operation is
preceded by inspections involving cost I if damage progression and/or changes in system
performance are observable. We understand that the inspections are essential inspec-
tions leading eventually to decisions about repair or no repair. If there are inspections
at times a(1) , 2a(1) , 3a(1) , . . . there is not necessarily a repair because aging processes and
inspections are uncertain or the signs of deterioration are vague. Repairs occur only
with a certain probability PR (t), for example according to Equation (46). For cumula-
tive damage phenomena this probability should increase with time as in Equation (46)
and should depend on the actual observed damage state. As mentioned before the
same (physical or chemical) damage process determines an (observable) damage state
but also failure. For this reason inspection results and thus repair events and failure
events are dependent. In fact, only if inspections address the same damage process,
specifically the same realization, can we expect to make reasonable decisions about
repair or no repair. Such dependencies make an analytical and numerical treatment
complicated but still computationally manageable.
The objective is

$$Z_{IR}(p,a_1,a) = B_{IR}(p,a_1,a) - C_R(p) - D_{IR}(p,a_1,a) \qquad (60)$$

where in generalizing Equation (53)

$$D_{IR}(p,a_1,a) = N_1 + \frac{N_2}{D}\,N_3 \qquad (61)$$

with:

$$N_1 = (C(p)+L)\sum_{n=1}^{\infty}\int_{(n-1)a_1}^{na_1} e^{-\gamma t}\,\frac{d}{d\theta}\,P\!\left(\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\{T_1\le\theta\}\right)\Bigg|_{\theta=t} dt + I\sum_{n=1}^{\infty} e^{-\gamma(na_1)}\,P\!\left(\{\bar R(na_1)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\{T_1>na_1\}\right) + (I+R(p))\sum_{n=1}^{\infty} e^{-\gamma na_1}\,P\!\left(\{R(na_1)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\{T_1>na_1\}\right) \qquad (62a)$$

$$N_2 = \sum_{n=1}^{\infty}\int_{(n-1)a_1}^{na_1} e^{-\gamma t}\,\frac{d}{d\theta}\,P\!\left(\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\{T_1\le\theta\}\right)\Bigg|_{\theta=t} dt + \sum_{n=1}^{\infty} e^{-\gamma na_1}\,P\!\left(\{R(na_1)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\{T_1>na_1\}\right) \qquad (62b)$$

$$N_3 = (C(p)+L)\sum_{n=1}^{\infty}\int_{(n-1)a}^{na} e^{-\gamma t}\,\frac{d}{d\theta}\,P\!\left(\bigcap_{j=0}^{n-1}\{\bar R(ja)\}\cap\{T\le\theta\}\right)\Bigg|_{\theta=t} dt + I\sum_{n=1}^{\infty} e^{-\gamma(na)}\,P\!\left(\{\bar R(na)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja)\}\cap\{T>na\}\right) + (I+R(p))\sum_{n=1}^{\infty} e^{-\gamma na}\,P\!\left(\{R(na)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja)\}\cap\{T>na\}\right) \qquad (62c)$$

$$D = 1 - \sum_{n=1}^{\infty}\int_{(n-1)a}^{na} e^{-\gamma t}\,\frac{d}{d\theta}\,P\!\left(\bigcap_{j=0}^{n-1}\{\bar R(ja)\}\cap\{T\le\theta\}\right)\Bigg|_{\theta=t} dt - \sum_{n=1}^{\infty} e^{-\gamma na}\,P\!\left(\{R(na)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja)\}\cap\{T>na\}\right) \qquad (62d)$$

and
$C_R(p)$ = cost of investigating and/or retrofitting an existing structure
C(p) = reconstruction cost after failure
L = direct damage cost after failure
$R(na_{(1)})$ = repair event at the n-th inspection
$\bar R(ja_{(1)})$ = no-repair event at the j-th inspection
$P_R(ja_{(1)})$ = probability of repair after the j-th inspection
$\bar P_R(ja_{(1)}) = 1 - P_R(ja_{(1)})$ = probability of no repair after the j-th inspection
$a_{(1)}$ = deterministic inspection interval
I = cost per inspection
R(p) = repair cost for preventive maintenance.

The first term N1 in Equation (61) is the replacement cost after the first failure or repair,
and N3 is the replacement cost for subsequent renewal cycles. In both cases the replacement
costs include the cost of failure, the cost of inspections given that no failure and no
repairs have occurred before, and a third term accounting for the cost of inspection
and repair given that no failure occurred before. Here, one has to extend the renewal
interval to 2a(1), 3a(1), . . . if an inspection is not followed by repair and no failure
occurred. Since $P_R(a_{(1)}) < 1$ it is usually sufficient to consider only a few terms in
the sums. The higher order terms vanish for $P_R(a_{(1)}) \to 1$ and are significant only for
relatively small $a_{(1)}$.
As concerns numerical computations, consider the fractional Laplace transform of the
failure density given dependencies between no-repair and failure events, that is (Joanni
and Rackwitz 2006):
$$f^{***}_{(1)}\left(\gamma,p,(n-1)a_{(1)}\le t\le na_{(1)}\right) = \int_{(n-1)a_{(1)}}^{na_{(1)}} e^{-\gamma t}\,\frac{d}{d\theta}\,P\!\left(\bigcap_{j=0}^{n-1}\{\bar R(ja_{(1)})\}\cap\{T_{(1)}\le\theta\}\right)\Bigg|_{\theta=t} dt \qquad (63)$$

where $T_{(1)}$ is the random time to failure. Here again, the intersection probabilities
can be determined by FORM/SORM, but alternative methods such as Monte Carlo
simulation can also be used. Remember that a typical intersection event
$\bigcap_{j=0}^{n-1}\{\bar R(ja_{(1)})\}\cap\{T_{(1)}\le t\}$ after the probability distribution transformation into
standard space is given by $\bigcap_{j=0}^{n-1}\{s_c - S(ja_{(1)},U_R)>0\}\cap\{g_{(1)}(U_F,t)\le 0\}$ according to
Equations (5) and (46), for example. $U_R$ and $U_F$ denote the variables in the random
Equations (5) and (46), for example. UR and UF denote the variables in the random
vector defining the damage indicator (including measurement error) and the variables
defining failure, respectively. Because UR and UF have some components in common
the events are dependent. Within FORM/SORM the event boundaries are now lin-
earized in the most likely failure point(s) and the correlation coefficients between the
respective state functions are computed. The dependencies can be taken into account by
evaluating the corresponding multivariate normal integrals. The differentiation under
the integral that is necessary for evaluation of Equation (63) is best done numerically,
but can also be performed analytically under certain conditions.
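The dependence between no-repair and failure events can also be captured by plain Monte Carlo instead of FORM/SORM: sampling the shared random variables once per realization automatically couples the measured damage indicator and the failure process. The sketch below, with entirely hypothetical distributions, contrasts the joint probability of an Equation (63)-type event with the value obtained under an independence assumption.

import numpy as np

rng = np.random.default_rng(7)
n = 200_000
a1, s_c = 8.0, 1.0                       # assumed inspection interval and threshold

# A single shared degradation rate drives both the (measured) damage indicator
# and failure, so no-repair and failure events are dependent, as described above.
rate = rng.lognormal(np.log(0.05), 0.4, n)       # shared random growth rate
err = rng.normal(0.0, 0.1, (2, n))               # inspection measurement errors
cap = rng.normal(3.0, 0.5, n)                    # structural capacity

no_repair = ((rate * a1 * (1.0 + err[0]) <= s_c) &
             (rate * 2 * a1 * (1.0 + err[1]) <= s_c))    # no repair at a1 and 2a1
fail = rate * 3 * a1 > cap                                # failure before 3a1

p_joint = np.mean(no_repair & fail)
p_indep = np.mean(no_repair) * np.mean(fail)
print(f"dependent: {p_joint:.5f}   independence assumption: {p_indep:.5f}")

With these assumed numbers the two estimates differ markedly, because failure requires a large degradation rate while no repair favors a small one; this is exactly the dependence discussed above.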
For $F_1(t) = F(t)$ the damage term in Equation (61) simplifies to

$$D_{IR} = \frac{N_3}{D} \qquad (64)$$
The benefit is given by Equation (33) or (34) if it is unaffected by renewals. It
has a structure similar to that of Equation (61). Generalizing Equation (54) for the model in
Equation (35) one obtains

$$B_{IR}(p,a_1,a) = B_1 + \frac{B_2}{D}\,B_3 \qquad (65)$$
and

$$B_1 = \sum_{n=1}^{\infty}\left[\int_{(n-1)a_1}^{na_1} B^{*}_D(t)\,\frac{d}{d\theta}\,P\!\left(\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\{T_1\le\theta\}\right)\Bigg|_{\theta=t} dt + B^{*}_D(na_1)\,P\!\left(\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\{T_1>na_1\}\right)\right] \qquad (66a)$$

$$B_2 = \sum_{n=1}^{\infty}\left[\int_{(n-1)a_1}^{na_1} e^{-\gamma t}\,\frac{d}{d\theta}\,P\!\left(\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\{T_1\le\theta\}\right)\Bigg|_{\theta=t} dt + e^{-\gamma(na_1)}\,P\!\left(\{R(na_1)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\{T_1>na_1\}\right)\right] \qquad (66b)$$

$$B_3 = \sum_{n=1}^{\infty}\left[\int_{(n-1)a}^{na} B^{*}_D(t)\,\frac{d}{d\theta}\,P\!\left(\bigcap_{j=0}^{n-1}\{\bar R(ja)\}\cap\{T\le\theta\}\right)\Bigg|_{\theta=t} dt + B^{*}_D(na)\,P\!\left(\bigcap_{j=0}^{n-1}\{\bar R(ja)\}\cap\{T>na\}\right)\right] \qquad (66c)$$

where $B_D(t)$ is given in Equation (37) and

$$B^{*}_D(t) = \int_{(n-1)a_{(1)}}^{t} e^{-\gamma\tau}\,b(\tau)\,d\tau \qquad (67)$$

For F1 = F(t) an analogous simplification as in Equation (64) is possible.


For independent repair and failure events the intersection signs must simply be
replaced by product signs simplifying the numerical computations considerably. The
question is when the independence assumption becomes at least approximately true.
This must depend on the case under consideration. Dependencies become weaker for
larger measurement errors during inspections and for smaller dependencies between
damage indicators and failure processes.

4.7 Preventive maintenance for series systems

By definition, a series system fails if any of its components fails. Consequently, all of
its components have to be renewed. This requires only a few modifications of the theory
developed in Section 4.6. For a system with s components we have
$$N_{1s} = (C(p)+L)\sum_{n=1}^{\infty}\int_{(n-1)a_1}^{na_1} e^{-\gamma t}\,(-1)\,\frac{dP}{d\theta}\!\left(\bigcap_{j=1}^{n-1}\{\bar R(ja_1)\}\cap\bigcap_{m=1}^{s}\{T_{m1}>\theta\}\right)\Bigg|_{\theta=t} dt + I\sum_{n=1}^{\infty} e^{-\gamma(na_1)}\,P\!\left(\{\bar R(na_1)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\bigcap_{m=1}^{s}\{T_{m1}>na_1\}\right) + (I+R(p))\sum_{n=1}^{\infty} e^{-\gamma na_1}\,P\!\left(\{R(na_1)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\bigcap_{m=1}^{s}\{T_{m1}>na_1\}\right) \qquad (68a)$$

$$N_{2s} = \sum_{n=1}^{\infty}\int_{(n-1)a_1}^{na_1} e^{-\gamma t}\,(-1)\,\frac{dP}{d\theta}\!\left(\bigcap_{j=1}^{n-1}\{\bar R(ja_1)\}\cap\bigcap_{m=1}^{s}\{T_{m1}>\theta\}\right)\Bigg|_{\theta=t} dt + \sum_{n=1}^{\infty} e^{-\gamma na_1}\,P\!\left(\{R(na_1)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\bigcap_{m=1}^{s}\{T_{m1}>na_1\}\right) \qquad (68b)$$

$$N_{3s} = (C(p)+L)\sum_{n=1}^{\infty}\int_{(n-1)a}^{na} e^{-\gamma t}\,(-1)\,\frac{dP}{d\theta}\!\left(\bigcap_{j=1}^{n-1}\{\bar R(ja)\}\cap\bigcap_{m=1}^{s}\{T_{m}>\theta\}\right)\Bigg|_{\theta=t} dt + I\sum_{n=1}^{\infty} e^{-\gamma(na)}\,P\!\left(\{\bar R(na)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja)\}\cap\bigcap_{m=1}^{s}\{T_{m}>na\}\right) + (I+R(p))\sum_{n=1}^{\infty} e^{-\gamma na}\,P\!\left(\{R(na)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja)\}\cap\bigcap_{m=1}^{s}\{T_{m}>na\}\right) \qquad (68c)$$

$$D_s = 1 - \sum_{n=1}^{\infty}\int_{(n-1)a}^{na} e^{-\gamma t}\,(-1)\,\frac{dP}{d\theta}\!\left(\bigcap_{j=1}^{n-1}\{\bar R(ja)\}\cap\bigcap_{m=1}^{s}\{T_{m}>\theta\}\right)\Bigg|_{\theta=t} dt - \sum_{n=1}^{\infty} e^{-\gamma na}\,P\!\left(\{R(na)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja)\}\cap\bigcap_{m=1}^{s}\{T_{m}>na\}\right) \qquad (68d)$$

in Equation (61), with N1, N2, N3 and D replaced by N1s, N2s, N3s and Ds, respectively.
Similar modifications have to be made for the benefit term.

$$B_{1s} = \sum_{n=1}^{\infty}\left[\int_{(n-1)a_1}^{na_1} B^{*}_D(t)\,(-1)\,\frac{dP}{d\theta}\!\left(\bigcap_{j=1}^{n-1}\{\bar R(ja_1)\}\cap\bigcap_{m=1}^{s}\{T_{m1}>\theta\}\right)\Bigg|_{\theta=t} dt + B^{*}_D(na_1)\,P\!\left(\{R(na_1)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\bigcap_{m=1}^{s}\{T_{m1}>na_1\}\right)\right] \qquad (69a)$$

$$B_{2s} = \sum_{n=1}^{\infty}\left[\int_{(n-1)a_1}^{na_1} e^{-\gamma t}\,(-1)\,\frac{dP}{d\theta}\!\left(\bigcap_{j=1}^{n-1}\{\bar R(ja_1)\}\cap\bigcap_{m=1}^{s}\{T_{m1}>\theta\}\right)\Bigg|_{\theta=t} dt + e^{-\gamma(na_1)}\,P\!\left(\{R(na_1)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja_1)\}\cap\bigcap_{m=1}^{s}\{T_{m1}>na_1\}\right)\right] \qquad (69b)$$

$$B_{3s} = \sum_{n=1}^{\infty}\left[\int_{(n-1)a}^{na} B^{*}_D(t)\,(-1)\,\frac{dP}{d\theta}\!\left(\bigcap_{j=1}^{n-1}\{\bar R(ja)\}\cap\bigcap_{m=1}^{s}\{T_{m}>\theta\}\right)\Bigg|_{\theta=t} dt + B^{*}_D(na)\,P\!\left(\{R(na)\}\cap\bigcap_{j=0}^{n-1}\{\bar R(ja)\}\cap\bigcap_{m=1}^{s}\{T_{m}>na\}\right)\right] \qquad (69c)$$

 
Here, we have used again $P\!\left(\bigcup_{m=1}^{s} E_m\right) = 1 - P\!\left(\bigcap_{m=1}^{s} \bar E_m\right)$ in order to retain operations
solely on intersections.
The series system is a realistic assumption for many but not for all civil engineering
systems. For example, if one bridge of several bridges in a road connection between
A and B fails or a river dam breaks at a certain point the infrastructure or flood
protection system fails but only the failed bridge or dam section must be restored in
order to make the system functional again. This may require certain modifications in
the models outlined so far. If the block maintenance regime is chosen, all components
in the systems will be restored. But if the age-dependent regime with inspection and
repair is chosen any repair action may also be associated to a specific component.
An analytical treatment will then become rather difficult and complex because the
components in the system will have different ages. More complex systems can involve
considerable numerical effort.

5 Some remarks about suitable optimization methods

5.1 General
It is necessary to speak a little about the technical aspects of optimization. When
designing and applying appropriate optimization techniques to the objective functions
derived in the foregoing sections one faces the problem that, in fact, two optimization
tasks have to be solved: (i) Optimization with respect to the design parameter p and
(ii) Optimization with respect to the standard vector u to find the (local) reliability
index, at least if FORM/SORM methods are applied. More specifically, the reliability
optimization has to be solved for each step in the design parameter optimization. Even if
one assumes differentiability of the objective and in the stochastic model as well as in the
structural state function and uniqueness of the solution point(s), overall optimization
still is a formidable task requiring quite some numerical effort. In the recent litera-
ture one distinguishes between one-level and bi-level optimization methods. For the
bi-level method one optimizer solves the cost benefit optimization and another, pos-
sibly different optimizer solves the reliability optimization. In the one-level approach
both optimization tasks are solved simultaneously by a suitable optimizer. In the fol-
lowing we shall briefly comment on both concepts. Both usually work and it is a matter
of taste to select one or the other. If the abovementioned smoothness properties do not
hold, then other optimization procedures are in order.

5.2 Bi-level optimization


In order to obtain the set of parameters for which the objective function Z(p) becomes
optimal, the so-called bi-level approach can be chosen. Here, the optimization task
in standard normal space for computation of the required reliability statistics corres-
ponding to a fixed parameter set p is carried out separately using one of the sequential
quadratic programming or similar methods. The results, in turn, serve as input to
the main optimization loop for the parameter set p for which any of the available
optimization methods can be employed. Alternatively, a direct search optimization
method developed by (Powell 1994) can be applied. It does not require derivatives.
This approach proved to be robust and reliable and only slightly more expensive than
other methods. For the main optimization loop, lower and upper bounds should be
imposed on the parameters, and it usually turns out to be advantageous to scale the
optimization domain such that its shape becomes a hypercube.
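A minimal sketch of the bi-level scheme under stated assumptions: the inner loop computes the reliability index for fixed p by constrained minimization of ||u||² on g(u, p) = 0, and a derivative-free outer search (SciPy's Powell method, standing in here for the direct-search algorithm cited above) optimizes a FORM-based cost-benefit objective. The linear limit state and all constants are assumptions for illustration.

import numpy as np
from scipy import optimize, stats

# Assumed linear limit state in standard normal space, g(u, p) = p - u1 - 0.5*u2.
def g(u, p):
    return p - u[0] - 0.5 * u[1]

def beta_of_p(p):
    """Inner loop: reliability index beta(p) = min ||u|| subject to g(u, p) = 0."""
    res = optimize.minimize(lambda u: u @ u, x0=[1.0, 0.0], method='SLSQP',
                            constraints={'type': 'eq', 'fun': lambda u: g(u, p)})
    return np.sqrt(res.fun)

def neg_Z(p_vec):
    """Outer loop: negative objective Z(p) = B - C(p) - (C(p)+L)*lam*pf/gamma."""
    p = float(p_vec[0])
    pf = stats.norm.cdf(-beta_of_p(p))               # FORM estimate of pf
    B, L, lam, gamma = 6.0, 10.0, 1.0, 0.03          # assumed constants
    C = 1.0 + 0.1 * p                                # assumed construction cost
    return -(B - C - (C + L) * lam * pf / gamma)

res = optimize.minimize(neg_Z, x0=[3.0], method='Powell', bounds=[(1.0, 8.0)])
print(f"optimal p ~ {res.x[0]:.2f}, Z = {-res.fun:.3f}")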

5.3 One-level optimization


Let p be a parameter vector which enters in both the cost function and the limit
state function g(u, p) = 0. Benefit, construction and damage function as well as the
limit state function(s) are differentiable in p and u. The conditions for the appli-
cation of FORM/SORM hold. In the so-called β-point u∗ the optimality conditions
(Kuhn-Tucker conditions) are (Kuschel and Rackwitz 1997):
$$g(u,p) = 0, \qquad \frac{u}{\|u\|} = -\frac{\nabla_u g(u,p)}{\|\nabla_u g(u,p)\|} \qquad (70)$$

The geometrical meaning of Equation (70) is that the gradient of g(u, p) = 0 is anti-
parallel to the vector of direction cosines of u∗. The basic idea, mentioned first in
(Madsen and Friis-Hansen 1992) and elaborated in (Kuschel and Rackwitz 1997), now
is to use these conditions as constraints in the cost optimization problem thus avoiding
a bi-level optimization. It will turn out that this concept is crucial for further numerical
analysis.
For example, for the model in Equation (43) this leads to:

$$Z(p) = B - C(p) - (C(p)+L)\,\frac{\lambda P_f(p)}{\gamma} \qquad (71)$$

subject to

$$g(u,p) = 0$$
$$u_i\,\|\nabla_u g(u,p)\| + (\nabla_u g(u,p))_i\,\|u\| = 0; \quad i = 1,\ldots,n-1$$
$$h_k(p) \le 0, \quad k = 1,\ldots,q$$

where the $h_k(p) \le 0$, $k = 1,\ldots,q$, are some constraints on the admissible parameter range.


One may also add a constraint on the failure rate λPf (p). It is important to reduce
the set of the gradient conditions in the Kuhn-Tucker conditions by one. Otherwise
the system of Kuhn-Tucker conditions is overdetermined. It is also important that the
remaining Kuhn-Tucker conditions are retained under all circumstances, for example,
if one or more gradient Kuhn-Tucker conditions become co-linear with one or more
of the other constraints. Otherwise the β-point conditions are not fulfilled.
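For the same assumed limit state, a sketch of the one-level formulation of Equation (71): p and u are treated as a single design vector, with g(u, p) = 0 and the β-point (Kuhn-Tucker) condition imposed as equality constraints, one gradient equation being dropped to avoid overdetermination. A general-purpose SQP solver stands in for the special-purpose algorithm discussed later in this section.

import numpy as np
from scipy import optimize, stats

# Design vector z = (p, u1, u2) for the same assumed limit state as above.
def g(z):
    return z[0] - z[1] - 0.5 * z[2]

grad_g = np.array([-1.0, -0.5])                 # gradient w.r.t. u (constant here)

def kt(z):
    """Beta-point condition u_i*||grad g|| + (grad g)_i*||u|| = 0 for i = 1;
    one of the n gradient equations is dropped to avoid overdetermination."""
    u = z[1:]
    return u[0] * np.linalg.norm(grad_g) + grad_g[0] * np.linalg.norm(u)

def neg_Z(z):
    pf = stats.norm.cdf(-np.linalg.norm(z[1:]))  # FORM with beta = ||u||
    B, L, lam, gamma = 6.0, 10.0, 1.0, 0.03
    C = 1.0 + 0.1 * z[0]
    return -(B - C - (C + L) * lam * pf / gamma)

cons = ({'type': 'eq', 'fun': g}, {'type': 'eq', 'fun': kt})
# Start on the correct beta-point branch, as the text warns the Kuhn-Tucker
# conditions must be retained (and satisfied) under all circumstances.
res = optimize.minimize(neg_Z, x0=[3.0, 2.4, 1.2], method='SLSQP',
                        bounds=[(1.0, 8.0), (-8.0, 8.0), (-8.0, 8.0)],
                        constraints=cons)
print("p*, u* =", np.round(res.x, 2), " Z =", round(-res.fun, 3))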
If there are multiple failure modes, $(C(p)+L)\frac{\lambda P_f(p)}{\gamma}$ must simply be replaced by
$\frac{\lambda}{\gamma}(C(p)+L)\left(1 - P\!\left(\bigcap_{j=1}^{s} \bar F_j(p)\right)\right)$ (see Equation (44)). In this case

$$Z(p) \le B - C(p) - (C(p)+L)\,\frac{\lambda}{\gamma}\left[1 - P\!\left(\bigcap_{j=1}^{s} \bar F_j(p)\right)\right] \qquad (72)$$

subject to

$$g_k(u_k,p) = 0; \quad k = 1,\ldots,s$$
$$u_{i,k}\,\|\nabla_u g_k(u_k,p)\| + (\nabla_u g_k(u_k,p))_i\,\|u_k\| = 0; \quad i = 1,\ldots,n_k-1; \; k = 1,\ldots,s$$
$$h_{\ell}(p) \le 0, \quad \ell = 1,\ldots,q$$

where the Kuhn-Tucker conditions have to be fulfilled separately for each failure mode.
Note that there are s distinct independent vectors uk . It may be noted that all failure
mode equations are fulfilled simultaneously.
For (locally) non-stationary problems, especially aging problems and for problems
with non-Poissonian failures, it is possible to propose a numerical solution. More
precisely, the Laplace transform is taken numerically and each value of the failure
density is computed by FORM/SORM. The same scheme, however, applies to the full
Laplace transform of non-stationary problems as well.

$$Z(p) \approx B - C(p) - (C(p)+L)\,\frac{f^{*}(\gamma,p)}{1 - f^{*}(\gamma,p)} \qquad (73)$$
$$g(u_j,p,t_j) = 0 \quad \text{for } j = 0,1,\ldots,m$$
$$u_{i,j}\,\|\nabla_u g(u_j,p,t_j)\| + (\nabla_u g(u_j,p,t_j))_i\,\|u_j\| = 0; \quad i = 1,\ldots,n-1; \; j = 0,\ldots,m$$
$$h_{\ell}(p) \le 0, \quad \ell = 1,\ldots,q$$

where

$$f^{*}(\gamma,p) \approx \sum_{j=0}^{m} w_j\,e^{-\gamma t_j}\,f_T(t_j,p) \qquad (74)$$
j=0

with m the number of time steps and wj the weights for numerical integration of Equa-
tion (74). In order to solve the optimization problem a suitable optimization algorithm
is required. Unfortunately, off-the-shelf sequential quadratic programming methods turned
out to have problems, possibly due to the many equality constraints. Based on sequential
linear programming methods, a new optimization algorithm JOINT 5 (Pshenichnyj
1994) has been developed from an earlier algorithm proposed by Enevoldsen and
Sørensen (Enevoldsen and Sørensen 1992). This turned out to be necessary because the tasks
in Equations (71), (72) and (73) require special precautions which are not necessarily
available in most off-the-shelf algorithms. For example, the algorithm includes a reli-
able and robust slow down strategy to improve stability of the algorithm instead of an
exact (or approximate) line search which too often is the reason for non-convergence.
A special ‘extended’ equation system is solved in case of failure in the quadratic sub-
algorithm, e.g. due to linear dependence of the linearized constraints. In addition, the
algorithm contains a careful active set strategy (for further details see (Streicher 2004)).
As in the bi-level method a suitable scaling of the objective is advantageous.
Gradient-based methods need first derivatives of the objective and all active
constraints. In case of cost optimization under reliability constraints first order Kuhn-
Tucker optimality conditions for a design point are restrictions to the optimization
problem. These equations are given in terms of the first derivatives of the limit state
function. The gradients of these conditions involve second derivatives. Thus, the solu-
tion of the quadratic subproblem needs second derivatives, i.e. the complete Hessian of
g(u, p). The determination of the Hessian in each iteration step is laborious and can be
numerically inexact. In order to avoid this, an approximation by iteration is proposed.
The Hessian is first preset with zeros. Note that linear limit state functions always have
a zero Hessian matrix. This implies a loss of efficiency, but the overall numerical effort
need not rise, because calculation of the Hessian is no longer necessary. In order
to improve the results in case of strongly nonlinear limit state functions, it is possible
to evaluate the Hessian after the first optimization run and restart the algorithm. For
the improved solution the starting point is the solution of the previous run and the
Hessian matrix is fixed for the whole run. This iterative improvement with subsequent
restarts continues until the results differ only with respect to a given precision which is
usually after very few steps. The results can be simultaneously improved by including

second-order corrections during reiteration (see Kuschel and Rackwitz 2000). Any
other more exact improvement can be taken into account in a similar manner.
The techniques proposed enable the solution of quite general problems. They are
based on a one-level optimization but rather strong requirements on differentiability
of the objective, limit state functions and other restrictions must be made. Also, a
possibly substantial increase of the problem dimension must be expected in extreme
cases and, hence, much computing time will be necessary.
In passing it is worthwhile to remark that for the bi-level approach a proof of
convergence is not yet available whereas it is available for the one-level approach.

6 Illustrating example – Chloride attack in an existing building
The following, slightly academic example shows several interesting features and is
an appropriate test case. Chloride attack due to salting and subsequent corrosion,
for example, in the entrance area of a parking house or in a concrete bridge is con-
sidered. A simplified, approximate model for the chloride concentration in concrete is
$C(x,t) = C_s\left(1 - \operatorname{erf}\left(\frac{x}{2\sqrt{Dt}}\right)\right)$, where $C_s$ = surface chloride content (measured ≈ 0.5 cm
below the surface and extrapolated), x = depth and D = diffusion parameter. A suitable
criterion for the time to the start of chloride corrosion of the reinforcement is:

$$C_{cr} - C_s\left(1 - \operatorname{erf}\left(\frac{c}{2\sqrt{Dt}}\right)\right) \le 0 \qquad (75)$$
where $C_{cr}$ = critical chloride content and c = concrete cover. Inversion gives the
initiation time

$$T_i = \frac{c^2}{4D}\left[\operatorname{erf}^{-1}\left(1 - \frac{C_{cr}}{C_s}\right)\right]^{-2} \qquad (76)$$

The stochastic model is

Variable [unit]     Distribution function     Parameters

C_cr [%]            Uniform                   0.4, 0.6
C_s [%]             Uniform                   0.8, 1.2
c [cm]              Log-normal                m_c, 0.8
D [cm²/year]        Uniform                   0.5, 0.8
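A quick Monte Carlo evaluation of Equation (76) under the stochastic model of the table; the log-normal cover is interpreted here as having mean m_c = 5.0 cm and standard deviation 0.8 cm, which is one possible reading of the table's parameters.

import numpy as np
from scipy import special

rng = np.random.default_rng(0)
n = 100_000

# Sampling the table's stochastic model; the log-normal cover c is assumed
# to have mean m_c = 5.0 cm and standard deviation 0.8 cm.
Ccr = rng.uniform(0.4, 0.6, n)
Cs = rng.uniform(0.8, 1.2, n)
mc, sd = 5.0, 0.8
sig = np.sqrt(np.log(1.0 + (sd / mc) ** 2))
c = rng.lognormal(np.log(mc) - 0.5 * sig ** 2, sig, n)
D = rng.uniform(0.5, 0.8, n)

Ti = c ** 2 / (4.0 * D) * special.erfinv(1.0 - Ccr / Cs) ** (-2)   # Equation (76)
print(f"E[Ti] ~ {Ti.mean():.0f} years, median ~ {np.median(Ti):.0f} years")

Under this reading, the resulting mean initiation time is of the same order as the value E[Ti] = 51 reported below.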

The planned concrete cover is mc = 5.0 cm. By drilling small holes and chemically
analyzing the drill dust, one has determined a chloride concentration of 0.4 at a depth
of 3 cm, with measurement error 0.05. Applying Equation (26) and truncation at t = 12
years gives an updated distribution function of the time to the start of corrosion as
shown in Figure 13.1, where it is compared with the initiation time distribution. It is
seen that chloride penetration occurred slightly more rapidly than expected in renewed
structures. During the initiation time the structure can fail due to time-variant, stationary
extreme loading. It is assumed that each year there is an independent extreme realiza-
tion of the load. Load effects are normally distributed with mean 2.0 and coefficient of

[Figure 13.1 Updated distribution for first failure time and subsequent failure time distributions (probability vs. time in years, 24 to 144; curves: regular distribution and updated distribution).]

variation of 40%. Structural resistance is also normally distributed, with a mean three times
as large as the mean load effect and a coefficient of variation of 30% (p = 6.0). Once corrosion
has started, the mean resistance deteriorates with rate δ(t) = 1 − 0.07t + 0.00002t².
The distribution and density functions of the time to first failure are computed using
SORM in Equation (13) with the failure time distributions in the initiation phase and
in the deterioration phase determined by Equation (7). The structural states in two
arbitrary time steps have constant correlation coefficient of ρ = ρij = 0.973. First, the
mean times of the various distributions in Equation (13) are determined. One finds
E[Ti ] = 51 and E[Td ] = 9.4. The mean of Te does not exist. Using the distribution in
Equation (13) one determines E[T] = 61. These mean times indicate that virtually no
failures occur during initiation. The risk functions for both distributions assuming the
repair probabilities in Figure 13.2 are first increasing but decrease slightly for larger t.
Visual inspections, inspections by half-cell potential measurements and chemical
analyses are performed at regular intervals a(1) . They are followed by renewals (repairs)
with probability
$$P_A(a_{(1)}) = P\!\left(r(1 + 0.05\,U_R) - C_{s,R}\left(1 - \operatorname{erf}\!\left(\frac{m_{c,(1)}}{2\sqrt{D_R\,a_{(1)}}}\right)\right) \le 0\right) \qquad (77)$$

shown in Figure 13.2, if a chloride concentration of r at the reinforcement was observed.
The term $(1 + 0.05\,U_R)$ models the measurement error, with $U_R$ a standard normal vari-
able. Repair times are assumed negligibly short. Remember, the existing structure is
already 12 years old and has suffered from chloride attack during the whole period.
The first inspection is undertaken after 5 years. For all subsequent renewed structures
the first inspection is after 8 years. Erection costs are $C(m_c,m_r) = C_0 + C_1 m_c^2 + C_2 m_r$,
inspection costs are $I = 0.02\,C_0$, and we have $C_0 = 10^6$, $C_1 = C_2 = 10^4$, $L = 10\,C_0$,
$\gamma = 0.03$. For preventive repairs the costs are $R(m_c,m_r) = 0.6\,C(m_c,m_r)$, where $m_r$ is the safety
factor separating the means of load effect and resistance.

[Figure 13.2 Repair probabilities (probability of repair vs. time in years, 16 to 80; regular probability, r = 0.42; updated probability, r = 0.43).]

[Figure 13.3 Age replacement (expected maintenance cost in 10⁶ MU vs. replacement age a for systems of 1, 2 and 5 units).]

The benefit is determined by using a decaying rate $b(t) = b\,\exp[-0.0001\,t^2]$, $b = 0.15\,C_0$, in the
model in Equation (35). All costs are in appropriate currency units. It is noted that the physical and
cost parameters are somewhat extreme but not yet unrealistic. When optimizing with
respect to the inspection interval the Laplace transforms are taken numerically using
Simpson’s integration formula. We first show the total cost (preventive and correc-
tive) for the case of systematic age-dependent repairs and system sizes of s = 1, 2 and 5
(Figure 13.3). As expected, the total cost are higher for larger systems and the optimum
replacement interval decreases.

[Figure 13.4 Total cost for inspection and repair (expected maintenance cost in 10⁶ MU vs. inspection interval a; 1 unit, r = 0.41; 2 units, r = 0.36; 5 units, r = 0.33; first inspection after 8 years).]

[Figure 13.5 Expected maintenance cost of an existing n-unit structure in 10⁶ MU, with periodic inspections at intervals a1 and a beginning after 5 and 8 years, respectively (contour plots over a1 and a for n = 1, r1 = 0.43, r = 0.42, D = 1.55×10⁶; n = 2, r1 = 0.40, r = 0.36, D = 1.84×10⁶; n = 5, r1 = 0.35, r = 0.32, D = 2.30×10⁶).]

Figure 13.4 shows the results for the inspection/repair strategy. Here, we have also
optimized the repair thresholds r. They become more stringent for larger systems. Also,
the optimum inspection/replacement intervals are much smaller than in the simple age-
dependent case. The differences in cost between systematic age-dependent repairs and
repairs after inspections are not large in this example. By parameter changes it is, how-
ever, easy to make them larger. The result of an optimization with respect to a1 and a is
shown in Figure 13.5 for mc = 5 and mr = 6. One sees that the contour lines are spaced
more narrow for a1 than for a. The optima with respect to a and a1 are rather flat.
If, however, the repair probabilities are much smaller than those given in Figure 13.2, no optimum
would be found. The inspection intervals depend strongly on the system size.

7 Conclusions
The theory developed earlier for optimal design and maintenance of aging structural
components and systems is extended to optimal repair and retrofit of existing struc-
tures. It is assumed that structures are maintained (inspected and repaired with certain
probability) at regular time intervals and systematically reconstructed after failure.
Age-dependent and block repairs are studied assuming negligibly short repair times.
Three models for the benefit are discussed. Due to updating by additional investiga-
tions, the time to first failure usually has different probabilistic characteristics than all
other times. Appropriate objective functions for cost-benefit optimization are derived.
It is pointed out that inspections and possible repair events and failure events must
address the same realization of the damage process if preventive maintenance is to make
sense at all. Even if the risk function initially was increasing, maintenance operations
will let the risk function drop. Perfect inspections and repairs will reduce the risk func-
tion down to zero. For imperfect inspections the risk function will drop down to finite
values. This generally requires the numerical computation of the renewal intensity by
differentiating the renewal function for which tight bounds can be given.

References

Aitchison, J. & Dunsmore, I.R. 1975. Statistical Prediction Analysis. New York: Cambridge
University Press.
Ambartzumian, R., Der Kiureghian, A., Ohaniana, V. & Sukiasiana, H. 1998. Multinor-
mal probability by sequential conditioned importance sampling: Theory and application.
Probabilistic Engineering Mechanics 13(4):299–308.
Au, S.-K. & Beck, J.L. 2001. First excursion probabilities for linear systems by very efficient
importance sampling. Probabilistic Engineering Mechanics 16(3):193–207.
Ayhan, H., Limón-Robles, J. & Wortman, M.A. 1999. An approach for computing
tight numerical bounds on renewal functions. IEEE Transactions on Reliability 48(2):
182–188.
Barlow, R.E. & Proschan, F. 1965. Mathematical Theory of Reliability. New York: John
Wiley & Sons.
Barlow, R.E. & Proschan, F. 1975. Statistical Theory of Reliability and Life Testing: Probabilistic
Models. New York: Holt, Rinehart & Winston.
Brown, M. & Proschan, F. 1983. Imperfect repair. Journal of Applied Probability 20:
851–859.
Cox, D.R. 1962. Renewal Theory. Monographs on Applied Probability and Statistics. London:
Chapman & Hall.
Cox, D.R. & Isham, V. 1980. Point Processes. Monographs on Applied Probability and
Statistics. London: Chapman & Hall.
Cramér, H. & Leadbetter, M.R. 1967. Stationary and Related Stochastic Processes. New York:
John Wiley & Sons.
Dunnett, C.W. & Sobel, M. 1955. Approximations to the probability integral and cer-
tain percentage points of multivariate analogue of Student’s t-distribution. Biometrika 42:
258–260.
Enevoldsen, I. & Sørensen, J. 1992. Optimization algorithms for calculation of the joint design
point in parallel systems. Structural and Multidisciplinary Optimization 4(2):121–127.
Fox, B. 1966. Age replacement with discounting. Operations Research 14(3):533–537.

Genz, A. 1992. Numerical computation of multivariate normal probabilities. Journal of


Computational and Graphical Statistics 1:141–149.
Gollwitzer, S. & Rackwitz, R. 1983. Equivalent components in first-order system reliability.
Reliability Engineering 5:99–115.
Gollwitzer, S. & Rackwitz, R. 1988. An efficient numerical solution to the multinormal integral.
Probabilistic Engineering Mechanics 3(2):98–101.
Hasofer, A. 1974. Design for infrequent overloads. Earthquake Engineering and Structural
Dynamics 2(4).
Hasofer, A.M. & Rackwitz, R. 2000. Time-dependent models for code optimization. In
R.E. Melchers & M.G. Stewart (eds), Proceedings of the 8th International Conference on
Applications of Statistics and Probability (ICASP8), Sydney, Australia, December 1999,
Vol. 1, Rotterdam/Brookfield, pp. 151–158. CERRA: A.A. Balkema.
Hohenbichler, M. & Rackwitz, R. 1981. Non-normal dependent vectors in structural safety.
ASCE Journal of the Engineering Mechanics Division 107(6):1227–1249.
Hohenbichler, M. & Rackwitz, R. 1983. First-order concepts in system reliability. Structural
Safety 1(3):177–188.
Joanni, A.E. & Rackwitz, R. 2006. Stochastic dependencies in inspection, repair and failure
models. In C. Guedes Soares & E. Zio (eds), Proceedings of the European Safety and Reliability
Conference, Estoril, Portugal, September 2006, London, pp. 531–537. Taylor & Francis.
Kuschel, N. & Rackwitz, R. 1997. Two basic problems in reliability-based structural
optimization. Mathematical Methods of Operations Research (ZOR) 46(3):309–333.
Kuschel, N. & Rackwitz, R. 2000. Time-variant reliability-based structural optimization using
SORM. Optimization 47(3/4):349–368.
Madsen, H.O. & Friis-Hansen, P. 1992. A comparison of some algorithms for reliability-
based structural optimization and sensitivity analysis. In R. Rackwitz & P. Thoft-Cristensen
(eds), Proceedings of the 4th IFIP WG 7.5 Working Conference on Reliability and Optimiza-
tion of Structural Systems, Munich, Germany, September 1991, Berlin, pp. 443–451. IFIP:
Springer.
Pandey, M.D. 1998. An effective approximation to evaluate multinormal integrals. Structural
Safety 20(1):51–67.
Powell, M.J.D. 1994. A direct search optimization method that models the objective and con-
straint functions by linear interpolation. In S. Gomez & J.-P. Hennart (eds), Proceedings of
the 6th Workshop on Optimization and Numerical Analysis, Oaxaca, Mexico, January 1992,
Dordrecht, pp. 51–67. Kluwer Academic Publishers.
Pshenichnyj, B.N. 1994. The Linearization Method for Constrained Optimization, Volume 22
of Computational Mathematics. Berlin: Springer.
Rackwitz, R. 2000. Optimization – the basis of code making and reliability verification.
Structural Safety 22(1):27–60.
Rackwitz, R. 2001. Reliability analysis – a review and some perspectives. Structural
Safety 23(4):365–395.
Rackwitz, R., Lentz, A. & Faber, M.H. 2005. Socio-economically sustainable civil engineering
infrastructures by optimization. Structural Safety 27(3):187–229.
Rosenblueth, E. 1976. Optimum Design for Infrequent Disturbances. ASCE Journal of the
Structural Division 102(9):1807–1825.
Rosenblueth, E. & Mendoza, E. 1971. Reliability optimization in isostatic structures. ASCE
Journal of the Engineering Mechanics Division 97(6):1625–1642.
Streicher, H. 2004. Zeitvariante zuverlässigkeitsorientierte Kosten-Nutzen-Optimierung für
Strukturen unter Berücksichtigung neuer Ansätze für Erneuerungs- und Instandhal-
tungsmodelle. PhD dissertation, Technische Universität München, Munich, Germany.
In German.

Streicher, H., Joanni, A. & Rackwitz, R. 2006. Cost-benefit optimization and target relia-
bility levels for existing, aging and maintained structures. Structural Safety. Accepted for
publication.
Streicher, H. & Rackwitz, R. 2004. Time-variant reliability-oriented structural optimization and
a renewal model for life-cycle costing. Probabilistic Engineering Mechanics 19(1–2):171–183.
Van Noortwijk, J.M. (2001). Cost-based criteria for obtaining optimal design decisions. In
Proceedings of the 8th International Conference on Structural Safety and Reliability, Newport
Beach, CA, U.S.A., June 2001. CD-ROM.
Chapter 14

A reliability-based maintenance
optimization methodology
Wu Y.-T.
Applied Research Associates Inc., Raleigh, NC, USA

ABSTRACT: Many mechanical and structural systems, including aircraft, ships, cars, and oil and
gas pipelines, utilize a structural integrity program to monitor and sustain structural integrity
throughout the service life. Developing optimal maintenance plans under various uncertainties
requires probabilistic analyses of damage accumulations, damage detections, and mitigation
actions. Given the wide spectrum of the options and the complexities in modeling, the most prac-
tical way to conduct maintenance optimization is by random simulations, preferably efficient
sampling methods. This chapter presents a reliability-based maintenance optimization (RBMO)
methodology with a focus on computational strategies that involve physics-based models.
In particular, a two-stage importance sampling (TIS) approach that drastically reduces com-
putational time is described. Stage 1 computes failure probability and systematically generates
failure samples, given no inspections. The failure samples are then repeatedly used in Stage 2 for
inspection optimization. The RBMO methodology is demonstrated using analytical examples
as well as applications related to aircraft and helicopter structural components.

1 Introduction
For economical and reliability/safety reasons, many mechanical and structural sys-
tems apply maintenance practices to sustain structural integrity and reliability over
the design life or extend the life beyond the original design for un-anticipated reasons.
Since fatigue and fracture constitute one of the main failure modes for such systems, this chap-
ter will focus on fracture failure analysis, even though the methodology is applicable
to more general damage accumulation models, including corrosion.
Most existing computational fracture mechanics methods and tools used in the
design of structures apply safety margins to deterministic models. With the real-
ization that many design parameters including defect or flaw characteristics, crack
growth law, crack detection, loads, and usages are uncertain, various conservative
assumptions are often employed to help ensure structural integrity. As an example,
a comparison between deterministic and probabilistic damage tolerance analyses is
shown in Table 14.1. The safety-factor based approach applies bounds, either explic-
itly or implicitly, to key design variables. The probabilistic approach, on the other hand,
requires relatively more precise characterizations of the input uncertainties based on
data and expert knowledge.
In the more traditional safe-life design approach (Palmberg et al. 1987), the fatigue
and facture life of a structure is assumed to be governed by crack initiation time, and

Table 14.1 Example of deterministic vs. probabilistic damage tolerance.

                          Deterministic             Probabilistic

Reliability principles    Bounds, safety factors    Probability & confidence
Flaw/defect size          A given crack size        Distribution of crack size
Existence                 Probability = 1           0 ≤ Probability ≤ 1
Inspection schedule       Life/N                    Max. risk reduction
Safety measure            Safety margin             Reliability = 1 − Pf
Other variables           Bounds, safety factors    Distributions

the products are designed to survive their design life with a safety margin. The safe-life
approach is typically used for structures that are either very difficult to repair or, if
failed, may cause severe consequences. These products are designed and built to work
without the requirements of any repairs. One drawback is that the over-conservatism
can cause high cost and poor performance (e.g., due to weight increase), and there
are no provisions to extend the service life even though the product may still have
a considerable remaining life after the design life. In addition, safe-design has been
proven to be un-conservative in cases where there are initial defects that cannot be
detected. To overcome these drawbacks, an alternative approach is to use damage
tolerant design which recognizes and allows defects, with a provision that a plan needs
to be in place to monitor damage and implement mitigation actions such as repair,
replacement, load reduction, corrosion rate control, and other methods.
Damage tolerant design is technically more challenging because it requires using
capable detection devices to catch damage just in time – not too early when dam-
age cannot be detected effectively, and not too late when the damage has grown to
such a dangerous size that a failure is likely before the next inspection. Scheduled
inspections have been applied for structures such as aircraft wings, engine blades and
disks, and oil/gas transmission pipelines for safety, performance or economical reasons
(Berens et al. 1991; Wu et al. 2002; Cunha et al. 2006). However, because of a lack
of models and computational tools, inspection schedules are usually over-simplified.
For example, the easiest scheduling approach is based on equally dividing the total
service life by a number of inspections. This approach, which is simple, is clearly ill-
informed, especially for aging structures where the system deteriorates more rapidly
toward the end of the service life. An optimal scheduling requires a reliability-based
approach, which is the focus of this chapter. Note that in this chapter "reliability-based"
and "risk-based" are often used interchangeably, even though "risk" usually involves
consequences of failures, such as risk = (failure probability) × (cost of failure). Using
this simplified definition, and assuming the same cost of failure, reducing risk means
reducing probability-of-failure.

2 Reliability-based methodology

2.1 Reliability-based damage tolerance framework


Building on an earlier NASA Probabilistic Structural Analysis Methods program
(Millwater et al. 1996) and recent FAA research work for risk assessment of aircraft

turbine engines (Leverant et al. 1997; Wu et al. 2002; Millwater et al. 2000), a frame-
work for reliability based damage tolerance (RBDT) has been developed in a feasibility
study (Wu et al. 2004; Wu et al. 2005), summarized in Figure 14.1, that considers a
wide range of uncertainties including:

• Random or uncertain parameters in material (e.g., threshold of the stress-intensity
  factor, modulus of elasticity)
• Defect or flaw (including size, shape, and location, and the frequency of
occurrence)
• Loading, type of usage (with frequency of occurrence)
• Finite element model (including modeling error)
• Crack growth model (including modeling error)
• Random inspection time
• Probability of detection (POD).

A typical maintenance program includes inspection schedules, probability of detection
curves, repairs, replacements, and other mitigation methods that are useful for slowing
down or stopping the damage accumulation process. For brevity, the term “inspection
optimization’’ is loosely used to mean optimizing the schedules of inspections associ-
ated with a maintenance program that defines repair and replacement requirements.
Unless specifically mentioned, “repair’’ is a term loosely used to include replacement
as a special case.
RBDT models initial defects and other time-independent uncertain variables as ran-
dom variables, tracks the distributions of the growing defects, and simulates detections
and repairs of the defects by modifying defect probability distributions. A failure is
assumed to occur when the defect size exceeds the critical size for fracture. Opti-
mization of inspection schedules is based on minimizing probability of failure or,
more generally, risk. In practical applications, constructing and updating crack size
distribution may be computationally difficult. Alternatively, it is relatively straight-
forward to conduct RBDT using random simulation such as Monte Carlo. However,
the downside is that standard Monte Carlo can be highly time-consuming. There-
fore, approximate methods and more efficient sampling methods are of significant
interest. To demonstrate RBDT for practical applications, a prototype RBDT soft-
ware was developed that integrated a finite element stress analysis module, a fracture
mechanics module, and a probabilistic module. The modular design is illustrated in
Figure 14.2.

3 Probabilistic analysis methods

3.1 Fracture failure limit-state function


Under cyclic loading, an initially small flaw may grow and cause a fracture when
the stress-intensity factor K reaches the fracture toughness Kc . The fracture limit
state is:

$$K(X_1,\ldots,X_n,N_s) = K_c \qquad (1)$$
[Figure 14.1 A reliability based damage tolerance framework (integrated FE stress, FM life, and probabilistic analyses; two-stage failure sampling; crack growth with usage/load spectra, flaw, stress, material and modeling-error uncertainties; inspection planning with and without inspection for maximum risk reduction).]



[Figure 14.2 RBDT software modular design (a probabilistic analysis module driven by user commands, coupled through a generic function interface to plug-in FE stress and FM life modules).]

The stress-intensity factor is a function of service life, Ns , and the random variable
vector X that includes all but the inspection-related random parameters. Alternatively,
the limit state can be expressed as:

$$g(X,N_s) = N_f(X) - N_s \qquad (2)$$

where Nf is cycles-to-failure.

3.2 Probability of failure without inspections


The probability-of-failure without inspection, pof , can be formulated as:
 
$$p_{of} = \Pr[N_f(X) \le N_s] = \int\!\cdots\!\int_{N_f \le N_s} f_X(X)\,dX \qquad (3)$$

in which fX (X ) is the joint probability density function of X . Many methods originat-


ing from Structural Reliability could be used to compute pof (see e.g., books by Ang &
Tang 1984; Madsen et al. 1986; Thoft-Christensen & Murotsu 1986; Melcher 1987;
Ditlevsen & Madsen 1996; Nikolaidis et al. 2005). Relative to with-inspection, this
integral is relatively easier to compute as many approximation methods are suitable
and effective. For well-behaved functions, some of the more widely used approxi-
mation methods include First-Order Reliability Method and Second-Order Reliability
Method (Schuëller 1998; Rackwitz 2001; Der Kiureghian 2005) which are based on
approximation after transforming the original random variables to the standard nor-
mal variables, u. For well-behaved but implicitly defined g-functions (such as finite
element models) requiring extensive computations, the Advanced Mean Value (AMV)
method and its extension, AMV+, provide a quick estimate of the response CDF using
n + 1 + (number of CDF levels) g-function evaluations (Wu et al. 1990).
Denoting the original CDF of X as $F_X(x)$ and the standard normal CDF of u as $\Phi(u)$,
the transformation between X and u is done by using $x_i = F_{X_i}^{-1}(\Phi(u_i))$ for independent

random variables and Rosenblatt and other transformations for correlated random
variables (Rosenblatt 1952; Ang & Tang 1984; Liu & Der Kiureghian 1986). In the
u-space, it is more convenient to estimate the probability using linear or quadratic
approximate g-functions.
In the independent Gaussian space, Equation 3 becomes:
 
pof = Pr [Nf (X ) ≤ Ns ] = ··· φu (u)du (4)
Nf ≤Ns

in which φu (u) is the standardized-normal Joint PDF (JPDF). Figure 14.3 shows the
JPDF of a bivariate Gaussian after removing the JPDF from the failure region that
represents the density volume of pf . When the g-function is well-behaved, the pf volume
may be represented well by a volume cut centered at the maximum-JPDF point, or
Most Probable Point, MPP (in the structural reliability literature, this point has also been
called the Design Point, Minimum Distance Point, or Most Likely Failure Point).
The MPP-based results can be represented as:

$$p_{of} = \Phi(-\beta)\cdot r_1 \qquad (5)$$

in which β is the distance from the origin to the MPP, and r1 is an error factor (larger
or smaller than 1). The FORM solution assumes r1 = 1.
Equation 4 can be recast into Equation 6, a form suitable for Monte Carlo simulation:

$$p_f = \int\!\cdots\!\int I(u)\,\phi_u(u)\,du = E[I(u)] \qquad (6)$$

where E[·] is the expectation operator and $\phi_u(u)$ is the JPDF of the standardized normal
distribution. By sampling the u-vector K times and taking the sampling average of the indi-
cator function, defined as I(u) = 1 for g ≤ 0 and 0 for g > 0, the probability integral can
[Figure 14.3 MPP-based method for computing pf (the joint PDF with the limit state g = 0, the maximum-JPDF point on g ≤ 0 at distance β from the origin, random samples in the failure region, and pf = Φ(−β)·r1).]


be estimated using the sampling average. By assuming that the estimator $\hat p_{of} = \sum_{k=1}^{K} I_k(u)/K$
follows a binomial distribution, the sampling error can be estimated using:

$$\gamma(\%) = 100\cdot\left(-\Phi^{-1}(\alpha/2)\right)\cdot\sqrt{\frac{1-p_f}{p_f\cdot K}} \qquad (7)$$

where γ(%) is the estimated error bound at a confidence level of 1 − α. Equation 7
shows that the error is proportional to $1/\sqrt{K}$.
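A minimal sketch of Equations (6) and (7): crude Monte Carlo on an assumed linear limit state in standard normal space, with the binomial error bound evaluated at the 95% confidence level.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def g(u):
    """Assumed linear limit state in standard normal space; failure when g <= 0."""
    return 3.0 - u[:, 0] - 0.5 * u[:, 1]

K = 1_000_000
u = rng.standard_normal((K, 2))
pf = np.mean(g(u) <= 0.0)                       # indicator average, Equation (6)

alpha = 0.05                                    # 95% confidence level
err_pct = 100.0 * (-stats.norm.ppf(alpha / 2)) * np.sqrt((1 - pf) / (pf * K))
print(f"pf ~ {pf:.3e} (+/- {err_pct:.1f}% at 95% confidence)   # Equation (7)")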
The MPP-based methods are suitable if the g-function can be well represented by
a linear or quadratic surface, at least around the MPP. The fact that FORM/SORM
cannot consistently provide accurate results has prompted the development of efficient
sampling approaches to supplement the FORM/SORM methods (Harbitz 1986; Bucher 1988;
Ditlevsen et al. 1989; Hohenbichler & Rackwitz 1988; Karamchandani & Cornell 1991;
Kale et al. 2005). Typically these hybrid methods require the MPP solution to guide
the selection of a sampling density that puts more weight where failure is more likely.
The selected importance sampling density h(u) must first satisfy the basic PDF requirement:

∫···∫ h_u(u) du = 1    (8)

Using h(u) as the sampling density, Equation 4 can be written as:

p_f^o = ∫···∫_{N_f ≤ N_s} [φ_u(u)/h_u(u)] · h_u(u) du = E[φ_u(u)/h_u(u)]    (9)

To achieve high efficiency, the shape of h(u) is usually selected to resemble the original
failure density function, and the ideal support is one that is slightly greater than the
failure domain. An extreme case is h(u) = φ_u(u)/p_f^o for g ≤ 0 and h(u) = 0 for g > 0,
resulting in:

∫···∫ I(u) · φ_u(u)/[φ_u(u)/p_f^o] du = E[p_f^o] = p_f^o    (10)

which is a constant with zero variance. In practice, approaching this limiting
condition means one would need to know p_f^o already, which is not the case. Nevertheless,
the limiting case shows that the sampling variance approaches zero as h(u) approaches
this normalized failure density; one should therefore select h(u) such that, while
satisfying Equation 8, its shape mimics the shape of φ_u(u) in the failure domain and
is zero elsewhere. Examples of importance sampling densities include those using
the same φ_u(u) but with a slightly conservative limit state.
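A sketch of Equation 9 under the same hypothetical linear limit state; here h(u) is the standard-normal PDF re-centered at the MPP, one common choice of importance sampling density (not necessarily the one used in the chapter):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(1)
n, K, beta = 2, 5_000, 3.0

def g(u):
    return beta - u.sum(axis=1) / np.sqrt(n)      # hypothetical limit state

mpp = beta * np.ones(n) / np.sqrt(n)              # MPP of the linear g-function
u = mpp + rng.standard_normal((K, n))             # samples drawn from h(u)

phi = multivariate_normal(mean=np.zeros(n)).pdf(u)   # original JPDF phi_u(u)
h = multivariate_normal(mean=mpp).pdf(u)             # sampling density h_u(u)

w = np.where(g(u) <= 0.0, phi / h, 0.0)           # integrand of Equation 9
pf = w.mean()
print(f"pf ~ {pf:.2e} (exact {norm.cdf(-beta):.2e})")
```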
While many researchers suggest that the MPP methods, including the hybrids, are
effective for many practical problems, it should be emphasized that these methods,
though featuring excellent “local’’ approximations, offer no guarantee that the
failure domain is sufficiently explored and modeled properly. This lack-of-assurance
issue is somewhat similar to the situation of optimizing a function with multiple local
minima, where an optimizer can settle on a local minimum without a mechanism to jump
out and search for the global minimum. In optimization, however, a local minimum is at
least better than the other solutions in the explored region, whereas in probabilistic
analysis a local MPP-based solution could have a large error in p_f^o. Therefore, at the
expense of some efficiency, a more reliable h(u) should be selected that is less local and
has the ability to self-correct the error in case the initially found MPP is not
representative of the failure density.

3.3 Probability of failure with inspections

3.3.1 Probability of detection model
Optimal inspection schedules depend on how effective detection devices are and how
fast defects grow. The effectiveness of a detection device is typically characterized using
a POD(a) curve, defined as the probability of detecting a defect with a size greater than
or equal to a. The simplest POD model is a step function: POD is one if the defect is
greater than a threshold and zero otherwise. The complementary function of POD is the
probability of non-detection, PND, i.e., PND(a) = 1 − POD(a).
POD depends on the physics of the detection method (e.g., eddy current, magnetic
flux, ultrasonic, vibrations) and on other factors such as the damage mechanism, the
geometry of the component, the defect shape, location and accessibility, the presence of
coating or insulation, the equipment sensitivity setting, measurement error, etc. Creating
PODs by experiments using manufactured products is often difficult due to cost and time
constraints. As an example of attempts to overcome this issue, a number of industries
in the Netherlands have recently initiated a joint industry project to develop a
physics-based POD model and a corresponding software tool (Volker et al. 2004) for
pipeline and chemical plants, among other applications.
Sometimes damage data are available to correlate the detector-measured and the
actual defect sizes. One example is that the dimensions of pipeline metal loss (e.g.,
depth, length, and width) can be estimated using an in-line inspection machine such
as an MFL (magnetic flux leakage) device; when larger defects are found and some
pipeline sections are excavated, the actual defect sizes may become available. Such
verification data can be used to develop a statistical model that expresses the actual
size as a function of the measured size with a sizing-error term. Such an error term can
be included as an additional random variable in RBMO. Alternatively, Non-Destructive
Examination (NDE) equipment manufacturers may provide a sizing-error bound at a
specified confidence level. For example, MFL providers might specify the accuracy
as +/−15% of depth/wall-thickness at an 80% confidence level for general corrosion
(e.g., Rosen 2004). This information can be used to develop a random error term by
using, e.g., a Gaussian distribution.
In the remaining portions of the chapter, we will focus on a simplified defect
detection-and-repair model that assumes a binary result, detected or not-detected,
and further assumes that a repair is always performed when a defect is detected. This
is a reasonable model for safety-critical structures such as aircraft engines, but it could
be too conservative for other types of structures, such as oil pipelines, where, if a
detected corrosion depth is a small fraction of the pipe wall thickness, the repair
decision can wait until the next inspection. In the latter case, a repair threshold can
be selected that corresponds to a tolerable risk, and an “equivalent POD’’ can be
developed by truncating the left “tail’’ such that POD = 0 for defects smaller than the
threshold.
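The following sketch illustrates the binary detect-and-repair model and the “equivalent POD’’ truncation (the log-logistic POD form and all parameter values here are hypothetical placeholders, not data from the chapter):

```python
import numpy as np

rng = np.random.default_rng(2)

def pod(a, a50=2.0, slope=4.0):
    # Hypothetical log-logistic POD(a): probability of detecting a defect
    # of size a; a50 is the size detected with 50% probability.
    return 1.0 / (1.0 + (a50 / np.maximum(a, 1e-12)) ** slope)

def equivalent_pod(a, threshold):
    # "Equivalent POD": zero below the repair threshold, POD(a) above it.
    return np.where(a < threshold, 0.0, pod(a))

a = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)   # simulated defect sizes
detected = rng.random(a.size) < equivalent_pod(a, threshold=0.5)
print(f"fraction detected (and repaired): {detected.mean():.3f}")
```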

3.3.2 MPP-based methods

Using the MPP-based methods, the first step is to compute p_f^o for N_f ≤ N_I, where N_I
is the first inspection time. Immediately after the surviving defected parts have been
inspected, detected, measured, and repaired or replaced, the defect distribution needs
to be updated to reflect a “healthier’’ population; the clock is then reset for a new
probabilistic analysis over the next duration, before reaching the next inspection time
or the end of the service life, whichever comes first. The summation of the probability-
of-failure over the time increments is the cumulative probability of failure (White et al.
2002). The effectiveness of a maintenance plan can be measured by the difference
between the cumulative probability of failure and p_f^o.
Updating f_X(X) after each maintenance action is a difficult task using analytical
methods. One approach is to make an additional approximation in the FORM calculation
(Harkness et al. 1994). Other approaches include FORM-based p_f conditioned on
inspection results, where the detected-and-measured crack size is modeled as a random
variable (Madsen et al. 1987). Based on the conditional-probability formulation,
a FORM solution for system reliability updating was developed and demonstrated.
While these methods can be used in certain applications, they are difficult to generalize,
especially to more complex maintenance situations, such as when the inspection schedule
is random or when different detection devices (with different PODs) are to be applied
to different defect locations. In contrast, the random simulation approach only requires
tracking the defect population, and the process of simulating maintenance can be
implemented easily without sophisticated computations. The only drawback is the need
for a large number of samples to achieve accurate results, particularly for small-p_f
problems. This provided the motivation for developing efficient sampling methods,
including TIS.

3.3.3 Two-stage importance sampling (TIS)

In the two-stage importance sampling approach, the first stage computes p_f^o and
generates failure samples without inspections. Stage 2 simulates inspection and repair
using the failure samples from Stage 1. The approach is built on the assumption that
any random sample from a standard Monte Carlo (MC) method that is safe in Stage 1
will be safe in Stage 2. This assumption is valid as long as the maintenance program is
sound and well executed, so that a part that is safe in Stage 1 does not become unsafe
after inspection and repair. The concept of TIS is illustrated in Figure 14.4.
Using the TIS approach, the probability-of-failure with inspection, p_f^W, is:

p_f^W = p_f^o · (K − N_Repaired)/K    (11)

Figure 14.4 Illustration of the two-stage importance sampling method. [figure omitted: crack growth curves (defect size versus time in flight hours) for the defect population that would fail before the service life N if no inspections were performed; samples are removed if detected, based on POD, at the 1st and 2nd inspections before the critical crack size is reached.]

Equation 11 shows that the risk is reduced by repairing (or replacing) the
weaker parts before they fail.
TIS is efficient because the computation of the indicator functions in the safe region
of a full MC simulation is skipped completely. The computational challenge is to
systematically generate samples directly from the failure domain. Three methods are
discussed below.

3.3.3.1 Exact failure-region sampling by numerical integration
The first method, most suitable for a small number of random variables in addition to
the inspection-related variables (such as inspection time and POD), is to generate random
samples using conditional distributions constrained by the limit state. To illustrate,
consider three random variables with a joint PDF f(x1, x2, x3). The probability of
failure can be formulated as a three-dimensional integral:

p_f = ∫∫∫_{g(x) ≤ 0} f(x1, x2, x3) dx1 dx2 dx3    (12)

Now define a “truncated’’ distribution by letting f(x) = 0 in the region where g(x) > 0.
In the failure region Ω = {g(x) ≤ 0}, the joint PDF becomes:

f_Ω = f(x1, x2, x3)/p_f    (13)

which can be used to derive the following marginal and conditional PDFs:

f_{X1}(x1) = ∫∫ f_Ω(x1, x2, x3) dx2 dx3

f_{X2|x1}(x2) = f(x1, x2)/f_{X1}(x1) = [∫ f_Ω(x1, x2, x3) dx3] / f_{X1}(x1)    (14)

f_{X3|x1,x2}(x3) = f(x1, x2, x3)/f(x1, x2) = f_Ω(x1, x2, x3) / [f_{X2|x1}(x2) · f_{X1}(x1)]
The above PDFs lead to:

F_{X1}(x1) = p_f^C(x1)
F_{X2|x1}(x2) = p_f^C(x2|x1)    (15)
F_{X3|x1,x2}(x3) = p_f^C(x3|x1, x2)

Finally, the inverses of the above marginal and conditional CDFs are used to generate
failure samples in the following sequence (see, e.g., Ang & Tang 1984):

x1 = F_{X1}^{-1}(U1)
x2 = F_{X2}^{-1}(U2|x1)    (16)
x3 = F_{X3}^{-1}(U3|x1, x2)

where U_i (i = 1:3) are uniform random variables. Note that the computation of Equation 15
can be done in the transformed u-space without explicitly using Equation 14.
The above random-number generation process involves n-fold integrations in the failure
region; the increasing complexity of higher-dimensional integration therefore limits
the method to simple g-functions with a few random variables. Nevertheless, when
the implementation is practical, this approach provides effective generation of failure
samples for TIS. The method has been used in an application involving aircraft engine
disk design (Wu et al. 2002).
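A minimal sketch of this conditional-sampling sequence for a hypothetical case with two independent standard normal variables and failure region x1 + x2 ≥ c; a grid-based inverse CDF stands in for the failure-region integrations of Equations 14 and 15:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
c = 3.0 * np.sqrt(2.0)                    # failure region: x1 + x2 >= c

# Marginal PDF of x1 over the failure region (Equation 14, up to 1/pf):
grid = np.linspace(-8.0, 8.0, 4001)
f1 = norm.pdf(grid) * norm.sf(c - grid)   # phi(x1) * P(X2 >= c - x1)
F1 = np.cumsum(f1)
F1 /= F1[-1]                              # normalized conditional CDF (Eq. 15)

K = 1_000
x1 = np.interp(rng.random(K), F1, grid)   # inverse-CDF sampling (Equation 16)

# Given x1, X2 is a standard normal truncated to [c - x1, infinity):
lo = norm.cdf(c - x1)
x2 = norm.ppf(lo + rng.random(K) * (1.0 - lo))

assert np.all(x1 + x2 >= c - 1e-6)        # every sample lies in the failure region
```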

3.3.3.2 MPP-based importance sampling with conservative limit state

The second method is MPP-based, but it uses a conservative limit state designed to help
correct MPP and FORM errors. Its advantage relative to Method 1 is the ability
to generate independent failure samples easily, regardless of the number of random
variables. The method is efficient and is recommended for users familiar with the MPP
limitations.
Using the MPP with a conservative limit state, the TIS approach consists of the following steps.
Step 1. Using FORM or an equivalent method, compute the inspection-free risk p_f^o.
The FORM solution is p_f^o = Φ(−β).
Step 2. Conduct a second FORM analysis using an adjusted limit state defined as:

g(X, N_S) = N_f(X) − A · N_S    (17)

where A is a “safety factor’’ introduced to generate conservative samples in Step 3
to check the solution from Step 1. A is always greater than 1, so that the adjusted
failure region contains the true failure region and the adjusted failure probability, p_f^A,
is greater than p_f^o.
The value of A can be chosen based on the FORM result to anticipate a larger p_f, say
30% larger, by parallel-shifting the MPP tangent surface towards the origin to reduce
β. For example, let β_FORM = 3, so that p_f^o = Φ(−3) = 0.00135. Increasing p_f by 30% requires
the adjusted β to be β_A = −Φ^{-1}(0.00135 × 1.3) = 2.919. Assume that the service life is
N_S = 20 000 and that the calculated life at the shifted MPP is N = 22 000; then the safety
factor is A ≈ 22 000/20 000 = 1.1.
To speed up Step 2, the first MPP can be used as the initial guess in the search for the
second MPP.
Step 3. Generate failure samples in the adjusted failure region using the (hyper-)
tangent surface at the second MPP.
The generation of the u-samples is made more convenient by an MPP-plane
rotation such that the vector from the origin to the MPP coincides with the nth coordinate.
The rotation matrix can be constructed using a plane-rotation procedure such as the
Gram-Schmidt process. The sample-generation procedure is as follows (a sketch is given
below): (i) generate a “tail’’ sample of u_n from a one-dimensional normal PDF φ(u)
truncated at u = β_A; (ii) generate independent u_1 to u_{n−1} from φ(u); (iii) rotate the
u-sample back to the original u-space.
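A sketch of this Step 3 procedure for a hypothetical two-variable case (it assumes the MPP direction is not parallel to the coordinate axes used to complete the orthonormal basis):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def tangent_plane_samples(alpha, beta_A, K):
    """K u-space samples beyond the hyperplane alpha.u = beta_A.
    alpha: unit vector from the origin to the second MPP."""
    n = alpha.size
    # Orthonormal basis via QR (a Gram-Schmidt process); keep the MPP
    # direction as one basis vector, with its sign preserved.
    Q, _ = np.linalg.qr(np.column_stack([alpha, np.eye(n)[:, : n - 1]]))
    if Q[:, 0] @ alpha < 0.0:
        Q[:, 0] *= -1.0
    R = np.column_stack([Q[:, 1:], Q[:, 0]])       # MPP direction last
    # (i) "tail" sample of u_n from phi truncated at beta_A,
    # (ii) independent u_1..u_{n-1} from phi, (iii) rotate back.
    un = norm.ppf(norm.cdf(beta_A) + rng.random(K) * norm.sf(beta_A))
    rest = rng.standard_normal((K, n - 1))
    return np.column_stack([rest, un]) @ R.T

alpha = np.array([1.0, 1.0]) / np.sqrt(2.0)        # hypothetical MPP direction
u = tangent_plane_samples(alpha, beta_A=2.919, K=500)
assert np.all(u @ alpha >= 2.919 - 1e-9)           # all beyond the tangent plane
```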
The fracture lives of all the generated samples are then computed. These samples will in
general include failure samples as well as, typically, a small fraction of safe samples,
as illustrated in Figure 14.5. The crack growth histories of the failure samples should
be saved for the Stage 2 analysis. The larger the safety factor A, the more conservative
the limit state, and the larger the number of safe samples. In addition, the distances
of the failed samples can be sorted; the sample with the smallest distance is the
sample-based MPP.

Figure 14.5 Compute pof using conservative limit state and 2-stage importance sampling. [figure omitted: u-space plot (u1: crack size, u2) showing the exact limit state Nf < Ns (A = 1), the FORM tangent plane (A = 1), and the adjusted short-life domain Nf < A·Ns (A > 1) at distance βA, with the MPP marked.]
Step 4. Re-compute p_f^o using the samples. The simulated samples with lives shorter
than N_S are used to compute a new p_f^o using:

p_f^o = Φ(−β_A) · [No. of failures without inspections / Total number of samples]    (18)

and the result is compared with the result from Step 1. If the new result is similar to the
first, this provides increased confidence that FORM gives a good estimate. In general,
with a sufficient number of samples, the result from Equation 18 is better than the
Step 1 FORM result, because Equation 18 is capable of handling nonlinear g-functions.
In addition, the sample-based MPP can be compared with the FORM-based MPP from Step 1
to help accept or reject the MPP from Step 1. On the other hand, if the two results are
significantly different, the first FORM solution should be rejected, and one might retry
Step 2 with a larger A value and more samples. If the results do not converge even with
more conservative A values, other methods should be used.
Step 5. Compute p_f^W using random simulations.
For each sample, a random number U between 0 and 1 is generated. For a simulated
defect size a, POD(a) is compared with U: the defect is detected if POD(a) > U, and
the inspected part is then repaired, replaced, or passed (if the defect size is considered
safe). If the part is repaired or replaced, a new defect size is randomly drawn from
an appropriate distribution, and the simulation repeats until the end of the service life
is reached.
The purpose of the simulations is to compute the conditional failure probability, p_f^c,
i.e., the probability-of-failure with inspections conditioned on the total number of
simulation samples. The final probability-of-failure with inspections is:

p_f^W = p_f^c · p_f^A = [No. of failures with inspections / Total number of TIS samples] · p_f^A    (19)
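A condensed sketch of Step 5 and Equation 19 (the pod and new_history helpers are hypothetical placeholders for a POD curve and for drawing a replacement part's crack growth history; this is one way to implement the step, not the chapter's code):

```python
import numpy as np

rng = np.random.default_rng(5)

def tis_pw(histories, t_insp, a_crit, p_A, pod, new_history):
    """Stage 2 of TIS: `histories` holds the saved defect-size histories of
    the samples generated in the adjusted (IS) region, one row per sample;
    p_A = Phi(-beta_A) is the probability of that region."""
    failures = 0
    for a in histories:
        if pod(a[t_insp]) > rng.random():        # detected -> repair/replace
            a = new_history(len(a) - t_insp)     # fresh growth history
        if a[-1] >= a_crit:                      # fails by the service life
            failures += 1
    return failures / len(histories) * p_A       # Equation 19: pW = pc * pA
```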

3.3.3.3 Exact failure-region sampling by MCMC

A more recent approach to generating failure samples, practically free of constraints
on the number of variables and on the non-linearity of the limit states, is the
Markov Chain Monte Carlo (MCMC) method (Gamerman 1997; Gentle 1998;
Robert & Casella 2004). The unique feature of MCMC is that the samples are generated
sequentially using the ratio of the PDFs of the current and the next candidate
states. This feature allows the generation of samples using f(x1, x2, . . . , xn) without
integrations. For example, using the Metropolis-Hastings algorithm, the procedure for
generating failure samples can be designed as follows:

(1) Explore the space to find a starting point in the failure region.
(2) A “proposal’’ distribution, q(xNew |xCurrent ), is selected to generate a random move
to a candidate point xNew .
(3) Compute f (xNew ), f (xCurrent ), q(xCurrent |xNew ), and q(xNew |xCurrent ).
382 Structural design optimization considering uncertainties

(4) Compute g(xNew ) and let f (xNew ) = 0 if g(xNew ) > 0.1 2


f (xNew ) q(xCurrent |xNew )
(5) Accept the new point with a probability of ρ = min f (x Current )
· q(xNew |xCurrent )
,1 .
(6) Reject the candidate point and save the current point as the “next’’ point with a
probability of 1 − ρ.
Step 4 ensures that candidate points in the safe domain are rejected with probability
one; consequently, all the generated points lie in the failure domain. Steps 5 and 6 can
be executed using a uniform random-number generator. The effectiveness of the algorithm
depends on the proposal distribution, which defines how the chain moves around the
failure region, and on the acceptance rate. A simple proposal distribution is a uniform
distribution centered at the current point. In this case the proposal distribution is
symmetrical, so q(x_Current|x_New)/q(x_New|x_Current) = 1; as a result, the ratio of
the PDFs of the current and candidate points drives the random movement in a way
that ensures the frequency of visits to any point is asymptotically proportional to the
JPDF in the failure region. The range of the proposal distribution, which characterizes
the step sizes of the random moves, can significantly affect the rejection rate and the
convergence rate towards the target distribution, and therefore needs to be tuned.
Additionally, a “burn-in’’ period (in which the samples are thrown away) may be needed
to improve the quality of the samples, especially if the number of samples used is
relatively small. For higher-dimensional problems, the use of f(x_New)/f(x_Current) can
decrease the acceptance rate drastically and slow down the convergence process. To address
this issue, a modified M-H algorithm has been proposed (Au & Beck 2001) in which a
one-dimensional symmetrical proposal distribution is used in combination with the
individual ratios f(x_i,New)/f(x_i,Current) to allow random movements in some of the
variables and to increase the acceptance rate.
The M-H algorithm has been used in one of the RBMO examples presented below.
Note that the M-H algorithm itself does not provide an answer to the p_f calculation,
but the generated samples can be used together with a p_f computed by other methods such
as importance sampling.
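A sketch of the basic failure-region M-H procedure above (not the Au & Beck modification), using a symmetric uniform proposal so that the q-ratio equals 1; the limit state, starting point, step size, and burn-in length are hypothetical illustrative choices:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

def g(x):
    return 3.0 * np.sqrt(2.0) - x.sum()     # hypothetical limit state (g <= 0 fails)

def f(x):
    return np.prod(norm.pdf(x))             # JPDF of independent standard normals

def mh_failure_samples(x0, K, step=1.0, burn=500):
    x = np.asarray(x0, dtype=float)
    assert g(x) <= 0.0, "Step 1: start inside the failure region"
    chain = []
    for i in range(K + burn):
        cand = x + rng.uniform(-step, step, size=x.size)   # symmetric proposal
        f_cand = f(cand) if g(cand) <= 0.0 else 0.0        # Step 4: f = 0 if safe
        if rng.random() < min(f_cand / f(x), 1.0):         # Steps 5 and 6
            x = cand
        if i >= burn:                                      # discard burn-in
            chain.append(x.copy())
    return np.array(chain)

samples = mh_failure_samples([2.5, 2.5], K=2_000)
```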

3.3.4 Failure-sample-based analysis for a single inspection

Before applying the above methods to general inspection-optimization applications,
we first analyze a simple but practical case in which only one inspection is feasible, due
to economic and other constraints. The objective of inspection optimization is then to find
the inspection time that minimizes p_f.
Define the inspection time as t1 and the service time as t2. The probability of failure
at t1, prior to inspection, is p_f(t1), which is the lower bound of p_f^o(t2). The defect
population includes the stronger parts that would not fail by t2 and the weaker parts,
defined here as the “critical parts,’’ that would fail by t2. The probability associated with
the critical parts that survive t1 is p_f^o(t2) − p_f(t1).
Given the critical parts, which have a probability of p_f^o(t2), there are three failure
paths, as shown in the event diagram of Figure 14.6. In summary, a defect of size a from
the critical parts can (1) fail by t1, with probability p_f(t1)/p_f^o; (2) survive t1, escape
inspection with PND(a), and fail by t2, with probability [p_f^o − p_f(t1)] · E[PND(a(t1))]/p_f^o;
or (3) survive t1, be detected with POD(a), be replaced by a part from the original
population, and fail by t2, with probability [p_f^o − p_f(t1)] · E[POD(a(t1))] · p_f(t2 − t1)/p_f^o.
Therefore, the total p_f with inspection can be summarized as in Equation 20:

p_f^W = p_f(t1) + [p_f^o − p_f(t1)] · E[PND(a(t1))] + [p_f^o − p_f(t1)] · E[POD(a(t1))] · p_f(t2 − t1)    (20)

Figure 14.6 Probabilities of failure events for one inspection. [figure omitted: event diagram with fail path 1 (a(t1) > a*, probability p_f(t1)), fail path 2 (detection missed, probability [p_f^o − p_f(t1)] · E[PND(a)]), and fail path 3 (detected and replaced, failing within (t2 − t1), probability [p_f^o − p_f(t1)] · E[POD(a)] · p_f(t2 − t1)); a* is the critical defect size and a(t) the defect size of a critical part at time t.]

The difference between p_f^W and p_f^o is the amount of risk reduction, p_r, which is:

p_r = [p_f^o − p_f(t1)] · E[POD(a(t1))] · [1 − p_f(t2 − t1)]    (21)

In Equations 20 and 21, p_f^o is computed from Stage 1 of TIS; the failure samples are
used to compute p_f(t1) and p_f(t2 − t1), and E[POD(a(t1))] can be computed for any t1
using the defect-growth history data. Therefore, the Stage 1 failure samples, including
the defect growth histories, should be saved, so that the risk reduction for any inspection
time can be calculated without additional stress or life analyses.
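A sketch of this reuse of the Stage 1 failure samples: Equation 21 is evaluated on a grid of candidate inspection times using only the saved failure times and growth histories (the pod, size_at, and pf_after arguments are hypothetical placeholders):

```python
import numpy as np

def risk_reduction(t_grid, fail_times, size_at, pod, pf_o, pf_after=None):
    """Evaluate Equation 21 over candidate inspection times t1, reusing the
    Stage 1 failure samples: their failure times and saved growth histories.
    size_at(t1) must return the samples' defect sizes a(t1)."""
    pr = np.empty(len(t_grid))
    for i, t1 in enumerate(t_grid):
        pf_t1 = pf_o * np.mean(fail_times <= t1)           # pf(t1)
        e_pod = np.mean(pod(size_at(t1)))                  # E[POD(a(t1))]
        p_rep = 0.0 if pf_after is None else pf_after(t1)  # pf(t2 - t1)
        pr[i] = (pf_o - pf_t1) * e_pod * (1.0 - p_rep)     # Equation 21
    return pr  # the optimal inspection time is t_grid[np.argmax(pr)]
```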
Since p_f(t2 − t1) < p_f^o, the last product term in Equation 21 satisfies 1 − p_f(t2 − t1) > 1 − p_f^o,
which is approximately 1 for small p_f^o. This suggests that, for small p_f^o, a replacement
using an original part can be approximated by assuming a “perfect repair,’’ meaning
the part is “fail-proof,’’ so that p_f(t2 − t1) = 0. In practice, this condition can
be achieved if the flawed parts can be detected in time and either mitigated to
eliminate the re-occurrence of failure (e.g., a corroded pipe section is wrapped with a
corrosion-free composite sleeve) or replaced with new or better-grade parts that are
guaranteed to survive the remaining service life. In the undesirable scenario where a
bad repair is likely, the impact of the repair can be simulated using a worse-than-new
distribution, and a fresh analysis is needed to compute p_f(t2 − t1).
If p_f(t2 − t1) can be neglected, the risk reduction becomes a product of two time-
dependent terms: the first, [p_f^o − p_f(t1)], is the risk-reduction potential, a monotonically
decreasing function of time, and the second, E[POD(a)], is a monotonically increasing
function of time. This suggests that, in practice, the optimal inspection time is neither at
time zero (when the risk-reduction potential is at its highest but E[POD(a)] is relatively
small because of the small initial defects) nor at the end of the service life (when the
risk-reduction opportunity approaches zero). Equation 21 also implies that a better POD(a)
creates more risk reduction and that the best inspection time is earlier with a better POD(a).
An example of a single-inspection analysis using an event tree is shown in
Figure 14.7. In this example, the initiating event is the critical parts with p_f^o = 0.01.
There are four failure paths; the first two are the same as above. Given the
critical parts, 30% of the parts fail by t1. At inspection, 10% of the surviving critical
parts are missed, and the subsequent failures by t2 cannot be avoided. The last two paths
are the results of two types of repairs, each with a 50% chance: the first is replacement
by an original part, and the second is repair with a worse-than-new part. Both the
replacement and repair parts only need to survive a time of (t2 − t1). The result (see the
right-hand column of Figure 14.7, showing the p_f contributions) demonstrates that the
risk contribution related to p_f(t2 − t1) is insignificant (less than 0.5%), as expected. The
use of Equation 21 will be demonstrated further in the RBDT examples described below.

3.3.5 TIS error analysis for a single inspection

A typically small error is inherent in the TIS approach, due to ignoring the samples that
are originally in the safe region (Wu & Shin 2005). Such an error would arise if an
originally safe part were unfortunately repaired to a worse condition. This scenario could
be the result of poor workmanship and the lack of a quality-assurance process.
The probability of failure due to ignoring the “safe’’ parts is the product of the
probability of safe parts and the conditional probabilities of the sequence of events
(detected; repaired if detected; bad repair that causes failure if repaired; failure before
the end of the service life) that leads to a failure:

p_f^*(t2) = P(Safe) · POD · P(Repair|Detected) · P(Bad Repair) · p_f(t2 − t1)    (22)

Using the relations P(Safe) ≤ 1 (which is close to 1 for high-reliability parts) and
p_f(t2 − t1) ≤ p_f^o, and also assuming the worst case POD = 1 (worse in the sense
that more chances are created for bad repairs), the TIS error with respect to p_f^o is
dominated by two factors:

TIS Error = p_f^*/p_f^o ≤ P(Repair|Detected) · P(Bad Repair)    (23)

The first factor, P(Repair|Detected), is expected to be small (at least for high-reliability
products), because it is unlikely that a large percentage of products will be repaired
regardless of the detected defect sizes, knowing that repairs may produce negative
(unsafe) results. The second factor is also expected to be small, assuming a good quality-
control procedure is in place. In the unlikely worst-case scenario, the error is 100%:
every defect is detected, every detection leads to a decision to repair, and every repair is
a bad repair. In summary, the above error analysis suggests that the TIS error is small
when the safe parts are ignored, which is the basis for the high efficiency of TIS compared
with standard Monte Carlo sampling.
Figure 14.7 An event tree analysis example for one inspection. [figure omitted: event tree starting from the critical parts (pf = 1% if no inspection), with inspection at t1 (Missed 10%, Detected 90%) and a 50/50 split between replacement with original parts and repair with worse-than-new parts; pf contributions: fail path 1 = 0.003 (80.8%), fail path 2 = 0.0007 (18.9%), fail path 3 = 4.73E-06 (0.1%), fail path 4 = 7.88E-06 (0.2%); total risk = 0.00371, risk reduced = 0.00629 (63%).]

3.3.6 General repair and multiple inspections

The above analysis procedure for a single inspection can, in principle, be extended
easily to multiple inspections, assuming replacement by original parts (i.e., the same
defect distribution) or ideal repair. However, when the effect of bad repairs needs to
be studied carefully, a full set of MC samples is recommended. When there are multiple
inspections, a brute-force MC approach to inspection optimization would require a set of
MC runs for every candidate inspection schedule, which is clearly very challenging
computationally. Recently, a recursive-probability-integration (RPI) procedure has
been developed (Shiao 2006; Shiao & Wu 2004) to calculate the probability of failure
more rapidly for any number of inspections and types of distributions, where the book-
keeping of the risk contributions from every inspection result becomes very tedious.
In the RPI approach, the sum of the probabilities of failure from a potentially very
large number of failure paths (created by multiple inspections and repairs) is formulated
using a condensed formula that involves recursive calculations at every branch
(with sub-branches, sub-sub-branches, etc.). The formulation provides a systematic
way to manage failure paths.
Similar to the TIS approach, RPI also uses saved Monte Carlo crack growth histories
to compute all the probabilities after each inspection and repair. RPI requires a baseline
MC run for the original defect distribution and an additional MC run for each new repair
distribution, the number of which is typically very small.
The computational efficiency can be improved further by integrating RPI with the
conditional expectation method (CEM), in which the random variables are separated into
two groups, X1 and X2, and the failure probability is formulated as:

p_f = ∫···∫ P[g(X1, X2) ≤ 0] f_{X1} f_{X2} dX1 dX2 = E[P[g^c(X2|x1) ≤ 0]]    (24)

To compute Equation 24, a set of realizations of X1 is randomly generated, and the
corresponding P[g^c(X2|x1) ≤ 0] values are computed using numerical integration
or fast probability integrators. As demonstrated, with proper grouping of
X, E[P[g^c(X2|X1) ≤ 0]] can be estimated using a relatively small number of samples
(Shiao 2006).
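A toy sketch of Equation 24 (the grouping and the limit state are hypothetical): with g(X1, X2) = X2 − (X1 − 4) and X2 standard normal, the conditional probability P[g ≤ 0 | x1] is available in closed form, so only X1 is sampled:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# g(X1, X2) = X2 - (X1 - 4) <= 0  <=>  X2 <= X1 - 4, so given x1 the
# conditional failure probability is the normal CDF evaluated at (x1 - 4).
x1 = rng.standard_normal(2_000)                      # realizations of X1
pf = norm.cdf(x1 - 4.0).mean()                       # E[P[g(X2|x1) <= 0]]
print(f"pf ~ {pf:.2e} (exact {norm.cdf(-4.0 / np.sqrt(2.0)):.2e})")
```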

3.3.7 Sampling-based risk sensitivity analysis

As a by-product, the failure samples from Stage 1 and Stage 2 can be used to conduct
risk sensitivity analyses. The sensitivity of p_f with respect to a change in a distribution
parameter (mean or standard deviation) θ_i of a random variable X_i can be evaluated from:

S_{θi} = (∂p_f/p_f)/(∂θ_i/σ_i) = ∫···∫_Ω [σ_i/(p_f f_x)] (∂f_x/∂θ_i) f_x dx = E[σ_i/(p_f f_x) · ∂f_x/∂θ_i]_Ω    (25)
where S_{θi} are the sensitivity coefficients. Equation 25 leads to the following two
non-dimensional sensitivities, which can be computed easily using the TIS samples:

S_{μi} = (∂p/p)/(∂μ_{ui}/σ_{ui}) = E[u_i]_Ω    (26)

S_{σi} = (∂p/p)/(∂σ_{ui}/σ_{ui}) = E[u_i^2]_Ω − 1    (27)
The expectations in Equations 26 and 27 are over the failure region Ω; μ_{ui} is the mean
of u_i, with a nominal value of zero, and σ_{ui} is the standard deviation of u_i, with a nominal
value of one. These two sensitivities have been found to be useful for identifying and
ranking important random variables (Karamchandani 1990; Enright & Wu 1999; Wu
& Mohanty 2006).
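A sketch of Equations 26 and 27 given failure-region u-samples (one row per TIS failure sample, one column per random variable):

```python
import numpy as np

def risk_sensitivities(u_fail):
    """Sampling-based sensitivities from failure samples in u-space:
    S_mu_i = E[u_i] over Omega (Eq. 26), S_sigma_i = E[u_i^2] - 1 (Eq. 27)."""
    s_mu = u_fail.mean(axis=0)
    s_sigma = (u_fail ** 2).mean(axis=0) - 1.0
    return s_mu, s_sigma

# Variables can then be ranked by sensitivity magnitude, e.g.:
# order = np.argsort(-np.abs(risk_sensitivities(u_fail)[0]))
```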

4 Examples

4.1 Rotor disk

Consider a rotor disk subject to fracture failure due to rare manufacturing anomalies
such as the alpha defect (Leverant et al. 1997). The potential random variables in
such an application include the defect size and location, the stress, the material
properties, and the time and effectiveness of the inspections. The following numerical
example was one of several test examples developed for the TIS methodology development.
The example represents the analysis at a highly stressed zone, assuming a circular
embedded crack with a specified probability of occurrence. For the entire disk, the risk
can be integrated using a zone-based risk integration approach (Wu et al. 2002). In this
example, all units are MKS-based. The analyses were conducted using a computer program
written in the Matlab language.
The fracture mechanics model is:

da/dN = C(ΔK)^m    (28)

where m = 3.0, C = 1.021E-11, a is the crack radius, ΔK = Yσ√(πa), σ = 414, and the
crack geometry factor is Y = 0.636. Simplified stress and life random variables are
used. Stress uncertainty is modeled as σ = X1 · σ_model, where X1 is a random variable
accounting for the errors in geometry and numerical (such as finite element) modeling.
Similarly, a simplified stochastic life model is defined as N = X2 · N_model, where N_model
is the life model and X2 is a life-scatter random variable. Both X1 and X2 are modeled
as log-normally distributed random variables with a median value of one and a
specified coefficient of variation (COV).
Assume that the defect occurrence rate is 0.00348 per disk and that the initial crack size
follows a three-parameter Weibull distribution with a location parameter of 0.0028 m,
a scale parameter of 0.00043 m, and a shape parameter of 0.41. The critical crack
size for fracture is:

a_c = (1/π) · (K_C/(YS))^2    (29)
where K_C = 60. The time of inspection is assumed to be normally distributed with a
specified COV. The inspection has a POD(a) of:

POD(a) = Φ[(ln(a) − 2.996)/0.4724]    (30)

Figure 14.8 Initial defect PDF and POD, and their product. [figure omitted: PDF(a)/1000, POD(a), POD(a)·PDF(a)/10, and the nominal critical defect size plotted against defect size in m.]

The normalized initial-defect PDF and the POD(a) are plotted in Figure 14.8, together
with the nominal critical defect size (diameter) and PDF(a)·POD(a). The latter curve,
when integrated, gives the percentage of defects that can be detected: given all
the disks with a defect, the percentage is 1.45%. Assuming there are 1000 disks, the
number of disks that have a defect is 3.48, and the number of defective disks that can
be detected at time zero is near zero (3.48 × 0.0145 = 0.05). Thus, the best inspection
time should come after the defect population has grown much bigger.
After integration, Equation 28 becomes:

N = X2 · [a_o^{1−m/2} − a_c^{1−m/2}] / [(m/2 − 1) · C · Y^m · (X1·S)^m · π^{m/2}]    (31)

For zero stress and life scatter, leaving the defect size as the only random variable, p_f^o can be
analyzed by using Equation 31 and setting N = N_Service. Figure 14.9 compares the p_f^W values
using TIS and Monte Carlo with the analytical solution (solid curve). Given a defect,
the conditional p_f^o calculated analytically is 0.109. The unconditional p_f^o, plotted in
Figure 14.9, is 0.109 × 0.00348 = 3.807e-04, which corresponds to an average of 1.90e-08
per flight cycle. With an inspection at 10 000 cycles, the unconditional p_f^W is 1.29e-04,
which is approximately one third of p_f^o, or a 67% risk reduction. For this example, with
a relatively high p_f^o (0.109), TIS (500 samples) is ten times as efficient as Monte
Carlo (5000 samples). In general, the efficiency of TIS is higher for smaller p_f^o.
Figure 14.10 shows the increase in risk after adding uncertainties to the inspection time,
stress scatter, and life scatter; 500 TIS samples proved to be sufficient for the analysis.
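For reference, Equations 29 and 31 translate directly into code (a Python sketch rather than the chapter's Matlab program; X1 and X2 default to one, the zero-scatter case, and the stress term is taken to enter as (X1·S)^m, as implied by integrating Equation 28):

```python
import numpy as np

m, C, Y, S, KC = 3.0, 1.021e-11, 0.636, 414.0, 60.0   # values from the text

def critical_size(X1=1.0):
    return (KC / (Y * X1 * S)) ** 2 / np.pi            # Equation 29

def life(a0, X1=1.0, X2=1.0):
    ac = critical_size(X1)                             # Equation 31:
    num = a0 ** (1.0 - m / 2.0) - ac ** (1.0 - m / 2.0)
    den = (m / 2.0 - 1.0) * C * Y**m * (X1 * S) ** m * np.pi ** (m / 2.0)
    return X2 * num / den

print(critical_size())   # about 0.0165 m critical crack radius
print(life(0.0028))      # about 2.1e4 cycles for the smallest (location) defect
```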
Figure 14.9 Risk with and without inspection (at 10 000 cycles), fixed stress and life scatter. [figure omitted: probability of failure versus flight cycles; exact no-inspection solution compared with inspection results from MC (500 samples), MC (5000 samples), and IS (500 samples).]

Figure 14.10 Random inspection, stress, and life scatter (all with COV = 0.1); 500 IS samples. [figure omitted: probability of failure versus flight cycles for no inspection with fixed stress and life, and with inspection with random stress and life.]

We will now demonstrate the use of the failure samples for inspection optimization.
Figure 14.11 shows the failure samples (i.e., those with N < 20 000 cycles) in the three-
dimensional u-space of defect size, stress scatter, and life scatter. These samples were
created using FPA, a Fast Probability Analyzer software that integrates the Metropolis-
Hastings algorithm and the importance sampling method. In this example, the selected
proposal distribution was a uniform distribution with a range of 1. The samples were
generated in the failure region, which has a probability of 0.195 with approximately
+/−20% error at 90% confidence.

Figure 14.11 Failure samples in the three-dimensional u-space. [figure omitted: scatter of failure samples over u1 (defect size), u2 (stress scatter), and u3 (life scatter).]

Figure 14.12 Risk reduction plot for inspection optimization. [figure omitted: pf reduction versus inspection time, with the optimal time = 13 300 cycles marked.]
Using the failure samples and applying Equation 21, the risk reduction versus inspection
time can be computed easily to create Figure 14.12. The optimal time of 13 300 cycles
is approximate due to the relatively small sample size. Figure 14.13 shows p_f versus
time for three inspection times: 10 000, 13 300, and 16 000 cycles. Clearly, inspection
at 10 000 cycles is too early, at 16 000 is too late, and at 13 300 is significantly better. Using
Equation 26, the failure samples were used to compute the mean sensitivities displayed
in Figure 14.14, which shows that the initial defect size is the most influential
random variable, followed by stress scatter and life scatter.

Figure 14.13 Probability of failure curves for one inspection at 10 000, 13 300, and 16 000 cycles. [figure omitted: pf versus flight cycles for the three inspection times.]

Figure 14.14 Probability sensitivities for the three random variables. [figure omitted: sensitivity values for defect size, stress scatter, and life scatter.]

Figure 14.15 Spindle lug (Forth et al. 2002). [figure omitted: lug geometry with pin load P = 140 kN; reference R = 0.25 m, thickness = 67 mm, initial flaw size = 0.4 mm.]

Figure 14.16 NASGRO model for the lug example. [figure omitted: NASGRO crack case CC03, with crack dimensions a and c at a hole of diameter D in a lug of width W and thickness t, pin load P and bearing stress S3 = P/(D·t).]

4.2 Lug example

A helicopter spindle-lug model is shown in Figure 14.15 (Forth et al. 2002), with
its fracture mechanics model (using the NASGRO software) shown in Figure 14.16.
Figure 14.17 shows a one-hour load spectrum, FELIX/28, based on the main rotor blade
of a military helicopter with four mission types and 140 flights (Everett et al. 2002).
This study was conducted using an RBDT software framework that integrated a probabilistic
function evaluation system (Wu & Shin 2005; Wu et al. 2006), a finite element software
(ANSYS), and a fracture mechanics software (NASGRO).
The random variables are listed in Table 14.2, where the load random variable
represents the point load P applied to the center of the pin in Figure 14.16.
The initial flaw size distribution, shown in Figure 14.18, is based on an equivalent
initial flaw size (EIFS) distribution (Forth et al. 2002) derived from stress-life
experiments. We will compare the performance of the three PODs shown in Figure 14.19,
representing poor, fair, and good NDE devices.
Figure 14.17 Felix/28 helicopter load spectrum (Everett et al. 2002). [figure omitted: percent of maximum load in the spectrum versus cycle number; 2755 cycles ≈ 3.26 flight hours.]

Table 14.2 Random variables for the lug model.

Variable                  Distribution    Mean      Std. dev.   COV (%)
Thickness, t (mm)         LN              28        0.14        0.50
Max. load (N)             LN              145 000   10 000      6.9
Initial flaw size (mm)    User-defined    0.074     0.0224      30.2
Delta Kth                 LN              48        4           8.33
Life scatter              LN              1         0.1         10.0

Figure 14.18 Equivalent initial flaw size distribution. [figure omitted: CDF versus flaw size in mm.]

We will use conservative limit states to illustrate the TIS steps discussed in
Section 3.3.3.2. Table 14.3 shows the probabilistic analysis results using three A values:
1, 1.07, and 1.33, which are associated with three target lives. Because the conditional
probability of failure given a flaw is relatively high (about 7%), 1000 Monte Carlo
samples are sufficient for illustration purposes.

Figure 14.19 Three POD curves (log-logistic function) for the lug example. [figure omitted: POD I, POD II, and POD III, probability of detection versus defect size in mm.]

Table 14.3 Lug example results.

A      N = Nf   Samples in     FORM and     Angle from       Failures in      Prob. in samp.   Pf in samp.    Pf
                samp. region   samp. Beta   FORM MPP (deg.)  samp. region     region (Ps)      region (Pc)    [N < 750]
                (n)                                          [N < 750]        [N < Nf]         [N < 750]
1.00   750      –              1.495        0.0              –                –                –              0.0675
1.07   800      250            1.709        33.6             216              0.081            0.864          0.0699
1.33   1000     500            1.682        22.1             222              0.157            0.444          0.0696
MCS    750      3000           1.587        17.3             208              1.000            0.069          0.0693

Based on A = 1.067, the probability in the IS (importance sampling) region is 0.081.
Of the 250 samples generated in this region, 216 are failure samples, giving a
conditional p_f^o of 0.864. Therefore p_f^o is 0.0699, which is close to the FORM solution,
0.0675. The agreement, and the fact that the angles between the FORM and the
sampling MPPs are reasonably small, suggests that the IS region covers the failure
region.
Now consider A = 1.33. The probability in the IS region is 0.151, about twice
that for A = 1.067. Of the 500 samples generated in this region, 222 are
failure samples, giving a conditional p_f^o of 0.444, about half that for A = 1.067.
The resulting p_f^o is still 0.0699, as before. This means that doubling the IS region
found no additional failure region, which further suggests that the IS region is
sufficient, provided that the MPP-based model is reasonably good (which was true, as
determined using an independent check).
The above results are very close to the Monte Carlo result (0.0693), which took
60 hours of CPU time. Note that 500 IS samples are equivalent to 500/0.151 = 3311
Monte Carlo samples; therefore, for this example, IS provides slightly better
accuracy. However, unlike MCS, IS could miss failure regions. In general, a sufficient
number of Monte Carlo samples should be used to ensure that IS has not missed
any significant failure region. For complex applications with possibly multiple MPPs,
more robust error-checking should be considered, including Markov Chain Monte
Carlo methods.

Figure 14.20 Lug inspection optimization. [figure omitted: reducible risk (pr) given a flaw versus inspection time in flight hours, for POD I, POD II, and POD III.]
Using the 222 saved simulated crack growth histories and applying Equation 21,
the risk-reduction curves for the three PODs are obtained, as shown in Figure 14.20. The
curves are unsmooth due to the lack of samples, but they are still reasonable for
illustration purposes.
The results show that POD I is superior and that there is an “effective inspection
window,’’ roughly between 300 and 650 hours, with an optimal inspection time at about
550 flight hours. When POD II and POD III are used, the best inspection time is around 650
hours, with a narrower inspection window. These results confirm the earlier observations
that (1) the best inspection time is earlier for a better POD capability, and (2) a better
POD capability will always produce a better optimal risk reduction.
Figure 14.21 displays the simulation results of p_f versus flight hours with and without
inspection for POD I. Clearly, the slope of p_f changes more drastically at about 550
hours, coinciding with the best inspection time.

Figure 14.21 Probability of failure for one inspection at three different times using POD I. [figure omitted: pf given a flaw versus flight hours, for no inspection, POD III at 650 hours, POD II at 650 hours, and POD I at 550 hours.]

5 Conclusions

A reliability-based maintenance optimization (RBMO) methodology was presented,
with a focus on computational strategies. The RBMO methodology was demonstrated
using examples related to aircraft and helicopter structures. The examples suggest
that the RBDT methodology is well suited for inspection planning, and it appears to be
applicable to other structures, such as ships, cars, and oil and gas pipelines, to more
systematically design reliable and economical structures with associated maintenance
programs that sustain structural integrity and reliability.
RBMO involves time-dependent damage-accumulation models, NDE detections,
repairs, replacements, and other risk-control measures, and an optimal maintenance
plan must consider a potentially large number of options, including inspection schedules,
mitigation options, and the selection of NDE devices. In addition, the planning must
factor in uncertainties. Given the wide spectrum of options and the complexities
in modeling, the best practical way to conduct RBMO is through random
simulations, preferably efficient sampling methods. The TIS approach has been developed
to meet this challenge. At its core, TIS is a type of Monte Carlo method that uses
the power of random simulation; however, drastic efficiency improvements can be
achieved by systematically generating samples in the failure domain. When mitigation
effects can be reasonably modeled using ideal repairs or replacements with original
parts, additional speed improvements can be realized by reusing crack growth histories
for the various maintenance options.
Methods for generating failure-only samples were discussed, including one built on
the MPP-based linear surface of a conservative limit state and another based on the
Metropolis-Hastings algorithm. It is emphasized that the MPP methods, while widely
known and used, are limited to well-behaved functions. For TIS, MPP offers an easy
way to generate independent failure samples. M-H, on the other hand, can handle
more difficult (non-smooth and nonlinear) functions, but the generated Markov-chain
failure samples are correlated, and therefore more samples are needed to reach the
target distribution. Thus, both methods provide useful tools for RBMO, with different
strengths and limitations.
The disk and lug examples demonstrated the feasibility of the RBMO method for
physics-based modeling applications. The software used for the lug example integrated
a probabilistic analysis module, a finite element module, and a fracture mechanics
module. As an illustration of the analysis CPU time needed for RBMO: in one lug analysis
using a 2 GHz desktop PC, it took several hours to carry out a finite element analysis,
a thousand NASGRO analyses with the rotorcraft load spectra (which took most of the
time), and a probabilistic analysis (which took the least time). The CPU time
would increase if larger FE models were used or if more failure samples were generated.
This example suggests that, even with the efficient TIS method, RBDT for complex problems
involving physics-based models can still be very time-consuming unless further
model approximations are made. Potential approximation methods for RBMO analysis
include kriging (Sacks et al. 1989; Martin & Simpson 2005) and moving least
squares (Krishnamurthy 2003) with error-checking procedures.

References

Ang, A.H.-S. & Tang, W.H. 1984. Probability Concepts in Engineering Planning and Design,
Volume II; Decision, Risk, and Reliability, New York, John Wiley & Sons.
Au, S.K. & Beck, J.L. 2001. Estimation of small failure probabilities in high dimensions by
subset simulation, Probabilistic Engineering Mechanics, Vol. 16, No. 4, pp. 263–277.
Berens, A.P., Hovey, P.W. & Skinn, D.A. 1991. Risk Analysis for Aging Aircraft Fleets, Air
Force Wright Lab Report, WL-TR-91-3066, Vol. 1.
Bucher, C.G. 1988. Adaptive Sampling – An Iterative Fast Monte Carlo Procedure, Structural
Safety, Vol. 5, pp. 119–126.
Cunha, S.B., De Souza, A.P.F., Nicolleti, E.S.M. & Aguiar, L.D. 2006. A Risk-Based Inspec-
tion Methodology to Optimize Pipeline In-Line Inspection Programs, Journal of Pipeline
Integrity, Q3.
Ditlevsen, O., Bjerager, P., Olesen, R. & Hasofer, A.M. 1989. Directional Simulation in Gaussian
Space, Probabilistic Engineering Mechanics, Vol. 3, No. 4, pp. 207–217.
Ditlevsen, O. & Madsen, H.O. 1996, Structural Reliability Methods. J. Wiley & Sons,
New York, 384 pp.
Der Kiureghian, A. & Dakessian, T. 1998. Multiple Design Points in First and Second-order
Reliability, Structural Safety, Vol. 20, pp. 37–50.
Der Kiureghian, A. 2005. First- and Second-Order Reliability Methods, Chapter 14 in Engi-
neering Design Reliability Handbook, E. Nikolaidis, D.M. Ghiocel & S. Singhal, (eds), CRC
Press, Boca Raton, FL.
Everett, Jr. R.A. 2002. Crack-Growth Characteristics of Fixed and Rotary Wing Aircraft, 6th
Joint FAA/DoD/NASA Aging Aircraft Conference.
Enright, M.P. & Wu, Y.-T. 1999. Probabilistic Fatigue Life Sensitivity Analysis of Titanium
Rotors, Proceedings of the AIAA 41st SDM Conference, Atlanta, GA.
Forth, S.C., Everett, Jr. R.A. & Newman, J.A. 2002. A Novel Approach to Rotorcraft Damage
Tolerance, 6th Joint FAA/DoD/NASA Aging Aircraft Conference.
Gamerman, D. 1997. Markov Chain Monte Carlo, Chapman & Hall.
Gentle, J.E. 1998. Random Number Generation and Monte Carlo Methods, Springer-Verlag
New York.
Harbitz, A. 1986. An Efficient Sampling Method for Probability of Failure Calculation.
Structural Safety, Vol. 3, pp. 109–115.
Harkness, H.H., Fleming, M., Moran, B. & Belytschko, T. 1994. Fatigue Reliability With
In-Service Inspections, FAA/NASA International Symposium on Advanced Structural Integrity
Methods for Airframe Durability and Damage Tolerance.
Hohenbichler, M. & Rackwitz, R. 1988. Improvement of Second-order Reliability Estimates by
Importance Sampling. J. Eng. Mech. ASCE, Vol. 114, No. 12, pp. 2195–2199.
Kale, A., Haftka, R.T. & Sankar, B.V. 2007. Efficient Reliability Based Design and Inspection
of Stiffened Panels Against Fatigue. Journal of Aircraft.

Karamchandani, A. 1990. New Methods in Systems Reliability, Ph.D. dissertation, Stanford
University.
Karamchandani, A. & Cornell, C.A. 1991. Adaptive Hybrid Conditional Expectation
Approaches for Reliability Estimation, Structural Safety, Vol. 11, pp. 59–74.
Krishnamurthy, T. 2003. Response Surface Approximation with Augmented and Compactly
Supported Radial Basis Functions, Proceedings of the AIAA 44th SDM Conference.
Leverant, G.R., Littlefield, D.L., McClung, R.C., Millwater, H.R. & Wu, Y.-T. 1997. A Proba-
bilistic Approach to Aircraft Turbine Rotor Material Design, The International Gas Turbine
& Aeroengine Congress & Exhibition, Paper No. 97-GT-22, Orlando, FL.
Liu, P.-L. & Der Kiureghian, A. 1986. Multivariate Distribution Models with Pre-
scribed Marginals and Covariances, Probabilistic Engineering Mechanics, Vol. 1, No. 2,
pp. 105–112.
Martin, J.D. & Simpson, T.W. 2005. Use of Kriging Models to Approximate Deterministic
Computer Models, AIAA Journal, Vol. 43, No. 4.
Madsen, H.O., Krenk, S. & Lind, N.C. 1986. Methods of Structural Safety, Englewood Cliffs,
New Jersey; Prentice Hall.
Madsen, H.O., Skjong, R.K., Talin, A.G. & Kirkemo, F. 1987. Probabilistic Fatigue Crack
Growth Analysis of Offshore Structures, with Reliability Updating Through Inspection,
SNAME, Arlington, VA.
Melchers, R.E. 1987. Structural Reliability: Analysis and Prediction, Wiley.
Millwater, H.R., Wu, Y.-T., Cardinal, J.W. & Chell, G.G. 1996. Application of Advanced
Probabilistic Fracture Mechanics to Life Evaluation of Turbine Rotor Blade Attachments,
Journal of Engineering for Gas Turbines and Power, Vol. 118, pp. 394–398.
Millwater, H.R., Fitch, S., Riha, D.S., Enright, M.P., Leverant, G.R., McClung, R.C.,
Kuhlman, C.J., Chell, G.G. & Lee, Y.-D. 2000. A Probabilistically-Based Damage Tolerance
Analysis Computer Program for Hard Alpha Anomalies In Titanium Rotors, Proceedings,
45th ASME International Gas Turbine & Aeroengine Technical Congress, Munich, Germany.
Nikolaidis, E., Ghiocel, D.M. & Singhal, S. (eds). 2005. Engineering Design Reliability
Handbook, CRC Press, Boca Raton, FL.
Palmberg, B., Blom, A.F. & Eggwertz, S. 1987. Probabilistic Damage Tolerance Analysis of Aircraft
Structures, In Probabilistic Fracture Mechanics and Reliability, J.W. Provan (ed.). Martinus
Nijhoff Publishers.
Rackwitz, R. 2001. Reliability Analysis – A Review and Some Perspectives, Structural Safety,
Vol. 23, pp. 365–395.
Robert, C.P. & Casella, G. 2004. Monte Carlo Statistical Methods. New York: Springer.
Rosen Group, www.Roseninspection.net, 2004. Metal Loss Inspection Performance Specifica-
tions, Standard_CDP_POFspec_56_rev3.62.doc.
Rosenblatt, M. 1952. Remarks on a Multivariate Transformation, The Annals of Mathematical
Statistics 23(3), pp. 470–472.
Sacks, J., Schiller, S.B. & Welch, W.J. 1989. Design for Computer Experiments, Technometrics,
Vol. 31, No. 1.
Schuëller, G.I. 1998. Structural Reliability – Recent Advances, Proc. 7th ICOSSAR’97,
pp. 3–33.
Shiao, M.C. & Wu, Y.-T. 2004. An Efficient Simulation-Based Method for Probabilistic Dam-
age Tolerance Analysis With Maintenance Planning, Proceedings of the ASCE Specialty
Conference on Probabilistic Mechanics and Reliability.
Shiao, M.C. 2006. Risk-Based Maintenance Optimization, Proceedings of the International
Conference on Structural Safety and Reliability.
Thoft-Christensen, P. & Murotsu, Y. 1986. Application of Structural Systems Reliability Theory, Springer.
Volker, A.W.F., Dijkstra, F.H., Heerings, J.H.A.M. & Terpstra, S. 2004. Modeling of NDE
Reliability; Development of A POD-Generator, 16th WCNDT 2004 – World Conference
on NDT.

White, P., Barter, S. & Molent, L. 2002. Probabilistic Fracture Prediction Based On Aircraft
Specific Fatigue Test Data, 6th Joint FAA/DoD/NASA Aging Aircraft Conference.
Wu, Y.-T., Millwater, H.R. & Cruse, T.A. 1990. An Advanced Probabilistic Structural Analysis
Method for Implicit Performance Functions, AIAA Journal, Vol. 28, No. 9, pp. 1663–1669.
Wu, Y.-T., Enright, M.P. & Millwater, H.R. 2002. Probabilistic Methods for Design Assessment
of Reliability With Inspection, AIAA Journal, Vol. 40, No. 5, pp. 937–946.
Wu, Y.-T. & Shin, Y. 2004. Probabilistic Damage Tolerance Methodology For Reliability Design
And Inspection Optimization, Proceedings of the AIAA 45th SDM Conference.
Wu, Y.-T., Shiao, M., Shin, Y. & Stroud, W.J. 2005. Reliability-Based Damage Tol-
erance Methodology for Rotorcraft Structures, Transactions Journal of Materials and
Manufacturing.
Wu, Y.-T. & Shin, Y. 2005. Probabilistic Function Evaluation System for Maintenance
Optimization, Proceedings of the AIAA 46th SDM Conference.
Wu, Y.-T., Shin, Y., Sues, R. & Cesare, M. 2006. Probabilistic Function Evaluation System
(ProFES) for Reliability-Based Design, Journal of Structural Safety, Vol. 28, Issues 1–2,
pp. 164–195.
Wu, Y.-T. & Mohanty, S. 2006. Variable Screening and Ranking Using Several Sampling Based
Sensitivity Measures, Journal of Reliability Engineering and System Safety, Vol. 91, Issue 6,
pp. 634–647.
Chapter 15

Overview of reliability analysis and design capabilities in DAKOTA with application to shape optimization of MEMS

Michael S. Eldred
Sandia National Laboratories, Albuquerque, NM, USA∗

Barron J. Bichon
Vanderbilt University, Nashville, TN, USA

Brian M. Adams
Sandia National Laboratories, Albuquerque, NM, USA

Sankaran Mahadevan
Vanderbilt University, Nashville, TN, USA

ABSTRACT: Reliability methods are probabilistic algorithms for quantifying the effect of
uncertainties in simulation input on response metrics of interest. In particular, they compute
approximate response function distribution statistics (such as response mean, variance, and
cumulative probability) based on specified probability distributions for input random variables.
In this chapter, recent algorithm research in first and second-order local reliability methods
is overviewed for both the forward reliability analysis of computing probabilities for speci-
fied response levels (the reliability index approach (RIA)) and the inverse reliability analysis
of computing response levels for specified probabilities (the performance measure approach
(PMA)). A number of algorithmic variations have been explored, and the effect of different
limit state approximations, probability integrations, warm starting, most probable point search
algorithms, and Hessian approximations is discussed. In addition, global reliability methods are
presented for performing reliability analysis in the presence of nonsmooth, multimodal limit state
functions. This set of reliability analysis capabilities is then used as the algorithmic foundation
for reliability-based design optimization (RBDO) methods, and bi-level and sequential formu-
lations are presented. These RBDO formulations may employ analytic sensitivities of reliability
metrics with respect to design variables that either augment or define distribution parameters for
the uncertain variables. Relative performance of these reliability analysis and design algorithms
is presented for a number of benchmark test problems using the DAKOTA software, and algo-
rithm recommendations are given. These recommended algorithms are subsequently applied
to real-world applications in the probabilistic analysis and design of microelectromechanical
systems (MEMS), and the calculation of robust and reliable MEMS designs is demonstrated.

∗ Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

1 Introduction
Uncertainty quantification (UQ) is the process of determining the effect of input uncer-
tainties on response metrics of interest. These input uncertainties may be characterized
as either aleatory uncertainties, which are irreducible variabilities inherent in nature,
or epistemic uncertainties, which are reducible uncertainties resulting from a lack of
knowledge. Since sufficient data is generally available for aleatory uncertainties, prob-
abilistic methods are commonly used for computing response distribution statistics
based on input probability distribution specifications. Conversely, for epistemic uncer-
tainties, data is generally sparse, making the use of probability theory questionable
and leading to nonprobabilistic methods based on interval specifications.
Reliability methods are probabilistic algorithms for quantifying the effect of aleatory
input uncertainties on response metrics of interest. In particular, they perform UQ by
computing approximate response function distribution statistics based on specified
probability distributions for input random variables. These response statistics include
response mean, response standard deviation, and cumulative or complementary cumu-
lative distribution function (CDF/CCDF) response level and probability level pairings.
These methods are often more efficient at computing statistics in the tails of the
response distributions (events with low probability) than sampling-based approaches
since the number of samples required to resolve a low probability can be prohibitive.
Thus, these methods, as their name implies, are often used in a reliability context for
assessing the probability of failure of a system when confronted with an uncertain
environment.
A number of classical reliability analysis methods are discussed in (Haldar and
Mahadevan 2000), including Mean-Value First-Order Second-Moment (MVFOSM),
First-Order Reliability Method (FORM), and Second-Order Reliability Method
(SORM). More recent methods which seek to improve the efficiency of FORM analysis
through limit state approximations include the use of local and multipoint approx-
imations in Advanced Mean Value methods (AMV/AMV+ (Wu, Millwater, and
Cruse 1990)) and Two-point Adaptive Nonlinearity Approximation-based methods
(TANA (Wang and Grandhi 1994; Xu and Grandhi 1998)), respectively. Each of the
FORM-based methods can be employed for “forward’’ or “inverse’’ reliability anal-
ysis through the reliability index approach (RIA) or performance measure approach
(PMA), respectively, as described in (Tu, Choi, and Park 1999).
The capability to assess reliability is broadly useful within a design optimiza-
tion context, and reliability-based design optimization (RBDO) methods are popular
approaches for designing systems while accounting for uncertainty. RBDO approaches
may be broadly characterized as bi-level (in which the reliability analysis is nested
within the optimization, e.g. (Allen and Maute 2004)), sequential (in which iteration
occurs between optimization and reliability analysis, e.g. (Wu, Shin, Sues, and Cesare
2001; Du and Chen 2004)), or unilevel (in which the design and reliability searches
are combined into a single optimization, e.g. (Agarwal, Renaud, Lee, and Watson
2004)). Bi-level RBDO methods are simple and general-purpose, but can be compu-
tationally demanding. Sequential and unilevel methods seek to reduce computational
expense by breaking the nested relationship through the use of iterated or simultaneous
approaches, respectively.
In order to provide access to a variety of uncertainty quantification capabili-
ties for analysis of large-scale engineering applications on high-performance parallel

computers, the DAKOTA project (Eldred, Brown, Adams, Dunlavy, Gay, Swiler,
Giunta, Hart, Watson, Eddy, Griffin, Hough, Kolda, Martinez-Canales, and Williams
2006) at Sandia National Laboratories has developed a suite of algorithmic capabilities
known as DAKOTA/UQ (Wojtkiewicz, Jr., Eldred, Field, Jr., Urbina, and Red-Horse
2001). This package contains the reliability analysis capabilities described in this chap-
ter and provides the foundation for the RBDO approaches. DAKOTA is freely available
for download worldwide through an open source license.
This chapter overviews recent algorithm research activities that have explored a vari-
ety of approaches for performing reliability analysis. In particular, forward and inverse
local reliability analyses have been explored using multiple limit state approximation,
probability integration, warm starting, Hessian approximation, and optimization algo-
rithm selections. New global reliability analysis methods based on Gaussian process
surrogate models have also been explored for handling response functions which may
be nonsmooth or multimodal. Finally, these reliability analysis capabilities are used
to provide a foundation for exploring bi-level and sequential RBDO formulations.
Sections 2 and 3 describe these algorithmic components, Section 4 summarizes com-
putational results for several analytic benchmark test problems, Section 5 presents
deployment of these methodologies to the probabilistic analysis and design of MEMS,
and Section 6 provides concluding remarks.

2 Reliability method formulations

2.1 Mean Value method


The Mean Value method (MV, also known as MVFOSM in (Haldar and Mahadevan
2000)) is the simplest, least-expensive reliability method because it estimates
the response means, response standard deviations, and all CDF/CCDF response-
probability-reliability levels from a single evaluation of response functions and their
gradients at the uncertain variable means. This approximation can have acceptable
accuracy when the response functions are nearly linear and their distributions are
approximately Gaussian, but can have poor accuracy in other situations. The expres-
sions for approximate response mean µg , approximate response standard deviation
σg , response target to approximate probability/reliability level mapping (z → p, β),
and probability/reliability target to approximate response level mapping (p, β → z) are

$$\mu_g = g(\mu_x) \qquad (1)$$

$$\sigma_g = \sqrt{\sum_i \sum_j \mathrm{Cov}(i,j)\,\frac{dg}{dx_i}(\mu_x)\,\frac{dg}{dx_j}(\mu_x)} \qquad (2)$$

$$\beta_{cdf} = \frac{\mu_g - z}{\sigma_g} \qquad (3)$$

$$\beta_{ccdf} = \frac{z - \mu_g}{\sigma_g} \qquad (4)$$

$$z = \mu_g - \sigma_g \beta_{cdf} \qquad (5)$$

$$z = \mu_g + \sigma_g \beta_{ccdf} \qquad (6)$$

respectively, where x are the uncertain values in the space of the original uncertain
variables (“x-space’’), g(x) is the limit state function (the response function for which
probability-response level pairs are needed), and βcdf and βccdf are the CDF and CCDF
reliability indices, respectively.
With the introduction of second-order limit state information, MVSOSM calculates
a second-order mean as

$$\mu_g = g(\mu_x) + \frac{1}{2} \sum_i \sum_j \mathrm{Cov}(i,j)\,\frac{d^2 g}{dx_i\,dx_j}(\mu_x) \qquad (7)$$

This is commonly combined with a first-order variance (Eq. 2), since second-order
variance involves higher order distribution moments (skewness, kurtosis) (Haldar and
Mahadevan 2000) which are often unavailable.
The first-order CDF probability p(g ≤ z), first-order CCDF probability p(g > z), βcdf ,
and βccdf are related to one another through

$$p(g \le z) = \Phi(-\beta_{cdf}) \qquad (8)$$

$$p(g > z) = \Phi(-\beta_{ccdf}) \qquad (9)$$

$$\beta_{cdf} = -\Phi^{-1}(p(g \le z)) \qquad (10)$$

$$\beta_{ccdf} = -\Phi^{-1}(p(g > z)) \qquad (11)$$

$$\beta_{cdf} = -\beta_{ccdf} \qquad (12)$$

$$p(g \le z) = 1 - p(g > z) \qquad (13)$$

where Φ( ) is the standard normal cumulative distribution function. A common conven-
tion in the literature is to define g in such a way that the CDF probability for a response
level z of zero (i.e., p(g ≤ 0)) is the response metric of interest. The formulations in this
chapter are not restricted to this convention and are designed to support CDF or CCDF
mappings for general response, probability, and reliability level sequences.
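
As a concrete illustration of Eqs. 1–3 and 8, the sketch below (Python with NumPy/SciPy; the limit state g, its gradient, and the input moments are hypothetical placeholders, not DAKOTA's interface) computes the MV statistics and a first-order CDF probability for a specified response level z:

```python
import numpy as np
from scipy.stats import norm

def mv_statistics(g, grad_g, mu_x, cov_x, z):
    """First-order mean/std of g (Eqs. 1-2) and the z -> (beta, p) mapping."""
    mu_g = g(mu_x)                          # Eq. 1
    dg = grad_g(mu_x)
    sigma_g = np.sqrt(dg @ cov_x @ dg)      # Eq. 2
    beta_cdf = (mu_g - z) / sigma_g         # Eq. 3
    p_cdf = norm.cdf(-beta_cdf)             # Eq. 8: first-order p(g <= z)
    return mu_g, sigma_g, beta_cdf, p_cdf

# Hypothetical linear limit state g(x) = x1 + 2*x2 with unit-variance inputs;
# MV is exact here because g is linear and the inputs are Gaussian.
print(mv_statistics(lambda x: x[0] + 2.0 * x[1],
                    lambda x: np.array([1.0, 2.0]),
                    mu_x=np.array([1.0, 1.0]), cov_x=np.eye(2), z=0.0))
```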

2.2 Local MPP search methods
Other local reliability methods solve a nonlinear optimization problem to com-
pute a most probable point (MPP) and then integrate about this point to com-
pute probabilities. Regardless of specified input probability distributions, the MPP
search is performed in uncorrelated standard normal space (“u-space’’) since it sim-
plifies the probability integration: the distance of the MPP from the origin has
the meaning of the number of input standard deviations separating the median
response from a particular response threshold. The transformation from correlated
non-normal distributions (x-space) to uncorrelated standard normal distributions
(u-space) is denoted as u = T(x) with the reverse transformation denoted as x = T −1 (u).

These transformations are nonlinear in general, and possible approaches include
the Rosenblatt (Rosenblatt 1952), Nataf (Der Kiureghian and Liu 1986), and
Box-Cox (Box and Cox 1964) transformations. The nonlinear transformations
may also be linearized, and common approaches for this include the Rackwitz-
Fiessler (Rackwitz and Fiessler 1978) two-parameter equivalent normal and the
Chen-Lind (Chen and Lind 1983) and Wu-Wirsching (Wu and Wirsching 1987) three-
parameter equivalent normals. The results in this chapter employ the Nataf nonlinear
transformation which occurs in the following two steps. To transform between the
original correlated x-space variables and correlated standard normals (“z-space’’), the
CDF matching condition is used:

$$\Phi(z_i) = F(x_i) \qquad (14)$$

where F( ) is the cumulative distribution function of the original probability distribution. Then, to transform between correlated z-space variables and uncorrelated u-space
variables, the Cholesky factor L of a modified correlation matrix is used:

z = Lu (15)

where the original correlation matrix for non-normals in x-space has been modified
to represent the corresponding correlation in z-space (Der Kiureghian and Liu 1986).
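
A minimal sketch of this two-step transformation, assuming the modified z-space correlation matrix has already been formed per (Der Kiureghian and Liu 1986), is shown below; the marginal CDF/inverse-CDF lists are hypothetical inputs rather than DAKOTA's API:

```python
import numpy as np
from scipy.stats import norm, lognorm

def x_to_u(x, marginal_cdfs, corr_z):
    z = norm.ppf([F(xi) for F, xi in zip(marginal_cdfs, x)])  # Eq. 14
    L = np.linalg.cholesky(corr_z)                            # z = L u (Eq. 15)
    return np.linalg.solve(L, z)                              # u = L^{-1} z

def u_to_x(u, marginal_ppfs, corr_z):
    z = np.linalg.cholesky(corr_z) @ u                        # Eq. 15
    return np.array([Fi(norm.cdf(zi)) for Fi, zi in zip(marginal_ppfs, z)])

# Example: two correlated lognormals with an assumed z-space correlation of 0.4.
cdfs = [lognorm(s=0.2).cdf, lognorm(s=0.3).cdf]
corr_z = np.array([[1.0, 0.4], [0.4, 1.0]])
print(x_to_u(np.array([1.1, 0.9]), cdfs, corr_z))
```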
The forward reliability analysis algorithm of computing CDF/CCDF probabil-
ity/reliability levels for specified response levels is called the reliability index approach
(RIA), and the inverse reliability analysis algorithm of computing response levels for
specified CDF/CCDF probability/reliability levels is called the performance measure
approach (PMA) (Tu, Choi, and Park 1999). The differences between the RIA and
PMA formulations appear in the objective function and equality constraint formula-
tions used in the MPP searches. For RIA, the MPP search for achieving the specified
response level z is formulated as

$$\begin{aligned} \text{minimize} \quad & u^T u \\ \text{subject to} \quad & G(u) = z \end{aligned} \qquad (16)$$

and for PMA, the MPP search for achieving the specified reliability/probability level
β, p is formulated as

$$\begin{aligned} \text{minimize} \quad & \pm G(u) \\ \text{subject to} \quad & u^T u = \beta^2 \end{aligned} \qquad (17)$$

where u is a vector centered at the origin in u-space and g(x) ≡ G(u) by definition. In the RIA case, the optimal MPP solution u∗ defines the reliability index from $\beta = \pm \|u^*\|_2$, which in turn defines the CDF/CCDF probabilities (using Eqs. 8–9 in the case of first-order integration). The sign of β is defined by

$$\begin{aligned} G(u^*) > G(0) &: \ \beta_{cdf} < 0,\ \beta_{ccdf} > 0 \\ G(u^*) < G(0) &: \ \beta_{cdf} > 0,\ \beta_{ccdf} < 0 \end{aligned} \qquad (18)$$

where G(0) is the median limit state response computed at the origin in u-space
(where βcdf = βccdf = 0 and first-order p(g ≤ z) = p(g > z) = 0.5). In the PMA case,
the sign applied to G(u) (equivalent to minimizing or maximizing G(u)) is similarly
defined by β

$$\begin{aligned} \beta_{cdf} < 0,\ \beta_{ccdf} > 0 &: \ \text{maximize } G(u) \\ \beta_{cdf} > 0,\ \beta_{ccdf} < 0 &: \ \text{minimize } G(u) \end{aligned} \qquad (19)$$

and the limit state at the MPP (G(u∗ )) defines the desired response level result.
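
For illustration, the RIA problem of Eq. 16 can be handed directly to a general-purpose SQP optimizer. The sketch below uses SciPy's SLSQP as a stand-in for the NPSOL/OPT++ solvers employed by DAKOTA/UQ (Section 2.2.4); the u-space limit state G is hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ria_mpp(G, z, n):
    """Eq. 16: minimize u.u subject to G(u) = z, from a near-origin start."""
    res = minimize(lambda u: u @ u, np.full(n, 0.1), method="SLSQP",
                   constraints=[{"type": "eq", "fun": lambda u: G(u) - z}])
    u_star = res.x
    beta = np.linalg.norm(u_star)
    if G(u_star) > G(np.zeros(n)):        # sign convention of Eq. 18
        beta = -beta
    return u_star, beta, norm.cdf(-beta)  # first-order p(g <= z), Eq. 8

# Hypothetical linear limit state G(u) = 3 - u1 - u2 at z = 0: beta ~ 2.12.
print(ria_mpp(lambda u: 3.0 - u[0] - u[1], 0.0, n=2))
```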
When performing PMA with specified p, one must compute β to include in
Eq. 17. While this is a straightforward one-time calculation for first-order integra-
tions (Eqs. 10–11), the use of second-order integrations complicates matters since the
β corresponding to the prescribed p is a function of the Hessian of G (see Eq. 36), which
in turn is a function of location in u-space. The β target must therefore be updated in
Eq. 17 as the minimization progresses (e.g., using Newton’s method to solve Eq. 36
for β given p and κi ). This works best when β can be fixed during the course of an
approximate optimization, such as for the AMV2 + and TANA methods described in
Section 2.2.1. For second-order PMA without limit state approximation cycles (i.e.,
PMA SORM), the constraint must be continually updated and the constraint deriva-
tive should include ∇u β, which would require third-order information for the limit
state to compute derivatives of the principal curvatures. This is impractical, so the
PMA SORM constraint derivatives are only approximated analytically or estimated
numerically. Potentially for this reason, PMA SORM has not been widely explored in
the literature.

2.2.1 Limit state approximations
There are a variety of algorithmic variations that can be explored within RIA/PMA
reliability analysis. First, one may select among several different limit state approxi-
mations that can be used to reduce computational expense during the MPP searches.
Local, multipoint, and global approximations of the limit state are possible. (Eldred,
Agarwal, Perez, Wojtkiewicz, Jr., and Renaud 2007) investigated local first-order limit
state approximations, and (Eldred and Bichon 2006) investigated local second-order
and multipoint approximations. These techniques include:

1. a single Taylor series per response/reliability/probability level in x-space centered at the uncertain variable means. The first-order approach is commonly known as the Advanced Mean Value (AMV) method:

$$g(x) \cong g(\mu_x) + \nabla_x g(\mu_x)^T (x - \mu_x) \qquad (20)$$

and the second-order approach has been named AMV2 :

1
g(x) ∼
= g(µx ) + ∇x g(µx )T (x − µx ) + (x − µx )T ∇x2 g(µx )(x − µx ) (21)
2

2. same as AMV/AMV2, except that the Taylor series is expanded in u-space. The first-order option has been termed the u-space AMV method:

$$G(u) \cong G(\mu_u) + \nabla_u G(\mu_u)^T (u - \mu_u) \qquad (22)$$

where µu = T(µx ) and is nonzero in general, and the second-order option has
been named the u-space AMV2 method:

$$G(u) \cong G(\mu_u) + \nabla_u G(\mu_u)^T (u - \mu_u) + \frac{1}{2}(u - \mu_u)^T \nabla_u^2 G(\mu_u)(u - \mu_u) \qquad (23)$$

3. an initial Taylor series approximation in x-space at the uncertain variable means, with iterative expansion updates at each MPP estimate (x∗) until the MPP converges. The first-order option is commonly known as AMV+:

$$g(x) \cong g(x^*) + \nabla_x g(x^*)^T (x - x^*) \qquad (24)$$

and the second-order option has been named AMV2 +:

$$g(x) \cong g(x^*) + \nabla_x g(x^*)^T (x - x^*) + \frac{1}{2}(x - x^*)^T \nabla_x^2 g(x^*)(x - x^*) \qquad (25)$$

4. same as AMV+/AMV2+, except that the expansions are performed in u-space. The first-order option has been termed the u-space AMV+ method:

$$G(u) \cong G(u^*) + \nabla_u G(u^*)^T (u - u^*) \qquad (26)$$

and the second-order option has been named the u-space AMV2 + method:

$$G(u) \cong G(u^*) + \nabla_u G(u^*)^T (u - u^*) + \frac{1}{2}(u - u^*)^T \nabla_u^2 G(u^*)(u - u^*) \qquad (27)$$

5. a multipoint approximation in x-space. This approach involves a Taylor series approximation in intermediate variables where the powers used for the intermediate variables are selected to match information at the current and previous expansion points. Based on the two-point exponential approximation concept (TPEA, (Fadel, Riley, and Barthelemy 1990)), the two-point adaptive nonlinearity approximation (TANA-3, (Xu and Grandhi 1998)) approximates the limit state as:

$$g(x) \cong g(x_2) + \sum_{i=1}^{n} \frac{\partial g}{\partial x_i}(x_2)\, \frac{x_{i,2}^{1-p_i}}{p_i}\, (x_i^{p_i} - x_{i,2}^{p_i}) + \frac{1}{2}\,\epsilon(x) \sum_{i=1}^{n} (x_i^{p_i} - x_{i,2}^{p_i})^2 \qquad (28)$$

where n is the number of uncertain variables and:


 ∂g ? 
∂xi
(x1 ) xi,1
pi = 1 + ln ∂g ln (29)
∂x
(x2 ) xi,2
i

H
(x) = n pi pi 2 n pi pi 2 (30)
i=1 (x i − x i,1 +
) i=1 (xi − xi,2 )
 1−p 
n
∂g xi,2 i pi pi
H = 2 g(x1 ) − g(x2 ) − (x2 ) (xi,1 − xi,2 ) (31)
∂xi pi
i=1

and x2 and x1 are the current and previous MPP estimates in x-space, respectively.
Prior to the availability of two MPP estimates, x-space AMV+ is used.
6. a multipoint approximation in u-space. The u-space TANA-3 approximates the limit state as:

$$G(u) \cong G(u_2) + \sum_{i=1}^{n} \frac{\partial G}{\partial u_i}(u_2)\, \frac{u_{i,2}^{1-p_i}}{p_i}\, (u_i^{p_i} - u_{i,2}^{p_i}) + \frac{1}{2}\,\epsilon(u) \sum_{i=1}^{n} (u_i^{p_i} - u_{i,2}^{p_i})^2 \qquad (32)$$

where:
$$p_i = 1 + \ln\!\left[\frac{\frac{\partial G}{\partial u_i}(u_1)}{\frac{\partial G}{\partial u_i}(u_2)}\right] \Big/ \ln\!\left[\frac{u_{i,1}}{u_{i,2}}\right] \qquad (33)$$

$$\epsilon(u) = \frac{H}{\sum_{i=1}^{n} (u_i^{p_i} - u_{i,1}^{p_i})^2 + \sum_{i=1}^{n} (u_i^{p_i} - u_{i,2}^{p_i})^2} \qquad (34)$$

$$H = 2\left[ G(u_1) - G(u_2) - \sum_{i=1}^{n} \frac{\partial G}{\partial u_i}(u_2)\, \frac{u_{i,2}^{1-p_i}}{p_i}\, (u_{i,1}^{p_i} - u_{i,2}^{p_i}) \right] \qquad (35)$$

and u2 and u1 are the current and previous MPP estimates in u-space, respectively.
Prior to the availability of two MPP estimates, u-space AMV+ is used.
7. the MPP search on the original response functions without the use of any approx-
imations. Combining this option with first-order and second-order integration
approaches results in the traditional first-order and second-order reliability
methods (FORM and SORM).
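
To make the iterated approximations concrete, the sketch below implements first-order AMV+ (Eq. 24) for the special case of standard normal inputs, where the x-space expansion and the u-space distance coincide; for a linearized limit state the MPP subproblem has the closed-form solution used here. All names are illustrative:

```python
import numpy as np

def amv_plus(g, grad_g, mu_x, z, tol=1e-6, max_iter=25):
    """First-order AMV+ (Eq. 24), assuming standard normal inputs."""
    x = np.asarray(mu_x, dtype=float)    # expansion point, initially the means
    u_prev = None
    for _ in range(max_iter):
        g0, dg = g(x), grad_g(x)
        c = z - g0 + dg @ x              # linearized constraint: dg . u = c
        u = c * dg / (dg @ dg)           # closed-form MPP of the linear state
        if u_prev is not None and np.linalg.norm(u - u_prev) < tol:
            break
        x, u_prev = u, u                 # re-expand at the new MPP estimate
    return u, np.linalg.norm(u)          # MPP estimate and |beta|

# Hypothetical limit state; for a linear g the iteration converges immediately.
u_star, beta = amv_plus(lambda x: 3.0 - x[0] - x[1],
                        lambda x: np.array([-1.0, -1.0]),
                        mu_x=np.zeros(2), z=0.0)
```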

The Hessian matrices in AMV2 and AMV2 + may be available analytically, estimated
numerically, or approximated through quasi-Newton updates (see Section 2.2.3).
The quasi-Newton variant of AMV2 + is conceptually similar to TANA in that both
approximate curvature based on a sequence of gradient evaluations. TANA estimates
curvature by matching values and gradients at two points and includes it through the
use of exponential intermediate variables and a single-valued diagonal Hessian approx-
imation. Quasi-Newton AMV2 + accumulates curvature over a sequence of points and
then uses it directly in a second-order series expansion. Therefore, these methods may
be expected to exhibit similar performance.

The selection between x-space or u-space for performing approximations depends on
where the approximation will be more accurate, since this will result in more accurate
MPP estimates (AMV, AMV2 ) or faster convergence (AMV+, AMV2 +, TANA). Since
this relative accuracy depends on the forms of the limit state g(x) and the transforma-
tion T(x) and is therefore application dependent in general, DAKOTA/UQ supports
both options. A concern with approximation-based iterative search methods (i.e.,
AMV+, AMV2 + and TANA) is the robustness of their convergence to the MPP.
It is possible for the MPP iterates to oscillate or even diverge. DAKOTA/UQ con-
tains checks that monitor for this behavior; however, implementation of a robust
model management approach (Giunta and Eldred 2000; Eldred and Dunlavy 2006)
is an important area for future work. Another concern with TANA is numerical safe-
guarding. First, there is the possibility of raising negative xi or ui values to nonintegral
pi exponents in Eqs. 30–32, and 34–35. This is particularly likely for u-space. Safe-
guarding techniques include the use of linear bounds scaling for each xi or ui , offsetting
negative xi or ui , or promotion of pi to integral values for negative xi or ui . In numerical
experimentation, the offset approach has been the most effective in retaining the desired
data matches without overly inflating the pi exponents. Second, there are a number of
potential numerical difficulties with the logarithm ratios in Eqs. 29 and 33. In this case,
a safeguarding strategy is to revert to either the linear ($p_i = 1$) or reciprocal ($p_i = -1$) approximation based on which approximation has lower error in $\frac{\partial g}{\partial x_i}(x_1)$ or $\frac{\partial G}{\partial u_i}(u_1)$.

2.2.2 Probability integrations


The second algorithmic variation involves the integration approach for comput-
ing probabilities at the MPP, which can be selected to be first-order (Eqs. 8–9) or
second-order integration. Second-order integration involves applying a curvature cor-
rection (Breitung 1984; Hohenbichler and Rackwitz 1988; Hong 1999). Breitung
applies a correction based on asymptotic analysis (Breitung 1984):


$$p = \Phi(-\beta_p) \prod_{i=1}^{n-1} \frac{1}{\sqrt{1 + \beta_p \kappa_i}} \qquad (36)$$

where κi are the principal curvatures of the limit state function (the eigenvalues of an
orthonormal transformation of ∇u2 G, taken positive for a convex limit state) and βp ≥ 0
(select CDF or CCDF probability correction to obtain correct sign for βp ). An alternate
correction in (Hohenbichler and Rackwitz 1988) is consistent in the asymptotic regime
(βp → ∞) but does not collapse to first-order integration for βp = 0:


$$p = \Phi(-\beta_p) \prod_{i=1}^{n-1} \frac{1}{\sqrt{1 + \psi(-\beta_p) \kappa_i}} \qquad (37)$$

where $\psi(\,) = \frac{\phi(\,)}{\Phi(\,)}$ and φ( ) is the standard normal density function. (Hong 1999) applies
further corrections to Eq. 37 based on point concentration methods.
To invert a second-order integration and compute βp given p and κi (e.g., for
second-order PMA as described in Section 2.2), Newton’s method can be applied as
described in (Eldred and Bichon 2006). Additional probability integration approaches

can involve importance sampling in the vicinity of the MPP (Hohenbichler and
Rackwitz 1988; Wu 1994), but are outside the scope of this chapter. While second-
order integrations could be performed anywhere a limit state Hessian has been
computed, the additional computational effort is most warranted for fully converged
MPPs from AMV+, AMV2 +, TANA, FORM, and SORM, and is of reduced value for
MVFOSM, MVSOSM, AMV, or AMV2 .
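
Both curvature corrections are inexpensive to evaluate once a converged MPP supplies β_p and the principal curvatures κ_i. The sketch below contrasts first-order integration with Eqs. 36–37 using made-up values:

```python
import numpy as np
from scipy.stats import norm

def breitung(beta_p, kappa):
    """Second-order probability, Eq. 36 (requires beta_p >= 0)."""
    k = np.asarray(kappa)
    return norm.cdf(-beta_p) * np.prod(1.0 / np.sqrt(1.0 + beta_p * k))

def hohenbichler_rackwitz(beta_p, kappa):
    """Curvature correction of Eq. 37, consistent as beta_p -> infinity."""
    k = np.asarray(kappa)
    psi = norm.pdf(-beta_p) / norm.cdf(-beta_p)
    return norm.cdf(-beta_p) * np.prod(1.0 / np.sqrt(1.0 + psi * k))

beta_p, kappa = 2.0, [0.3]          # hypothetical MPP results for n = 2
print(norm.cdf(-beta_p))            # first-order:  ~0.0228
print(breitung(beta_p, kappa))      # second-order: ~0.0180
print(hohenbichler_rackwitz(beta_p, kappa))
```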

2.2.3 Hessian approximations
To use a second-order Taylor series or a second-order integration when second-order
information (∇x2 g, ∇u2 G, and/or κ) is not directly available, one can estimate the
missing information using finite differences or approximate it through use of quasi-
Newton approximations. These procedures will often be needed to make second-order
approaches practical for engineering applications.
In the finite difference case, numerical Hessians are commonly computed using either
first-order forward differences of gradients using

$$\nabla^2 g(x) \cong \frac{\nabla g(x + h e_i) - \nabla g(x)}{h} \qquad (38)$$

to estimate the ith Hessian column when gradients are analytically available, or second-
order differences of function values using

$$\nabla^2 g(x) \cong \frac{g(x + h e_i + h e_j) - g(x + h e_i - h e_j) - g(x - h e_i + h e_j) + g(x - h e_i - h e_j)}{4h^2} \qquad (39)$$

to estimate the ijth Hessian term when gradients are not directly available. This
approach has the advantage of locally-accurate Hessians for each point of interest
(which can lead to quadratic convergence rates in discrete Newton methods), but has
the disadvantage that numerically estimating each of the matrix terms can be expensive.
Quasi-Newton approximations, on the other hand, do not reevaluate all of the
second-order information for every point of interest. Rather, they accumulate approx-
imate curvature information over time using secant updates. Since they utilize the
existing gradient evaluations, they do not require any additional function evaluations
for evaluating the Hessian terms. The quasi-Newton approximations of interest include
the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k} \qquad (40)$$

which yields a sequence of symmetric positive definite Hessian approximations, and the Symmetric Rank 1 (SR1) update

$$B_{k+1} = B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^T}{(y_k - B_k s_k)^T s_k} \qquad (41)$$

which yields a sequence of symmetric, potentially indefinite, Hessian approximations. $B_k$ is the kth approximation to the Hessian $\nabla^2 g$, $s_k = x_{k+1} - x_k$ is the step and $y_k = \nabla g_{k+1} - \nabla g_k$ is the corresponding yield in the gradients. The selection of BFGS
versus SR1 involves the importance of retaining positive definiteness in the Hessian
approximations; if the procedure does not require it, then the SR1 update can be more
accurate if the true Hessian is not positive definite. Initial scalings for B0 and numerical
safeguarding techniques (damped BFGS, update skipping) are described in (Eldred and
Bichon 2006).
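
The updates of Eqs. 40–41 are only a few lines apiece; the sketch below accumulates them from successive steps s_k and gradient differences y_k (the values in the usage lines are made up):

```python
import numpy as np

def bfgs_update(B, s, y):
    """Eq. 40: preserves symmetric positive definiteness when y.s > 0."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

def sr1_update(B, s, y):
    """Eq. 41: symmetric but possibly indefinite; often more accurate."""
    r = y - B @ s
    return B + np.outer(r, r) / (r @ s)

# Usage: s = x_{k+1} - x_k, y = grad g_{k+1} - grad g_k, starting from B0 = I.
B = np.eye(2)
B = sr1_update(B, np.array([0.1, 0.0]), np.array([0.25, 0.05]))
```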

2.2.4 Optimization algorithms


The next algorithmic variation involves the optimization algorithm selection for solv-
ing Eqs. 16 and 17. The Hasofer-Lind Rackwitz-Fiessler (HL-RF) algorithm (Haldar
and Mahadevan 2000) is a classical approach that has been broadly applied. It is a
Newton-based approach lacking line search/trust region globalization, and is gener-
ally regarded as computationally efficient but occasionally unreliable. DAKOTA/UQ
takes the approach of employing robust, general-purpose optimization algorithms with
provable convergence properties. This chapter employs the sequential quadratic pro-
gramming (SQP) and nonlinear interior-point (NIP) optimization algorithms from the
NPSOL (Gill, Murray, Saunders, and Wright 1998) and OPT++ (Meza 1994) libraries,
respectively.

2.2.5 Warm starting of MPP searches


The final algorithmic variation for local reliability methods involves the use of warm
starting approaches for improving computational efficiency. (Eldred, Agarwal, Perez,
Wojtkiewicz, Jr., and Renaud 2007) describes the acceleration of MPP searches through
warm starting with approximate iteration increment, with z/p/β level increment, and
with design variable increment. Warm started data includes the expansion point and
associated response values and the MPP optimizer initial guess. Projections are used
when an increment in z/p/β level or design variables occurs. Warm starts were consis-
tently effective in (Eldred, Agarwal, Perez, Wojtkiewicz, Jr., and Renaud 2007), with
greater effectiveness for smaller parameter changes, and are used for all computational
experiments presented in this chapter.

2.3 Global reliability methods


Local reliability methods, while computationally efficient, have well-known failure
mechanisms. When confronted with a limit state function that is nonsmooth, local
gradient-based optimizers may stall due to gradient inaccuracy and fail to converge to
an MPP. Moreover, if the limit state is multimodal (multiple MPPs), then a gradient-
based local method can, at best, locate only one local MPP solution. Finally, a linear
(Eqs. 8–9) or parabolic (Eqs. 36–37) approximation to the limit state at this MPP
may fail to adequately capture the contour of a highly nonlinear limit state. For these
reasons, efficient global reliability analysis (EGRA) is investigated in (Bichon, Eldred,
Swiler, Mahadevan and McFarland 2007).

In this approach, ideas from Efficient Global Optimization (EGO) (Jones, Schonlau,
and Welch 1998) are adapted for use in reliability analysis. This approach employs a
Gaussian process (GP) model to approximate the true response function

Ĝ(u) = h(u)T β + Z(u) (42)

where h( ) is the trend of the model, β is the vector of trend coefficients, and Z() is a
stationary Gaussian process with zero mean and covariance defined from a squared-
exponential correlation function that describes the departure of the model from its
underlying trend.
Gaussian process (GP) models are set apart from other surrogate models because
they provide not just a predicted value at an unsampled point, but a full Gaussian
distribution with an expected value and a predicted variance. This variance gives an
indication of the uncertainty in the model, which results from the construction of the
covariance function. This function is based on the idea that when input points are
near one another, the correlation between their corresponding outputs will be high.
As a result, the uncertainty associated with the model’s predictions will be small for
input points which are near the points used to train the model, and will increase as
one moves further from the training points.
In EGO, the mean and variance estimates from the GP are used to form an expected
improvement function (EIF), which calculates the expectation that any point in the
search space will provide a better solution than the current best solution. An important
feature of the EIF is that it provides a balance between exploiting areas of the design
space where good solutions have been found, and exploring areas of the design space
where the uncertainty is high. To adapt this concept to forward reliability analysis
(z → p), an expected feasibility function (EFF) is used to provide an indication of how
well the true value of the response is expected to satisfy the equality constraint G(u) = z
by integrating over a region in the immediate vicinity of the threshold value z ± ε:

$$EF(\hat{G}(u)) = \int_{z-\epsilon}^{z+\epsilon} \left[\epsilon - |z - G|\right] f_{\hat{G}(u)}(G)\, dG \qquad (43)$$

where $f_{\hat{G}(u)}$ denotes the Gaussian predictive density of the GP at u, and ε is proportional to the standard deviation predicted by the GP at the point u.


This integral can be evaluated analytically, as described in (Bichon, Eldred, Swiler,
Mahadevan, and McFarland 2007), to create a simple GP-based function to maximize
with a global optimization algorithm. Once a new point or points are computed which
maximize the EFF, the GP is updated and the process is continued until the maximum
EFF value falls below a tolerance. With a converged GP representation of the limit
state, multimodal adaptive importance sampling is then applied to the GP to evaluate
an approximation to the probabilities of interest.
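
As an illustration of Eq. 43, the sketch below evaluates the EFF by numerical quadrature against the GP's Gaussian prediction at a point (mean mu, standard deviation sigma). (Bichon, Eldred, Swiler, Mahadevan, and McFarland 2007) give the closed form, so the quadrature here is purely for clarity, and ε = 2σ is an assumed proportionality:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def expected_feasibility(mu, sigma, z, k=2.0):
    """Eq. 43 by quadrature; eps is taken proportional to the GP std dev."""
    eps = k * sigma
    integrand = lambda G: (eps - abs(z - G)) * norm.pdf(G, mu, sigma)
    return quad(integrand, z - eps, z + eps)[0]

# EFF is large where the GP predicts G near z (or is very uncertain):
print(expected_feasibility(mu=0.1, sigma=0.5, z=0.0))  # near the contour
print(expected_feasibility(mu=3.0, sigma=0.5, z=0.0))  # far from it
```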

3 Reliability-based design optimization


Reliability-based design optimization (RBDO) methods are used to perform design
optimization accounting for reliability metrics. The reliability analysis capabilities
described in Section 2 provide a rich foundation for exploring a variety of RBDO for-
mulations. (Eldred, Agarwal, Perez, Wojtkiewicz, Jr., and Renaud 2007) investigated
bi-level, fully-analytic bi-level, and first-order sequential RBDO approaches employing

underlying first-order reliability assessments. (Eldred and Bichon 2006) investigated


fully-analytic bi-level and second-order sequential RBDO approaches employing
underlying second-order reliability assessments. These methods are overviewed in the
following sections.

3.1 Bi-level RBDO


The simplest and most direct RBDO approach is the bi-level approach in which a
full reliability analysis is performed for every optimization function evaluation. This
involves a nesting of two distinct levels of optimization within each other, one at the
design level and one at the MPP search level.
Since an RBDO problem will typically specify both the z level and the p/β level, one
can use either the RIA or the PMA formulation for the UQ portion and then constrain
the result in the design optimization portion. In particular, RIA reliability analysis
maps z to p/β, so RIA RBDO constrains p/β:

$$\begin{aligned} \text{minimize} \quad & f \\ \text{subject to} \quad & \beta \ge \bar{\beta} \\ \text{or} \quad & p \le \bar{p} \end{aligned} \qquad (44)$$

And PMA reliability analysis maps p/β to z, so PMA RBDO constrains z:

$$\begin{aligned} \text{minimize} \quad & f \\ \text{subject to} \quad & z \ge \bar{z} \end{aligned} \qquad (45)$$

where $z \ge \bar{z}$ is used as the RBDO constraint for a cumulative failure probability (failure defined as $g \le \bar{z}$) but $z \le \bar{z}$ would be used as the RBDO constraint for a complementary cumulative failure probability (failure defined as $g \ge \bar{z}$). Note that many other
objective and constraint formulations are possible (see (Eldred, Giunta, Wojtkiewicz
Jr., and Trucano 2002) for general mappings which allow flexible use of statistics
within multiple objectives, inequality constraints, and equality constraints); formula-
tions with a deterministic objective and a single probabilistic inequality constraint are
just convenient examples.
An important performance enhancement for bi-level methods is the use of sensitivity
analysis to analytically compute the gradients of probability, reliability, and response
levels with respect to the design variables. When design variables are separate from the
uncertain variables (i.e., they are not distribution parameters), then the following first-
order expressions may be used (Hohenbichler and Rackwitz 1986; Karamchandani and
Cornell 1992; Allen and Maute 2004):

$$\nabla_d z = \nabla_d g \qquad (46)$$

$$\nabla_d \beta_{cdf} = \frac{1}{\|\nabla_u G\|}\, \nabla_d g \qquad (47)$$

$$\nabla_d p_{cdf} = -\phi(-\beta_{cdf})\, \nabla_d \beta_{cdf} \qquad (48)$$

where it is evident from Eqs. 12–13 that ∇d βccdf = −∇d βcdf and ∇d pccdf = −∇d pcdf . In
the case of second-order integrations, Eq. 48 must be expanded to include the curvature
correction. For Breitung’s correction (Eq. 36),


$$\nabla_d p_{cdf} = \left[ \Phi(-\beta_p) \sum_{i=1}^{n-1} \left( \frac{-\kappa_i}{2(1+\beta_p \kappa_i)^{3/2}} \prod_{\substack{j=1 \\ j \neq i}}^{n-1} \frac{1}{\sqrt{1+\beta_p \kappa_j}} \right) - \phi(-\beta_p) \prod_{i=1}^{n-1} \frac{1}{\sqrt{1+\beta_p \kappa_i}} \right] \nabla_d \beta_{cdf} \qquad (49)$$

where ∇d κi has been neglected and βp ≥ 0 (see Section 2.2.2). Other approaches assume
the curvature correction is nearly independent of the design variables (Rackwitz 2002),
which is equivalent to neglecting the first term in Eq. 49.
To capture second-order probability estimates within an RIA RBDO formulation
using well-behaved β constraints, a generalized reliability index can be introduced
where, similar to Eq. 10,


$$\beta^*_{cdf} = -\Phi^{-1}(p_{cdf}) \qquad (50)$$

for second-order pcdf . This reliability index is no longer equivalent to the magnitude
of u, but rather is a convenience metric for capturing the effect of more accurate prob-
ability estimates. Since reliability levels behave more linearly under design variable
change than probability levels, replacing a second-order probability constraint with
a generalized reliability constraint can improve optimization performance. The corresponding generalized reliability index sensitivity, similar to Eq. 48, is

$$\nabla_d \beta^*_{cdf} = -\frac{1}{\phi(-\beta^*_{cdf})}\, \nabla_d p_{cdf} \qquad (51)$$

where ∇d pcdf is defined from Eq. 49. Even when ∇d g is estimated numerically, Eqs. 46–51 can be used to avoid numerical differencing across full reliability analyses, which can be costly or (worse) inaccurate.
When the design variables are distribution parameters of the uncertain variables,
∇d g is expanded with the chain rule and Eqs. 46 and 47 become

$$\nabla_d z = \nabla_d x\, \nabla_x g \qquad (52)$$

$$\nabla_d \beta_{cdf} = \frac{1}{\|\nabla_u G\|}\, \nabla_d x\, \nabla_x g \qquad (53)$$

where the design Jacobian of the transformation (∇d x) may be obtained analytically
for uncorrelated x or semi-analytically for correlated x (∇d L is evaluated numeri-
cally) by differentiating Eqs. 14 and 15 with respect to the distribution parameters.
Eqs. 48–51 remain the same as before. For this design variable case, all required
information for the sensitivities is available from the MPP search.
Since Eqs. 46–53 are derived using the Karush-Kuhn-Tucker optimality conditions
for a converged MPP, they are appropriate for RBDO using AMV+, AMV2 +, TANA,
FORM, and SORM, but not for RBDO using MVFOSM, MVSOSM, AMV, or AMV2 .
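
The nesting itself is straightforward to express. The skeleton below poses the RIA form of Eq. 44 to an outer SLSQP optimizer, with reliability() standing in for any of the MPP methods of Section 2; the names and the toy reliability model are illustrative, not DAKOTA's interface:

```python
import numpy as np
from scipy.optimize import minimize

def bilevel_rbdo(f, reliability, d0, beta_bar, bounds=None):
    """Bi-level RIA RBDO: constrain the nested reliability index (Eq. 44)."""
    cons = [{"type": "ineq", "fun": lambda d: reliability(d) - beta_bar}]
    return minimize(f, d0, method="SLSQP", bounds=bounds, constraints=cons)

# Toy example: minimize "weight" d1 + d2 subject to beta(d) >= 2, where the
# inner reliability analysis is replaced by a cheap analytic stand-in.
res = bilevel_rbdo(lambda d: d[0] + d[1],
                   lambda d: d[0] * d[1],        # pretend beta(d) = d1*d2
                   d0=np.array([2.0, 2.0]), beta_bar=2.0,
                   bounds=[(0.5, 5.0)] * 2)
print(res.x)                                     # ~ [sqrt(2), sqrt(2)]
```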

3.2 Sequential/Surrogate-based RBDO


An alternative RBDO approach is the sequential approach, in which additional effi-
ciency is sought through breaking the nested relationship of the MPP and design
searches. The general concept is to iterate between optimization and uncertainty quan-
tification, updating the optimization goals based on the most recent probabilistic
assessment results. This update may be based on safety factors (Wu, Shin, Sues, and
Cesare 2001) or other approximations (Du and Chen 2004).
A particularly effective approach for updating the optimization goals is to use the
p/β/z sensitivity analysis of Eqs. 46–53 in combination with local surrogate mod-
els (Zou, Mahadevan, and Rebba 2004). In (Eldred, Agarwal, Perez, Wojtkiewicz,
Jr., and Renaud 2007) and (Eldred and Bichon 2006), first-order and second-order
Taylor series approximations were employed within a trust-region model management
framework (Giunta and Eldred 2000; Eldred and Dunlavy 2006) in order to adaptively
manage the extent of the approximations and ensure convergence of the RBDO pro-
cess. Surrogate models were used for both the objective function and the constraints,
although the use of constraint surrogates alone is sufficient to remove the nesting.
In particular, RIA trust-region surrogate-based RBDO employs surrogate models of
f and p/β within a trust region $\Delta^k$ centered at $d_c$. For first-order local surrogates:

$$\begin{aligned} \text{minimize} \quad & f(d_c) + \nabla_d f(d_c)^T (d - d_c) \\ \text{subject to} \quad & \beta(d_c) + \nabla_d \beta(d_c)^T (d - d_c) \ge \bar{\beta} \\ \text{or} \quad & p(d_c) + \nabla_d p(d_c)^T (d - d_c) \le \bar{p} \\ & \|d - d_c\|_\infty \le \Delta^k \end{aligned} \qquad (54)$$

and for second-order local surrogates:

$$\begin{aligned} \text{minimize} \quad & f(d_c) + \nabla_d f(d_c)^T (d - d_c) + \tfrac{1}{2}(d - d_c)^T \nabla_d^2 f(d_c)(d - d_c) \\ \text{subject to} \quad & \beta(d_c) + \nabla_d \beta(d_c)^T (d - d_c) + \tfrac{1}{2}(d - d_c)^T \nabla_d^2 \beta(d_c)(d - d_c) \ge \bar{\beta} \\ \text{or} \quad & p(d_c) + \nabla_d p(d_c)^T (d - d_c) + \tfrac{1}{2}(d - d_c)^T \nabla_d^2 p(d_c)(d - d_c) \le \bar{p} \\ & \|d - d_c\|_\infty \le \Delta^k \end{aligned} \qquad (55)$$

For PMA trust-region surrogate-based RBDO, surrogate models of f and z are employed within a trust region $\Delta^k$ centered at $d_c$. For first-order surrogates:

$$\begin{aligned} \text{minimize} \quad & f(d_c) + \nabla_d f(d_c)^T (d - d_c) \\ \text{subject to} \quad & z(d_c) + \nabla_d z(d_c)^T (d - d_c) \ge \bar{z} \\ & \|d - d_c\|_\infty \le \Delta^k \end{aligned} \qquad (56)$$

and for second-order surrogates:

$$\begin{aligned} \text{minimize} \quad & f(d_c) + \nabla_d f(d_c)^T (d - d_c) + \tfrac{1}{2}(d - d_c)^T \nabla_d^2 f(d_c)(d - d_c) \\ \text{subject to} \quad & z(d_c) + \nabla_d z(d_c)^T (d - d_c) + \tfrac{1}{2}(d - d_c)^T \nabla_d^2 z(d_c)(d - d_c) \ge \bar{z} \\ & \|d - d_c\|_\infty \le \Delta^k \end{aligned} \qquad (57)$$

where the sense of the z constraint may vary as described previously. The second-order
information in Eqs. 55 and 57 will typically be approximated with quasi-Newton
updates.
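
A skeletal version of the first-order PMA variant (Eq. 56) is sketched below: each cycle performs one reliability analysis at the current design, builds linear surrogates of f and z, solves the surrogate subproblem inside the trust region (imposed through bound constraints for the infinity norm), and applies a crude accept/shrink rule. The acceptance logic is a simplification of the trust-region model management cited above, and all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def sequential_rbdo(f, grad_f, z_fn, grad_z, d0, z_bar,
                    delta=0.5, max_cycles=20, tol=1e-5):
    d = np.asarray(d0, dtype=float)
    for _ in range(max_cycles):
        fc, gf = f(d), grad_f(d)
        zc, gz = z_fn(d), grad_z(d)     # one PMA reliability analysis per cycle
        res = minimize(lambda dd: fc + gf @ (dd - d), d, method="SLSQP",
                       constraints=[{"type": "ineq",
                                     "fun": lambda dd: zc + gz @ (dd - d) - z_bar}],
                       bounds=[(di - delta, di + delta) for di in d])
        if np.linalg.norm(res.x - d) < tol:
            break                        # design has stabilized
        if z_fn(res.x) >= min(z_bar, zc):
            d = res.x                    # accept: true z feasible or improved
        else:
            delta *= 0.5                 # reject: shrink the trust region
    return d
```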

3.3 Problem formulation issues


When performing RBDO in practice, a number of formulation issues arise. In particu-
lar, a flexible set of design parameterizations are needed for the input random variables
and a rich set of output statistical metrics are needed for the optimization objectives
and constraints.

3.3.1 Input parameterization


As described in Section 3.1, design variables in RBDO may be separate from the uncer-
tain variables or they may define distribution parameters for the random variables.
In the latter case, an implementation should first support design variable insertion
into any of the native distribution parameters (e.g., mean, standard deviation, lower
and upper bounds) for the supported probability distributions. While this supplies
sufficient design authority for many distributions (e.g., normal, lognormal, extreme
value distributions), other distributions (e.g., uniform, loguniform, triangular) do not
directly support location and scale control within the native parameters. In these cases,
location and scale are derived quantities and the native distribution parameters may be
insufficient for design purposes, depending on the application. For example, the dis-
tribution parameters for a triangular distribution are lower bound, mode, and upper
bound. Design control of any one of these three parameters independent of the other
two is useful in some applications, but it will be insufficient to arbitrarily translate
or scale the distribution in other applications. To provide additional design control
in these cases, supporting the ability to design derived distribution parameters (from
which the native parameters are updated) is an important extension. When gross dis-
tribution control (location, scale) and fine distribution control (native parameters) can
both be provided, a broad range of design scenarios can be supported.

3.3.2 Output metrics


Similar to the input parameterization, output metric characterization requires careful
thought when developing optimization under uncertainty capabilities. In particular, a
rich, expressive set of metrics is needed for arbitrary control of the shape of output
distributions.
Generally speaking, designing for robustness involves the control of moments; for
example, minimizing an output variance statistic. On the other hand, design for reli-
ability requires the control of tail statistics; for example, constraining a probability of
failure statistic. Reliability methods are better suited for computing tail statistics, as
MPP methods do not directly calculate moments. To control output variance, a PMA-
based response interval, e.g. |zβ=3 − zβ = −3 |, may be substituted for the variance in
order to achieve similar goals. To control both robustness and reliability, a multiobjec-
tive formulation can provide for general trade-off analysis. If, however, a particular reli-
ability goal is known (e.g., β > 3), then formulations such as the two shown in Section 5
can be effective in reducing output variance while achieving prescribed reliability.

Finally, model calibration under uncertainty studies typically involve the estimation
of random variable distribution parameters which result in the best match in statistics
between simulation and experiment. When using reliability methods, a convenient
formulation is a nonlinear least squares objective which sums the discrepancies in
CDF values (e.g., probabilities in an RIA RBDO formulation or response levels in a
PMA RBDO formulation). In this case, all of the same analytic machinery applies (i.e.,
the sensitivity analysis of Section 3.1), only a broader set of distribution parameters
may be of interest and a more complete set of CDF points may be required.

4 Benchmark problems
(Eldred, Agarwal, Perez, Wojtkiewicz, Jr., and Renaud 2007) and (Eldred and Bichon
2006) have examined the performance of first and second-order local reliability anal-
ysis and design methods for four analytic benchmark test problems: lognormal ratio,
short column, cantilever beam, and steel column. (Bichon, Eldred, Swiler, Mahadevan,
and McFarland, 2007) has examined the performance of global reliability analysis
methods for two additional analytic benchmark test problems that cause problems for
local methods.

4.1 Local reliability analysis results


Within the reliability analysis algorithms, various limit state approximation
(MVFOSM, MVSOSM, x-/u-space AMV, x-/u-space AMV2 , x-/u-space AMV+,
x-/u-space AMV2 +, x-/u-space TANA, FORM, and SORM), probability integration
(first-order or second-order), warm starting, Hessian approximation (finite difference,
BFGS, or SR1), and MPP optimization algorithm (SQP or NIP) selections have been
investigated. A sample comparison of reliability analysis performance, taken from the
short column example, is shown in Tables 15.1 and 15.2 for RIA and PMA analysis,
respectively, where “*’’ indicates that one or more levels failed to converge. Consistent
with the employed probability integrations, the error norms are measured with respect

Table 15.1 RIA results for short column problem.

RIA approach     SQP fn. evals.   NIP fn. evals.   CDF p error norm   Target z offset norm
MVFOSM           1                1                0.1548             0.0
MVSOSM           1                1                0.1127             0.0
x-space AMV      45               45               0.009275           18.28
u-space AMV      45               45               0.006408           18.81
x-space AMV2     45               45               0.002063           2.482
u-space AMV2     45               45               0.001410           2.031
x-space AMV+     192              192              0.0                0.0
u-space AMV+     207              207              0.0                0.0
x-space AMV2+    125              131              0.0                0.0
u-space AMV2+    122              130              0.0                0.0
x-space TANA     245              246              0.0                0.0
u-space TANA     296*             278*             6.982e-5           0.08014
FORM             626              176              0.0                0.0
SORM             669              219              0.0                0.0

Table 15.2 PMA results for short column problem.

PMA approach     SQP fn. evals.   NIP fn. evals.   CDF z error norm   Target p offset norm
MVFOSM           1                1                7.454              0.0
MVSOSM           1                1                6.823              0.0
x-space AMV      45               45               0.9420             0.0
u-space AMV      45               45               0.5828             0.0
x-space AMV2     45               45               2.730              0.0
u-space AMV2     45               45               2.828              0.0
x-space AMV+     171              179              0.0                0.0
u-space AMV+     205              205              0.0                0.0
x-space AMV2+    135              142              0.0                0.0
u-space AMV2+    132              139              0.0                0.0
x-space TANA     293*             272              0.04259            1.598e-4
u-space TANA     325*             311*             2.208              5.600e-4
FORM             720              192              0.0                0.0
SORM             535              191*             2.410              6.522e-4

to fully-converged first-order results for MV, AMV, AMV2 , AMV+, and FORM meth-
ods, and with respect to fully-converged second-order results for AMV2 +, TANA, and
SORM methods. Also, it is important to note that the simple metric of “function eval-
uations’’ is imperfect, and (Eldred and Bichon 2006) provides more detailed reporting
of individual response value, gradient, and Hessian evaluations.
Overall, reliability analysis results for the lognormal ratio, short column, and can-
tilever test problems indicate several trends. MVFOSM, MVSOSM, AMV, and AMV2
are significantly less expensive than the fully-converged MPP methods, but come with
corresponding reductions in accuracy. In combination, these methods provide a useful
spectrum of accuracy and expense that allow the computational effort to be balanced
with the statistical precision required for particular applications. In addition, support
for forward and inverse mappings (RIA and PMA) provide the flexibility to support
different UQ analysis needs.
Relative to FORM and SORM, AMV+ and AMV2+ have been shown to have equal accuracy and consistent computational savings. For second-order PMA analysis with prescribed probability levels, AMV2+ has additionally been shown to be more robust due to its ability to better manage β updates. Analytic Hessians were highly
effective in AMV2 +, but since they are often unavailable in practical applications,
finite-difference numerical Hessians and quasi-Newton Hessian approximations were
also demonstrated, with SR1 quasi-Newton updates being shown to be sufficiently
accurate and competitive with analytic Hessian performance. Relative to first-order
AMV+ performance, AMV2 + with analytic Hessians had consistently superior effi-
ciency, and AMV2 + with quasi-Newton Hessians had improved performance in most
cases (it was more expensive than first-order AMV+ only when a more challenging
second-order p problem was being solved). In general, second-order reliability anal-
yses appear to serve multiple synergistic needs. The same Hessian information that
allows for more accurate probability integrations can also be applied to making MPP
solutions more efficient and more robust. Conversely, limit state curvature informa-
tion accumulated during an MPP search can be reused to improve the accuracy of
probability estimates.
For nonapproximated limit states (FORM and SORM), NIP optimizers have shown
promise in being less susceptible to PMA u-space excursions and in being more efficient
than SQP optimizers in most cases. Warm starting with projections has been shown
to be consistently effective for reliability analyses, with typical savings on the order of
25%. The x-space and u-space approximations for AMV, AMV2 , AMV+, AMV2 +,
and TANA were both effective, and the relative performance was strongly problem-
dependent (u-space was more efficient for lognormal ratio, x-space was more efficient
for short column, and x-space and u-space were equivalent for cantilever). Among all
combinations tested, AMV2 + (with analytic Hessians if available, or SR1 Hessians if
not) is the recommended approach.
An important question is how Taylor-series based limit state approximations (such
as AMV+ and AMV2 +) can frequently outperform the best general-purpose optimiz-
ers (such as SQP and NIP) which may employ similar internal approximations. The
answer likely lies in the exploitation of the structure of the RIA and PMA MPP prob-
lems. By approximating the limit state but retaining uT u explicitly in Eqs. 16 and 17,
specific problem structure knowledge is utilized in formulating a mixed surrogate/direct
approach.

4.2 Global reliability analysis results


Our test problem for demonstrating global reliability analysis is taken from (Bichon,
Eldred, Swiler, Mahadevan, and McFarland 2007). It has a highly nonlinear response
defined by:

$$g(x) = \frac{(x_1^2 + 4)(x_2 - 1)}{20} - \sin\frac{5x_1}{2} - 2 \qquad (58)$$

The distribution of x1 is Normal(1.5, 1), x2 is Normal(2.5, 1), and the variables are
uncorrelated. The response level of interest for this study is z = 0 with failure defined
by g > z. Figure 15.1(a) shows a plot in u-space of the limit state throughout the ±5
standard deviation search space. This problem has several local optima to the forward-
reliability MPP search problem (see Eq. 16). Figure 15.1(b) shows an example of an
EGRA execution, with the total set of truth model evaluations performed from building
the initial surrogate model and then repeatedly maximizing the expected feasibility
function derived from the GP model. It is evident that the algorithm preferentially
selects the data points needed to accurately resolve the limit state contour of interest.
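
For readers wishing to reproduce the reference probability, a plain Monte Carlo check of Eq. 58 is a few lines (the chapter's reference value was obtained with Latin hypercube sampling, so the estimate below should land near, not exactly on, 0.03135):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
x1 = rng.normal(1.5, 1.0, n)
x2 = rng.normal(2.5, 1.0, n)
g = (x1**2 + 4.0) * (x2 - 1.0) / 20.0 - np.sin(2.5 * x1) - 2.0
print(np.mean(g > 0.0))   # failure probability p(g > 0), cf. Table 15.3
```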
This multimodal problem was also solved using a number of local reliability meth-
ods for comparison purposes. Two approximation-based methods (AMV2 + and
TANA) were investigated in x-space and u-space as well as the no approximation
case (FORM/SORM). To produce results consistent with an implicit response func-
tion, numerical gradients and quasi-Newton Hessians from Symmetric Rank 1 updates
were used. For each method, at the converged MPP, both first-order and second-order
integration (using Eqs. 9 and 37) were used to calculate the probability.

Figure 15.1 Multimodal test problem: (a) contour of the true limit state function; (b) Gaussian process approximation with data points generated by EGRA.

Table 15.3 Results for the multimodal test problem.

Reliability method   Fn. evals.   First-order pf (% error)   Second-order pf (% error)   Sampling pf (% error, avg. error)
No approximation     66           0.11798 (276.3%)           0.02516 (−19.7%)            —
x-space AMV2+        26           0.11798 (276.3%)           0.02516 (−19.7%)            —
u-space AMV2+        26           0.11798 (276.3%)           0.02516 (−19.7%)            —
x-space TANA         506          0.08642 (175.7%)           0.08716 (178.0%)            —
u-space TANA         131          0.11798 (276.3%)           0.02516 (−19.7%)            —
x-space EGRA         50.4         —                          —                           0.03127 (0.233%, 0.929%)
u-space EGRA         49.4         —                          —                           0.03136 (0.033%, 0.787%)
True LHS solution    1M           —                          —                           0.03135 (0.000%, 0.328%)

Table 15.3 gives a summary of the results from all methods. To establish an accurate
estimate of the true solution, 20 independent studies were performed using 10^6 Latin
hypercube samples per study. The average probability from these studies is reported as
the “true’’ solution. Because the EGRA method is stochastic, it was also run 20 times
and the average probability is reported. To measure the accuracy of the methods, two
errors are reported for the EGRA results: the error in the average probability, and the
average of the absolute errors from the 20 studies. For comparison, the same errors
are given for the 20 LHS studies.
Most of the MPP search methods converge to the same MPP (in the vicinity of (0.5, 1)
in u-space) and thus report the same probability. These probabilities are more accu-
rate when second-order integration is used, but still have significant errors. However,
x-space TANA converges to a secondary MPP, which lies in a relatively flat region of
the limit state (in the vicinity of (2, 1) in u-space). This local lack of curvature means
that first-order and second-order integration produce approximately the same prob-
ability. In isolation, this second-order result could be viewed as a verification of the
first-order probability and thus provide a misguided confidence in the local reliability
Table 15.4 Analytic bi-level RBDO results, short column test problem.

RBDO approach                  Fn. evals.   Objective fn.   Constraint violation
RIA z → p x-space AMV+         149          217.1           0.0
RIA z → p x-space AMV2+        129          217.1           0.0
RIA z → p FORM                 911          217.1           0.0
RIA z → p SORM                 1204         217.1           0.0
RIA z → β x-space AMV+         72           216.7           0.0
RIA z → β x-space AMV2+        67           216.7           0.0
RIA z → β FORM                 612          216.7           0.0
RIA z → β SORM                 601          216.7           0.0
PMA p, β → z x-space AMV+      100          216.8           0.0
PMA p → z x-space AMV2+        98           216.8           0.0
PMA β → z x-space AMV2+        98           216.8           0.0
PMA p, β → z FORM              285          216.8           0.0
PMA p → z SORM                 306          217.2           0.0
PMA β → z SORM                 329          216.8           0.0

analysis. For this problem, the new EGRA method is more expensive than AMV2 +,
but cheaper than all the other methods, and provides much more accurate results.
Thus, global reliability analysis can provide accuracy similar to that of exhaustive
sampling with expense comparable to local reliability. It handles both multimodal
and nonsmooth limit states and does not require any derivative information from the
response function. The primary limitation of the technique is dimensionality. For larger
scale uncertainty quantification problems, the expense of building a global approxi-
mation grows quickly with dimension, although this can be mitigated to some extent
by requiring accuracy only along a single contour and only in the highest probability
regions.

4.3 RBDO results


These reliability analysis capabilities provide a substantial foundation for RBDO for-
mulations, and bi-level and sequential RBDO approaches based on local reliability
analyses have been investigated. Both approaches have utilized analytic gradients for
z, β, and p with respect to augmented and inserted design variables, and sequential
RBDO has additionally utilized a trust-region surrogate-based approach to manage
the extent of the Taylor-series approximations. A sample comparison of RBDO perfor-
mance, taken again from the short column example, is shown in Tables 15.4 and 15.5
for bi-level and sequential surrogate-based RBDO, respectively.
Overall, RBDO results for the short column, cantilever, and steel column test
problems build on the reliability analysis trends. Basic first-order bi-level RBDO has
been evaluated with up to 18 variants (RIA/PMA with different p/β/z mappings for
MV, x-/u-space AMV, x-/u-space AMV+, and FORM), and fully-analytic bi-level
and sequential RBDO have each been evaluated with up to 21 variants (RIA/PMA
with different p/β/z mappings for x-/u-space AMV+, x-/u-space AMV2 +, FORM, and
SORM). Bi-level RBDO with MV and AMV are inexpensive but give only approximate
Table 15.5 Surrogate-based RBDO results, short column test problem.

RBDO approach                  Fn. evals.   Objective fn.   Constraint violation
RIA z → p x-space AMV+         75           216.9           0.0
RIA z → p x-space AMV2+        86           218.7           0.0
RIA z → p FORM                 577          216.9           0.0
RIA z → p SORM                 718          216.5           1.110e-4
RIA z → β x-space AMV+         65           216.7           0.0
RIA z → β x-space AMV2+        51           216.7           0.0
RIA z → β FORM                 561          216.7           0.0
RIA z → β SORM                 560          216.7           0.0
PMA p, β → z x-space AMV+      76           216.7           2.1e-4
PMA p → z x-space AMV2+        58           216.8           0.0
PMA β → z x-space AMV2+        79           216.8           0.0
PMA p, β → z FORM              228          216.7           2.1e-4
PMA p → z SORM                 128          217.2           0.0
PMA β → z SORM                 171          216.8           0.0

optima. These approaches may be useful for preliminary design or for warm-starting other RBDO methods. Bi-level RBDO with AMV+ was shown to have accuracy and robustness equal to the bi-level FORM-based approaches while being significantly less expensive on average. In addition, the use of β in RIA RBDO constraints was preferred because such constraints are better behaved and better scaled than constraints on p. Warm starts in RBDO were most effective when the design changes were small, with the most benefit for basic bi-level RBDO (with numerical differencing at the design level), decreasing to marginal effectiveness for fully-analytic bi-level RBDO and to relative ineffectiveness for sequential RBDO. However, large design changes were desirable for overall RBDO efficiency and, compared to basic bi-level RBDO, fully-analytic RBDO and sequential RBDO were clearly superior.
In second-order bi-level and sequential RBDO, the AMV²+ approaches were consistently more efficient than the SORM-based approaches. In general, sequential RBDO approaches demonstrated consistent computational savings over the corresponding bi-level RBDO approaches, and sequential RBDO using AMV²+ was the most effective of all the approaches. With initial trust-region size tuning, sequential RBDO computational expense for these test problems was shown to be as low as approximately 40 function evaluations per limit state (35 for a single limit state in short column, 75 for two limit states in cantilever, and 45 for a single limit state in steel column). At this level of expense, probabilistic design studies can become tractable for expensive engineering applications.
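As a minimal illustration of the bi-level structure just described, the following Python sketch nests a FORM-style most probable point search (the inner reliability analysis) inside a design-level optimization (the outer loop). It is not the DAKOTA implementation: the limit state, the cost function, and all numbers are hypothetical, and the uncertain variables are assumed to be already transformed to standard normal space.

    import numpy as np
    from scipy.optimize import minimize, NonlinearConstraint

    # Hypothetical limit state in standard-normal space u (g <= 0 is failure);
    # d is a single design variable.
    def g(d, u):
        return d * (3.0 + u[0]) - (5.0 + 0.5 * u[1])

    def beta_form(d):
        # Inner loop: FORM reliability index, i.e. the distance from the
        # origin to the limit state surface g(d, u) = 0 (the MPP search).
        con = NonlinearConstraint(lambda u: g(d, u), 0.0, 0.0)
        res = minimize(lambda u: u @ u, x0=np.array([0.1, 0.1]),
                       constraints=[con])
        return np.sqrt(res.fun)

    # Outer loop: minimize a cost proxy subject to beta(d) >= 2.
    rel_con = NonlinearConstraint(lambda d: beta_form(d[0]), 2.0, np.inf)
    opt = minimize(lambda d: d[0], x0=np.array([2.0]), constraints=[rel_con],
                   bounds=[(0.5, 10.0)], method="SLSQP")
    print("optimal design:", opt.x, " beta:", beta_form(opt.x[0]))

Since the outer optimizer differentiates β(d) through the inner optimization by finite differences, this plain nesting is expensive and noisy; the analytic reliability gradients and sequential surrogate-based management discussed above address exactly this cost.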

5 Application to MEMS
In this section, we consider the application of DAKOTA’s reliability algorithms to
the design of micro-electro-mechanical systems (MEMS). In particular, we summarize
results for the MEMS application described in (Adams, Eldred, and Wittwer 2006).

(a) Scanning electron micrograph of a MEMS bistable mechanism in its second stable position; the attached vernier provides position measurements. (b) Schematic of the force–displacement curve (actuation force, between Fmin and Fmax, versus shuttle displacement) for the bistable MEMS mechanism; the arrows indicate stability of equilibria E1 and E3 and instability of E2.

Figure 15.2 Bi-stable MEMS mechanism.

These types of application studies provide essential feedback on the performance of algorithms for real-world design applications, which may contain computational challenges not well represented in analytically defined test problems. The reliability analysis and design results in (Adams, Eldred, and Wittwer 2006) are extended to include parameter-adaptive solution verification through the use of finite element a posteriori error estimation in (Adams, Bichon, Eldred, Carnes, Copps, Neckels, Hopkins, Notz, Subia, and Wittwer 2006; Eldred, Adams, Copps, Carnes, Notz, Hopkins, and Wittwer 2007).
Pre-fabrication design optimization of MEMS is an important emerging application of uncertainty quantification and reliability-based design optimization. Typically crafted of silicon, polymers, metals, or a combination
thereof, MEMS serve as micro-scale sensors, actuators, switches, and machines with
applications including robotics, biology and medicine, automobiles, RF electronics,
and optical displays (Allen 2005). Design optimization of these devices is crucial due
to high cost and long fabrication timelines. Uncertainty in the micromachining and
etching processes used to manufacture MEMS can lead to large uncertainty in the
behavior of the finished products, resulting in low part yield and poor durability.
RBDO, coupled with computational mechanics models of MEMS, offers a means to
quantify this uncertainty and determine a priori the most reliable and robust designs
that meet performance criteria.
Of particular interest is the design of MEMS bistable mechanisms which toggle
between two stable positions, making them useful as micro switches, relays, and
nonvolatile memory. We focus on shape optimization of compliant bistable mecha-
nisms, where instead of mechanical joints, material elasticity enables the bistability
of the mechanism (Kemeny, Howell, and Magleby 2002; Ananthasuresh, Kota, and
Gianchandani 1994; Jensen, Parkinson, Kurabayashi, Howell, and Baker 2001).
Figure 15.2(a) contains an electron micrograph of a MEMS compliant bistable mecha-
nism in its second stable position. The first stable position is the as-fabricated position.
One achieves transfer between stable states by applying force to the center shuttle via
a thermal actuator, electrostatic actuator, or other means to move the shuttle past an
unstable equilibrium.

(a) Schematic of a tapered-beam bistable mechanism in the as-fabricated position (not to scale), showing the anchors, tapered beam legs, shuttle, and applied actuation force. (b) Scale rendering of a tapered beam leg for the bistable mechanism.

Figure 15.3 Tapered beams for bistable MEMS mechanism.

Bistable switch actuation characteristics depend on the relationship between actuation force and shuttle displacement for the manufactured switch. Figure 15.2(b) contains a schematic of a typical force–displacement curve for a bistable mechanism. The switch characterized by this curve has three equilibria: E1 and E3 are stable equilibria whereas E2 is an unstable equilibrium (arrows indicate stability). A device with such a force–displacement curve could be used as a switch or actuator by setting the shuttle to position E3 as shown in Figure 15.2(a) (requiring large actuator force Fmax) and then actuating by applying the comparably small force Fmin in the opposite direction to transfer back through E2 toward the equilibrium E1. One could utilize this force profile to complete a circuit by placing a switch contact near the displaced position corresponding to maximum (closure) force as illustrated. Repeated actuation of the switch relies on being able to reset it with actuation force Fmax.
The device design considered in this chapter is similar to that in the electron
micrograph in Figure 15.2(a), for which design optimization has been previously con-
sidered (Jensen, Parkinson, Kurabayashi, Howell, and Baker 2001), as has robust
design under uncertainty with mean value methods (Wittwer, Baker, and Howell 2006).
The primary structural difference in the present design is the tapering of the legs, shown
schematically in Figure 15.3(a). Figure 15.3(b) shows a scale drawing of one tapered
beam leg (one quarter of the full switch system). A single leg of the device is approx-
imately 100 µm wide and 5–10 µm tall. This topology is a cross between the fully
compliant bistable mechanism reported in (Jensen, Parkinson, Kurabayashi, Howell,
and Baker 2001) and the thickness-modulated curved beam in (Qiu and Slocum 2004).
As described in the optimization problem below, this tapered geometry offers many
degrees of freedom for design.
The tapered beam legs of the bistable MEMS mechanism are parameterized by the 13
design variables shown in Figure 15.4, including widths and lengths of beam segments
as well as angles between segments. For simulation, a symmetry boundary condition
allowing only displacement in the negative y direction is applied to the right surface
(x = 0) and a fixed displacement condition is applied to the left surface. With appro-
priate scaling, this allows the quarter model to reasonably represent the full four-leg
switch system.

[Plot: beam leg profile in the x–y plane (dimensions in µm, x running from 100 to 0), labeled with the segment widths W0–W4, segment lengths L1–L4, and the inter-segment angles θ1–θ4.]

Figure 15.4 Design parameters for the tapered-beam fully-compliant bistable mechanism
(geometry not to scale). Displacement is applied in the negative y direction at the
right face (x = 0), while at the left face, a fixed displacement condition is enforced.

Table 15.6 Uncertain variables x = [ΔW, Sr] used in reliability analysis.

Variable                 Mean (µ)     Std. dev.    Distribution

ΔW (width bias)          −0.2 µm      0.08 µm      Normal
Sr (residual stress)     −11 MPa      4.13 MPa     Normal

Due to manufacturing processes, fabricated geometry can deviate significantly from design-specified beam geometry. As a consequence of photolithography and etching processes, fabricated in-plane geometry edges (contributing to widths and lengths) can be 0.1 ± 0.08 µm less than specified. This uncertainty in the manufactured geometry leads to substantial uncertainty in the positions of the stable equilibria and in the maximum and minimum force on the force–displacement curve. The manufactured thickness of the device is also uncertain, though this does not contribute as much to variability in the force–displacement behavior. Uncertain material properties such as Young's modulus and residual stress also influence the characteristics of the fabricated beam. For this application two key uncertain variables are considered: ΔW (edge bias on beam widths, which yields effective manufactured widths of Wi + ΔW, i = 0, . . . , 4) and Sr (residual stress in the manufactured device), with distributions shown in Table 15.6.
Given the 13 geometric design variables d = [L1, L2, L3, L4, θ1, θ2, θ3, θ4, W0, W1, W2, W3, W4] and the specified uncertain variables x = [ΔW, Sr], we formulate a

(a) Response PDF control of mean and right tail. (b) Response PDF control of both tails.

Figure 15.5 Schematic representation of design formulations for output response PDF control.

reliability-based design optimization problem to achieve a design that actuates reliably with at least 5 µN of force. The RBDO formulation uses the limit state

g(x) = Fmin (x) (59)

and failure is defined to be actuation force with magnitude less than 5.0 µN (Fmin > −5.0). A reliability index βccdf ≥ 2 is required. The RBDO problem utilizes the RIA z → β approach (Eq. 44) with z = −5.0:

    max  E[Fmin(d, x)]
    subject to  2 ≤ βccdf(d)
                50 ≤ E[Fmax(d, x)] ≤ 150                    (60)
                E[E2(d, x)] ≤ 8
                E[Smax(d, x)] ≤ 3000

although the PMA β → z approach (Eq. 45) could also be used. The use of the Fmin metric in both the objective function and the reliability constraint results in a powerful problem formulation: in addition to yielding a design with the specified reliability, it also produces a robust design. By forcing the expected value of Fmin toward the −5.0 target while requiring two input standard deviations of surety, the optimization problem favors designs with less variability in Fmin. This renders the design performance less sensitive to the uncertainties. The response PDF control is depicted in Figure 15.5(a), where the mean is maximized subject to a reliability constraint on the right tail. Alternatively, the response PDF control depicted in Figure 15.5(b) could be employed by maximizing the PMA z level corresponding to β = −2. This has the advantage of controlling both sides of the response PDF, but it is more computationally expensive since it requires the solution of two MPP optimization problems per design cycle instead of one. For this reason, the RIA RBDO formulation in Eq. 60 is used for all results in this section.
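To make the mean-value reasoning concrete, the following Python sketch computes a MVFOSM estimate of βccdf for the failure event Fmin > z with z = −5.0. The closed-form Fmin(d, x) below is a made-up stand-in for the finite element model, so the numbers are purely illustrative; only the means and standard deviations of ΔW and Sr are taken from Table 15.6.

    import numpy as np

    # Hypothetical closed-form stand-in for the simulator output Fmin(d, x);
    # in the application this quantity comes from a finite element model.
    def f_min(d, x):
        dW, Sr = x
        return -(4.0 + 2.0 * d + 8.0 * dW - 0.05 * Sr)  # micro-Newtons

    mu_x  = np.array([-0.2, -11.0])   # means of [dW (um), Sr (MPa)], Table 15.6
    sig_x = np.array([0.08, 4.13])    # standard deviations

    def mvfosm_beta(d, z=-5.0, h=1e-6):
        # MVFOSM: mu_g = g(mu_x); sigma_g from a finite-difference gradient
        # at the mean; beta_ccdf = (z - mu_g)/sigma_g for failure Fmin > z.
        mu_g = f_min(d, mu_x)
        grad = np.array([(f_min(d, mu_x + h * e) - mu_g) / h for e in np.eye(2)])
        sigma_g = np.sqrt(np.sum((grad * sig_x) ** 2))
        return (z - mu_g) / sigma_g

    for d in (1.0, 1.7, 2.5):
        print(f"d = {d}:  E[Fmin] = {f_min(d, mu_x):6.2f} uN, "
              f"beta_ccdf = {mvfosm_beta(d):5.2f}")

Because the estimate uses only a single evaluation and gradient at the mean, its accuracy degrades for nonlinear or non-normal problems, which is exactly the shortcoming the MPP-based results below expose.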
Results using the MVFOSM, AMV²+, and FORM methods are presented in Table 15.7 and the optimal force–displacement curves are shown in Figure 15.6. Optimization with MVFOSM reliability analysis offers substantial improvement over the initial design, yielding a design with actuation force Fmin nearer the −5.0 target and

Table 15.7 Reliability formulation RBDO: design variable bounds and optimal designs from MVFOSM, AMV²+, and FORM methods for the MEMS bistable mechanism.

l.b.   Variable/metric      u.b.    Initial    MVFOSM     AMV²+      FORM
                                               optimal    optimal    optimal

       E[Fmin] (µN)                 −26.29     −5.896     −6.188     −6.292
2      β                            5.376      2.000      1.998      1.999
50     E[Fmax] (µN)         150     68.69      50.01      57.67      57.33
       E[E2] (µm)           8       4.010      5.804      5.990      6.008
       E[Smax] (MPa)        1200    470        1563       1333       1329
       AMV²+ verified β             3.771      1.804      –          –
       FORM verified β              3.771      1.707      1.784      –

[Plot: force (µN) versus displacement (µm) for the MVFOSM and AMV²+ optimal designs; the right panel magnifies the region near the minimum (target) force.]

Figure 15.6 Optimal force–displacement curves resulting from RBDO of the MEMS bistable mechanism with mean value and AMV²+ methods. The right plot shows the area near the minimum force. Two input standard deviations (as measured by the method used during optimization) separate E[Fmin] from the target Fmin = −5.0.

a tight reliability constraint β = 2. However, since mean value analyses estimate reliability based solely on evaluations at the means of the uncertain variables, they can yield inaccurate reliability metrics in cases of nonlinearity or nonnormality. In this example, the actual reliability (verified with MPP-based methods) of the optimal MVFOSM-based design is only 1.804 (AMV²+) or 1.707 (FORM), both less than the prescribed reliability β ≥ 2. The additional computational expense incurred when using MPP-based reliability methods therefore appears to be justified.
Reliability-based design optimization with either the AMV²+ or FORM method for reliability analysis yields constraint-respecting optimal beam designs with significantly different geometries than MVFOSM. The MPP-based methods yield a more conservative value of Fmin due to the improved estimation of β. Each of the three methods

[Contour plot of Fmin(ΔW, Sr): width bias ΔW (µm) on the horizontal axis, residual stress Sr (MPa) on the vertical axis.]

Figure 15.7 Contour plot of Fmin(d, x) as a function of the uncertain variables x (design variables d fixed at the MVFOSM optimum). Dashed line: limit state g(x) = Fmin(x) = −5.0; plus sign: MPP from AMV²+; circle: MPP from FORM; triangle: contour corresponding to Fmin = −6.2 (optimal expected value from the MPP-based RBDO runs).

yields an improved design that respects the reliability constraint. The variability in Fmin has been reduced from approximately 5.6 (initial) to 0.52 (MVFOSM design), 0.67 (AMV²+ design), or 0.65 (FORM design) µN per (FORM-verified) input standard deviation, computed as (E[Fmin] − z)/β with target z = −5.0, resulting in designs that are less sensitive to input uncertainties. For the MVFOSM optimal design, the verified values of β calculated by AMV²+ and FORM differ by 6%, illustrating a typical challenge that engineering design problems pose to reliability analysis methods. Figure 15.7 displays the results of a parameter study for the metric Fmin(d, x) as a function of the uncertain variables x, for design variables fixed at the optimum from MVFOSM RBDO. Since the uncertain variables are both normal, the transformation to u-space used by AMV²+ and FORM is linear, so the contour plot is scaled to a ±3 standard deviation range in the native x-space. The relevant limit state for MPP searches, g(x) = Fmin(x) = −5.0, is indicated by the dashed line. For some design variable sets d (not depicted), the limit state is relatively well behaved in the range of interest and first-order probability integrations would be sufficiently accurate. For the design variable set used to generate Figure 15.7, the limit state has significant nonlinearity and thus demands more sophisticated probability integrations. The most probable points converged to by the AMV²+ and FORM methods are denoted in Figure 15.7 by the plus sign and circle, respectively. While the distance from each point

to the origin differs slightly (see the verified β values in Table 15.7), there clearly exist multiple candidates for the most probable point u satisfying Eq. 16. This appears to be a common occurrence when using RBDO methods: the optimizer tends to push the design into a corner where the mean response is encircled by the failure domain. Unfortunately, even second-order probability integration performs poorly in these situations, due to the exception handling required for negative principal curvatures in Eqs. 36–37. This motivates the future use of global reliability methods within RBDO to properly estimate the probabilities in these situations.
Another computational difficulty observed during design optimization of an earlier bistable mechanism design is simulation failure resulting from model evaluation at extreme values of physical and/or geometric parameters. For example, during an MPP search, the edge bias ΔW might grow in magnitude into its left tail, causing the effective width of the beam to shrink and possibly resulting in a structure too flimsy to simulate. In summary, highly nonlinear limit states, nonsmooth and multimodal limit states, and simulation failures caused by, e.g., evaluations in the tails of input distributions pose challenges for RBDO in engineering applications, and must be mitigated through the development of algorithms hardened against these challenges, careful attention to problem formulation, and ongoing simulation refinement.

6 Conclusions
This chapter has overviewed recent algorithm research in first- and second-order local reliability methods. A number of algorithmic variations have been presented, and the effects of different limit state approximations, probability integrations, warm starting, most probable point search algorithms, and Hessian approximations have been discussed. These local reliability analysis capabilities have provided the foundation for reliability-based design optimization (RBDO) methods, and bi-level and sequential formulations have been presented. The RBDO formulations employ analytic sensitivities of reliability metrics with respect to design variables that either augment or define distribution parameters for the uncertain variables.
An emerging algorithmic capability is global reliability analysis, which addresses the common limitations of local methods. In particular, nonsmooth limit states can cause convergence problems for gradient-based optimizers, and probability integrations for highly nonlinear or multimodal limit states cannot be performed accurately using a low-order polynomial representation from a single MPP solution. Efficient global reliability analysis (EGRA), on the other hand, can handle highly nonlinear and multimodal limit states and is insensitive to nonsmoothness since it does not require any derivative information from the response function.
Relative performance of these reliability analysis and design algorithms has been measured for a number of benchmark test problems using the DAKOTA software. The most effective local techniques in these computational experiments have been AMV²+ for reliability analysis and second-order sequential/surrogate-based approaches for RBDO. In a low-dimensional multimodal example problem, global reliability analysis has been shown to provide accuracy similar to that of exhaustive sampling at an expense comparable to local reliability analysis. Continuing efforts in algorithm research will build on these successful methods through investigation of trust-region model management for approximation-based local reliability analysis, sequential RBDO with mixed surrogate

and direct models (for probabilistic and deterministic components, respectively) and
RBDO formulations based on global reliability assessment.
These reliability analysis and design algorithms have been applied to real-world applications in the shape optimization of micro-electro-mechanical systems, and experiences with this deployment have been presented. Issues identified in deploying reliability methods to complex engineering applications include highly nonlinear, nonsmooth/noisy, and multimodal limit states, and potential simulation failures when evaluating parameter sets in the tails of input distributions. In addition, RBDO methods tend to exacerbate the reliability analysis challenges by pushing the design into a corner where the mean response is encircled by the failure domain. To mitigate these challenges, continuing development of new algorithms hardened for engineering design applications, careful attention to design under uncertainty problem formulations, and refinements to modeling and simulation capabilities are recommended.

Acknowledgments
The authors would like to express their thanks to the Sandia Computer Science
Research Institute (CSRI) for support of this collaborative work between Sandia
National Laboratories and Vanderbilt University.

References

Adams, B.M., Bichon, B.J., Eldred, M.S., Carnes, B., Copps, K.D., Neckels, D.C.,
Hopkins, M.M., Notz, P.K., Subia, S.R. & Wittwer, J.W. 2006. Solution-verified reliabil-
ity analysis and design of bistable mems using error estimation and adaptivity. Technical
Report SAND2006-6286, Sandia National Laboratories, 2006, October, Albuquerque, NM.
Adams, B.M., Eldred, M.S. & Wittwer J.W. 2006. Reliability-based design optimization for
shape design of compliant micro-electro-mechanical systems. In Proceedings of the 11th
AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Number AIAA-
2006-7000, 2006, September 6–8, Portsmouth, VA.
Agarwal, H., Renaud, J.E., Lee, J.C. & Watson, L.T. 2004. A unilevel method for reliability-
based design optimization. In Proceedings of the 45th AIAA/ASME/ASCE/AHS/ASC Struc-
tures, Structural Dynamics, and Materials Conference, Number AIAA-2004-2029, 2004,
April 19–22, Palm Springs, CA.
Allen, J.J. 2005. Micro Electro Mechanical System Design. Boca Raton: Taylor and Francis.
Allen, M. & Maute, K. 2004. Reliability-based design optimization of aeroelastic structures.
Struct. Multidiscip. Optim. 27:228–242.
Ananthasuresh, G.K., Kota, S. & Gianchandani, Y. 1994. A methodical approach to the design
of compliant micromechanisms. In Proc. IEEE Solid-State Sensor and Actuator Workshop,
Hilton Head Island, SC, pp. 189–192.
Bichon, B.J., Eldred, M.S., Swiler, L.P., Mahadevan, S. & McFarland, J.M. 2007. Multimodal
reliability assessment for complex engineering applications using efficient global optimization.
In Proceedings of the 9th AIAA Non-Deterministic Approaches Conference, Number AIAA-
2007-1946, 2007, April 23–26, Honolulu, HI.
Box, G.E.P. & Cox, D.R. 1964. An analysis of transformations. J. Royal Stat. Soc. 26:
211–252.
Breitung, K. 1984. Asymptotic approximation for multinormal integrals. J. Eng. Mech., ASCE
110(3):357–366.

Chen, X. & Lind, N.C. 1983. Fast probability integration by three-parameter normal tail
approximation. Struct. Saf. 1:269–276.
Der Kiureghian, A. & Liu, P.L. 1986. Structural reliability under incomplete probability
information. J. Eng. Mech. ASCE 112(1):85–104.
Du, X. & Chen, W. 2004. Sequential optimization and reliability assessment method for efficient
probabilistic design. J. Mech. Design 126:225–233.
Eldred, M.S. & Bichon, B.J. 2006. Second-order reliability formulations in DAKOTA/UQ.
In Proceedings of the 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics
and Materials Conference, Number AIAA-2006-1828, 2006, May 1–4, Newport, RI.
Eldred, M.S. & Dunlavy, D.M. 2006. Formulations for surrogate-based optimization with
data fit, multifidelity, and reduced-order models. In Proceedings of the 11th AIAA/ISSMO
Multidisciplinary Analysis and Optimization Conference, Number AIAA-2006-7117, 2006,
September 6–8, Portsmouth, VA.
Eldred, M.S., Adams, B.M., Copps, K.D., Carnes, B., Notz, P.K., Hopkins, M.M. &
Wittwer, J.W. 2007. Solution-verified reliability analysis and design of compliant micro-
electro-mechanical systems. In Proceedings of the 9th AIAA Non-Deterministic Approaches
Conference, Number AIAA-2007-1934, 2007, April 23–26, Honolulu, HI.
Eldred, M.S., Agarwal, H., Perez, V.M., Wojtkiewicz, S.F. Jr. & Renaud, J.E. 2007. Investigation
of reliability method formulations in DAKOTA/UQ. Structure & Infrastructure Engineering:
Maintenance, Management, Life-Cycle Design & Performance, Vol. 3, No. 3, Sept. 2007,
pp. 199–213.
Eldred, M.S., Brown, S.L., Adams, B.M., Dunlavy, D.M., Gay, D.M., Swiler, L.P., Giunta, A.A.,
Hart, W.E., Watson, J.-P., Eddy, J.P., Griffin, J.D., Hough, P.D., Kolda, T.G., Martinez-
Canales, M.L. & Williams, P.J. 2006. DAKOTA, a multilevel parallel object-oriented
framework for design optimization, parameter estimation, uncertainty quantification, and
sensitivity analysis: Version 4.0 users manual. Technical Report SAND2006-6337, Sandia
National Laboratories, Albuquerque, NM. See http://www.cs.sandia.gov/DAKOTA/software.
html. Accessed October, 2006.
Eldred, M.S., Giunta, A.A., Wojtkiewicz, S.F. Jr. & Trucano, T.G. 2002. Formulations for
surrogate-based optimization under uncertainty. In Proceedings of the 9th AIAA/ISSMO Sym-
posium on Multidisciplinary Analysis and Optimization, Number AIAA-2002-5585, 2002,
September 4–6, Atlanta, GA.
Fadel, G.M., Riley, M.F. & Barthelemy, J.-F.M. 1990. Two point exponential approximation
method for structural optimization. Structural Optimization 2(2):117–124.
Gill, P.E., Murray, W., Saunders, M.A. & Wright, M.H. 1998. User's guide for NPSOL 5.0: A Fortran package for nonlinear programming. Technical Report SOL 86-1, System Optimization Laboratory, Stanford University, Stanford, CA.
Giunta, A.A. & Eldred, M.S. 2000. Implementation of a trust region model management strategy
in the DAKOTA optimization toolkit. In Proceedings of the 8th AIAA/USAF/NASA/ISSMO
Symposium on Multidisciplinary Analysis and Optimization, Number AIAA-2000-4935,
2000, September 6–8, Long Beach, CA.
Haldar, A. & Mahadevan, S. 2000. Probability, Reliability, and Statistical Methods in
Engineering Design. New York: Wiley.
Hohenbichler, M. & Rackwitz, R. 1986. Sensitivity and importance measures in structural
reliability. Civil Eng. Syst. 3:203–209.
Hohenbichler, M. & Rackwitz, R. 1988. Improvement of second-order reliability estimates by
importance sampling. J. Eng. Mech., ASCE 114(12):2195–2199.
Hong, H.P. 1999. Simple approximations for improving second-order reliability estimates.
J. Eng. Mech. ASCE 125(5):592–595.
Jensen, B.D., Parkinson, M.B., Kurabayashi, K., Howell, L.L. & Baker, M.S. 2001. Design
optimization of a fully-compliant bistable micro-mechanism. In Proc. 2001 ASME Intl. Mech.
Eng. Congress and Exposition, 2001, November 11–16, New York, NY.

Jones, D.R., Schonlau, M. & Welch, W. 1998. Efficient global optimization of expensive black-box functions. INFORMS J. Comp. 12:272–283.
Karamchandani, A. & Cornell, C.A. 1992. Sensitivity estimation within first and second order
reliability methods. Struct. Saf. 11:95–107.
Kemeny, D.C., Howell, L.L. & Magleby, S.P. 2002. Using compliant mechanisms to improve
manufacturability in MEMS. In Proc. 2002 ASME DETC, Number DETC2002/DFM-34178.
Meza, J.C. 1994. OPT++: An object-oriented class library for nonlinear optimization. Technical
Report SAND94-8225, Sandia National Laboratories, 1994, March, Albuquerque, NM.
Qiu, J. & Slocum, A.H. 2004. A curved-beam bistable mechanism. J. Microelectromech. Syst.
13(2):137–146.
Rackwitz, R. 2002. Optimization and risk acceptability based on the Life Quality Index. Struct.
Saf. 24:297–331.
Rackwitz, R. & Fiessler, B. 1978. Structural reliability under combined random load sequences.
Comput. Struct. 9:489–494.
Rosenblatt, M. 1952. Remarks on a multivariate transformation. Ann. Math. Stat. 23(3):
470–472.
Tu, J., Choi, K.K. & Park, Y.H. 1999. A new study on reliability-based design optimization.
J. Mech. Design 121:557–564.
Wang, L. & Grandhi, R.V. 1994. Efficient safety index calculation for structural reliability
analysis. Comput. Struct. 52(1):103–111.
Wittwer, J.W., Baker, M.S. & Howell, L.L. 2006. Robust design and model validation of
nonlinear compliant micromechanisms. J. Microelectromechanical Sys. 15(1). To appear.
Wojtkiewicz, S.F. Jr., Eldred, M.S., Field, R.V. Jr., Urbina, A. & Red-Horse, J.R. 2001. A toolkit for uncertainty quantification in large computational engineering models. In Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Number AIAA-2001-1455, 2001, April 16–19, Seattle, WA.
Wu, Y.-T. 1994. Computational methods for efficient structural reliability and reliability
sensitivity analysis. AIAA J. 32(8):1717–1723.
Wu, Y.-T. & Wirsching, P.H. 1987. A new algorithm for structural reliability estimation. J. Eng.
Mech. ASCE 113:1319–1336.
Wu, Y.-T., Millwater, H.R., & Cruse, T.A. 1990. Advanced probabilistic structural analysis
method for implicit performance functions. AIAA J. 28(9):1663–1669.
Wu, Y.-T., Shin, Y., Sues, R. & Cesare, M. 2001. Safety-factor based approach for probability-based design optimization. In Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Number AIAA-2001-1522, 2001, April 16–19, Seattle, WA.
Xu, S. & Grandhi, R.V. 1998. Effective two-point function approximation for design
optimization. AIAA J. 36(12):2269–2275.
Zou, T., Mahadevan, S. & Rebba, R. 2004. Computational efficiency in reliability-based opti-
mization. In Proceedings of the 9th ASCE Specialty Conference on Probabilistic Mechanics
and Structural Reliability, 2004, July 26–28, Albuquerque, NM.
Part 2

Robust Design Optimization (RDO)


Chapter 16

Structural robustness and its relationship to reliability

Jorge E. Hurtado
National University of Colombia, Manizales, Colombia

ABSTRACT: Two main ways of incorporating structural uncertainties in design optimization under a probabilistic point of view have been proposed in the international literature: robust and reliability-based design options. While the former is oriented to a reduction of the spread of critical responses, the latter aims to control the probabilities of failure. However, since the reduction of response spread does not preclude a regard to extreme cases, both methods can be considered complementary. This makes it desirable to have at one's disposal methods yielding a design satisfying both reliability and robustness criteria. In this chapter, methods allowing a simultaneous calculation of the leading probabilistic quantities used in these approaches are examined. It is shown that the combination of the saddlepoint expansion of the probability density together with the method of point estimates for approximating the response statistical moments yields good results. The chapter also deals with a rigorous definition of robustness, which is somewhat loose in the literature. It is shown that the entropy concept is highly appealing for such a purpose, as it shows similarities with the concepts of controllability and stability in dynamic systems theory. On this basis the concept of Robustness Assurance in structural design is introduced, paralleling that of Quality Assurance in the construction phase. A practical method for robust optimal design interpreted as entropy minimization is presented for the common case of linear structures.

1 Introduction

1.1 Theoretical and practical approaches to uncertainty


In an essay that deserves attention (Sexsmith 1999), R. G. Sexsmith remarks that, despite the rapid development of structural reliability theories and methods, their adoption in design practice by structural engineers has been limited. This rejection is attributed by the author mainly to educational problems. In fact, modern natural science, which arose from the mechanical sciences developed in the sixteenth century on the basis of a mathematical interpretation of nature, showed in its beginnings a tendency to interpret the random results of experiments as a deficiency of the mathematical models rather than as a property of nature itself. In another interesting essay, G. F. Klir quotes the explicit objections to uncertainty common in nineteenth-century scientific jargon (Klir 1997). In those times, uncertainty was rejected as a natural phenomenon because the illusion of a science providing exact answers was still alive and enthusiastically held. However, the introduction of mathematical models for probability and randomness became absolutely

necessary to explain phenomena in thermodynamics and quantum mechanics. From that time on, the old paradigm of an exact science was abandoned in those areas where the evidence and the magnitude of randomness could no longer be ignored. Nowadays, as a consequence of the need to consider complex systems, we witness the development of proposals intended to enhance or overcome the modeling of randomness and uncertainty offered by probability theory, such as possibility theory (Dubois and Prade 1988), fuzzy set theory (Kosko 1992, e.g.), interval analysis (Kharitonov 1997; Hansen and Walster 2004), clustering analysis (Ben-Haim 1985; Ben-Haim 1996; Elishakoff 1999), ellipsoidal modeling (Chernousko 1999), etc.
Structural and mechanical engineering continue the tradition of classical mechanics as developed by Galileo and Newton. This fact may be invoked to explain the above-mentioned reluctance to include uncertainty models in structural design, in contrast to the well-established central role of randomness in quantum mechanics and other, later branches of physics. In the author's opinion, however, the explanation cannot entirely be attributed to this historical reason. In addition, there is the smaller randomness present in most structural and mechanical situations (with the exception of earthquake loads and some others), as compared to that present in quantum mechanics. But more important is the nature of the challenge posed to the structural engineer, namely, the design of an object. In Kant's terms, modern science is oriented by the two-way approach of analysis (a priori mathematical principles, which are exact) and synthesis (a posteriori empirical facts, to which mathematical models must accommodate), and it offers knowledge that remains valid for some time until it is falsified, according to Popper's theory of science. Engineering, on the contrary, aims to offer not knowledge but a product, which is defended not by arguments but by its quality, whose cost must be a minimum and whose design must be produced in most cases with resort to simplifying rules. This implies taking decisions, which is a challenge not faced in knowledge discovery. All this may perhaps explain the somewhat paradoxical fact that structural engineers, on the one hand, do not include probabilities in their calculations but, on the other, have long recognized the importance of uncertainties in design practice, as expressed in the use of safety factors of several kinds and of statistical analysis of experiments for fixing their code values. From the practical design viewpoint it is not realistic to expect that failure probabilities will ever be calculated as part of the conventional design process of most structures. Besides the educational and computational problems involved, there is the lack of sufficient probabilistic information on load and material parameters, the difficulty of interpreting such probabilities, the high sensitivity of these values to the probabilistic models, the randomness of the results conveyed by the most universal method (Monte Carlo simulation), etc. But the main reason is and will be the pragmatism of design.
Thus, it can be said that randomness is in fact considered in structural design, but in a manner that is quite unsatisfactory from the analytical, argumentative, mathematically oriented point of view. In fact, the close examination of the relationship between safety factors and failure probabilities conducted recently by I. Elishakoff (Elishakoff 2005) shows that a link between them depends strongly on the probabilistic structure of the random variables x in hand¹. But this is just the information
¹ In this chapter an underlined letter indicates a random variable (or a random vector, if written boldface). Non-underlined letters are used to denote either their realizations, their deterministic counterparts or deterministic variables in general.

that safety factors are intended to dispense with. The contradiction lies in the fact that reliability requires a scientifically oriented calculation, whereas safety factors are a mere practical tool for producing a qualified product.
The requirement of practical, sometimes simplifying approaches can be considered the implicit but dominating rule in engineering design. This has fostered the development of the concept of robustness, meaning a product that exhibits strength with respect to variations or fluctuations of parameters that are sometimes random, sometimes uncontrollable and sometimes unknown. A robust product assures the engineer that it can absorb such fluctuations without compromising its quality, which is its main feature. Such, in fact, is the rationale behind the concept of BIBO (bounded input, bounded output) stability of dynamic systems (Szidarovszky and Bahill 1992, e.g.). In that field, the certainty that under the action of a bounded input the system response will also be bounded is considered sufficient to relieve the engineer from the need of tracing the exact trajectory of the response in particular situations. In general, the robust design orientation aims at overcoming the need of considering particular uncertain situations and at assuring the designer of the imperturbability of the system under the presence of unknown, unpredictable or random parameters. Notice, however, that the question of a quantitative measure of the uncertainty in extreme situations is not solved by the robustness approach.

1.2 Optimal structural design under uncertainty


Arising from the deterministic orientation of modern engineering mentioned above, structural optimization is normally performed without regard to random fluctuations of the parameters. It consists in minimizing a cost function C(y) subject to deterministic constraints imposed on responses (displacements, stresses, etc.) and geometrical quantities. Formally, this problem is expressed as (Haftka et al. 1990; Kirsch 1993, e.g.)

Problem Deterministic optimization:

    find: y
    minimizing: C(y)                                        (1)
    subject to: fi(y) < Fi, i = 1, 2, . . .
                y− ≤ y ≤ y+

In this equation y is the vector of design variables, fi(y) are system responses depending on them, and Fi are their limiting values. These constitute the so-called behavioral constraints. On the other hand, y− and y+ are bounds imposed on the design variables, normally constituting geometric constraints. It is evident that the uncertainties present in loads, materials and elements are not explicitly taken into account. Thus the result may be a fragile structure with respect to random changes in the design parameters. Notice, however, that in the definition of the upper bounds in behavioral and geometric constraints there is an implicit recognition of the risk associated with excessively low values. The decision on these bounds is normally taken on the basis of safety factors, which simply express a caution with respect to randomness and uncertainty. Anyhow, the robustness and the reliability of a structure designed using safety factors without regard to the probabilistic definition of the random variables present in it remain rather uncertain.
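A toy instance of problem (1) can be written in a few lines with scipy's SLSQP solver; the cost and response functions below are hypothetical and serve only to show the structure of the formulation.

    import numpy as np
    from scipy.optimize import minimize

    # Toy instance of problem (1): two design variables (e.g. member sizes),
    # a linear cost, one behavioral constraint f(y) < F, and box bounds.
    cost = lambda y: 2.0 * y[0] + y[1]
    f    = lambda y: 10.0 / y[0] + 4.0 / y[1]   # hypothetical response, e.g. a deflection
    F    = 6.0

    res = minimize(cost, x0=[2.0, 2.0],
                   constraints=[{"type": "ineq", "fun": lambda y: F - f(y)}],
                   bounds=[(0.5, 10.0), (0.5, 10.0)], method="SLSQP")
    print(res.x, cost(res.x), f(res.x))

At the optimum the behavioral constraint is active (f(y) = F), which is precisely where the lack of any margin for randomness becomes apparent.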

The explicit consideration of uncertainties in structural design optimization is a challenging task, as it demands the minimization of cost functions in a noisy environment generated by the presence of certain random variables. However, it can be considered an analysis of maximum importance, because it yields the solution to the ideal of producing a structural model that is both economical and safe.
Two main families of methods have been proposed to this end: 1) Robust Design Optimization (RDO), which is oriented to minimizing the spread of the structural responses, as measured by low-order statistical moments; 2) Reliability-Based Design Optimization (RBDO), which minimizes the cost function under probabilistic constraints (Rosenblueth and Mendoza 1971; Gasser and Schuëller 1997; Frangopol 1995; Royset et al. 2001; Royset and Polak 2004).
A common formulation of RBDO can be formally presented as follows:

Problem Reliability-based optimization:

    find: y
    minimizing: C(y)                                        (2)
    subject to: P[fi(x, y) > Fi] ≤ Pi, i = 1, 2, . . .
                y− ≤ y ≤ y+

where x is a set of random variables, P[A] is the probability of the random event A and Pi its limiting value. The function gi(x, y) = Fi − fi(x, y) is known in structural reliability as the limit state function. Other formulations than Eq. (2) are, however, possible.
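The probabilistic constraint of Eq. (2) can be estimated, in the simplest case, by direct Monte Carlo simulation. The following sketch illustrates this for a hypothetical load/capacity response; the distributions and all numbers are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical response f(x, y): a load divided by a design capacity y,
    # plus a small material deviation term.
    def f(x, y):
        return x[:, 0] / y + 0.1 * x[:, 1]

    def pof(y, F=1.0, n=200_000):
        # Crude Monte Carlo estimate of the constraint P[f(x, y) > F] in Eq. (2).
        x = np.column_stack([rng.lognormal(0.0, 0.25, n),   # e.g. a load
                             rng.normal(0.0, 1.0, n)])      # e.g. a material deviation
        return np.mean(f(x, y) > F)

    for y in (1.5, 2.0, 2.5):
        print(f"y = {y}:  P_f ~ {pof(y):.4f}")

The run shows the failure probability dropping rapidly as the design variable grows, and also why the approach is costly inside an optimization loop: each trial design requires a fresh batch of simulations.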
The following are some criticisms that have been addressed to the application of structural reliability for performing structural optimization under uncertainty:

• The lack of information about actual probability models for materials and loads, and the concern about the applicability of the published ones in every case.
• The sensitivity of the tails of the input probability density models to their parameters (Elishakoff 1991; Ben-Haim 1996; Elishakoff 1999). This undoubtedly affects the value of the failure probability.
• The difficulty of interpretation among the engineering community of the meaning of the failure probability. Although structural safety researchers stress that it is a nominal failure indicator, there is a natural tendency to interpret it in the frequentist sense. On the other hand, the Bayesian interpretation as a belief measure has gained favor, but in the limited field of health monitoring and other tasks associated with existing structures; it is difficult to accommodate such an interpretation for new projects. These and other phenomena explain the little use of structural reliability concepts in design practice (Sexsmith 1999).
• The limitations of some popular methods for calculating such probabilities. For instance, FORM presents accuracy problems for nonlinear limit state functions and convergence problems (Schuëller and Stix 1987); the Response Surface Method exhibits instability problems (Guan and Melchers 2001); Monte Carlo simulation requires high computational effort, etc. There is a continuous effort among researchers to improve these techniques and to develop new ones (Au and Beck 2001, e.g.), but there is no agreement about a method that satisfies at the same time the requirements of generality, accuracy and low computational cost. An updated, general benchmark study is presently lacking.

• The need of calculating one or more failure probabilities, which in some cases is a time-consuming task, for each trial design, increasing enormously the computational effort with respect to conventional, deterministic optimization and to reliability analysis as well. If Monte Carlo simulation is used, this problem can be greatly alleviated by making use of convenient-to-apply solver-surrogate techniques such as neural networks (Papadrakakis et al. 1996; Hurtado and Alvarez 2001; Hurtado 2001) or support vector machines (Hurtado 2004a; Hurtado 2004b; Hurtado 2007). For performing the optimization, these methods can be combined with biologically inspired optimization techniques such as genetic algorithms, evolutionary strategies (Papadrakakis et al. 1998; Lagaros et al. 2002; Lagaros and Papadrakakis 2003), particle swarm optimization (Hurtado 2006), etc.

Some structural designers tend to favor the concept of robustness, understood as safety against unpredictable variations of the design parameters, over the concept of failure probability, which is normally a very low value lacking significant meaning in practice. This may be explained by the production-oriented approach of design discussed above. Robustness can be defined in several forms, depending on whether use is made of the clustering (Ben-Haim 1985; Ben-Haim 1996) or of the conventional, frequentist interpretation of uncertainty. In this chapter the second interpretation is adopted. The following formulation of robust design optimization corresponds to the proposal in (Doltsinis and Kang 2004; Doltsinis et al. 2005):

Problem Robust optimization:

    find: y
    minimizing: C(y) = (1 − α)E[f(y)]/µ∗ + α√Var[f(y)]/σ∗   (3)
    subject to: E[gi(y)] + βi√Var[gi(y)] ≤ 0, i = 1, 2, . . .
                √Var[hj(y)] ≤ σj+, j = 1, 2, . . .
                y− ≤ y ≤ y+

where f(y) is a performance function, 0 < α < 1 is a factor weighting the minimization of its mean against that of its standard deviation, βi > 0 is a factor defining the control of the response gi(y) in the tail of its distribution, σj+ is an upper bound on the standard deviation of the response hj(y), and µ∗, σ∗ are normalizing factors. Many other formulations are, however, possible. Globally speaking, the essence of robustness optimization is the control of low-order statistical moments of the response.
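The following sketch evaluates the weighted mean/standard-deviation objective of Eq. (3) by Monte Carlo for a hypothetical performance function; the weighting α = 0.5 and the unit normalizing factors are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def response(y, x):
        # hypothetical performance function mixing design y and random x
        return (x[:, 0] * 5.0) / y + x[:, 1] * (y - 2.0) ** 2

    def rdo_objective(y, alpha=0.5, mu_star=1.0, sigma_star=1.0, n=100_000):
        # Weighted mean/std objective of Eq. (3), estimated by Monte Carlo.
        x = rng.normal([1.0, 0.0], [0.1, 0.2], size=(n, 2))
        fy = response(y, x)
        return ((1 - alpha) * fy.mean() / mu_star
                + alpha * fy.std(ddof=1) / sigma_star)

    for y in (1.5, 2.0, 3.0):
        print(f"y = {y}:  C(y) ~ {rdo_objective(y):.3f}")

The printed values exhibit the mean/spread trade-off that the factor α governs: designs near y = 2 suppress the second random term entirely, reducing the standard deviation at the cost of a larger mean.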
The nature of these two alternative methods can be explained with the help of Fig. 16.1, which shows three alternative probability density functions of a structural response. While RDO aims to reduce the spread, RBDO is intended to bound the probability of surpassing a critical threshold. Notice that in applying RDO the effect pursued by RBDO is indirectly obtained, because the reduction of the spread implies a reduction of the failure probability. The reliability (or its complement, the failure probability) refers to the occurrence of extreme events, whereas robustness refers to the low spread of the structural responses under large variations of the input parameters.

[Figure: three alternative probability density functions pz(z) of a structural response.]

Figure 16.1 Robust and reliability-based design options. While the first aims at reducing the spread of the response function, the second attempts to control the probability of surpassing a critical threshold (dashed line). However, low failure probabilities may correspond to large spreads (dotted line).

This is assumed to assure a narrow response density function, which in turn assures a low failure probability if the density is unimodal, as is commonly the case. However, this is not necessarily true: a significant spread of the structural response may correspond to a low failure probability, because the limit state may be defined such that the possibility of surpassing it is very rare, the situation it describes being rather extreme (see Fig. 16.1). In applying RDO other possibilities exist, such as moving the probability density function away from a critical threshold, or a combination of both approaches.

1.3 Aims and scope
The above discussion means that a comparison between RDO and RBDO on the basis of some examples cannot be conclusive because, as Fig. 16.1 shows, it all depends on the critical thresholds selected for the reliability estimations. Besides, the relationship between statistical moments and probabilities is severely nonlinear. For these reasons, for a good consideration of the uncertainties in structural design both approaches are valuable and complementary. This justifies the search for techniques that allow establishing a link between them, which is the purpose of the research reported herein. In fact, since the two kinds of design correspond to different ways of incorporating the uncertainties and to different goals, a method allowing a joint monitoring of both the moments (mean and variance), on the one hand, and the failure probabilities, on the other, at a low computational cost, would be of avail.
A simple link between RDO and RBDO is given by inequalities involving low-order moments and the probability of exceeding a certain threshold. Two of them are the following (Abramowitz and Stegun 1972):

• Bienaymé–Markov inequality (for a nonnegative random variable x):

    P[x > ω] ≤ E(x)/ω,  ω > 0                               (4)

• Chebyshev inequality:

    P[|x − E(x)| ≥ tσx] ≤ 1/t²,  t > 0                      (5)
where σx is the standard deviation of the random variable x. These bounds, however, are reputed not to be tight when the probabilities are very low, as is commonly the case in structural safety.
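A quick numerical comparison illustrates how loose these bounds are in the tails. The sketch below evaluates Eqs. (4) and (5) against the exact upper tail of a lognormal variable (an arbitrary nonnegative example):

    import numpy as np
    from scipy.stats import lognorm

    # Compare the Bienayme-Markov and Chebyshev bounds (Eqs. 4 and 5) with
    # the exact upper tail of a lognormal variable.
    s = 0.5                                   # shape parameter of the lognormal
    ex = np.exp(s**2 / 2)                     # E[x]
    sx = ex * np.sqrt(np.exp(s**2) - 1.0)     # standard deviation of x

    for omega in (2.0, 3.0, 5.0):
        exact  = lognorm.sf(omega, s)         # P[x > omega]
        markov = ex / omega                   # Eq. (4)
        t      = (omega - ex) / sx
        cheby  = 1.0 / t**2                   # Eq. (5), two-sided
        print(f"omega = {omega}: exact {exact:.2e}, "
              f"Markov {markov:.3f}, Chebyshev {cheby:.3f}")

For omega = 5 the exact exceedance probability is on the order of 1e-4, while the Markov and Chebyshev bounds are roughly 0.23 and 0.02, respectively: orders of magnitude too coarse for structural safety levels.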
Bounds such as those expressed by the above inequalities are employed when the probabilistic information is not sufficient to calculate exceedance probabilities. In probability theory the concept of entropy associates information and uncertainty in a clear, positive manner. For this reason, a second aim of the chapter is to discuss the concept of robustness from this point of view. From the production-oriented, bound-assuring approach of engineering design discussed above, robustness with respect to uncontrollable external actions can be controlled in a similar fashion as the randomness of the material properties can be subjected to quality control. For this reason the concept of Robustness Assurance is introduced, referring to the control of the response spread under the influence of the uncertainty of external actions, in a similar manner as Quality Assurance in the construction industry subjects to control the randomness of material properties and structural member dimensions. It is shown that robustness assurance defined in this manner can easily be incorporated into conventional deterministic optimization.
The chapter is organized as follows. First, the methods for estimating the failure probability upon this information are discussed; they can be grouped into (a) global expansions and (b) local expansions. It is shown that the latter offers significant advantages for the purpose in hand. However, one of the global expansion techniques, namely the maximum entropy method, is useful for linking robustness and reliability, and therefore it is discussed in some detail. Next, the methods allowing the estimation of high-order moments of the response are briefly presented, with an emphasis on the point estimates technique. The application of this method to robust design is then discussed. An example illustrates the accuracy and the low computational cost of the joint computation of moment and probability estimates by the proposed procedures. Then the definition of robustness in terms of entropy and the ensuing derivations for the optimization of linear structures are introduced. It is shown that Robustness Assurance can easily be incorporated into conventional deterministic optimization. The practical application of this concept is developed and illustrated for the case of linear structures. The chapter ends with some conclusions.
Since the information on the concepts and methods involved in the exposition is dispersed in journals and books published along five decades, the chapter is as self-contained as possible.

2 Probability estimation based on moments

In this section the estimation of the probability density function based on the information provided by statistical moments is reviewed, as it offers a general link between robust and reliability-based design methods. This problem can be approached by means of the Pearson, Johnson or other families of distributions (Johnson et al. 1994). However,

the discussion herein is limited to polynomial and maximum entropy families of methods. A digression into the latter is instructive, as it sheds light on the nature of robustness discussed at the end of the chapter.
The above-mentioned proposals for density estimation are global, i.e. they are valid for all values of the random variable. For estimating the reliability it is more interesting to expand the density about a critical threshold. This is the purpose of the saddlepoint expansion, explained in the last paragraph of the present section.

2.1 Polynomial expansions


Classical probability theory provides two expansions of the probability density function based on moments: the Gram–Charlier and the Edgeworth expansions, given respectively by (Muscolino 1993, e.g.)

    pz(z) = [1 + (κ3/(3! σz³)) H3(z) + (κ4/(4! σz⁴)) H4(z) + (10 κ3²/(6! σz⁶)) H6(z)
            + (35 κ3 κ4/(7! σz⁷)) H7(z) + (280 κ3³/(9! σz⁹)) H9(z)] φ(z)            (6)

and

    pz(z) = [1 + (κ3/(3! σz³)) H3(z) + (κ4/(4! σz⁴)) H4(z) + (10 κ3²/(6! σz⁶)) H6(z)
            + (κ5/(5! σz⁵)) H5(z) + (35 κ3 κ4/(7! σz⁷)) H7(z) + (280 κ3³/(9! σz⁹)) H9(z)] φ(z)   (7)

where φ(z) is the standard Gaussian density

    φ(z) = (1/√(2π)) exp(−z²/2)                             (8)

κi and Hi(z) are respectively the cumulant and the Hermite polynomial of order i. As is well known, the following relationships hold between the first cumulants and the moments µj = E[z^j] (Kolassa 1997):

    κ1 = µ1
    κ2 = µ2 − µ1²
    κ3 = µ3 − 3µ1µ2 + 2µ1³
    κ4 = µ4 − 4µ1µ3 − 3µ2² + 12µ2µ1² − 6µ1⁴                 (9)

The coefficients are those of the Pascal triangle. The first Hermite polynomials are given by (Abramowitz and Stegun 1972, e.g.)

    H1(z) = z
    H2(z) = z² − 1
    H3(z) = z³ − 3z
    H4(z) = z⁴ − 6z² + 3
    H5(z) = z⁵ − 10z³ + 15z
    H6(z) = z⁶ − 15z⁴ + 45z² − 15                           (10)

Despite the similarities between the Gram–Charlier and Edgeworth expansions, it is worth noticing that they emerge from rather different approaches: the first from an orthogonal expansion of the probability density function, and the second from the Fourier transform of a non-Normal characteristic function. In practice the use of the Edgeworth expansion is more often recommended. Notice, however, that both use polynomials, whose behavior becomes more oscillatory as their order increases. Hermite polynomials, in particular, are negative over intervals that become larger as their order grows (Abramowitz and Stegun 1972, e.g.). As a consequence, the probability density estimate may not be strictly positive in certain intervals and, in compensation, may exhibit undesirable multimodality in other ones.
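The following sketch evaluates the Gram–Charlier density of Eq. (6) from prescribed cumulants, using the probabilists' Hermite polynomials of Eq. (10); the cumulant values are arbitrary, and the run exhibits the negativity problem just mentioned.

    import numpy as np
    from numpy.polynomial.hermite_e import hermeval

    # Gram-Charlier density of Eq. (6), built as a Hermite series
    # sum_i c_i He_i(z) times the standard Gaussian density.
    def gram_charlier(z, sigma, k3, k4):
        phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
        c = np.zeros(10)
        c[0] = 1.0
        c[3] = k3 / (6 * sigma**3)                  # kappa_3 / 3!
        c[4] = k4 / (24 * sigma**4)                 # kappa_4 / 4!
        c[6] = 10 * k3**2 / (720 * sigma**6)        # 10 kappa_3^2 / 6!
        c[7] = 35 * k3 * k4 / (5040 * sigma**7)     # 35 kappa_3 kappa_4 / 7!
        c[9] = 280 * k3**3 / (362880 * sigma**9)    # 280 kappa_3^3 / 9!
        return hermeval(z, c) * phi

    z = np.linspace(-4, 4, 9)
    p = gram_charlier(z, sigma=1.0, k3=0.5, k4=0.3)
    print(np.round(p, 4))    # note: values can dip below zero in a tail

For these cumulants the estimate is slightly negative near z = −3, which is precisely the deficiency that the exponential-form methods discussed next are designed to avoid.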
A discussion on the use of polynomial approximations to the density function based on moments can be found in (Kennedy and Lennox 2000; Kennedy and Lennox 2001), where the authors propose a method based on non-classical orthogonal polynomials. As far as the examples presented in the cited works may be conclusive, the method seems to overcome the deficiencies of the classical approaches. It consists in an approximation of the form

    pz(z) = w(z) Σ_{i=0}^{r} ai Qi(z)                       (11)

where w(z) is a weighting function selected upon judgement of the moments in hand, Qi(z) are orthogonal polynomials and ai are coefficients to be determined. Notice, however, that the possibility of having a negative value of the density remains. This problem is corrected in Er's method (Er 1998) with an approximation of the form

    pz(z) = C exp(Q(z))                                     (12)

due to the strict nonnegativity of the exponential function. Here C is a normalizing constant and

    Q(z) = Σ_{i=1}^{r} ai z^i                               (13)

The coefficients are obtained by solving the following algebraic problem:

    ⎡ 1       2µ1    · · ·   rµr−1      ⎤ ⎡ a1 ⎤   ⎡ 0            ⎤
    ⎢ µ1      2µ2    · · ·   rµr        ⎥ ⎢ a2 ⎥   ⎢ −1           ⎥
    ⎢ ..      ..     . .     ..         ⎥ ⎢ .. ⎥ = ⎢ ..           ⎥          (14)
    ⎣ µr−1    2µr    · · ·   rµ2(r−1)   ⎦ ⎣ ar ⎦   ⎣ −(r − 1)µr−2 ⎦

Notice that for calculating r coefficients the number of moments needed equals
2(r − 1).
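Assembling and solving the system (14) is straightforward; the sketch below does so for moments supplied as an array and checks the result against the standard normal, for which Q(z) = −z²/2 must be recovered.

    import numpy as np

    def er_coefficients(mu):
        # Solve the linear system (14) for the coefficients a_1..a_r of
        # Q(z) in Eq. (13); mu[m] must hold the ordinary moment mu_m,
        # with mu[0] = 1, up to order 2(r - 1).
        r = (len(mu) - 1) // 2 + 1
        A = np.array([[(k + 1) * mu[j + k] for k in range(r)]
                      for j in range(r)])
        b = np.array([-j * (mu[j - 1] if j > 0 else 0.0) for j in range(r)])
        return np.linalg.solve(A, b)

    # Check against the standard normal: mu1 = 0, mu2 = 1 (so r = 2)
    # should recover Q(z) = -z^2/2, i.e. coefficients [0, -0.5].
    mu = [1.0, 0.0, 1.0]            # mu_0, mu_1, mu_2
    print(er_coefficients(mu))      # -> [ 0.  -0.5]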

2.2 Maximum entropy method
In probability theory, entropy is a simultaneous measure of the information and the uncertainty conveyed by the available samples (Shannon 1948; Jaynes 1957). In fact, a deterministic event offers no information at all, while a purely random event (having uniform distribution) offers the maximum. Therefore, entropy establishes a connection between information and uncertainty.
The random samples of an event A can be expressed by means of many possible partitions U, i.e. collections of mutually exclusive subsets Ai, i = 1, 2, . . . of A in which the random occurrences are allocated. The entropy of the partition is defined by Shannon (Shannon 1948) as

    H(U) = − Σ_i pi ln pi                                   (15)

where pi is the probability associated with subset Ai. Empirically, if there are N samples of the event and Ni of them are located in subset Ai, then pi ≈ Ni/N.
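As a small illustration of Eq. (15), the following sketch estimates the entropy of a partition of a Gaussian sample into 20 bins (sample size, seed and binning are arbitrary choices):

    import numpy as np

    # Empirical Shannon entropy of a partition, Eq. (15), with p_i ~ N_i/N.
    rng = np.random.default_rng(2)
    samples = rng.normal(size=10_000)
    counts, _ = np.histogram(samples, bins=20)
    p = counts[counts > 0] / counts.sum()
    print("H(U) =", -(p * np.log(p)).sum())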
A continuous expression of a partition is a probability density function, in terms of which the entropy is defined as

    Hx = − ∫ px(x) ln px(x) dx                              (16)

There is an important difference between the entropy definitions for the discrete and continuous cases: it is an absolute measure of uncertainty in the former, while a relative one in the latter, as it changes with the coordinate system (Shannon 1948) (see Eq. (76)). This remark is important for the development of the robustness assurance method proposed in the final section of the present chapter.
The principle of maximum entropy states that the most unbiased estimate of the probability density function of a random event is the one maximizing Eq. (16). The principle determines a method for estimating the density function upon the availability of knowledge about the random event, such as e.g. statistical moments. If, for instance, such knowledge consists of the ordinary moments µk, k = 1, 2, . . . , the
Structural robustness and its relationship to reliability 445

method of maximum entropy (MEM) consists in solving the following optimization


problem:

Problem Maximum entropy :

find : px (x)

(17)
maximizing : H = − px (x) ln px (x)dx

subject to : gk (x)px (x)dx = θk , k = 1, 2, . . .

where gk (x), θk are known functions and their expected values, respectively. When
these correspond to ordinary moments, i.e. gk (x) = xk , θk = µk the result is

px (x) = exp(−λ0 − λ1 x − λ2 x2 − λ3 x3 · · · ) (18)

where λk are Lagrange multipliers, with λ0 acting as a normalizing constant:



λ0 = ln exp(−λ1 x − λ2 x2 − λ3 x3 − · · · ) (19)

Upon replacing these equations into the definition (16) the following important result
is obtained:

Hx = λ0 + λ1 µ1 + λ2 µ2 + λ3 µ3 + · · · (20)

It is worth noticing that the maximum entropy method is not limited to moment
information but it applies to expected values of function in general. In (Shore and
Johnson 1980) it is proved that this is the uniquely correct method that satisfies all
consistency axioms.
Two families of methods have been proposed to find the Lagrange multipliers. The
first consists in solving the set of nonlinear equations by means of Newton meth-
ods (Mead and Papanicolau 1984; Sobczyk and Trȩbicki 1990; Trȩbicki and Sobczyk
1996; Hurtado and Barbat 1998; Ching and Hsieh 2007). The other consists in the
unconstrained minimization of the concave functional (Agmon et al. 1979; Pandey
and Ariaratman 1996)

F(ζ0 , ζ1 , . . . ) = ζ0 + ζ1 µ1 + ζ2 µ2 + ζ3 µ3 + · · · (21)

because its minimum is the entropy given by Eq. (20). According to the author’s
experience, the second approach is much faster and numerically more stable.
Notice that Er’s method mentioned above (Er 1998) is based on the same functional
form of the density as that resulting from applying the MEM to the case when the
information is given by ordinary moments. This is shown by a simple comparison of
Eqs. (12) with (13), on the one hand, and (18) with (19), on the other, indicating that
λ0 is equivalent to − ln C. However, Er’s method requires a larger number of moments
as said in the preceding and, therefore, the results are not coincident.
446 Structural design optimization considering uncertainties

2.3 Sad d l epoint e xpans io n


The saddlepoint approximation to an ordinate of a density or distribution func-
tion was originally proposed by Daniels (Daniels 1954). In contrast to the classical
Gram-Charlier or Edgeworth expansions, it has the advantage of producing good
approximations far into the tails, because it is a local rather than a global approxima-
tion method. In other words, it is aimed at estimating the functions at a single point
only. For a detailed exposition see (Barndorff-Nielsen and Cox 1979; Reid 1988;
Cheah et al. 1993; Kolassa 1997).
The saddlepoint approximation is based on the idea of embedding the target function
within a family of parameterized functions and to select one member of the family for
the approximation. Let us first approximate the density function pg (g) by the family

rg (g, η) = exp(ηg − Kf (η))pg (g) (22)

where Kf (η) is the cumulant generating function of the density pg (g),

Kf (η)
= ln Mf (η)
= ln E[ exp(ηg)]

= ln exp(ηg)pg (g)dg (23)

and η is a parameter. In the preceding equation Mf (η) is the moment generating function
of pg (g). Notice that function rg (g, η) satisfies the normalization condition for a density
since

exp(ηg − Kf (η))pg (g)dg = exp(−Kf (η))Mf (η) = 1 (24)

The parameter η is selected such that the mean of the family of functions equals the
ordinate at which the density is to be estimated, ḡ, which in structural reliability is
normally zero. The mean of the family is

g exp(ηx − Kf (η))pg (g)dg

d
= exp(−Kf (η)) (exp(ηg))pg (g)dg


d
= exp(−Kf (η)) (exp(ηg))pg (g)dg

1 d
= Mf (η)
Mf (η) dη
d
= ( ln Mf (η))

= Kf (η) (25)
Structural robustness and its relationship to reliability 447

It can also be easily shown that the variance of the saddlepoint density Kf (η). Hence
the parameter is the solution of

Kf (η̄) = ḡ (26)

A convenient choice for the family of approximating functions is the standard Nor-
 this density implies standardizing variable g in the form q = (g − ḡ)/σ,
mal (8). Using
where σ = Kf (η̄) is the standard deviation. The density of the standardized variable
is σ pg (σq + ḡ, η̄) according to the probability transformation rules. Hence we have

 . 1 2 / . /
Kf (η̄) exp η̄ Kf (η̄)q + ḡ − Kf (η̄) pg Kf (η̄)q + ḡ = φ(q) (27)

Setting q ≡ 0 and solving for pg ( · ) yields

1
pg (ḡ) =  exp(Kf (η̄) − η̄ḡ) (28)
2πKf (η̄)

which is the sought-after saddlepoint approximation for the density at ḡ. The com-
putation of a probability Q = P[G ≥ ḡ] eventually requires the calculation of the
integral

∞
1
Q=  exp(Kf (η̄(u)) − η̄u)du (29)

2πKf (η̄(u))

where η̄(u) is the solution of Kf (η̄(u)) = u. Since this should be solved at each integration
point, the computational demands and the accumulation of errors in approximating
this integral can be large. As an alternative, direct formulas for computing Q have
been proposed (Robinson 1982; Lugannani and Rice 1980). In this chapter use will
be made of the proposal in (Lugannani and Rice 1980), since in the comparisons made
in (Kolassa 1997) it yields the best performance. It is given by

1 1
Q = 1 − (ω̄) + φ(ω̄) − (30)
ν̄ ω̄
 
where ν̄ = η̄ Kf (η̄) and ω̄ = 2(η̄ḡ − Kf (η̄)).
The saddlepoint approximation is commonly applied in Statistics for estimating the
density or the distribution at a given ordinate for sums of variables with widely different
properties for which the Central Limit Theorem does not give good results (Lange
1999). This implies the solution of Eq. (26) using the derivative of the actual cumulant
generating function by means of Newton methods. In our case such a function is not
448 Structural design optimization considering uncertainties

known and resort must be made to an approximation in terms of the cumulants using
the series

 κj ηj
Kf (η) = (31)
j!
j=1

Upon deriving Eq. (31) with respect to η and equating to the threshold, according
to Eq. (26), one obtains a polynomial whose lowest real positive root yields the value
η̄. The probability of failure Pf can then be readily estimated with Eq. (30).
To conclude the present exposition of the saddlepoint expansion, mention should
be done to the use of Monte Carlo simulation for approximating the integral (29).
To this end, random numbers are generated from the saddlepoint density and the
probability is estimated as the average of the values of the indicator function located
on the threshold ḡ, as is usual in Monte Carlo integration. A method for doing
this, using the Metropolis-Hastings simulation method has been proposed (Robert
and Casella 1999). The method uses the alternative formulation of the integral,
given by

1 
2 Kf (ϑ)
Q= exp(Kf (ϑ) − ϑKf (ϑ))dϑ (32)

η̄

which can be obtained from (29) with the change of variable u = Kf (ϑ). In the numerical
experiments reported in the quoted reference the method gives quite similar results
to the exact integration. However, notice that the value of the integral hinges upon
the lower limit, which in the method proposed herein is known only approximately
via the point estimate technique. Hence very small differences can be expected from
this simulation approach in comparison to the simple application of the Lugannani-
Rice formula. In addition, the randomness of the failure probability, common to all
simulation-based methods, appears.

3 Structural response moment estimation

3.1 Perturb a ti o n appr o ac h


Perturbation methods in structural analysis (Hisada and Nakagiri 1981; Liu et al.
1995; Kleiber and Hien 1992) are based on a basic result of the probability theory
concerning the approximation of the mean vector and covariance matrix of a function
h(x) of a set of r basic variables x = {x1 , xj , . . . , xr }. Function h( · ) can be expanded in
Taylor series about the mean vector µx as


r
∂h 1   ∂2 h
r r
h(x)=h(µ
˙ x )+ (µ )[x −µxk ]+ (µ )(x −µxk )(xl −µxl ) (33)
∂xk x k 2 ∂xk xl x k
k=1 k=1 l=1
Structural robustness and its relationship to reliability 449

Applying the expectation operator to this equations we obtain

1   ∂2 h
r r
E[h(x)]=h(µ
˙ x) + (µ )Ckl (x) (34)
2 ∂xk xl x
k=1 l=1

where it has been taken into account that E[xk − µxk ] = 0. Here Ckl (x) denotes the
(k, l) element of the covariance matrix of the vector x. This equation is known as the
second order approximation of the mean of function h( · ).
Let us now derive a first order approximation to the covariance of two functions
hi (x) and hj (x). To this end multiply the Taylor expansion of the two functions up to
the first order derivative terms, i.e.
B 
r
∂hi C
.
hi (x)hj (x) = hi (µx ) + (µx )[xk − µxk ]
∂xk
k=1
B 
r
∂hj C
hj (µx ) + (µx )[xl − µxl ] (35)
∂xl
l=1

Arranging terms yields

.  ∂hi r
hi (x)hj (x) = hi (µx )hj (µx ) + hj (µx ) (µ )[x − µxk ]
∂xk x k
k=1


r
∂hj
+ hi (µx ) (µ )[x − µxl ]
∂xl x l
l=1


r
∂hi  ∂hj r
+ (µx )[xk − µxk ] (µ )[x − µxl ] (36)
∂xk ∂xl x l
k=1 l=1

Moving the product hi (µx )hj (µx ) to the left-hand side and taking expectations at both
sides of this equation leads to the final result:

.   ∂hi
r r
∂hj
cov(hi (x)hj (x)) = (µx ) (µ )Ckl (x) (37)
∂xk ∂xl x
k=1 l=1

The variance of either of the two functions is but a particular case of this equation:

.   ∂hi
r r
∂hi
var(hi (x)) = (µ ) (µ )Ckl (x), i = 1, 2 (38)
∂xk x ∂xl x
k=1 l=1

At least three objections can be addressed to perturbation methods for the purpose
of present chapter. First, they are reputed to be accurate for low coefficient of varia-
tion of the basic variables x (Elishakoff and Ren 2003). Second, they require special
computational codes to their application (Kleiber and Hien 1992). Third, as is evident,
they do not yield equations for estimating moments of order higher than two, which
450 Structural design optimization considering uncertainties

are needed for applying local or global expansions of probability distributions in order
to estimate the probability of failure. The method of point estimates summarized next
overcomes these deficiencies.

3.2 Poi n t esti m at e me t ho d


The method of Point Estimates (Rosenblueth 1975; Ordaz 1988; Christian and Baecher
1998; Harr 1989; Hong 1998) is a valuable tool for estimating the low order statis-
tical moments of a system response with good accuracy. The reason explaining this
property is that the method imposes the annihilation of some order terms in the Taylor
expansion of the response and the concentration of their information in some weights
located around the mean vector of the basic variables. This is the main difference
with perturbation approaches based in the Taylor expansion, which are built over
the assumption that the high order terms of the expansion are negligible. Besides, the
method of point estimates has the additional advantage over perturbation schemes
that it can be easily applied to the estimation of moments of order higher than the
second. Last but not least, the method does not need special finite element codes for
its application, as required by the perturbation approach. As a consequence, it can
be used in connection to practically any structural problem using available numerical
tools. In the basic formulation of the method, the total number of finite element solver
calls is only twice the number of independent random variables, which in a problem
determined by a few basic variables implies a low computational effort. These features
make the method an accurate and practical technique for the stochastic performance
analysis of mechanical systems. In the following lines the proposal in (Hong 1998)
is summarized, because in the experience reported in (Hong et al. 1998) using actual
structural models it offers by far the best approximation over the other point estimate
alternatives cited above. In addition, the applicability of the method for higher order
moment evaluation is discussed.
Let us consider a structural function g(x) that is a function of a single variable x.
The Taylor expansion of a power function g j (x) about the mean value of x is


 1 (l)
g j (x) = b(x) = b(µx ) + b (µx )(x − µx )l (39)
l!
l=1

Taking expectations on both sides of the above equation one obtains


 1 ∂b
E[g (x)] = b(µx ) +
j
(µ )E[(x − µx )l ] (40)
l! ∂xl x
l=1

which can be put in the form


 1 ∂b
E[g j (x)] = b(µx ) + (µ )γ σ l (41)
l! ∂xl x x,l x
l=1
Structural robustness and its relationship to reliability 451

where σx is the standard deviation of x and γx,l is a normalized central moment


defined as

1 ∞
γx,l = l (x − µx )l px (x)dx (42)
σx −∞

Multiplying successively equation (39) by two weights wi , l = 1, 2 assigned to the


concentration points xl and summing up the result yields

 1 ∂b
w1 b(x1 ) + w2 b(x2 ) = b(µx )(w1 + w2 ) + (µ )(w1 ξ1l + w2 ξ2l )σxl (43)
l! ∂xl x
l=1

where ξi , i = 1, 2 is the standardized random variable

xi − µx
ξi = (44)
σx

Solving equation (43) for b(µx ), imposing the condition

w1 + w2 ≡ 1 (45)

and substituting back the result into equation (41) yields



 1 ∂b
E[g j (x)] = w1 b(x1 ) + w2 b(x2 ) + (µ )[γ − (w1 ξ1l + w2 ξ2l )]σxl (46)
l! ∂xl x x,l
l=1

This equation suggests the approximation


.
E[g j (x)] = w1 b(x1 ) + w2 b(x2 ) = w1 g(x1 )j + w2 g(x2 )j (47)

in which function g( · ) is evaluated at points xi = µx + ξi σx , i = 1, 2. Implicit in the


above approximation is the condition that the concentration parameters must also
satisfy the following constraint:

w1 ξ1i + w2 ξ2i = γx,i (48)

for an adequate number of normalized moments γx,i allowing the determination of ξl


and wl . For determining two weights and concentration points three moment equations
of the type (48) are necessary. These, appended to Eq. (45), yield the values of the four
parameters. In this case the system has the following closed-form solution:

2
γx,3 γx,3
ξi = + ( − 1) 3−i
1+
2 2
ξ3−i
wi = (−1)i (49)
ζ
452 Structural design optimization considering uncertainties

where ζ = 2 1 + γx,3
2 /4. For the more general case of a function g(x) of n mutually

uncorrelated random variables xk , k = 1, . . . , n, collected in vector x, it is possible to


apply the same strategy as above by setting all variables at their means and applying the
Taylor expansion about the mean of each xk in turn. The derivation of the equations
for the weights and concentration points can be performed in the same way as for the
one dimensional case. As a result, the approximation of the ordinary moment of order
j of g(x) with m points per variable is given by

. 
n m
E[g j (x)] = wk,i g(µ1 , . . . , xk,i , µk+1 , . . . )j (50)
k=1 i=1

The total number of solver calls will then be S = mn. In (Hong 1998) several alter-
natives for calculating the weights and point locations are offered, according to the
number of concentration points. These are the following:

• S = 2n scheme:

γxk ,3 γ 2
xk ,3
ξk,i = + ( − 1) 3−i
n+
2 2
1 ξk,3−i
wk,i = (−1)i (51)
n ζk

with ζk = 2 n + γx,32 /4. Notice that, in this case, the approximation (50) is

accurate to the third order of the Taylor expansion, as determined by the num-
ber of normalized central moments used in the calculation of the weights and
concentration points.
• S = 2n + 1 scheme:

γxk ,3 γ 2
xk ,3
ξk,i = + ( − 1) 3−i
γxk ,4 − 3
2 2
1
wk,i = ( − 1)3−i (52)
ξk,i (ξk,1 − ξk,2 )

for i = 1, 2 and

ξk,3 = 0
1
wk,3 = − wk,1 − wk,2 (53)
n
Note that the repetition of the point ξk,3 = 0 makes this three-point scheme
equivalent to a 2n + 1-point scheme

.  n 2
E[g (x)] = w0 g(µ1 , . . . , µk , µk+1 , . . . )j +
j
wk,i g(µ1 , . . . , xk,i , µk+1 , . . . )j (54)
k=1 i=1
Structural robustness and its relationship to reliability 453

• S = 3n scheme:

ξk,j ξk,l
wk,i = , i, j, l = 1, 2, 3; i = j = l = i (55)
(ξk,j ξk,i )(ξk,l ξk,i )

The locations ξk,i , i = 1, 2, 3 are the roots of the polynomial

ω0 + ω1 q1 + ω2 q22 + ω3 q33 = 0 (56)

in which
. /
ω0 = γxk ,5 − γxk ,3 2γxk ,4 − γx2 ,3
k
γ 
γx ,4 
xk ,5
ω1 = γxk ,3 − γxk ,3 + γxk ,4 1 − k
n n
 γ
1 x ,5
ω2 = γxk ,3 γxk ,4 + − k
n n
γxk ,4 − (1 + γx2 ,3 )
ω3 = k
(57)
n

The approximation obtained with these points is accurate to the fifth order because
it supposes the cancelling of 2m − 1 = 5 terms of the Taylor expansion.
• S > 3n scheme:
In the general case the size of the nonlinear system for determining the weights and
location points of becomes 2m. This implies the solution of the following system
of nonlinear equations for each variable k:


m
1
wk,i =
n
i=1


m
j
wk,i ξk,i = γxk ,j (58)
i=1

A system like this can be solved by an algorithm described in (Hamming 1973;


Miller and Rice 1983). Let us expand the system of equations (58)

w1 + w2 + · · · + wm = b0 = n1
w1 ξ1 + w2 ξ2 + · · · + wm ξm = b1 = γ1
w1 ξ12 + w2 ξ22 + · · · + wm ξm2 = b2 = γ2 (59)
.. .. .. .. ..
. . . . .
w1 ξ12m−1 + w2 ξ22m−1 + · · · + wm ξm2m−1 = b2m−1 = γ2m−1
454 Structural design optimization considering uncertainties

where the subindex denoting the random variable has been dropped for clarity.
Define a polynomial


m
p(ξ) = ωl ξ l (60)
l=0

whose roots are the desired values ξ1 , ξ2 , . . . , ξm , i.e

p(ξ) = (ξ − ξ1 )(ξ − ξ2 ) · · · (ξ − ξm ) (61)

From this equation follows that ωm = 0 and that p(ξi ) = 0 for all i. Now, take the
first m equations from system (59) and multiply the first by ω0 , the second by ω1 ,
etc., and add them to obtain:


m 
m
ws p(ξs ) = ωl bl (62)
s=0 l=0

Then take the groups made up by the r to the m + r −1 equations, for r = 1, 2, . . .


and apply the same multiplications and sums. The result is the following linear set

b0 ω0 + b1 ω1 + · · · + bm−1 ωm−1 = −γm


b1 ω0 + b2 ω1 + · · · + bm ωm−1 = −γm+1
.. .. .. .. .. (63)
. . . . .
bm−1 ω0 + bm ω1 + · · · + b2m−2 ωm−1 = −γ2m−1

The solution of this system gives the values of the coefficients ωl , l = 0, 1, . . . , m.


Substituting them into the definition of the polynomial p(ξ) (Eqs. 60 and 61) yields
the value of the roots ξi . Finally, the weights wi can be computed from Eq. (59)
which now becomes a linear system.

The treatment of correlated variables in this method can be consulted in (Hong


1998). It consists in rotating the basic variable space to a new one in which no
correlation exists, using well-known spectral techniques.
Let us now examine the limitations of the point estimate method using some sim-
ple examples. A first limitation concerns the simple 2n scheme. In fact, as noted in
(Christian and Baecher 1998), when the number of input variables is very large the
locations of the concentration points may be very far from the mean value thus mak-
ing the concentration points meaningless from engineering viewpoint. With respect to
the 2n + 1 scheme, it has been observed (Hong 1998) that, eventually, the following
condition for applying Eq. (52) is not satisfied by some density functions:
γ 2
xk ,3
γxk ,4 − 3 >0 (64)
2

Further, in the 3n plot some roots of the polynomial (56) may be complex, rendering
impossible the application of the method. Finally, in the S > 3n scheme, a solution may
not exist.
Structural robustness and its relationship to reliability 455

As an example of this latter case, let us examine the application of the above numer-
ical procedure for obtaining the location and weights of m = 4 concentration points
for Normal variables. For a single variable x ∼ N(0, σx2 ) the moments are


1 · 3 · · · (n − 1)σxj , j even
µx,j = (65)
0, j odd

Consequently, the right hand vector in Eq. (63) is

[−3 0 − 15 0]T

and the solution of the system is

α = [3 0 − 6 0]T

Hence, the locations of the weights are the roots of the polynomial

3 − 6ξ 2 + ξ 4 = 0

which are −2.334, −0.742, 0.742 and 2.334. The weights are calculated by Eq. (59),
yielding 0.459, 0.454, 0.454, 0.459.
Let us now turn to the case n = 2 for which b0 = 0.5 in Eq. (59). In this case the
problem has a solution ξ = [ − 2.715, −1.275, 1.275, 2.715]. However, for n = 3 the
matrix of coefficients in Eq. (63) becomes ill-conditioned, so there is no stable solution.
And for n = 4 the linear system has a solution but two roots of the polynomial are
complex.

4 Robust analysis with point estimates


Before describing the linkage between RDO and RBDO approaches, it is useful to make
some remarks about the use of point estimates for robust optimization. Notice that the
method gives estimates of the ordinary moments. Since robust optimization is oriented
towards spread control, it is necessary to compute the variance of the response, given by

! "2
Var[g(x)] = E[g 2 (x)] − E[g(x)] (66)

Evidently, the minimum of the variance corresponds to the maximum of the mean and
the minimum of the mean square. However, since a requirement of the robust opti-
mization is also a minimization of the mean (see Eq. (3)), it results that in using point
estimates we eventually have to minimize a weighted cost function of the form
! "
C = ω1 E[g(x)] − ω2 E[g(x)] + ω3 E[g 2 (x)] (67)
456 Structural design optimization considering uncertainties

Q
10 12 14 16
9 11 13 15
2@0.5 m
2 4 6 8
1 3 5 7

4@1 m

Figure 16.2 Finite element mesh for the numerical example.

where the dependence on the design variables y has been removed for clarity of nota-
tion. However, since ω1 and ω3 are both related to spread, they can be made equal,
ω1 = ω3 ≡ ω, with the result
! "
C = (1 − ω)E[g(x)] + ω E[g 2 (x)] (68)

in which the basic requirement ω1 + ω2 + ω3 = 1 has been taken into account.

5 Uniting RDO and RBDO


The proposed approach for performing simultaneously a robust and reliability-based
design consists in the following steps: (a) To estimate the moments of the response by
means of the point estimate method, as this requires a minimal number of solver calls
and no other solver than that used for deterministic computations. (b) To estimate
the failure probabilities using either the saddlepoint expansion at the critical point.
Notice that several responses defining an equal number of limit states can be calculated
simultaneously.

5.1 Ex am pl e
In this example the method of point estimates will be applied to the estimation of the
failure probabilities corresponding to surpassing a threshold by the von Mises yield
stress τm in all the finite elements forming the elastic beam shown in Fig. 16.2:

gi (x) = τ̄m,i − τm,i (x) = 0, i = 1, 2, . . . , 16 (69)

For a plane problem, the von Mises stress is



τ12 − τ1 τ2 + τ22
τm = (70)
3

where τi , i = 1, 2 are the principal stresses.


Structural robustness and its relationship to reliability 457

Table 16.1 Random variable definition.

Variable Type Mean Standard deviation

P Lognormal 500 75
Q Lognormal 50 5
E Normal 20,000,000 3,000,000

Table 16.2 Samples for the 2N scheme.

Sample P Q E

1 648.01 50.00 2.0e07


2 385.99 50.00 2.0e07
3 500.00 59.44 2.0e07
4 500.00 42.06 2.0e07
5 500.00 50.00 3.1374e07
6 500.00 50.00 1.7606e07

The elements are constant strain triangles. The beam is subject to two random loads.
The elasticity modulus is also random and the Poisson modulus is fixed at 0.2. The
stochastic properties of these independent random variables are shown in Table 16.1.
A different von Mises stress threshold τ̄m was assigned to each element in order to
assure a probability of failure around 10−3 for all elements. In order to have an idea of
the estimation errors, 50,000 Monte Carlo simulations were calculated. Er’s method
for density estimation mentioned above was also computed for comparison. Notice
that the relationship between the limit state function and the input random variables
is highly nonlinear.
The attempts for calculating three and four concentration points per variable failed
in that no real solutions were found for the roots. For this reason the 2n point estimate
strategy was applied. Table 16.2 shows the coordinates of the point estimate samples.
Only four cumulants were used for the estimation of the failure probabilities. In spite
of the reduced number of concentration points and cumulants, the results given by
both methods are reasonably good as shown by Table 16.3. The second column of
the Table informs the threshold values for each element. Notice that in general the
saddlepoint method exhibits better accuracy than Er’s technique. This is especially so
for element No. 10, in which case the moment structure of the stress in the element
implied that there were three real roots for Eq. (26), at a difference with the rest
of elements, for which one real and two complex roots were found. Notice that the
saddlepoint method somewhat underestimates the failure probability with respect to
Monte Carlo simulation.
Figure 16.3 compares a histogram density obtained with a subset of 10,000 Monte
Carlo samples and Er’s estimation for element No. 1. Figure 16.4 depicts the standard
deviations of von Mises element stresses and the failure probabilities multiplied by
104 for each finite element, as given by Monte Carlo simulation with 50,000 samples.
It can be noticed that there is no clear-cut relationship between the two uncertainty
458 Structural design optimization considering uncertainties

Table 16.3 Estimates of the failure probabilities.

Finite τ̄m,i Pf,i P̂f,i P̂f,i


element Monte Carlo 2N scheme 2N scheme
i + saddlepoint + Er’s method

(50,000 samples) (6 samples) (6 samples)

1 600 0.0031 0.0024 0.0056


2 550 0.0042 0.0034 0.0072
3 750 0.0049 0.0043 0.0078
4 300 0.0041 0.0033 0.0069
5 325 0.0041 0.0034 0.0069
6 700 0.0063 0.0057 0.0094
7 625 0.0041 0.0034 0.0069
8 425 0.0048 0.0041 0.0076
9 425 0.0053 0.0047 0.0084
10 350 0.0029 0.0019 0.0107
11 750 0.0054 0.0047 0.0083
12 950 0.0033 0.0027 0.0060
13 225 0.0144 0.0142 0.0180
14 1100 0.0005 0.0004 0.0021
15 275 0.0121 0.0116 0.0153
16 450 0.0054 0.0048 0.0084

 103
8
Er method (4 moments)
7 Monte Carlo (10,000 samples)

6
Probability density

0
200 300 400 500 600 700 800
Von Mises stress in element 1

Figure 16.3 Comparison of Er’s method of density estimation and Monte Carlo histogram.

measures. In fact, in some cases to two similar deviations there correspond rather
different probabilities; also, the correlation coefficient between the two measures is
rather poor (0.542). Finally, notice that the highest probability there corresponds the
lowest standard deviation (element No. 13), which contradicts the naive intuition
Structural robustness and its relationship to reliability 459

s, 104 Pf

180

160 s

140 104 Pf

120

100

80

60

40

20

0 Element No.
0 2 4 6 8 10 12 14

Figure 16.4 Comparison of standard deviation and amplified failure probability for each element.

expressed by Figure 16.1. Similar conclusions arise from the comparison of the fail-
ure probability and the coefficient of variation of the von Mises stress. In this case,
the correlation is even poorer: −0.109. All this means that optimizing with respect
to the statistical moments may yield rather different results than when optimizing with
respect to the failure probability and that it is important to consider both kinds of
approaches in designing safe structures in the noisy environment of random loads and
structural material parameters.

6 Robustness as entropy minimization


The main goal of robust design is to control the spread of the structural response. This
can be regarded as a minimization of the entropy of the response. A simple illustration
of this is given by the fact that the entropy of a Normal density function

1 (x − µ)2
φx (x) = √ exp − (71)
2πσ 2σ 2

increases along with the standard deviation:


. √ /
Hx = ln σ 2πe (72)

In order to more rigorously define robustness as entropy minimization, let us con-


sider a set of random variables x = [x1 , x2 , x3 , . . . , xn ] which is transformed to a set
z = [z1 , z2 , z3 , . . . , zn ] by functions of the form

zj = gj (x) (73)
460 Structural design optimization considering uncertainties

According to the laws of probability transformation, the density function of vector z


is given by (Papoulis 1991)

1
pz (z) = px (x) (74)
|J(x)|

where J(x) is the Jacobian of the transformation:


 
 ∂g1 . . . ∂g1 
 ∂x1 ∂xn 
 
J(x) =  ... ... ...  (75)
 ∂gn ∂gn 
 ∂x . . . ∂x 
1 n

(
Upon substituting this result in Hz = − pz (z) ln pz (z)dz one obtains (Papoulis 1991)
@ A
Hz ≤ Hx + E ln|J(x, y)| (76)

In this result the dependence of the Jacobian on the vector of design variables y has
been made explicit in order to emphasize the relevance of the design in entropy trans-
formation. If the set of equations (73) has a unique inverse, as is the case for linear
structures, then equality holds.
Equation (76) is useful for understanding why entropy is relevant for defining struc-
tural robustness. In fact, assuming that x is the set of input random variables of the
structural system and z that of observed random responses, Eq. (76) states that the sys-
tem is an entropy dissipating system, i.e. it can reduce the scatter of the input variables if
@ A
E ln|J(x, y)| ≤ 0 (77)

because, in that case,


@ A
Hz − Hx ≤ E ln|J(x, y)| ≤ 0 (78)

implying

Hz ≤ Hx (79)

Recalling the remark made above about the relativity of the entropy measure of uncer-
tainty in the case of continuous
@ Adistributions, our focus will not be the terms Hx and
Hz but on the term E ln|J(x, y)| . Let us delve into its expression for the common case
of linear structures.

6.1 Rob ustn ess o f linear s t r uc t ur e s


Consider a linear structure modeled with the finite element method, so that it is
described with the classical equation

Ku = q (80)
Structural robustness and its relationship to reliability 461

where u is the displacement vector, K is the stiffness matrix and q the external force
vector, respectively. Assume that the external loads are random. The structure may have
some random properties of the materials, but since their dispersion can be controlled
with Quality Assurance (QA), we are interested only in reducing the sensitivity of
the response to random changes in external loads. By analogy with QA applied in the
construction phase, we may call Robustness Assurance (RA) the control of randomness
applied in the design phase.
It may seem surprising that the RA analysis just proposed ignores the random-
ness of the material properties and of other structural variables such as geometrical
dimensions, etc. The underlying reason for leaving it to the Quality Assurance in the
construction phase is that the robustness approach stems from the product-oriented
nature of engineering design and construction, that is not interested in establishing
the actual risk of the structure, as in the knowledge-oriented reliability approach, but
only in assuring a product capable of dissipating the randomness imposed by exter-
nal actions as much as possible. Thus, the proposed approach to robustness separates
design from the determination of the actual reliability, considering this latter as a
specialized task whose need arises in certain situations.
In addition, notice that the robustness defined in terms of entropy incorporates the
available information on external actions in a positive manner, as indicated by the
maximum entropy principle (see Eq. 17) and the quotation of Jaynes’ classical chapter
above. Thus, beyond the second order analysis on which some proposals for robust
design are based, which may invoke the lack of full probabilistic information to proceed
in that way, the entropy approach to robustness incorporates such a deficiency as an
element of design calculations. In fact, the maximum entropy principle establishes
the distribution that accords with several situations of available information. If, for
instance, both the mean and variance are prescribed and the variable can be either
negative or positive, the principle indicates that the distribution is Gaussian. However,
if it is known that the variable is strictly positive and the mean alone is prescribed, the
solution is the exponential distribution. And so on. (See (Kapur 1989) for a detailed
exposition).
Formulating the problem as

u = K −1 q, (81)

a typical element of vector u is



ui = (K−1 )ij qj (82)

The sensitivity of the i-th response with respect to the j-th random variable is,
therefore,

∂ui
= (K−1 )ij (83)
∂qj

which does not depend on any element of vector x. Other responses, such as end forces
and stresses can be expressed as other linear combinations of the displacements and,
462 Structural design optimization considering uncertainties

Hz

r
3 
r r
2  2
r
1
r
1

Figure 16.5 Variation of response entropy of structures as a function of a structural dimension


(y) and the coefficient of variation of an external load (ρ).

therefore, of the external loads. For this reason the ensuing development will be done
in terms of the displacements. The expectation in Eq. (76) is
@ A    
E ln|J(x, y)| = ln detK−1 (y) = −ln detK(y) (84)

Since K is a positive definite matrix, |detK(y)| = detK(y) > 0 yielding


@ A
E ln|J(x, y)| = −ln detK(y) (85)

The term

R(y) = ln detK(y) (86)

will be simply called robustness. Figure 16.5 illustrates the behavior of Eq. (76) for
a linear structure as a function of the single cross section dimension subject to design
and the coefficient of variation of a single random load.
According to this exposition a RA-design of linear structures with random external
loads can be involved in a Deterministic Optimization program (see Eq. (1)) as follows:

Problem Robustness and Cost Optimization :


find : y
minimizing : Z(y) = −αR(y)/R∗ + (1 − α)C(y)/C ∗
subject to : fi (y) < Fi , i = 1, 2, . . .
y− ≤ y ≤ y+ (87)

Here R∗ and C ∗ are normalizing factors. The meaning of this equation is that cost
C(y) is minimized while robustness R(y) is maximized. Thus, the solution will be a
saddlepoint instead of the global minimum of the cost. Notice that the robustness
term prevents from a one-sided search of the minimum cost in a global manner not
Structural robustness and its relationship to reliability 463

Table 16.4 Comparison of Shannon (entropy) and Lyapounov (stability) functionals.

Shannon Lyapounov

H(U ) is a continuous function of pi V(x(t)) is a continuous function of t


If all pi , i = 1, . . . , n are equal, V(x(t)) is a
H(U ) is an increasing function of n non-increasing function of t
H(U ) has a unique global maximum V(x(t)) has a unique global minimum

provided by usual behavioral or geometric constraints, some of which could eventually


be removed from optimization programs.
Factor α, 0 ≤ α ≤ 1, should weight the relative importance of cost and robustness
with regard to external loads and, therefore, it must be selected judiciously. The rel-
ative weight of robustness should consider the departure of the external loads from
determinism, in such a way that the larger their spread, the higher α.

6.2 Analogy to s ys tem dynamics


It is interesting to make some remarks on the analogy of entropy dissipation and the
theory of dynamic systems and control (See, e.g., (Szidarovszky and Bahill 1992, e.g.)).
In fact, one of the basic concerns in dynamic systems is that of controllability, mean-
ing that given an initial state there exists an input capable of leading the system to
another state. In dynamic system theory the control force is the result of a trade-off
between its cost and the reduction of the responses. Similarly, in robust design under
uncertainties the engineer is interested in obtaining a product such that, given an uncer-
tain input, the uncertainty of the response can be controlled to a given value without
excessive cost.
Another relevant analogy is that with stability. While in the designing dynami-
cal systems the engineer is not interested in particular trajectories of the system but
only in assuring its overall stability, in robust design the designer is interested in
assuring that the system will not be seriously perturbed by random fluctuations of
the input parameters, without detailed stochastic characterizations of the paths of
randomness inside the structural model. This is an important, practical reason moti-
vating the development of robust alternatives to the more theoretical, argumentative
approach of establishing failure probabilities, according to the exposition made in the
Introduction.
The theory of stable systems makes use of a Lyapounov functional V(x(t)) of the
dynamic system state x(t), whose characteristics have close resemblance to those of the
entropy function, as illustrated by Table 16.4. While the Lyapounov functional has its
global minimum at the equilibrium state of the system, the entropy functional has its
maximum at a density function which “may be asserted for the positive reason that
it is uniquely determined as the one which is maximally noncommittal with regard
to missing information, instead of the negative one that there was no reason to think
otherwise’’ as stated by Jaynes in his classical chapter (Jaynes 1957).
464 Structural design optimization considering uncertainties

6.3 Ex am pl e 1: A s imple b ar in t e ns io n
Let us consider the simplest structural model, i.e. an elastic bar of cross section A,
length l, elasticity modulus E subject to a tension force P, as shown in Fig. 16.6. Let
as assume that the set of random variables is
D E
x = P, E (88)

of which it is known that both are positive. On the other hand, the set of responses is
D E
z = u, τ (89)

where τ is the random tension in the bar. The single design variable is

y=A (90)

For the transformation

Pl
u =
EA
P
τ = (91)
A

the Jacobian is

Pl
J(x, y) = (92)
A2 E2

Since all quantities are positive, ln|J(x, y)| = lnJ(x, y) and


@ A
E ln|J(x, y)| = E[lnP − 2lnE] + lnl − 2lnA (93)

Thus, the bar is entropy dissipating if 2lnA > E[lnP − 2lnE] + lnl. Now, if A is con-
strained to lie in the range A1 ≤ A ≤ A2 and no reference is made to cost, the robust
design consists simply in assigning

A = A2 (94)

since this value minimizes the expectation in Eq. (93). It is evident that no probabilistic
information is necessary to arrive to this result. However, without such an information
it is not possible to ascertain whether the structure is entropy dissipating or not.

6.4 Ex am pl e 2: A t r us s
Consider the three-bar structure shown in Fig. 16.6. The random variables are
x = (P, E), the design variables are y = (A1 , A2 ) and the the observed responses are
Structural robustness and its relationship to reliability 465

E, A
P

Figure 16.6 Simple bar in tension.

z = (u, τ) which are the horizontal component of the displacement of load point and
the tension in the left bar, respectively. They are given by

Pl
u =
EA1

A2 + 2A1
τ = √ P (95)
2A1 A2 + 2A21

The Jacobian of the transformation is



Pl A2 + 2A1
J(x, y) = 2 √ (96)
E 2A21 A2 + 2A31

which has the separable form J(x, y) = Q(x)R(y). Therefore, without reference to cost,
a robust design can be obtained with no probabilistic information of P and E by simply
finding the values of (A1 , A2 ) that maximize

A2 + 2A1
ln √ (97)
2A21 A2 + 2A31

within the specific bounds assigned to each cross section area. If cost is considered, the
solution must be a trade-off between cost minimization and robustness maximization.

6.5 Example 3: A clamped beam


Consider finally the clamped beam of variable shape shown in Fig. 16.8. The only
random variable is the external load P. The cross section is a square tube of external
dimension yi , i = 1, 2 and thickness t, so that the moment of inertia is

2 3
Ii = ty (98)
3 i
The vertical displacement of the end point is

7Pl 3 Pl 3
u= + (99)
3EI1 3EI2
466 Structural design optimization considering uncertainties

45°
45°

l E, A1 E, A2 E, A1

Figure 16.7 A simple truss.

E, I1 E, I2

l l

Figure 16.8 A clamped beam with variable section.

so that

l3 7 1
J= 3
+ 3 (100)
2tE y1 y2

Let us use E = 2100 t/cm2 , t = 1 cm and l = 200 cm. A beam with y1 = 51.2 cm,
y2 = 31.5 cm minimizes the cost subject to the constraint that the end displacement is
less than or equal to l/250 (Hernández 1990). This solution is entropy dissipating since
ln |J| < 0. On the contrary, for y1 = 30 cm, y2 = 15 cm the structure increases entropy.

7 Conclusions
The following conclusions stem from the research reported in this chapter:

• The concept of entropy is useful for clarifying the meaning of robustness in


structural systems. In fact, the entropy of the structural responses decreases as
the stiffness increases. Accordingly, in parallel to Quality Assurance operating on
the spread control of structural material properties in the construction phase, one
may define Robustness Assurance as the control of the entropy of response vari-
ables such as displacements and stresses due to the uncertainty in random external
loads in the design phase. Such Robustness Assurance can easily be incorporated
Structural robustness and its relationship to reliability 467

into conventional Deterministic Optimization programs. A proposal in this regard


has been exposed.
• It has also been shown that structural robustness defined in these terms exhibits
similarities to the theory of controllability and stability studied with the assis-
tance of Lyapounov functions in the context of dynamic systems. Both subjects
are instances of the production-oriented approach that is characteristic of engineer-
ing design process at a difference to the knowledge-oriented process of scientific
discovery.
• For optimizing a structure under uncertainty both robust and reliability-based
approaches are valuable and therefore complementary. The first aims at a control
of response spread whereas the second to reducing the probability of extreme
undesirable situations.
• For the above reason, methods allowing simultaneous monitoring of the basic
statistical quantities implied by both approaches (namely statistical moments and
failure probabilities) are of importance. In this chapter, this has been sought by
means of the method of saddlepoint local expansion of the density function about
the critical threshold and the method of maximum entropy. While the former
exhibits better accuracy for estimating the failure probability, the latter is highly
useful for the assessment of the degree of robustness of the structural system as
commented above.
• The method of point estimates allows a fast and simple estimation of response
moments using the finite element solver employed for deterministic calcula-
tions. For these reasons it is highly for both robust and reliability-based design
optimization.

Further research is needed to develop the entropy approach to structural robustness


as well as on computational methods for the complex task of structural optimization
granting the accomplishment of both reliability and robustness requirements.

References

Abramowitz, M. & Stegun, I.A. 1972. Handbook of mathematical functions. New York: Dover
Publications.
Agmon, N., Alhassid, Y. & Levine, R.D. 1979. An algorithm for finding the distribution of
maximal entropy. Journal of Computational Physics 30:250–258.
Au, S.K. & Beck, J.L. 2001. Estimation of small failure probabilites in high dimensions by subset
simulation. Probabilistic Engineering Mechanics 16:263–277.
Barndorff-Nielsen, O. & Cox, D.R. 1979. Edgeworth and saddle-point approximations with
statistical applications. Journal of the Royal Statistical Society 41:279–312.
Ben-Haim, Y. 1985. The Assay of Spatially Random Material. Dordrecht: D. Reidel Publishing
Company.
Ben-Haim, Y. 1996. Robust Reliability in the Mechanical Sciences. Berlin: Springer-Verlag.
Cheah, P.K., Fraser, D.A.S. & Reid, N. 1993. Some alternatives to edgeworth. Canadian Journal
of Statistics 21:131–138.
Chernousko, F.L. 1999. What is ellipsoidal modelling and how to use it for control and state
estimation? In I. Elishakoff (ed.), Whys and Hows in Uncertainty Modelling, pp. 127–188.
Wien: Springer-Verlag.
468 Structural design optimization considering uncertainties

Ching, J. & Hsieh, Y.H. 2007. Local estimation of failure probability function and its con-
fidence interval with maximum entropy principle. Probabilistic Engineering Mechanics 22:
39–49.
Christian, J.T. & Baecher, G.B. 1998. Point-estimate method and numerical quadrature. Journal
of Geotechnical and Geoenvironmental Engineering 125:779–786.
Daniels, H.E. 1954. Saddlepoint approximations in statistics. Annals of Mathematical Statis-
tics 25:631–650.
Doltsinis, I. & Kang, A. 2004. Robust design of structures using optimization methods.
Computer Methods in Applied Mechanics and Engineering 193:2221–2237.
Doltsinis, I., Kang, A. & Cheng, G. 2005. Robust design of non-linear structures using
optimization methods. Computer Methods in Applied Mechanics and Engineering 194:
1779–1795.
Dubois, D. & Prade, H. 1988. Possibility Theory. New York: Plenum Press.
Elishakoff, I. 1991. Essay on reliability index, probabilistic interpetation of safety factor and
convex models of uncertainty. In F. Casciati & J.B. Roberts (eds), Reliability Problems:
General principles and Applications in Mechanics of Solids and Structures, pp. 237–271.
Wien: Springer-Verlag.
Elishakoff, I. 1999. Are probabilistic and anti-optimization approaches compatible? In
I. Elishakoff (ed.), Whys and Hows in Uncertainty Modelling, pp. 263–355. Wien:
Springer-Verlag.
Elishakoff, I. 2005. Safety Factors and Reliability: Friends or Foes? New York: Kluwer.
Elishakoff, I. & Ren, Y. 2003. Finite Element Methods for Structures with Large Stochastic
Variations. Oxford: Oxford University Press.
Er, G.K. 1998. A method for multi-parameter PDF estimation of random variables. Structural
Safety 20:25–36.
Frangopol, D.M. 1995. Reliability-based structural design. In C.R. Sundararajan (ed.),
Probabilistic Structural Mechanics Handbook, pp. 352–387. New York: Chapman & Hall.
Gasser, M. & Schuëller, G.I. 1997. Reliability-based optimization of structural systems.
Mathematical Methods of Operations Research 46:287–307.
Guan, X.L. & Melchers, R. 2001. Effect of response surface parameter variation on structural
reliability estimates. Structural Safety 23:429–444.
Haftka, R.T., Gurdal, Z. & Kamat, M.P. 1990. Elements of Structural Optimization. Dordrecht:
Kluwer Academic Publishers.
Hamming, R.W. 1973. Numerical Methods for Scientists and Engineers. New York: Dover
Publications.
Hansen, E. & Walster, G.W. 2004. Global Optimization using Interval Analysis. New York:
Marcel Dekker, Inc.
Harr, M. 1989. Probabilistic estimates for multivariate analysis. Applied Mathematical
Modelling 13:313–318.
Hernández, S. 1990. Métodos de diseño óptimo de estructuras. Madrid: Colegio de Ingenieros
de Caminos, Canales y Puertos.
Hisada, T. & Nakagiri, S. 1981. Stochastic finite element method developed for structural safety
and reliability. In Proceedings of the Third International Conference on Structural Safety and
Reliability, pp. 395–408. Rotterdam: Elsevier.
Hong, H.P. 1998. An efficient point estimate method for probabilistic analysis. Reliability
Engineering and System Safety 59:261–267.
Hong, H.P., Escobar, J.A. & Gómez, R. 1998. Probabilistic assessment of the in seismic
response of structural asymmetric models. In Proceedings of the Tenth European Conference
on Earthquake Engineering, Paris, 1998, Rotterdam. Balkema.
Hurtado, J.E. 2001. Neural networks in stochastic mechanics. Archives of Computational
Methods in Engineering 8:303–342.
Structural robustness and its relationship to reliability 469

Hurtado, J.E. 2004a. An examination of methods for approximating implicit limit state
functions from the viewpoint of statistical learning theory. Structural Safety 26:271–293.
Hurtado, J.E. 2004b. Structural Reliability. Statistical Learning Perspectives. Heidelberg:
Springer.
Hurtado, J.E. 2006. Optimal reliability-based design using support vector machines and arti-
ficial life algorithms. In Y. Tsompanakis & N.D. Lagaros (eds), Intelligent Computational
Paradigms in Earthquake Engineering. Hershey: Idea Group Inc.
Hurtado, J.E. 2007. Filtered importance sampling with support vector margin: a powerful
method for structural reliability analysis. Structural Safety 29:2–15.
Hurtado, J.E. & Alvarez, D.A. 2001. Neural network-based reliability analysis: A
comparative study. Computer Methods in Applied Mechanics and Engineering 191:
113–132.
Hurtado, J.E. & Barbat, A. 1998. Fourier-based maximum entropy method in stochastic
dynamics. Structural Safety 20:221–235.
Jaynes, E.T. 1957. Information Theory and Statistical Mechanics. The Physical Review 106:
620–630.
Johnson, N.L., Kotz, S. & Balakrishnan, N. 1994. Continuous Univariate Distributions, Vol. 1.
New York: John Wiley and Sons.
Kapur, J.N. 1989. Maximum Entropy Models in Science and Engineering. New York: John
Wiley and Sons.
Kennedy, C.A. & Lennox, W.C. 2000. Solution to the practical problem of moments using non-
classical orthogonal polynomials with applications for probabilistic analysis. Probabilistic
Engineering Mechanics 15:371–379.
Kennedy, C.A. & Lennox, W.C. 2001. Moment operations on random variables, with
applications for probabilistic analysis. Probabilistic Engineering Mechanics 16:253–259.
Kharitonov, V. 1997. Interval uncertainty structure: Conservative but simple. In H. Günther-
Naske & Y. Ben-Haim (eds), Uncertainty: Models and Measures, pp. 231–243. Berlin:
Akademie Verlag.
Kirsch, U. 1993. Structural Optimization. Fundamentals and Applications. Heidelberg: Springer
Verlag.
Kleiber, M. & Hien, T.D. 1992. The Stochastic Finite Element Method. Chichester: John Wiley
and Sons.
Klir, G.J. 1997. Uncertainty theories, models and principles: An overview of personal views and
contributions. In H. Günther-Natke & Y. Ben-Haim (eds), Uncertainty: Models and Measures,
pp. 27–43. Berlin: Akademie Verlag.
Kolassa, J.E. 1997. Series Approximation Methods in Statistics. New York: Springer Verlag.
Kosko, B. 1992. Neural Networks and Fuzzy Systems. Englewood Cliffs: Prentice Hall.
Lagaros, N., Papadrakakis, M. & Kokossalakis, G. 2002. Structural optimization using
evolutionary algorithms. Computers and Structures 80:571–579.
Lagaros, N. & Papadrakakis, M. 2003. Soft computing methodologies for structural optimiza-
tion. Applied Soft Computing 3:283–300.
Lange, K. 1999. Numerical Analysis for Statisticians. New York: Springer Verlag.
Liu, W.K., Belytschko, T. & Lua, Y.J. 1995. Probabilistic finite element method. In
C.R. Sundararajan (ed.), Probabilistic Structural Mechanics Handbook, pp. 70–105. New
York: Chapman & Hall.
Lugannani, R. & Rice, S. 1980. Saddle point approximation for the distribution of sums of
random variables. Advances in Applied Probability 12:475–490.
Mead, L.R. & Papanicolau, N. 1984. Maximum entropy in the problem of moments. Journal
of Mathematical Physics 25:2404–2417.
Miller, A.C. & Rice, T.R. 1983. Discrete approximations of probability distributions. Manage-
ment Science 29:352–362.
470 Structural design optimization considering uncertainties

Muscolino, G. 1993. Response of linear and non-linear structural systems under gaussian or
non-gaussian filtered input. In F. Casciati (ed.), Dynamic Motion: Chaotic and Stochastic
Behaviour, pp. 203–299. Wien: Springer-Verlag.
Ordaz, M. 1988. On the use of probability concentrations. Structural Safety 5:317–318.
Pandey, M.D. & Ariaratman, S.T. 1996. Crossing rate analysis of non gaussian response of
linear systems. Journal of Engineering Mechanics 122:507–511.
Papadrakakis, M., Lagaros, N. & Tsompanakis, Y. 1998. Structural optimization using
evolution strategies and neural networks. Computer Methods in Applied Mechanics and
Engineering 156:309–333.
Papadrakakis, M., Papadopoulos, V. & Lagaros, N. 1996. Structural reliability analysis of
elastic-plastic structures using neural networks and Monte Carlo simulation. Computer
Methods in Applied Mechanics and Engineering 136:145–163.
Papoulis, A. 1991. Probability, Random Variables and Stochastic Processes. New York:
McGraw-Hill.
Reid, N. 1988. Saddlepoint methods and statistical inference. Statistical Science 3:213–238.
Robert, C.P. & Casella, G. 1999. Monte Carlo Statistical Methods. New York: Springer.
Robinson, J. 1982. Saddlepoint approximations for permutation tests and confidence intervals.
Journal of the Royal Statistical Society Series B 44:91–101.
Rosenblueth, E. 1975. Point estimates for probability moments. Proceedings of the National
Academy of Sciences of the USA 72:3812–3814.
Rosenblueth, E. & Mendoza, E. 1971. Reliability optimization in isostatic structures. Journal
of the Engineering Mechanics Division ASCE 97:1625–1640.
Royset, J.O., Kiureghian, A.D. & Polak, E. 2001. Reliability-based optimal structural design
by the decoupling approach. Reliability Engineering and System Safety 73:213–221.
Royset, J.O. & Polak, E. 2004. Reliability-based optimal design using sample average
approximations. Probabilistic Engineering Mechanics 19:331–343.
Schuëller, G.I. & Stix, R. 1987. A critical appraisal of methods to determine failure probabilities.
Structural Safety 4:293–309.
Sexsmith, R.G. 1999. Probability-based safety analysis–value and drawbacks. Structural
Safety 21:303–310.
Shannon, C.E. 1948. A Mathematical Theory of Communication. The Bell System Technical
Journal 27:379–423.
Shore, J.E. & Johnson, R.W. 1980. Axiomatic derivation of the principle of maximum
entropy and the principle of minimum cross-entropy. IEEE Transactions on Information
Theory 26(1):26–37.
Sobczyk, K. & Trȩbicki, J. 1990. Maximum entropy principle in stochastic dynamics.
Probabilistic Engineering Mechanics 5:102–110.
Szidarovszky, F. & Bahill, A.T. 1992. Linear Systems Theory. Boca Ratón: CRC Press.
Trȩbicki, J. & Sobczyk, K. 1996. Maximum entropy principle and non-stationary distributions
of stochastic systems. Probabilistic Engineering Mechanics 11:169–178.
Chapter 17

Maximum robustness design of trusses


via semidefinite programming
Yoshihiro Kanno
University of Tokyo, Tokyo, Japan

Izuru Takewaki
Kyoto University, Kyoto, Japan

ABSTRACT: This chapter discusses evaluation and maximization of the robustness function
of trusses, which is regarded as one of measures of structural robustness under the uncertainties
of member stiffnesses and external forces. By using quadratic embedding of the uncertainty and
the S-procedure, we formulate a quasiconvex optimization problem which provides a lower
bound of the robustness function. We next formulate the maximization problem of the robust-
ness function as a robust structural optimization scheme. An algorithm based on the semidefinite
program is proposed to obtain the optimal truss design. Numerical examples are shown to
demonstrate the validity of the algorithms presented.

1 Introduction
Recently, the info-gap decision theory has been proposed as a non-probabilistic deci-
sion theory under uncertainties (Ben-Haim 2006), and has been applied to wide fields
including neural networks (Pierce et al. 2006), biological conservation (Moilanen &
Wintle 2006), financial economics (Ben-Haim 2005), etc.
In the info-gap decision theory, the robustness function plays a key role as a measure
of robustness of systems having uncertainties (Ben-Haim 2006). The robustness func-
tion is regarded to represent the immunity against failure, and is defined as the greatest
level of uncertainty at which any failure cannot occur. In structural engineering, the
robustness function represents the greatest level of uncertainty, caused by manufac-
ture errors, limitation of knowledge of input disturbance, observation errors, etc., at
which any constraint on mechanical performance cannot be violated. The constraints
on mechanical performance can be violated only at great level of uncertainty in a
structure with a large robustness function, while they can be violated at small level
of uncertainty in a structure with a small robustness function. Thus, we can compare
robustness of structures quantitatively in terms of robustness functions.
Takewaki & Ben-Haim (2005) computed the robustness function of structures in a
particular case where the worst case can be obtained analytically. Unfortunately it is
difficult to compute exactly the robustness function of structures exactly, and no effi-
cient method has ever been proposed to the authors’ knowledge. The first contribution
of the work in this chapter is to propose a numerically tractable optimization problem
to obtain a lower bound of the robustness function of trusses considering various con-
straints and circumstances of uncertainties. The solution of the problem presented can
472 Structural design optimization considering uncertainties

be obtained efficiently by solving some semidefinite programming (SDP) (Wolkowicz


et al. 2000) problems. Note that a lower bound is regarded as a conservative estimate
of the robustness function, i.e. a level of uncertainty at which the satisfaction of the
constraints on mechanical performance is guaranteed. Hence, finding a lower bound,
not an upper bound, is meaningful when it is difficult to find the exact value of the
robustness function. Secondly, we consider a structural optimization problem in which
we seek the truss design maximizing the robustness function.
Based on the stochastic uncertainty model of mechanical parameters, various methods have been proposed for reliability-based optimization (see the other chapters and the references therein). One of the motivations for the info-gap theory is an awareness
of the limitations of the probabilistic approaches as discussed by Ben-Haim (2004,
Section 7). As a non-probabilistic uncertainty model, Ben-Haim & Elishakoff (1990)
developed the so-called convex model, where the uncertainty of a system is expressed in
terms of unknown-but-bounded parameters. Pantelides & Ganzerli (1998) proposed
a robust truss optimization method based on the convex model.
Mathematical programming problems involving uncertain data have also been investigated extensively. For various classes of convex optimization problems, a unified
methodology of robust counterpart was developed by Ben-Tal & Nemirovski (2002),
where the data in optimization problems are assumed to be unknown but bounded.
Calafiore & El Ghaoui (2004) proposed a method for finding the ellipsoidal bounds
of the solution set of uncertain linear equations by using SDP.
In this chapter, we deal with the robustness function of trusses that consist of members with uncertain stiffness and/or are subjected to uncertain external forces. The non-probabilistic uncertain parameters are assumed not to be known precisely but to be bounded. For the background of decision strategies based on the info-gap theory, the reader may consult the basic textbook (Ben-Haim 2006). To overcome the difficulty of computing the robustness function, we utilize the framework of SDP. For mathematical background and algorithms for SDP, the reader may refer to the surveys (Helmberg 2002; Vandenberghe & Boyd 1996) and the handbook (Wolkowicz et al. 2000).

2 Preliminary results
Some useful technical results used in this chapter are listed in Appendix A. Throughout this chapter, all vectors are assumed to be column vectors. However, for vectors u ∈ R^n and v ∈ R^m, we often simplify the notation (uᵀ, vᵀ)ᵀ as (u, v). The standard Euclidean norm (pᵀp)^{1/2} of a vector p ∈ R^n is denoted by ‖p‖_2. The l∞-norm of p, denoted by ‖p‖_∞, is defined as ‖p‖_∞ = max_{i∈{1,...,n}} |p_i|.
Define R^n_+ ⊂ R^n by

    R^n_+ = {p ∈ R^n | p ≥ 0}

For p = (p_i) ∈ R^n and q = (q_i) ∈ R^n, we write p ≥ 0 and p ≥ q, respectively, if p ∈ R^n_+ and p_i ≥ q_i (i = 1, . . . , n).
Let S^n ⊂ R^{n×n} denote the set of all n × n real symmetric matrices. We write A ⪰ O if A ∈ S^n is positive semidefinite, i.e. if all the eigenvalues of A are nonnegative. For A ∈ S^n and B ∈ S^n, we write A ⪰ B if the matrix A − B is positive semidefinite. The Moore–Penrose pseudo-inverse of C ∈ R^{m×n} is denoted by C† ∈ R^{n×m}.

2.1 Semidefinite program


The semidefinite program (SDP) is classified as a convex and nonlinear mathematical program. The SDP problem refers to the optimization problem having the form of (Wolkowicz et al. 2000)

    max { bᵀy : C − ∑_{i=1}^{m} A_i y_i ⪰ O }    (1)

Here y ∈ R^m is a variable vector, b ∈ R^m is a constant vector, and A_i ∈ S^n (i = 1, . . . , m) and C ∈ S^n are constant symmetric matrices.
Recently, SDP has received increasing attention for its wide field of applications (Ohsaki et al. 1999; Ben-Tal & Nemirovski 2001). It is well known that the linear program (LP) and the second-order cone program are included in SDP as particular cases. The primal-dual interior-point method, which was first developed for LP, has been naturally extended to SDP. It is theoretically guaranteed that the primal-dual interior-point method converges to the global optimal solution of the SDP problem (1) within a number of arithmetic operations bounded by a polynomial of m and n (Ben-Tal & Nemirovski 2001; Wolkowicz et al. 2000).
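
As a concrete illustration (ours, not part of the original chapter), the standard form (1) can be stated and solved in a few lines with a modern modelling package. The sketch below uses the Python library cvxpy, whereas the numerical examples of Section 8 use SeDuMi under MATLAB; the helper name solve_sdp and the toy data are assumptions made for the example.

```python
# Hedged sketch: the standard-form SDP (1), max b'y s.t. C - sum_i A_i y_i PSD,
# stated in cvxpy. Any conic solver plays the role of the primal-dual
# interior-point method discussed in the text.
import numpy as np
import cvxpy as cp

def solve_sdp(b, C, A):
    m = len(b)
    y = cp.Variable(m)
    S = C - sum(y[i] * A[i] for i in range(m))   # the matrix pencil of (1)
    prob = cp.Problem(cp.Maximize(b @ y), [S >> 0])
    prob.solve()
    return y.value, prob.value

# Tiny example: maximize y subject to I - y*I >= 0 (PSD), i.e. y <= 1.
b = np.array([1.0])
C = np.eye(2)
A = [np.eye(2)]
print(solve_sdp(b, C, A))   # y close to 1.0
```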

2.2 Quasiconvex optimization problem


The α-sublevel set of a function f : R^n → R is defined as

    L_f(α) = {x ∈ R^n | f(x) ≤ α}

A function f is called quasiconvex if its domain and all its sublevel sets L_f(α), α ∈ R, are convex.
Let f_0 : R^n → R be quasiconvex, and let f_1, . . . , f_m : R^n → R be convex. The quasiconvex optimization problem refers to the optimization problem having the form of (Boyd & Vandenberghe 2004, Section 4.2.5)

    min { f_0(x) : f_i(x) ≤ 0 (i = 1, . . . , m), Ax = b }    (2)

where A ∈ R^{m×n} and b ∈ R^m.


The difference between convex and quasiconvex optimization problems is that a
quasiconvex optimization problem can have locally optimal solutions that are not
globally optimal. It is known that the global optimal solution of a quasiconvex opti-
mization problem can be obtained by using the bisection method in which some convex
optimization problems are solved (Boyd & Vandenberghe 2004).

3 Modeling uncertain trusses and mechanical constraints


Consider a linear elastic truss in the two- or three-dimensional space. Small rotations and small strains are assumed. Letting n^d denote the number of degrees of freedom of displacements, u ∈ R^{n^d} and f ∈ R^{n^d} denote the vectors of nodal displacements and external forces, respectively. The system of equilibrium equations can be written as

    Ku = f    (3)

where K ∈ S^{n^d} denotes the stiffness matrix of the truss. In (3) we explicitly deal with the model- and data-uncertainties of K and f, which shall be rigorously defined below.
Let a = (a_i) ∈ R^{n^m} denote the vector of member cross-sectional areas, where n^m denotes the number of members. For trusses, the stiffness matrix K is a function of a, and can be decomposed as

    K(a) = ∑_{i=1}^{n^m} a_i K_i = ∑_{i=1}^{n^m} a_i b_i b_iᵀ    (4)

where K_i ∈ S^{n^d} and b_i = (b_{ij}) ∈ R^{n^d} (i = 1, . . . , n^m) are constant matrices and constant vectors, respectively.
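
The decomposition (4) is straightforward to realize in code. The sketch below is ours and assumes a particular normalization of the member vectors, b_i = √(E/ℓ_i) times the scattered direction cosines, so that a_i b_i b_iᵀ reproduces the familiar (E a_i/ℓ_i) truss element matrix; the chapter itself leaves b_i abstract.

```python
# Hedged sketch of (4): K(a) = sum_i a_i b_i b_i^T = B diag(a) B^T for a
# pin-jointed truss. The normalization of b_i is an assumption (see lead-in).
import numpy as np

def b_vectors(nodes, members, dof, ndof, E):
    """Columns b_i of B = (b_1, ..., b_{n^m})."""
    B = np.zeros((ndof, len(members)))
    for i, (p, q) in enumerate(members):
        d = np.asarray(nodes[q], float) - np.asarray(nodes[p], float)
        length = np.linalg.norm(d)            # member length l_i
        c = d / length                        # direction cosines
        for node, sign in ((p, -1.0), (q, 1.0)):
            if dof[node] is not None:         # supported DOFs are condensed out
                B[dof[node], i] = sign * np.sqrt(E / length) * c
    return B

def stiffness(a, B):
    """Assemble K(a) = B diag(a) B^T."""
    return (B * a) @ B.T
```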

3.1 Uncertainty model
Assume that the uncertainty of K is caused only by the uncertainties of stiffness
of members, while the locations of nodes are assumed to be certain. We repre-
sent the uncertainties of stiffness of members through the uncertainties of member
cross-sectional areas a.
Let ã = (ã_i) ∈ R^{n^m} and f̃ = (f̃_j) ∈ R^{n^d} denote the nominal values (or the best estimates) of a and f, respectively. Let ζ_a = (ζ_{ai}) ∈ R^{n^m} and ζ_f = (ζ_{fj}) ∈ R^{n^d} denote the parameter vectors that are considered to be unknown but bounded. We describe the uncertainties of a and f by using ζ_a and ζ_f, respectively. Suppose that a and f depend on ζ_a and ζ_f affinely, i.e.

    a_i = ã_i + a_i^0 ζ_{ai},  i = 1, . . . , n^m    (5)
    f_j = f̃_j + f^0 ζ_{fj},  j = 1, . . . , n^d    (6)

Here, a^0 = (a_i^0) ∈ R^{n^m}_+ and f^0 ∈ R_+ are constant coefficients satisfying ã_i > a_i^0 (i = 1, . . . , n^m). Note that a_i^0 and f^0 represent the relative magnitudes of the uncertainties of a_i and f, respectively. Moreover, a^0 and f^0 make ζ_a and ζ_f dimensionless.
For p = 1, . . . , n^t, let m_p ∈ {1, . . . , n^d} and let T_p ∈ R^{m_p × n^d} be a constant matrix. For a fixed α ∈ R_+, define two sets Z_a(α) ⊂ R^{n^m} and Z_f(α) ⊂ R^{n^d} by

    Z_a(α) = {ζ_a ∈ R^{n^m} | α ≥ ‖ζ_a‖_∞}    (7)
    Z_f(α) = {ζ_f ∈ R^{n^d} | α ≥ ‖T_p ζ_f‖_2 (p = 1, . . . , n^t)}    (8)

Here, we choose T_1, . . . , T_{n^t} so that Z_f(α) becomes bounded for any α ∈ R_+. It is obvious that Z_a(α) is bounded.
Since a truss is an assemblage of nodes connected by independent members, the perturbation of the stiffness of one member from its nominal value does not affect that of the other members. Hence, in (7) we choose the l∞-norm, which represents the independent uncertainties of the scalars ζ_{a1}, . . . , ζ_{an^m}. On the other hand, the definition (8) permits us to suppose that correlations exist among some components of ζ_f by choosing T_p appropriately. Moreover, these matrices allow us to represent differences in the magnitudes of uncertainty among some components of ζ_f. For examples of T_p, see Example 3.1 and Section 6.3.
The uncertain parameters ζ_a and ζ_f in (5) and (6), respectively, are assumed to run through the uncertainty sets Z_a(α) and Z_f(α) defined by (7) and (8), i.e.

    ζ_a ∈ Z_a(α),  ζ_f ∈ Z_f(α)    (9)

For simplicity, we often write

    ζ = (ζ_aᵀ, ζ_fᵀ)ᵀ,  Z(α) = Z_a(α) × Z_f(α)

so that (9) is simplified as

    ζ ∈ Z(α)

Roughly speaking, ζ_a and ζ_f perturb around the origin with the “width” α; then a and f, respectively, vary around the center points ã and f̃. The greater the value of α, the greater the range of possible variations of a and f, and hence α is called the uncertainty parameter (Ben-Haim 2004). Note that the value of α is usually unknown in structures actually built. Throughout the following robustness analysis based on the info-gap theory, we do not use any knowledge of the actual range of uncertainty of a truss, which is regarded as one of the advantages of using the robustness function.
It is easy to check that the uncertainty model of a and f defined by (5)–(8) obeys the info-gap model (Ben-Haim 2006) of uncertainty. For given α ∈ R_+, ã, and f̃, let A(α, ã, f̃) ⊆ R^{n^m} × R^{n^d} be the set of all vectors (aᵀ, fᵀ)ᵀ satisfying (5)–(8). Then A(α, ã, f̃) satisfies the two basic axioms of the info-gap model:

(i) Nesting: 0 ≤ α_1 < α_2 implies A(α_1, ã, f̃) ⊂ A(α_2, ã, f̃);
(ii) Contraction: the info-gap model A(0, ã, f̃) coincides with a singleton set containing its center point, i.e. A(0, ã, f̃) = {(ãᵀ, f̃ᵀ)ᵀ}.

From the nesting axiom we see that the uncertainty set A(α, ã, f̃) becomes more inclusive as α becomes larger. The contraction axiom guarantees that the estimates ã and f̃ are correct at α = 0.

Example 3.1 (interval uncertainty of external load). The interval uncertainty model of the external load f is conventionally used in the so-called interval analysis of uncertain structures; see, e.g., Chen et al. (2002). We show in this example that the uncertainty model of f defined by (6), (8), and (9) includes the interval uncertainty model as a particular case. For each p = 1, . . . , n^d, let e_p ∈ R^{n^d} denote the pth row vector of the identity matrix I ∈ S^{n^d}, and let δ_p be a positive constant. Then, by putting

    T_p = (1/δ_p) e_pᵀ,  p = 1, . . . , n^d

with m_1 = · · · = m_{n^t} = 1 and n^t = n^d, Z_f(α) defined by (8) is reduced to

    Z_f(α) = { ζ_f ∈ R^{n^d} | α ≥ |ζ_{fj}|/δ_j, j = 1, . . . , n^d }    (10)

Consequently, the uncertainty of f obeying (6), (9), and (10) can alternatively be written as

    f_j ∈ [f̃_j − αf^0δ_j, f̃_j + αf^0δ_j],  j = 1, . . . , n^d

which coincides with the conventional interval uncertainty model.

3.2 Constraints on mechanical performance

Consider the mechanical performance of trusses that can be expressed by constraints in terms of displacements. Let Q_l ∈ S^{n^d}, q_l ∈ R^{n^d}, and γ_l ∈ R. Suppose that the constraints on mechanical performance can be written as the following quadratic inequalities in terms of u:

    uᵀ Q_l u + 2q_lᵀ u + γ_l ≤ 0,  l = 1, . . . , n^c    (11)

where n^c denotes the number of constraints. Suppose that Q_l, q_l, and γ_l are functions of r^c ∈ R^{n^r}. Here, r^c is regarded as the vector of parameters representing the level of performance, and n^r denotes the number of these parameters. Define H_l : R^{n^r} → S^{n^d + 1} by

    H_l(r^c) = − [ Q_l(r^c)  q_l(r^c) ; q_l(r^c)ᵀ  γ_l(r^c) ]

For a given vector r^c ∈ R^{n^r}, define a set F ⊆ R^{n^d} as

    F(r^c) = { u ∈ R^{n^d} | (uᵀ, 1) H_l(r^c) (uᵀ, 1)ᵀ ≥ 0  (l = 1, . . . , n^c) }    (12)

Then the constraint (11) is equivalently rewritten as

    u ∈ F(r^c)    (13)

Note that we have restricted ourselves to cases in which the constraints on the truss can be represented by a finite number of quadratic inequalities. However, various constraints of practical interest can be described via (12) and (13), because it is known that any single polynomial inequality can be converted into a system of (a finite number of) quadratic inequalities (Kojima & Tunçel 2000).

Example 3.2 (stress constraints). We show the explicit reformulation of the stress constraints into (13). Let σ_i(u) denote the stress of the ith member compatible with u, and let σ^c = (σ_i^c) ∈ R^{n^m}_+. Then the stress constraints may be written in the form

    |σ_i(u)| ≤ σ_i^c,  i = 1, . . . , n^m    (14)

Here, we assume for simplicity that the lower and upper bounds on the stress of each member have the common absolute value σ_i^c. Let E denote the elastic modulus of the truss members, and let ℓ_i denote the initial unstressed length of the ith member. From (4) we see

    σ_i(u) = (E/ℓ_i) b_iᵀ u    (15)

From (15) it follows that (14) is equivalently rewritten as u ∈ F(σ^c) with

    F(σ^c) = { u ∈ R^{n^d} | (uᵀ, 1) [ −(E/ℓ_i)² b_i b_iᵀ  0 ; 0ᵀ  (σ_i^c)² ] (uᵀ, 1)ᵀ ≥ 0  (i = 1, . . . , n^m) }

Thus, the stress constraints (14) can be embedded into the form of (13) with n^c = n^m. The parameters σ^c determine the level of performance required; hence, we have r^c = σ^c with n^r = n^m in (13).
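
As a small illustration of the constraint check (14), the following sketch (ours) evaluates member stresses from a displacement vector. Here n_i denotes the unit elongation vector of member i, an assumed normalization under which the elongation is n_iᵀu and the stress is (E/ℓ_i) n_iᵀu.

```python
# Hedged sketch: member stresses and the check |sigma_i(u)| <= sigma_i^c of (14).
import numpy as np

def stresses(u, N, lengths, E):
    """Column i of N is the unit elongation vector n_i of member i."""
    return (E / np.asarray(lengths)) * (N.T @ u)

def stress_ok(u, N, lengths, E, sigma_c):
    """True iff the stress constraints (14) hold for displacement u."""
    return bool(np.all(np.abs(stresses(u, N, lengths, E)) <= sigma_c))
```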

4 Definition of robustness function for truss


In this section, we show that the robustness function (Ben-Haim 2006) of trusses is
obtained as the optimal objective value of a mathematical programming problem with
infinitely many constraint conditions.
For simplicity, we often write K̃ = K(ã). By introducing auxiliary variables η ∈ R^{n^d} and using (5) and (6), the system (3) of uncertain equilibrium equations is reduced to

    K̃u + ∑_{i=1}^{n^m} a_i^0 ζ_{ai} K_i u = η,  ζ_a ∈ Z_a(α)    (16)
    f̃ + f^0 ζ_f = η,  ζ_f ∈ Z_f(α)    (17)

For a given α ∈ R_+, let U(α, ã) ⊂ R^{n^d} denote the set of all possible solutions to (16) and (17), which is defined by

    U(α, ã) = { u ∈ R^{n^d} | (16), (17) }    (18)

Recall that the (nominal) constraint has been introduced in (13). We next consider the robust counterpart of (13). Let α ∈ R_+ be fixed. Since the equilibrium equations (16) and (17) include the unknown parameters ζ = (ζ_a, ζ_f), the nodal displacement u is regarded as a function of ζ; namely, we may write u(ζ) for ζ ∈ Z(α). To define the robustness function, we require that the constraint (13) be satisfied by all possible realizations of u(ζ) when ζ takes any vector satisfying ζ ∈ Z(α). This requirement can be written as

    u(ζ) ∈ F(r^c),  ∀ζ ∈ Z(α)    (19)

By using the set U introduced in (18), the condition (19) is equivalently rewritten as

    u ∈ F(r^c),  ∀u ∈ U(α, ã)    (20)
For given ã ∈ R^{n^m} and r^c ∈ R^{n^r}, the robustness function α̂(ã, r^c) represents the largest α with which the robust constraint (20) is satisfied. Rigorously, the robustness function α̂ : R^{n^m} × R^{n^r} → (−∞, +∞] associated with the constraints (11) is defined by (Ben-Haim 2006, Chapter 3)

    α̂(ã, r^c) = α*  if Problem (22) is feasible;  α̂(ã, r^c) = 0  if Problem (22) is infeasible    (21)

where

    α* = max { α : u ∈ F(r^c), ∀u ∈ U(α, ã) }    (22)

Problem (22) is classified as a semi-infinite programming problem; by semi-infinite we mean an optimization problem having a finite number of scalar variables and infinitely many inequality constraints. Note that α* defined by (22) depends on the level r^c of the constraints on mechanical performance as well as on the nominal cross-sectional areas ã. Throughout the chapter, we assume U(0, ã) ⊆ F(r^c) for simplicity, and hence Problem (22) is feasible. In what follows, α̂(ã, r^c) is often abbreviated as α̂ or α̂(ã).
For two different vectors of design variables ã^1 ∈ R^{n^m} and ã^2 ∈ R^{n^m}, we say that ã^1 is more robust than ã^2 if α̂(ã^1, r^c) > α̂(ã^2, r^c). Let ζ^1 ∈ Z(α̂). If there exists an l ∈ {1, . . . , n^c} such that (11) becomes active at a given ζ^1, then we say that ζ^1 is a worst case. Note that there typically exists more than a single worst case. In particular, optimum truss designs maximizing the robustness function, or designed for a specified robustness function, often have many worst cases, as will be illustrated in Section 8.2.
Figure 17.1 illustrates the schematic relations among F(r^c), U(α, ã), and α̂ for various values of α. Figures 17.1(a) and 17.1(b), respectively, correspond to α_a < α̂ and α_b = α̂, where we see that the constraint u ∈ F(r^c) is satisfied for all possible u ∈ U(α, ã). The worst case corresponds to a point u ∈ U(α̂, ã) on the boundary of

Worst case
u2 u2 u2
(aa)

(ab) (ac)

u1 u1 u1
(a) aa  a
^ (b) ab  a
^ (c) ac  a^

Figure 17.1 Relation among F (r c ), U (α,F


a), and G
α for various α.
Maximum robustness design of trusses via semidefinite programming 479

F(r c ) in Figure 17.1(b). It is observed in Figure 17.1(c) that some solutions u ∈ U(αc ,F
a)
to the equilibrium equations violate the constraint u ∈ F(r c ), which implies αc >G α.

5 Illustrative example of robustness analysis


As an illustrative example, consider the two-bar truss shown in Figure 17.2. The nodes (b) and (c) are pin-supported at (x, y) = (0, 100.0) and (0, 0) in cm, respectively, while the node (a) is free, i.e. n^d = n^m = 2. The lengths of members (1) and (2), respectively, are 100.0 cm and 100√2 cm. The elastic modulus of each member is 200 GPa.
Let f = (f_1, f_2)ᵀ denote the external force vector applied at node (a). The nominal value f̃ of f is given as

    f̃ = (1000.0, 0)ᵀ kN

The vector of nominal cross-sectional areas is denoted by ã = (ã_1, ã_2)ᵀ, and as the first design it is given by

    ã^1 = (20.0, 30.0)ᵀ cm²

Consider the uncertainty model introduced in Section 3.1. In accordance with (5) and (6), define the uncertainties of a and f as

    a_i = ã_i + a_i^0 ζ_{ai},  i = 1, 2;  ζ_a ∈ Z_a(α)    (23)
    f_j = f̃_j + f^0 ζ_{fj},  j = 1, 2;  ζ_f ∈ Z_f(α)    (24)

where the coefficients of uncertainty are

    a_i^0 = 5.0 cm², i = 1, 2;  f^0 = 200.0 kN    (25)

For a given α, the uncertainty sets Z_a(α) and Z_f(α) are defined as

    Z_a(α) = {ζ_a ∈ R² | α ≥ |ζ_{ai}|, i = 1, 2}    (26)
    Z_f(α) = {ζ_f ∈ R² | α ≥ ‖ζ_f‖_2}    (27)

[Figure 17.2 2-bar truss.]


Here, we have put n^t = 1, T_1 = I, and m_1 = 2 in (8). For simplicity, we often write ζ ∈ Z(α) if ζ_a ∈ Z_a(α) and ζ_f ∈ Z_f(α). Let σ_1 and σ_2 denote the stresses of members (1) and (2), respectively. Consider the stress constraints for all members defined by (14) with σ_1^c = σ_2^c = 1.0 GPa, i.e. the conditions

    |σ_i(u)| ≤ σ_i^c,  i = 1, 2    (28)

should be satisfied for any ζ ∈ Z(α).


As an example, putting α = 1.0, we randomly generate a number of ζ satisfying
ζ ∈ Z(α) with (23) and (24). The corresponding generated a and f defined by (23) and
(24) are shown in Figures 17.3 and 17.4, respectively.
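
The sampling experiment is easy to reproduce; the sketch below (ours) draws ζ_a uniformly from the l∞-ball (26) and ζ_f uniformly from the Euclidean disc (27), and maps the samples to (a, f) through (23) and (24) with the numerical data of this example.

```python
# Hedged sketch: random sampling of zeta in Z(alpha) for the two-bar truss.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0
a_nom, a0 = np.array([20.0, 30.0]), np.array([5.0, 5.0])   # cm^2, from (25)
f_nom, f0 = np.array([1000.0, 0.0]), 200.0                 # kN, from (25)

def sample_zeta(alpha, rng):
    zeta_a = rng.uniform(-alpha, alpha, size=2)    # ||zeta_a||_inf <= alpha, (26)
    v = rng.normal(size=2)
    r = alpha * np.sqrt(rng.uniform())             # uniform over the disc
    zeta_f = r * v / np.linalg.norm(v)             # ||zeta_f||_2 <= alpha, (27)
    return zeta_a, zeta_f

samples = [sample_zeta(alpha, rng) for _ in range(1000)]
a_samples = np.array([a_nom + a0 * za for za, _ in samples])   # (23)
f_samples = np.array([f_nom + f0 * zf for _, zf in samples])   # (24)
```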

[Figure 17.3 The cross-sectional areas a for randomly generated ζ_a ∈ Z_a(α) with α = 1.0.]

[Figure 17.4 The external forces f for randomly generated ζ_f ∈ Z_f(α) with α = 1.0.]
The axial forces q_1 and q_2 of members (1) and (2), respectively, are written as

    q_1 = f_1 − f_2,  q_2 = √2 f_2    (29)

Note that q_1 and q_2 are independent of a, because the truss is statically determinate. From (24), (27), and (29), the maximum value of q_1 under the uncertain external force f is obtained as

    max{q_1(ζ) : ζ ∈ Z(α)} = f̃_1 + √2 f^0 α    (30)

The minimum value of q_1 and the maximum and minimum values of q_2 are obtained similarly. Figure 17.5 depicts the variation of (q_1, q_2) for randomly generated ζ ∈ Z(α) with α = 1.0, and Figure 17.6 shows the corresponding variation of (u_1, u_2).
Figure 17.7 shows the stress states (σ_1, σ_2) computed from randomly generated ζ ∈ Z(α) with α = 1.0. By using (23), (26), and (30), the maximum value σ_1^max of σ_1 among the possible realizations of the uncertain parameters ζ can be computed analytically as

    σ_1^max(α) := max{σ_1(ζ) : ζ ∈ Z(α)} = max{q_1(ζ) : ζ ∈ Z(α)} / min{a_1(ζ) : ζ ∈ Z(α)} = (f̃_1 + √2 f^0 α) / (ã_1 − a_1^0 α)    (31)

Similarly, we obtain

    σ_1^min(α) := min{σ_1(ζ) : ζ ∈ Z(α)} = (f̃_1 − √2 f^0 α) / (ã_1 + a_1^0 α)    (32)

[Figure 17.5 The axial forces q for randomly generated (ζ_a, ζ_f) ∈ Z_a(α) × Z_f(α) with α = 1.0.]
[Figure 17.6 The nodal displacements u for randomly generated (ζ_a, ζ_f) ∈ Z_a(α) × Z_f(α) with α = 1.0.]

[Figure 17.7 Stress states σ of the 2-bar truss with ã = ã^1 for randomly generated (ζ_a, ζ_f) ∈ Z_a(α) × Z_f(α) with α = 1.0.]


    σ_2^max(α) := max{σ_2(ζ) : ζ ∈ Z(α)} = √2 f^0 α / (ã_2 − a_2^0 α)    (33)
    σ_2^min(α) := min{σ_2(ζ) : ζ ∈ Z(α)} = −σ_2^max(α)    (34)

Substituting α = 1.0 and ã = ã^1 into (31)–(34) results in

    σ_1^max = 855.2 MPa,  σ_1^min = 286.9 MPa,  σ_2^max = −σ_2^min = 113.1 MPa    (35)
[Figure 17.8 Stress states σ of the 2-bar truss with ã = ã^1 for randomly generated (ζ_a, ζ_f) ∈ Z_a(α) × Z_f(α) with α = α̂(ã^1) = 1.277.]

It is verified by Figure 17.7 and (35) that the stress constraints (28) are always inactive for any ζ ∈ Z(α) with α = 1.0. This implies that the robustness function α̂(ã^1, σ^c) is greater than 1.0.
Observe that the definition (21) (with (22)) of the robustness function can alternatively be rewritten as

    α̂(ã^1, σ^c) = max{α : σ_i^max(α) ≤ σ_i^c, σ_i^min(α) ≥ −σ_i^c (i = 1, 2)}    (36)

By substituting ã = ã^1 into (31)–(34), we see that σ_1^max(α) > σ_2^max(α) and σ_1^max(α) > |σ_1^min(α)| hold for any α ≥ 0. Hence, (36) implies that α̂(ã^1, σ^c) satisfies the condition

    σ_1^max(α̂) = σ_1^c

from which we obtain

    α̂(ã^1, σ^c) = (σ_1^c ã_1 − f̃_1) / (σ_1^c a_1^0 + √2 f^0) = 1.277

The stress states (σ_1, σ_2) computed from randomly generated ζ ∈ Z(α) with α = α̂(ã^1, σ^c) are shown in Figure 17.8. It is observed from Figure 17.8 that the stress constraints (28) are always satisfied for the generated ζ, and that the worst case corresponds to the case in which σ_1(ζ) = σ_1^c holds. The stress constraints on member (2) are always inactive.
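
A quick numerical check of this closed-form result is given below; the sketch (ours) evaluates σ_1^max(α) from (31) and the analytic α̂(ã^1, σ^c), confirming that the stress constraint of member (1) is active at the computed robustness level.

```python
# Hedged sketch: verify alpha_hat for the two-bar truss (units: kN, cm).
import numpy as np

f1_nom, f0 = 1000.0, 200.0      # nominal load and uncertainty coefficient
a1_nom, a1_0 = 20.0, 5.0        # nominal area and uncertainty coefficient
sigma1_c = 100.0                # 1.0 GPa = 100 kN/cm^2

def sigma1_max(alpha):          # equation (31)
    return (f1_nom + np.sqrt(2.0) * f0 * alpha) / (a1_nom - a1_0 * alpha)

alpha_hat = (sigma1_c * a1_nom - f1_nom) / (sigma1_c * a1_0 + np.sqrt(2.0) * f0)
print(alpha_hat)                # ~1.277
print(sigma1_max(alpha_hat))    # ~100.0, i.e. the constraint is active
```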
We next consider the nominal cross-sectional areas

    ã^2 = (31.7, 21.7)ᵀ cm²

as an alternative truss design. Note that ã^1 and ã^2 share the same structural volume, and at ã = ã^2 the condition

    σ_1^max(α) = σ_2^max(α)    (37)

is satisfied. Thus, the robustness function α̂(ã^2, σ^c) now satisfies the conditions

    σ_1^max(α̂) = σ_1^c,  σ_2^max(α̂) = σ_2^c

from which we obtain

    α̂(ã^2, σ^c) = 2.774

For the truss defined by ã = ã^2, Figure 17.9 depicts the stress states (σ_1, σ_2) computed from randomly generated ζ ∈ Z(α) with α = α̂(ã^2, σ^c). From Figure 17.9 it is seen that the constraints σ_1 ≤ σ_1^c, σ_2 ≤ σ_2^c, and σ_2 ≥ −σ_2^c become active in the worst cases, i.e. the constraints on both members can happen to be active.
It is of interest to note that the robustness function of the truss design ã^2 is larger than twice that of ã^1, in spite of the fact that ã^1 and ã^2 have the same structural volume. This implies that the truss defined by ã^2 violates the constraints only at a larger ambient uncertainty compared with ã^1. Thus, we may naturally conclude that the truss design ã^2 is more robust than ã^1.
Unfortunately, if a truss has moderately many degrees of freedom and/or the uncertainty set has a complicated structure, it is difficult to find the worst-case parameters and the corresponding active constraint conditions. This is the crucial difficulty in evaluating the robustness function, and it motivates us to propose a numerically tractable formulation for finding a lower bound on the robustness function in the following section.

[Figure 17.9 Stress states σ of the 2-bar truss with ã = ã^2 for randomly generated (ζ_a, ζ_f) ∈ Z_a(α) × Z_f(α) with α = α̂(ã^2) = 2.774.]
6 Computation of robustness function


In this section, we propose an approximation algorithm for Problem (22), which provides a lower bound on the robustness function α̂(ã, r^c). We also show that the exact value of the robustness function can be obtained by solving an SDP problem if a is certain.

6.1 Lower bounds of robustness function

We start by embedding (16) and (17) into a finite number of quadratic inequalities. Define the matrix B ∈ R^{n^d × n^m} by

    B = (b_1, . . . , b_{n^m})

where b_i has been introduced in (4). In what follows, we assume n^d < n^m, which is usually satisfied for moderately large trusses. Define n^n by

    n^n = n^m − rank B    (38)

where rank B denotes the row rank of B. Then we see n^n > 0.
Let B† ∈ R^{n^m × n^d} denote the pseudo-inverse of B. We denote by B⊥ ∈ R^{n^m × n^n} a basis for the nullspace of B, where the nullspace of B is the set of all vectors β ∈ R^{n^m} satisfying Bβ = 0. Letting ν ∈ R^{n^n}, define ξ ∈ R^{n^n + 2n^d + 1} and H̄_l(r^c) ∈ S^{n^n + 2n^d + 1} (l = 1, . . . , n^c) by

    ξ = (ν, η, u, 1)

    H̄_l(r^c) = [ O  O ; O  H_l(r^c) ]

so that

    ξᵀ H̄_l ξ = (uᵀ, 1) H_l (uᵀ, 1)ᵀ

holds, where H_l(r^c) has been introduced in (12). Let B†_{i,·} and B⊥_{i,·} denote the ith rows of the matrices B† and B⊥, respectively; note that B†_{i,·} and B⊥_{i,·} are row vectors. Define Ψ_i(α²) ∈ S^{n^n + 2n^d + 1} (i = 1, . . . , n^m) and Φ_p(α²) ∈ S^{n^n + 2n^d + 1} (p = 1, . . . , n^t), partitioned conformally with ξ = (ν, η, u, 1), as

    Ψ_i(α²) = α² v_i v_iᵀ − c_iᵀ c_i,   v_i = (0, 0, a_i^0 b_i, 0),   c_i = (−B⊥_{i,·}, B†_{i,·}, −B†_{i,·} K̃, 0)

    Φ_p(α²) = α² w wᵀ − M_pᵀ M_p,   w = (0, 0, 0, f^0),   M_p = (O, T_p, O, −T_p f̃)

so that ξᵀ Ψ_i(α²) ξ = (a_i^0 α)² (b_iᵀ u)² − [B†_{i,·}(K̃u − η) + B⊥_{i,·} ν]² and ξᵀ Φ_p(α²) ξ = (f^0 α)² − ‖T_p(η − f̃)‖_2².
Proposition 6.1. The conditions (16) and (17) hold if and only if ξ satisfies

    ξᵀ Ψ_i(α²) ξ ≥ 0,  i = 1, . . . , n^m    (39)
    ξᵀ Φ_p(α²) ξ ≥ 0,  p = 1, . . . , n^t    (40)

Proof. By introducing w = (w_i) ∈ R^{n^m}, we see that (16) is equivalently rewritten as

    Bw = K̃u − η    (41)
    w_i = ζ_{ai}(−a_i^0 b_iᵀ u),  α ≥ |ζ_{ai}|,  i = 1, . . . , n^m    (42)

From the definitions of B† and B⊥, we see that any solution to (41) can be written as

    w = B†(K̃u − η) + B⊥ν    (43)

with ν ∈ R^{n^n}. On the other hand, the condition (42) is equivalent to

    w_i² ≤ (a_i^0 α)² (b_iᵀ u)²,  i = 1, . . . , n^m    (44)

Consequently, by using (43) and (44), we see that (41) and (42) are equivalent to

    (a_i^0 α)² (b_iᵀ u)² − [B†_{i,·}(K̃u − η) + B⊥_{i,·} ν]² ≥ 0,  i = 1, . . . , n^m

Thus, the condition (16) is equivalent to (39). From the definition (8) of Z_f it follows that ζ_f ∈ Z_f if and only if ζ_f satisfies

    α² ≥ ζ_fᵀ T_pᵀ T_p ζ_f,  p = 1, . . . , n^t

Hence, the condition (17) can be equivalently embedded into the following quadratic inequalities in terms of η:

    (f^0 α)² ≥ (η − f̃)ᵀ T_pᵀ T_p (η − f̃),  p = 1, . . . , n^t

where we used (17) to write η − f̃ = f^0 ζ_f. Consequently, (17) is equivalent to (40). ∎
Let ρ ∈ R^{n^t n^c} and τ ∈ R^{n^m n^c} be

    ρ = (ρ_{11}, . . . , ρ_{n^t 1}, . . . , ρ_{1 n^c}, . . . , ρ_{n^t n^c})ᵀ
    τ = (τ_{11}, . . . , τ_{n^m 1}, . . . , τ_{1 n^c}, . . . , τ_{n^m n^c})ᵀ

The following proposition, which plays a key role in constructing an approximation of Problem (22), shows a relaxation of the infinitely many constraints by a finite number of constraints:
Proposition 6.2. The implication

    u ∈ U(α, ã)  ⟹  u ∈ F(r^c)    (45)

holds if there exist ρ and τ satisfying

    H̄_l(r^c) − ∑_{p=1}^{n^t} ρ_{pl} Φ_p(α²) − ∑_{i=1}^{n^m} τ_{il} Ψ_i(α²) ⪰ O,  l = 1, . . . , n^c    (46)
    ρ ≥ 0,  τ ≥ 0    (47)

Proof. From Proposition 6.1 it follows that u ∈ U(α, ã) if and only if (39) and (40) are satisfied. Observe that the constraint (13) is reduced to

    ξᵀ H̄_l ξ ≥ 0,  l = 1, . . . , n^c

Consequently, the implication (45) holds if and only if the implication

    ξᵀ Φ_p ξ ≥ 0 (p = 1, . . . , n^t),  ξᵀ Ψ_i ξ ≥ 0 (i = 1, . . . , n^m)  ⟹  ξᵀ H̄_l ξ ≥ 0    (48)

holds for each l = 1, . . . , n^c. The assertion of this proposition is obtained by applying Lemmas A.1 and A.2 (ii) to (48). ∎

Proposition 6.2 implies that the finite set of constraints (46) and (47), in a finite number of variables, is a sufficient condition for the infinitely many constraints of Problem (22). A lower bound for Problem (22) is then naturally constructed as follows:

Lemma 6.3. Consider the following problem in the variables (t, ρ, τ) ∈ R × R^{n^t n^c} × R^{n^m n^c}:

    t* := max_{t,ρ,τ} { t : H̄_l(r^c) − ∑_{p=1}^{n^t} ρ_{pl} Φ_p(t) − ∑_{i=1}^{n^m} τ_{il} Ψ_i(t) ⪰ O (l = 1, . . . , n^c), ρ ≥ 0, τ ≥ 0 }    (49)

Then

    α̂(ã, r^c)² ≥ t*

Proof. Recall that the robustness function α̂ is defined by (21) with Problem (22). It follows from Proposition 6.2 that the constraints of Problem (22) are satisfied if the constraints of Problem (49) are satisfied. This completes the proof. ∎

6.2 Algorithm for computing lower bounds

Lemma 6.4. Problem (49) is a quasiconvex programming problem.

Proof. For a given t, define a set T(t) by

    T(t) = { (ρ, τ) ∈ R^{(n^t + n^m) n^c} | H̄_l − ∑_{p=1}^{n^t} ρ_{pl} Φ_p(t) − ∑_{i=1}^{n^m} τ_{il} Ψ_i(t) ⪰ O (l = 1, . . . , n^c), ρ ≥ 0, τ ≥ 0 }

By regarding t ∈ R as an auxiliary variable, Problem (49) is equivalently rewritten as

    min_{t,ρ,τ} { −t : (ρ, τ) ∈ T(t) }    (50)

Observe that T(t) is defined by n^c linear matrix inequalities and (n^t + n^m)n^c linear inequalities. Hence, T(t) is convex for any given t ∈ R. This implies that Problem (50) is a quasiconvex optimization problem. ∎

Let I denote the identity matrix of appropriate size. For a fixed t, consider the following problem in the variables (s, ρ, τ) ∈ R × R^{n^t n^c} × R^{n^m n^c}:

    s* := min_{s,ρ,τ} { s : H̄_l(r^c) − ∑_{p=1}^{n^t} ρ_{pl} Φ_p(t) − ∑_{i=1}^{n^m} τ_{il} Ψ_i(t) + sI ⪰ O (l = 1, . . . , n^c), ρ ≥ 0, τ ≥ 0 }    (51)

Problem (51) corresponds to a convex feasibility problem for Problem (49) at the given level t.
Lemma 6.4 guarantees that the following bisection method solves Problem (49):

Algorithm 6.5 (bisection method for Problem (49)).

Step 0: Choose t^0 and t̄^0 satisfying 0 ≤ t^0 ≤ t* ≤ t̄^0, and a small tolerance ε > 0. Set k = 0.
Step 1: If t̄^k − t^k ≤ ε, then stop. Otherwise, set t = (t^k + t̄^k)/2.
Step 2: Find an optimal solution (s*, ρ*, τ*) of the SDP problem (51).
Step 3: If s* ≤ 0, then set t^{k+1} = t and t̄^{k+1} = t̄^k. Otherwise, set t̄^{k+1} = t and t^{k+1} = t^k.
Step 4: Set k ← k + 1, and go to Step 1.

Algorithm 6.5 finds the global optimal value t* of Problem (49) by solving a sequence of SDP problems; exactly ⌈log_2((t̄^0 − t^0)/ε)⌉ iterations are required before the algorithm terminates, where ⌈γ⌉ denotes the minimum integer that is not smaller than γ ∈ R. From Lemma 6.3 it follows that (t*)^{1/2} is a lower bound for the robustness function α̂(ã, r^c). At Step 0, we may simply choose t^0 = 0 and a sufficiently large t̄^0. At Step 2 of each iteration, we solve Problem (51), which can be embedded into the standard form of the SDP problem (1) with m = n^c(n^t + n^m + 1) + 1 and n = n^c(n^n + 2n^d + n^m + n^t + 1). It should be emphasized that a global optimal solution of the SDP problem (51) can be obtained by the primal-dual interior-point method, where the number of arithmetic operations is bounded by a polynomial in m and n (Wolkowicz et al. 2000).
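
A compact skeleton of Algorithm 6.5 is sketched below; the feasibility oracle solve_51 is a placeholder for an SDP solver applied to Problem (51) at level t (realized with SeDuMi in the examples of Section 8), and the routine returns a certified value t with α̂ ≥ √t by Lemma 6.3.

```python
# Hedged skeleton of the bisection method (Algorithm 6.5).
import math

def bisection(solve_51, t_lo=0.0, t_hi=10.0, eps=1e-4):
    """solve_51(t) must return the optimal s* of the feasibility SDP (51)."""
    n_iter = math.ceil(math.log2((t_hi - t_lo) / eps))   # iteration bound
    for _ in range(n_iter):
        t = 0.5 * (t_lo + t_hi)      # Step 1
        s_star = solve_51(t)         # Step 2
        if s_star <= 0.0:            # Step 3: level t is feasible for (49)
            t_lo = t
        else:
            t_hi = t
    return t_lo                      # lower bound: alpha_hat >= sqrt(t_lo)
```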
6.3 Special case

The remainder of this section is devoted to the case in which a is certain. The following result shows that, under some assumptions on the uncertainty set, the robustness function α̂ can be obtained by solving an SDP problem:

Proposition 6.6. By putting n^t = 1 in (8), let Z_f be

    Z_f(α) = {ζ ∈ R^{n^d} | α ≥ ‖T_1 ζ‖_2}    (52)

and let

    a_i^0 = 0,  i = 1, . . . , n^m    (53)

in (5). Assume that H_l ⋡ O (l = 1, . . . , n^c), i.e. no H_l is positive semidefinite. Define Ω_0(α², ã) ∈ S^{n^d + 1} by

    Ω_0(α², ã) = α² (0ᵀ, f^0)ᵀ (0ᵀ, f^0) − (T_1 K̃, −T_1 f̃)ᵀ (T_1 K̃, −T_1 f̃)

Then the robustness function α̂(ã, r^c) is obtained by solving the following SDP problem in the variables (t, µ) ∈ R × R^{n^c} with µ = (µ_l):

    α̂(ã, r^c)² = max_{t,µ} { t : µ_l H_l(r^c) − Ω_0(t, ã) ⪰ O (l = 1, . . . , n^c), µ ≥ 0 }    (54)

Proof. From (52) and (53), the uncertain equilibrium equations (16) and (17) are reduced to

    K̃u = f̃ + f^0 ζ_f,  α ≥ ‖T_1 ζ_f‖_2

Hence, u ∈ U(α, ã) if and only if

    [T_1(K̃u − f̃)]ᵀ [T_1(K̃u − f̃)] ≤ (f^0 α)²

from which we obtain

    u ∈ U(α, ã)  ⟺  (uᵀ, 1) Ω_0(α², ã) (uᵀ, 1)ᵀ ≥ 0    (55)

By using (55) and Lemmas A.2 (i) and A.1, we see that the implication (45) holds if and only if

    ∃ρ_l ≥ 0 such that H_l ⪰ ρ_l Ω_0(α², ã),  l = 1, . . . , n^c    (56)

Note that H_l ⋡ O implies that ρ_l = 0 does not satisfy (56). Hence, by putting µ_l = 1/ρ_l, l = 1, . . . , n^c, the condition (56) is reduced to

    ∃µ_l ≥ 0 such that µ_l H_l − Ω_0(α², ã) ⪰ O,  l = 1, . . . , n^c

Consequently, Problem (22) is reduced to

    α̂(ã, r^c) = max_{α,µ} { α : µ_l H_l(r^c) − Ω_0(α², ã) ⪰ O (l = 1, . . . , n^c), µ ≥ 0 }    (57)

We see in Problem (57) that maximizing α is equivalent to maximizing α², which concludes the proof. ∎
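
In this special case, Problem (54) is a single linear SDP in (t, µ), because Ω_0(t, ã) is affine in t. The sketch below (ours, in Python/cvxpy rather than the SeDuMi/MATLAB setting of Section 8) assumes the decomposition Ω_0(t) = Ω_0^const + t Ω_0^lin has already been formed from K̃, f̃, f^0, and T_1.

```python
# Hedged sketch of the SDP (54): alpha_hat^2 = max t subject to
# mu_l H_l - Omega0(t) PSD, mu >= 0.
import cvxpy as cp

def robustness_squared(H, Omega0_const, Omega0_lin):
    t = cp.Variable()
    mu = cp.Variable(len(H), nonneg=True)
    Omega0 = Omega0_const + t * Omega0_lin        # affine in t
    cons = [mu[l] * H[l] - Omega0 >> 0 for l in range(len(H))]
    prob = cp.Problem(cp.Maximize(t), cons)
    prob.solve()
    return t.value                                # equals alpha_hat^2 by Prop. 6.6
```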
7 Maximization of robustness function

Throughout this section, we assume that the assumptions of Proposition 6.6 hold, i.e. only f possesses the uncertainty defined by (6) and (52), and a = ã always holds. In Section 5, we observed through an analytical example that the truss with the larger robustness function is considered to be more robust. In this section we attempt to find the ã which maximizes the robustness function α̂(ã, r^c). We call this structural optimization problem the maximization problem of the robustness function.
Consider the conventional constraints on ã dealt with in usual structural optimization problems, e.g. upper and lower bound constraints on ã and an upper bound constraint on the structural volume. Letting g : R^{n^m} → R^{n^g} be a smooth function, we assume that these constraints can be written in the form

    g(ã) ≥ 0    (58)

Note that g(ã) involves neither u nor f. For the given r^c and g, the maximization problem of the robustness function is formulated as

    max_{ã} { α̂(ã, r^c) : g(ã) ≥ 0 }    (59)

In what follows, the argument r^c is often omitted for brevity.


Assume that there exists ã ∈ R^{n^m} satisfying α̂(ã, r^c) > 0 and g(ã) ≥ 0. Then the objective function of Problem (59) can be replaced by α̂(ã, r^c)² without changing the optimal solution. From this observation and Proposition 6.6 it follows that Problem (59) is equivalent to the following problem:

    max_{t,µ,ã} { t : µ_l H_l − Ω_0(t, ã) ⪰ O (l = 1, . . . , n^c), µ ≥ 0, g(ã) ≥ 0 }    (60)

Problem (60) is sometimes referred to as a nonlinear semidefinite programming problem (Kanzow et al. 2005). To solve Problem (60), we next propose a sequential SDP method, which is an extension of the successive linearization method for standard nonlinear programming problems. Let DG(x̄) denote the derivative of a smooth mapping G : R^m → S^n at x̄ = (x̄_i) ∈ R^m, defined such that DG(x̄)h is the linear function of h = (h_i) ∈ R^m given by

    DG(x̄)h = ∑_{i=1}^{m} (∂G(x)/∂x_i)|_{x=x̄} h_i

The following sequential SDP method solves Problem (60) on the basis of successive linearization:
Algorithm 7.1 (sequential SDP method for Problem (60)).

Step 0: Choose ã^0 satisfying g(ã^0) ≥ 0 and α̂(ã^0, r^c) > 0; choose c_max ≥ c_min > 0, c^0 ∈ [c_min, c_max], and a small tolerance ε > 0. Set k = 0.
Step 1: Find an optimal solution (t^k, µ^k) of Problem (54) by setting ã = ã^k.
Step 2: Find the (unique) optimal solution (Δt^k, Δµ^k, Δã^k) ∈ R × R^{n^c} × R^{n^m} of the SDP problem

    max_{Δt,Δµ,Δã}  Δt − (c^k/2) ‖(Δt, Δµ, Δã)‖_2²
    subject to      F_l^k(Δt, Δµ, Δã) ⪰ O,  l = 1, . . . , n^c,    (61)
                    Δµ + µ^k ≥ 0,
                    ∇g(ã^k)ᵀ Δã + g(ã^k) ≥ 0

where

    F_l^k(Δt, Δµ, Δã) = (Δµ_l + µ_l^k) H_l − DΩ_0(t^k, ã^k)(Δt, Δãᵀ)ᵀ − Ω_0(t^k, ã^k)

If ‖(Δt^k, Δµ^k, Δã^k)‖_2 ≤ ε, then stop.
Step 3: Set ã^{k+1} = ã^k + Δã^k.
Step 4: Choose c^{k+1} ∈ [c_min, c_max]. Set k ← k + 1, and go to Step 1.

Essentially, Algorithm 7.1 solves the nonlinear SDP problem (60) by successively approximating it by SDP problems. In Steps 1 and 2, we solve the SDP problems (54) and (61) by using the primal-dual interior-point method (Wolkowicz et al. 2000).
The following proposition shows the global convergence property of Algorithm 7.1:

Proposition 7.2 (Kanno & Takewaki 2006b). Suppose that f̃ ≠ 0 and that Problem (61) is strictly feasible at each iteration. Let {(t^k, µ^k, ã^k)} be a sequence generated by Algorithm 7.1. Then any accumulation point of {(t^k, µ^k, ã^k)} is a stationary point of Problem (60).
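
A high-level skeleton of the iteration is sketched below; solve_54 and solve_61 are placeholders for the conic-solver calls realizing Steps 1 and 2, and the fixed regularization c and tolerance eps mirror the choices reported in Section 8.2.

```python
# Hedged skeleton of the sequential SDP method (Algorithm 7.1).
import numpy as np

def sequential_sdp(a0, solve_54, solve_61, c=1e-5, eps=0.1, max_iter=200):
    a = np.asarray(a0, dtype=float)
    t = None
    for k in range(max_iter):
        t, mu = solve_54(a)                    # Step 1: SDP (54) at a^k
        dt, dmu, da = solve_61(t, mu, a, c)    # Step 2: linearized SDP (61)
        step = np.concatenate(([dt], dmu, da))
        if np.linalg.norm(step) <= eps:        # stopping test of Step 2
            break
        a = a + da                             # Step 3: design update
    return a, t                                # design and alpha_hat^2
```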

8 Numerical examples

Lower bounds on the robustness function are computed for various trusses by using Algorithm 6.5. Moreover, optimal designs with maximal robustness function are computed for various trusses by using Algorithm 7.1 in the case where only the external forces possess uncertainties. In these algorithms, the SDP problems are solved by using SeDuMi Ver. 1.05 (Sturm 1999), which implements the primal-dual interior-point method for linear programming problems over symmetric cones. Computation has been carried out with MATLAB Ver. 6.5.1 (The MathWorks, Inc. 2002).

8.1 20-bar truss

Consider the plane truss illustrated in Figure 17.10, where n^d = 16 and n^m = 20. Nodes (a) and (b) are pin-supported. The lengths of the members in the x- and y-directions are 100 cm and 50 cm, respectively. The elastic modulus of each member is 200 GPa. We assume that the cross-sectional areas of members (1)–(5) have uncertainty, whereas those of members (6)–(20) are certain. The external loads applied at nodes (e)–(j) have uncertainty; no external loads are applied at nodes (c) and (d). The nominal cross-sectional areas are ã_i = 20.0 cm² (i = 1, . . . , 20).

[Figure 17.10 20-bar truss. Upper part: uncertain loads and certain stiffness; lower part: uncertain stiffness and certain loads.]

As the nominal external loads, we consider the following two cases:

(Case 1): (200.0, 0) kN, (500.0, 0) kN, (700.0, −400.0) kN, and (0, −400.0) kN are applied at the nodes (e), (g), (i), and (j), respectively;
(Case 2): (200.0, 0) kN, (500.0, 0) kN, (700.0, −700.0) kN, and (0, −700.0) kN are applied at the nodes (e), (g), (i), and (j), respectively.

The coefficients of uncertainty in (5) and (6) are a_i^0 = 2.5 cm² (i = 1, . . . , 5) and f^0 = 50.0 kN. The uncertainty set for ζ_f is given by (52) with T_1 = I. Let u^(j) = (u_x^(j), u_y^(j))ᵀ denote the nodal displacement vector of node (j). As the constraint (13), we consider the following conditions:

    |u_x^(j)| ≤ 5.0 cm    (62)
    |u_y^(j)| ≤ 2.0 cm    (63)

The lower bound of the robustness function α̂(ã, u^c) is computed by using Algorithm 6.5 for each case. We set t^0 = 0, t̄^0 = 10.0, and ε = 10⁻⁴. The lower bounds (t*)^{1/2} are obtained as 2.672 and 2.412 for (Case 1) and (Case 2), respectively, after 17 SDP problems are solved in each case. Thus, the robustness function depends on the nominal external loads.
[Figure 17.11 Nodal displacements of node (j) in (Case 1) for randomly generated ζ ∈ Z(α) with α = 2.6717.]

[Figure 17.12 Nodal displacements of node (j) in (Case 2) for randomly generated ζ ∈ Z(α) with α = 2.4124.]

We next randomly generate a number of ζ_a and ζ_f satisfying (7) and (8), respectively, by putting α = (t*)^{1/2}, and compute the corresponding nodal displacements. Figures 17.11 and 17.12 depict the obtained displacements of node (j) for (Case 1) and (Case 2), respectively. It is observed from Figures 17.11 and 17.12 that the constraints (62) and (63) are satisfied for all generated (ζ_a, ζ_f), which verifies that the obtained values (t*)^{1/2} are indeed lower bounds of α̂. In (Case 1), from Figure 17.11 we may conjecture that the worst case corresponds to the case in which the constraint (62) becomes active. On the other hand, in (Case 2), Figure 17.12 shows that the worst case corresponds to the case in which the constraint (63) becomes active. In both cases, at least one of |u_x^(j)| and |u_y^(j)| can come very close to its bound. This implies that Algorithm 6.5 provides sufficiently tight lower bounds, i.e. the obtained value (t*)^{1/2} is very close to the exact value of α̂ in each case.

8.2 29-bar truss

Consider the truss illustrated in Figure 17.13, where n^d = 20 and n^m = 29. Nodes (a) and (b) are pin-supported. The lengths of the members in both the x- and y-directions are 50.0 cm. The elastic modulus of each member is 200 GPa. Suppose that the force (0, −10.0) kN is applied at nodes (c) and (d) as the nominal external load f̃. The uncertainty set for ζ_f is given by (52) with T_1 = I. We put f^0 = 1.0 kN in (6). The member cross-sectional areas a are assumed to be certain. Hence, we can compute the exact value of the robustness function by using Proposition 6.6. Consider the stress constraints (14) with σ_i^c = 500 MPa for each member.
The maximization problem (60) of the robustness function is solved by using Algorithm 7.1. As the constraints (58) in Problem (59), we consider the conventional constraint on the structural volume as well as nonnegativity constraints on ã; namely, g is defined as

    g(ã) = ( ã ; V̄ − V(ã) )

[Figure 17.13 29-bar truss.]



Here, V(F a) denotes the total structural volume of a truss, which is a linear function
of a, and ng = nm + 1. The initial solution is given as Fa0i = 20.0 cm2 (i = 1, . . . , nm ). We
first compute the robustness function at the initial solution F a =Fa0 by using Proposition
6.6. Since only the external load possesses the uncertainty, the robustness function is
obtained as G α(Fa0 ) = 0.7261 by solving only one SDP problem.
In Algorithm 7.1 we set  = 0.1, cmax = cmin = 10−5 , and V = 3.3971 × 104 cm3 so
that the volume constraint becomes active at F a =Fa0 . The optimal design F aopt found by
Algorithm 7.1 after 53 iterations is shown in Figure 17.14, where the width of each
member is proportional to its cross-sectional area. The corresponding robustness func-
tion is G
α(F
aopt ) = 11.0710. We compute the optimal designs for various V. Figure 17.15
depicts the relation between V and the robustness function at the optimal design. For
comparison, we compute the robustness function for the cross-sectional areas that

[Figure 17.14 Optimal design of the 29-bar truss.]

[Figure 17.15 Relation between V̄ and α̂ of the optimal trusses (×: initial solution; •: optimal solution; ∗: solution obtained by scaling ã^opt).]
are obtained by scaling ã^opt. It is observed from Figure 17.15 that the optimal design cannot be obtained merely by scaling ã^opt. It is of interest to note that, from the definition of the robustness function, all truss designs are plotted in (or on the boundary of) the domain D in Figure 17.15. Thus, engineers may be able to make decisions incorporating the trade-off between robustness and structural volume.

9 Conclusions

Based on the info-gap theory (Ben-Haim 2006), the robustness function of trusses has been investigated as a measure of the robustness of a truss under load and structural uncertainties. We have proposed an approximation algorithm for computing the robustness function of trusses under load and structural uncertainties, and a globally convergent algorithm for the maximization problem of the robustness function.
We have introduced an uncertainty model of trusses, where the external forces as well as the member stiffnesses include uncertainties. We assume that the constraints on mechanical performance can be expressed by some quadratic inequalities in terms of displacements; in fact, polynomial inequality constraints in terms of displacements can also be dealt with by converting them into a finite number of quadratic inequalities. We have then formulated a quasiconvex optimization problem, which provides a lower bound, i.e. a conservative estimate, of the robustness function. In order to obtain a global optimal solution of this quasiconvex optimization problem, a bisection method has been proposed, in which a finite number of SDP problems are successively solved by the primal-dual interior-point method.
In order to solve the maximization problem of the robustness function for variable member cross-sectional areas, a sequential SDP approach has been presented, where SDP problems are successively solved by the primal-dual interior-point method to obtain the optimal truss designs. The method has been shown to be globally convergent under certain assumptions.

Appendix A: Technical lemmas

Lemma A.1 (homogenization). Let Q ∈ S^n, p ∈ R^n, and r ∈ R. Then the following two conditions are equivalent:

(a)  (xᵀ, 1) [ Q  p ; pᵀ  r ] (xᵀ, 1)ᵀ ≥ 0,  ∀x ∈ R^n
(b)  [ Q  p ; pᵀ  r ] ⪰ O

Proof. The implication from (b) to (a) is trivial. We show that (a) implies (b) by contradiction. Suppose, contrary to assertion (b), that there exist x̄ ∈ R^n and η ∈ R satisfying

    (x̄ᵀ, η) [ Q  p ; pᵀ  r ] (x̄ᵀ, η)ᵀ < 0    (64)

If η ≠ 0, then (64) is reduced to

    (x̄ᵀ/η, 1) [ Q  p ; pᵀ  r ] (x̄ᵀ/η, 1)ᵀ < 0

which contradicts assertion (a). Alternatively, if η = 0, then (64) is reduced to

    x̄ᵀ Q x̄ < 0    (65)

Letting x = ζx̄, the left-hand side of (a) is reduced to

    (x̄ᵀQx̄)ζ² + 2(pᵀx̄)ζ + r    (66)

which is regarded as a function of ζ. Now (65) implies that (66) is not bounded below, from which it follows that there exists a ζ such that (66) becomes negative. Thus we obtain a contradiction to (a). ∎

Lemma A.2 (S-lemma). Let f_0(x), f_1(x), . . . , f_m(x) be quadratic functions of x ∈ R^n. Then:

(i) the implication

    f_1(x) ≥ 0  ⟹  f_0(x) ≥ 0

holds if and only if there exists τ_1 ≥ 0 such that

    f_0(x) ≥ τ_1 f_1(x),  ∀x ∈ R^n

(ii) the implication

    f_1(x) ≥ 0, . . . , f_m(x) ≥ 0  ⟹  f_0(x) ≥ 0

holds if there exist τ_1, . . . , τ_m ≥ 0 such that

    f_0(x) ≥ ∑_{i=1}^{m} τ_i f_i(x),  ∀x ∈ R^n

Proof. See Ben-Tal & Nemirovski (2001, Theorem 4.3.3). ∎

References

Ben-Haim, Y. 2004. Uncertainty, probability and information-gaps. Reliability Engineering and System Safety 85:249–266.
Ben-Haim, Y. 2005. Value at risk with Info-gap uncertainty. Journal of Risk Finance 6:388–403.
Ben-Haim, Y. 2006. Information-gap Decision Theory: Decisions Under Severe Uncertainty.
2nd edition. London: Academic Press.
Ben-Haim, Y. & Elishakoff, I. 1990. Convex Models of Uncertainty in Applied Mechanics. New
York: Elsevier.
Ben-Tal, A. & Nemirovski, A. 2001. Lectures on Modern Convex Optimization: Analysis,
Algorithms, and Engineering Applications. Philadelphia: SIAM.
Ben-Tal, A. & Nemirovski, A. 2002. Robust optimization – methodology and applications. Mathematical Programming B92:453–480.
Boyd, S. & Vandenberghe, L. 2004. Convex Optimization. Cambridge: Cambridge University
Press.
Calafiore, G. & El Ghaoui, L. 2004. Ellipsoidal bounds for uncertain linear equations and
dynamical systems. Automatica 40:773–787.
Chen, S., Lian, H. & Yang, X. 2002. Interval static displacement analysis for structures
with interval parameters. International Journal for Numerical Methods in Engineering 53:
393–407.
Helmberg, C. 2002. Semidefinite programming. European Journal of Operational Research
137:461–482.
Kanno, Y. & Takewaki, I. 2006a. Robustness analysis of trusses with separable load and
structural uncertainties. International Journal of Solids and Structures 43:2646–2669.
Kanno, Y. & Takewaki, I. 2006b. Sequential semidefinite program for maximum robust-
ness design of structures under load uncertainties. Journal of Optimization Theory and
Applications 130:265–287.
Kanzow, C., Nagel, C., Kato, H. & Fukushima, M. 2005. Successive linearization methods
for nonlinear semidefinite programs. Computational Optimization and Applications 31:
251–273.
Kojima, M. & Tunçel, L. 2000. Cones of matrices and successive convex relaxations of
nonconvex sets. SIAM Journal on Optimization 10:750–778.
Moilanen, A. & Wintle, B.A. 2006. Uncertainty analysis favors selection of spatially aggregated
reserve structures. Biological Conservation 129:427–434.
Ohsaki, M., Fujisawa, K., Katoh, N. & Kanno, Y. 1999. Semi-definite programming for
topology optimization of truss under multiple eigenvalue constraints. Computer Methods
in Applied Mechanics and Engineering 180:203–217.
Pantelides, C.P. & Ganzerli, S. 1998. Design of trusses under uncertain loads using convex models. Journal of Structural Engineering (ASCE) 124:318–329.
Pierce, S.G., Worden, K. & Manson, G. 2006. A novel information-gap technique to assess
reliability of neural network-based damage detection. Journal of Sound and Vibration 293:
96–111.
Sturm, J.F. 1999. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric
cones. Optimization Methods and Software 11/12:625–653.
Takewaki, I. & Ben-Haim, Y. 2005. Info-gap robust design with load and model uncertainties.
Journal of Sound and Vibration 288:551–570.
Vandenberghe, L. & Boyd, S. 1996. Semidefinite programming. SIAM Review 38:49–95.
Wolkowicz, H., Saigal, R. & Vandenberghe, L. (eds) 2000. Handbook of Semidefinite
Programming – Theory, Algorithms, and Applications. Dordrecht: Kluwer.
The MathWorks, Inc. 2002. Using MATLAB. Natick, MA: The MathWorks, Inc.
Chapter 18

Design optimization and robustness of structures against uncertainties based on Taylor series expansion

Ioannis Doltsinis
University of Stuttgart, Stuttgart, Germany

ABSTRACT: Synthetic Monte Carlo sampling and analytic Taylor series expansion offer two
different techniques for the treatment of random input scatter. The present chapter expounds
on the Taylor series approximation as applied to the stochastic analysis and design optimization
of structures including robustness against uncertainties. A unified approach is presented that
encompasses linear and nonlinear elastic structures as well as path dependent elastic–plastic
response. The methodology refers to finite element systems, and assumes that the response
as a function of the input varies continuously; representation of the probability distribution is
restricted to mean and variance. The approach is applicable to input scatter of practical relevance
and is computationally efficient; its analytic nature allows utilization of optimization algorithms.

1 Introduction

Consideration of random scatter in the analysis and design of structures is important with regard to reliability, and for securing standards of operating performance, which implies robustness against uncertainties. Synthetic Monte Carlo sampling and analytic Taylor series expansion offer alternative routes to stochastic analysis and design improvement (Doltsinis 2003).
This chapter expounds on the Taylor series approximation as applied to the stochas-
tic analysis and design optimization of structures including nonlinear elastic and path
dependent elastic–plastic response. The Taylor series expansion supplies a formalism
suitable for developing the robust design of structures using optimization algorithms.
The robustness problem is stated as a two-criteria task involving minimization of the mean value and the standard deviation of the objective function, with the randomness of the constraints included. The quantities requested from the stochastic analysis are the mean vector and covari-
ance matrix of the response variables; gradient based optimization demands design
sensitivity expressions as well. The approximation is by a second-order Taylor series
expansion of the finite element equations with respect to the random input. A uniform
methodology is presented starting at elastic structural systems with randomness and
extending to nonlinear elastic as well as path dependent elastic–plastic response. In
elastoplasticity the procedure is adapted to the incrementation of the associated deter-
ministic analysis. The proposed stochastic analysis is applied in conjunction with
standard optimization to the robust design of linear and nonlinear structures. Monte
Carlo sampling verifies the results. Merits and deficiencies of the Taylor expansion
approach are discussed.
The text is organized as follows. After Section 1, which introduces the theme and summarizes the contents, Section 2 deals with design optimization and robustness. The task of structural optimization is briefly posed, the implications of randomness are discussed, and the issue of design robustness is addressed. The formal statement of stochastic optimization is extended to robust design, which raises the problem of the concurrent observance of two criteria: the mean and the standard deviation of the objective function, characterizing level and variability, respectively. The compound desirability function is introduced as the single requirement compromising between the two criteria. The subject of robustness is illustrated by an example, and the difference between performance robustness and structural reliability is pointed out.
Section 3 is concerned with the random response of deformable systems as
represented by finite elements. A Taylor series expansion to the second order
about the mean values of the random input is employed to approximate the fluc-
tuating response displacements of the system. Therefrom the expressions for the mean
vector and the covariance matrix of the structural response are deduced in general
terms; the interaction with the finite element equations is indicated. Section 4 specifies
the formalism for the stochastic analysis of elastic structures. First, the linear finite
element equations are developed in order to deduce the second-order approximate of
the mean response and the first-order transform of the response covariance matrix. In
view of an application of gradient based optimization algorithms, the response sensi-
tivity analysis is carried out and the design sensitivity expressions are presented. The
stochastic analysis is then extended to large displacement problems of elastic struc-
tures taking full account of the kinematic nonlinearity. The methodology is exposed
for the quasistatic conditions underlying the equilibrium equations; application to
transient response problems governed by the dynamic equations of motion is seen to
be straightforward.
Section 5 addresses the stochastic analysis of path dependent elastic–plastic response.
In difference to the description of linear or nonlinear elastic structures, elastic–plastic
systems imply incremental linearization of the process as a consequence of the path
dependent constitutive nature of the problem. Following the incrementation of the
primary (deterministic) algorithm of the elastic–plastic structural analysis, the stochas-
tic formalism for means and covariances is developed essentially along the lines of the
previous issues. The sensitivity expressions for the response displacements also have
to be advanced incrementally during the course of the loading programme. A simpli-
fication introduced by discarding in the incremental step past scatter in geometry and
stress is contrasted to the rigorous formalism, and the possible impact on the results
is commented by a subsequent numerical demonstration. Of course, the incremental
technique is equally applicable to the nonlinear elastic problem, and although not
necessary from the theoretical point of view it is employed for structural analysis in
practice.
Section 6 illustrates the theory by numerical applications to nonlinear and to path
dependent problems. For this purpose, the proposed methodology of stochastic anal-
ysis is combined with standard optimization to perform tasks of structural design
optimization and robustness. To be specific, the stochastic analysis supplies means
and variances of the response displacements that enter the evaluation of the objective,
resp. of the desirability function in the context of robustness; design sensitivities support
the utilization of gradient-based optimization algorithms. Apart from the instructive
case of a planar cantilever truss that verifies the algorithm, selected problems include
the structural compliance optimization of a space truss structure under material and
geometrical nonlinearity, and the robust design of an antenna structure undergoing
large displacements. Results are compared to synthetic Monte Carlo sampling. Section
7 concludes the chapter with a summary of the presented methodology and with a criti-
cal appraisal of the Taylor expansion approach as regarding the inherent assumptions,
the quality of the approximation, the range of applicability and the computational
efficiency.
The present account refers to the work reported in (Doltsinis and Kang 2004) and
(Doltsinis et al. 2005) as inspired by (Kleiber and Hien 1992); Zhan Kang contributed
to the advancement of the subject in the framework of his doctoral dissertation (Kang
2005). The theoretical development here uses the explicit formalism of (Doltsinis
2003). The proposed methodology of stochastic structural design allows solution of
the associated optimization problem by a software package available on the Internet
(Lawrence et al. 2007). To this end, the optimization algorithm based on sequen-
tial quadratic programming is operated in conjunction with the input supplied by the
developed stochastic finite element analysis.

2 Design optimization, robustness

2.1 The optimization task


The purpose of structural design is the achievement of a certain performance of the
system. If design variations are possible, optimization can be envisaged with respect
to a specified objective. The following deals with structures as deformable systems
represented by finite elements; the terminology is as in (Doltsinis and Rodič 1999).
The response to applied actions is defined by the N displacements of the mesh nodal
points of the discretized object, collected in the N × 1 vector u(z). It depends on the
p × 1 vector z = {z1 . . . zp } which comprises the set of p design parameters that are not
fixed and are therefore disposable in optimization. Optimum design is attempted by
minimizing a scalar objective function

fo (z) = f [u(z), z] (1)

which defines the performance. The minimum of fo (z) specifies the values of the design
parameters z. The mathematical statement of the optimization problem is:

find z
minimizing fo (z)
subject to gci (z) ≤ 0, i = 1, . . . , k          (2)
and zL ≤ z ≤ zU

In the above, gci (z) = gi [u(z), z] are constraint functions; zL and zU are lower and upper
bounds, respectively, that restrict the values of the design variables.
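For orientation, the deterministic statement of Eq. (2) maps directly onto standard nonlinear programming software. The following minimal Python/SciPy sketch illustrates the correspondence; the two-variable objective and constraint are invented stand-ins, not taken from this chapter, and SLSQP is a sequential quadratic programming routine of the kind employed later in the chapter.

import numpy as np
from scipy.optimize import minimize

def f_o(z):                        # objective f_o(z), e.g. structural weight
    return z[0] + 2.0 * z[1]

def g_c(z):                        # constraint function, g_c(z) <= 0
    return 1.0 / z[0] + 1.0 / z[1] - 1.5

res = minimize(f_o, x0=[1.0, 1.0], method="SLSQP",
               bounds=[(0.1, 10.0)] * 2,                        # z_L <= z <= z_U
               constraints=[{"type": "ineq", "fun": lambda z: -g_c(z)}])
print(res.x, res.fun)              # optimal design and objective value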

2.2 Implication of randomness


When randomness is present in the system the problem is no longer deterministic but
stochastic in nature. Scatter may be due to the external actions, such as fluctuating
loads. Besides, shape geometry and part dimensions have to tolerate imperfections to a
certain extent, and material properties exhibit random deviations from nominal behaviour.
Both fluctuating external actions and randomness of inherent parameters contribute
to the scatter of the response of a structure (Figure 18.1).

Figure 18.1 Response to fluctuating action and random deviation from nominal behaviour due to parameter scatter (load–displacement diagram).

Figure 18.2 Bar subjected to tension (length l, cross-section A, modulus E; force P and displacement u at the upper end).
For the purpose of illustration, let us consider the bar element (length l, cross-section A) depicted in Figure 18.2. Tension of the bar by the force P applied at the upper end induces there a displacement u while the other end is pin-jointed. Assuming linear elastic behaviour with modulus E:

$$u = \frac{l}{EA}\,P = \frac{1}{k}\,P \qquad (3)$$
where k = EA/l denotes the stiffness of the bar element, the quantity 1/k is known as
the flexibility. Random fluctuations of the response displacement u can be due to the
flexibility 1/k, and the force P. Mean value µu and variance σu2 are obtained by the
expectation operations (for probabilistic terminology see (Breipohl 1970))

µu = E(u), σu2 = Var(u) = E[(u − µu )2 ] (4)

with u from Eq. (3).


For instance, if P is random with mean µP and variance σP² while 1/k is fixed, the mean value and the variance of the ensuing displacement u are given by

$$\mu_u = \frac{1}{k}\,\mu_P, \qquad \sigma_u^2 = \frac{1}{k^2}\,\sigma_P^2 \qquad (5)$$

Analogously, if the flexibility 1/k varies among bar elements in a series while P is fixed, one has

$$\mu_u = P\,\mu_{1/k}, \qquad \sigma_u^2 = P^2\,\sigma_{1/k}^2 \qquad (6)$$

In either case the coefficient of variation (COV = σ/µ) of the random input is
transferred to the output without alteration by the system.
If both 1/k and P exhibit randomness but vary independently, mean value and
variance of the displacement are obtained after substitution of Eq. (3) in the expectation
operations of Eq. (4) as

µu = µ1/k µP (7)

and

$$\sigma_u^2 = \sigma_{1/k}^2\,\sigma_P^2 + \mu_{1/k}^2\,\sigma_P^2 + \mu_P^2\,\sigma_{1/k}^2 \;\cong\; \mu_{1/k}^2\,\sigma_P^2 + \mu_P^2\,\sigma_{1/k}^2 \qquad (8)$$

The last, linearized expression in Eq. (8) presumes that the variances σ²₁/ₖ, σ²ₚ are small.
Alternatively, a Taylor series expansion of Eq. (3) to the first order about the mean values of the input quantities 1/k, P gives

$$u\!\left(\frac{1}{k},P\right) \cong u(\mu_{1/k},\mu_P) + \left.\frac{\partial u}{\partial P}\right|_\mu (P-\mu_P) + \left.\frac{\partial u}{\partial(1/k)}\right|_\mu\!\left(\frac{1}{k}-\mu_{1/k}\right) = \mu_{1/k}\mu_P + \mu_{1/k}(P-\mu_P) + \mu_P\!\left(\frac{1}{k}-\mu_{1/k}\right) \qquad (9)$$

Application of the variance operator of Eq. (4) to the above, while bearing in mind the independence of 1/k and P, confirms the linearized expression for the variance σu² in Eq. (8). The approximation does not affect Eq. (7), the mean value of u.
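The propagation rules of Eqs. (5)–(8) are readily verified numerically. The short Python sketch below (all numerical values assumed for illustration only) compares the linearized variance of Eq. (8) with brute-force sampling for independent 1/k and P:

import numpy as np

rng = np.random.default_rng(0)
mu_flex, sd_flex = 1.0e-3, 5.0e-5      # mean and std of the flexibility 1/k (assumed)
mu_P, sd_P = 10.0, 0.5                 # mean and std of the force P (assumed)

flex = rng.normal(mu_flex, sd_flex, 10**6)
P = rng.normal(mu_P, sd_P, 10**6)
u = flex * P                           # Eq. (3): u = (1/k) P

var_lin = mu_flex**2 * sd_P**2 + mu_P**2 * sd_flex**2   # linearized Eq. (8)
print(u.var(), var_lin)                # the two values agree closely for small scatter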
Assuming deterministic input except for the applied force P, and taking the weight W of the bar with mass density ϱ as the design objective, randomness is introduced in the problem by constraining the response displacement: u ≤ u0. With the bar cross-section A as the design variable, the simple task of dimensioning is formally stated as

find A
minimizing W = ϱlA          (10)
subject to µu + βσu ≤ u0

On account of Eq. (5) the displacement constraint determines the cross-section A by

$$\mu_u + \beta\sigma_u = \frac{l}{EA}\,(\mu_P + \beta\sigma_P) \le u_0 \qquad (11)$$

The value of the parameter β controls the impact of randomness. For the choice β = 0
only the mean displacement µu complies with the constraint; for β = 1 scatter is covered
up to a displacement exceeding the mean by the standard deviation σu .
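For orientation only (all numbers assumed, not from the chapter): with l = 1, E = 10⁴, µP = 10, σP = 1 and u0 = 10⁻³, Eq. (11) gives A ≥ l(µP + βσP)/(Eu0), hence A ≥ 1.0 for the deterministic choice β = 0 but A ≥ 1.3 for β = 3; covering scatter thus requires a 30% heavier bar in this instance.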
The random variability of the system in general reflects on the objective function,
Eq. (1). In the following a direct dependence on the design parameters and the implicit
dependence via the response variables will be discussed separately. The first case refers
to an objective function

fo = f (z),

and demonstrates dealing with design parameters z that are random variables. The
Taylor series expansion about the mean values µz = {µz1 . . . µzp } gives to the second
order

$$f(z) \cong f(\mu_z) + \left.\frac{df}{dz}\right|_\mu (z-\mu_z) + \frac{1}{2}\,(z-\mu_z)^t \left.\frac{d^2 f}{dz\,dz^t}\right|_\mu (z-\mu_z) = f(\mu_z) + \sum_{k=1}^{p} \left.\frac{\partial f}{\partial z_k}\right|_\mu (z_k-\mu_{z_k}) + \frac{1}{2}\sum_{k,l=1}^{p} \left.\frac{\partial^2 f}{\partial z_k\,\partial z_l}\right|_\mu (z_k-\mu_{z_k})(z_l-\mu_{z_l}) \qquad (12)$$

Note that df/dz is a row matrix; df/dzᵗ = (df/dz)ᵗ represents a vector (column matrix).
The last expression is helpful in obtaining a second-order approximation to the mean value of the objective function:

$$\mu_f \cong f(\mu_z) + \frac{1}{2}\sum_{k,l=1}^{p} \left.\frac{\partial^2 f}{\partial z_k\,\partial z_l}\right|_\mu \sigma_{z_k z_l} = f(\mu_z) + \frac{1}{2}\sum_{k=1}^{p} \left[\frac{\partial^2 f}{\partial z_k^2}\right]_\mu \sigma_{z_k}^2 \qquad (13)$$
The first-order approximate of the variance of the objective function reads

$$\sigma_f^2 \cong \left.\frac{df}{dz}\right|_\mu \Sigma_z \left.\frac{df}{dz}\right|_\mu^t = \sum_{k,l=1}^{p} \left.\frac{\partial f}{\partial z_k}\right|_\mu \left.\frac{\partial f}{\partial z_l}\right|_\mu \sigma_{z_k z_l} = \sum_{k=1}^{p} \left(\left.\frac{\partial f}{\partial z_k}\right|_\mu\right)^{\!2} \sigma_{z_k}^2 \qquad (14)$$

In the above, Σz = [σzk zl ], k, l = 1, . . . , p, denotes the symmetric p × p covariance matrix of the design variables. It comprises the mutual covariances σzk zl of the parameters zk, zl. The entries on the diagonal are the individual variances of the p parameters: σ²zk = σzk zk with σzk being the standard deviation. The last expressions in Eqs. (13) and (14) are applicable only if the random variables are independent such that the population covariances σzk zl (k ≠ l) vanish. The terminology referring to several random variables follows (Rencher 1995; Doltsinis 1999).
In the approximation of µf by Eq. (13) independent random design variables are represented by the mean values µzk and the variances resp. the standard deviations σzk. If the second-order terms are discarded such that µf ≅ f(µz), only the mean values are design parameters: z ⇐ {µzk }, k = 1, . . . , p. The standard deviations essentially determine the variance of the objective function, Eq. (14), but they may constitute additional design variables in higher-order mean value approximation as well: z ⇐ {µzk ; σzk }, k = 1, . . . , p. The general case of statistically dependent variables in z involves all the relevant entries of the covariance matrix.
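As an illustration of Eqs. (13) and (14), the following Python sketch computes the second-order mean and first-order variance of an objective with independent random design variables; the test function is arbitrary and the derivatives are taken by finite differences rather than the analytic expressions of the chapter:

import numpy as np

def taylor_moments(f, mu_z, sigma_z, h=1e-5):
    mu_z = np.asarray(mu_z, float); sigma_z = np.asarray(sigma_z, float)
    p, f0 = mu_z.size, f(mu_z)
    grad, diag2 = np.empty(p), np.empty(p)
    for k in range(p):
        e = np.zeros(p); e[k] = h
        fp, fm = f(mu_z + e), f(mu_z - e)
        grad[k] = (fp - fm) / (2 * h)               # df/dz_k at the mean
        diag2[k] = (fp - 2 * f0 + fm) / h**2        # d2f/dz_k2 at the mean
    mean_f = f0 + 0.5 * np.sum(diag2 * sigma_z**2)  # Eq. (13), independent variables
    var_f = np.sum(grad**2 * sigma_z**2)            # Eq. (14), independent variables
    return mean_f, var_f

print(taylor_moments(lambda z: z[0] * z[1], [1.0, 10.0], [0.1, 0.5]))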
With an objective function fo = f (z) the random response affects merely the con-
straints of the optimization problem. This becomes different if the objective function
is defined in terms of response variables determined by the displacement u(z):

fo = f [u(z)]

Keeping the design parameters hidden, the expansion to the second order about the mean displacement µu = {µu1 . . . µuN } gives here,

$$f(u) \cong f(\mu_u) + \left.\frac{df}{du}\right|_\mu (u-\mu_u) + \frac{1}{2}\,(u-\mu_u)^t \left.\frac{d^2 f}{du\,du^t}\right|_\mu (u-\mu_u) \qquad (15)$$

Therefrom, the second-order approximate of the mean value of the objective function is deduced as

$$\mu_f \cong f(\mu_u) + \frac{1}{2}\sum_{i,j=1}^{N} \left.\frac{\partial^2 f}{\partial u_i\,\partial u_j}\right|_\mu \sigma_{u_i u_j} \qquad (16)$$
Figure 18.3 Four-bar truss (bars (1)–(4); load P and displacement u at node 4).

and the first-order approximation to the variance is

$$\sigma_f^2 \cong \left.\frac{df}{du}\right|_\mu \Sigma_u \left.\frac{df}{du}\right|_\mu^t = \sum_{i,j=1}^{N} \left.\frac{\partial f}{\partial u_i}\right|_\mu \left.\frac{\partial f}{\partial u_j}\right|_\mu \sigma_{u_i u_j} \qquad (17)$$

Unlike the design parameters in Eqs. (13), (14), independence can hardly be assumed between the response variables ui, uj. The mean and variance of the objective function as given by Eqs. (16) and (17) require knowledge of the mean vector µu and the covariance matrix Σu = [σui uj ], i, j = 1, . . . , N, of the response displacements, which still are to be obtained.
The stochastic counterpart of the deterministic optimization problem in Eq. (2) might be stated as to obtain best mean performance. There remains the issue of the variability caused by the fluctuating factors, however, and its diminution by specifying appropriate values for the design variables can be equally important. To illustrate the argument of random fluctuations, consider the displacement minimization problem for the four-bar truss shown in Figure 18.3. The plane structure is loaded by a static horizontal force P = 1 acting on node no. 4. The elastic modulus of bars 1, 3 is E1, that of bars 2, 4 is E2. The elastic moduli are considered independent random variables with mean and standard deviation µE1 = 210.0, σE1 = 21.0, and µE2 = 100.0, σE2 = 5.0. The cross-section areas A1 (bars 1, 3) and A2 (bars 2, 4) are disposable as the design variables. The design objective is to minimize the horizontal displacement u at node no. 4 under the structural weight constraint µW ≤ 5.0. The mass density of the material is uniformly taken as ϱ = 1.0.
In this design problem the structural weight is an active constraint such that only
one of the design parameters is independent and needs to be determined. Figure 18.4
shows the variation of mean and standard deviation of the observed displacement u
as a function of the parameter A1 , the selected independent variable. From the left
frame, the design that minimizes the mean value µu = E(u) is for A1 = 1.7678, the
right limit of the diagram. The appertaining value of the other parameter is A2 = 0,
which eliminates bars 2, 4. From the right frame of Figure 18.4, the above solution exhibits the largest standard deviation.

Figure 18.4 Mean value µ (left) and standard deviation σ (right) of displacement u versus cross-section area A1.

Figure 18.5 Probability (relative frequency) density distribution for two alternatively optimized
designs: minimum mean displacement u (left), least variance (right).

The solution appertaining to minimum standard deviation is A*1 = 0.3531, A*2 = 2.0006. The associated mean displacement is higher than before, 3.969 × 10⁻³ as against 3.848 × 10⁻³, while the standard deviation reduces from 3.97 × 10⁻⁴ to 1.7 × 10⁻⁴ between the two alternative solutions. The effect on level and scatter of u is demonstrated in Figure 18.5, where the probability (relative frequency) density of u has been approached by synthetic sampling (Monte Carlo simulation) with 10⁶ realizations from normal distributions of the random input variables.
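The sampling procedure behind Figure 18.5 can be sketched generically as follows (Python; the closed-form displacement function is an assumed placeholder standing in for the truss finite element analysis, and serves only to show the mechanics of synthetic sampling):

import numpy as np

def displacement(E1, E2, A1, A2):      # placeholder for the truss FE analysis
    return 1.0 / (E1 * A1) + 1.0 / (E2 * A2)

rng = np.random.default_rng(1)
E1 = rng.normal(210.0, 21.0, 10**6)    # moduli of bars 1, 3
E2 = rng.normal(100.0, 5.0, 10**6)     # moduli of bars 2, 4
u = displacement(E1, E2, A1=0.3531, A2=2.0006)
print(u.mean(), u.std())               # level and scatter of the observed design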

2.3 Robust design


Apart from best performance in the mean, robustness of the design requires the variability of the performance to be as low as possible. Usually the option of reducing input
Figure 18.6 Concept of robust design (frequency of occurrence of the performance f for two designs with means µ1, µ2).

scatter is excluded and robustness against fluctuating factors is attempted by adjusting the values of the design parameters. The concept of robust design is illustrated
in Figure 18.6, which refers to the probability distribution of a fluctuating objec-
tive function f , the structural performance measure of interest to be minimized.
The first design marked in the figure exhibits a smaller mean value of the perfor-
mance function, but a larger variability than the second design. The second design
is less sensitive to the scatter of input data, and is said to be more robust in this
respect.
At this point it is important to contrast robust design as pursued here and reli-
ability based optimization. Reliability refers to the probability of failure PF in
extreme situations where the defined failure criterion h exceeds a specified critical
value c:

$$P_F = P(h \ge c) = \int_c^{\infty} p_h\,dh \qquad (18)$$

with ph denoting the probability density function for h. Optimization minimizes


then the objective function under the constraint of a prescribed probability P0 of
survival (Luo and Grandhi 1997)

PS = 1 − PF ≥ P0 (19)

Methods to obtain the failure probability are described in (Hurtado 2004). There may
be several probabilistic safety constraints like the above in addition to other, ordinary
ones. The purpose here is to avoid, with a certain probability, system catastrophe in
the presence of random parameters. Robust design, on the other hand, addresses the
regular employment of the system. It aims at the reduction of performance variability
due to fluctuating input. Robustness is assessed by the scatter of the performance
function, most frequently measured by its standard deviation. Figure 18.7 distinguishes
between robustness and reliability. For the purpose of illustration the performance
function f in the robustness issue serves here also as the failure criterion for assessing
reliability (h = f ).
Figure 18.7 Distinguishing between robustness and reliability of design (probability density of the performance f; robustness concerns the scatter about the mean µ, reliability the limit state in the tail).

The task of robust optimum design involves both the mean value µfo of the objective function fo (z) and its variance σ²fo resp. the standard deviation σfo. In mathematical terms:

find z
minimizing fo (z) = {µfo (z)  σfo (z)}
subject to µgci (z) + βi σgci (z) ≤ 0, i = 1, . . . , k          (20)
and zL ≤ z ≤ zU

The design parameters in the vector z can be deterministic quantities or mean and
standard deviations of random variables. The constraint functions gci enter the opti-
mization problem by the respective mean value µgci and the standard deviation σgci .
The coefficient βi can be interpreted as the feasibility index for the individual con-
straint condition. Its value controls the impact of scatter on the constraint, and has
to be specified according to the requirements. As a rule, the larger the value of βi the
closer the constraint is met under fluctuating conditions.
Equation (20) states a problem of multicriteria optimization associated with the
simultaneous minimization of mean and standard deviation of the fluctuating objec-
tive function. The two criteria have been assembled to the vector valued objective
function fo (z); involvement of the standard deviation rather than the variance makes
the two quantities commensurable. A survey of vector or multicriteria optimization is
found in (Stadler 1984). The definition of a vector optimum is not unique; one option
is the Pareto optimum: it is achieved when further improvement in any one of the cri-
teria values implies worsening of at least one other criterion. The vector optimization
problem will be converted in the following to a scalar one by introducing the so-called
desirability function (Myers and Montgomery 1995). Thereby the single criteria, which
can be conflicting, are merged to a compound objective that compromises the require-
ments. Among various possibilities, a simple proposal defines the desirability function
as the weighted sum of the two objectives (Lee and Park 2001):

$$F_o(z,\xi) = (1-\xi)\,\frac{\mu_{f_o}(z)}{\mu_{opt}} + \xi\,\frac{\sigma_{f_o}(z)}{\sigma_{opt}}, \qquad 0 < \xi < 1 \qquad (21)$$
The denominators serve as standardization, here taken equal to the separate minima
of the mean value µopt and the standard deviation σopt of the objective function fo (z).
These quantities will be associated to the limits ξ = 1 and ξ = 0 of the weighting factor,
not included in Eq. (21).
In the multicriteria task, the value of 0 < ξ < 1 must be selected according to the
importance put on mean and variance of fo (z). With the desirability function Fo (z, ξ)
for ξ = const. instead of the vector fo (z) in Eq. (20), the minimization task can be carried
out using the standard optimization algorithms applicable to the deterministic problem
of Eq. (2) under observance of the due constraints and restrictions. The computed
optimum appertains to the desirability function Fo (z, ξ) defined by Eq. (21) and the
value selected for ξ; the associated µfo (z) and σfo (z) are inherent in the system. Effecting
the optimization with different values of the parameter ξ supplies a set of points in
the µfo , σfo –plane, which enables the analyst to decide on the selection of the design
variables.
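The sweep over ξ may be sketched as follows (Python; the two-moment function is an invented stand-in for the stochastic structural analysis, and the constraints of Eq. (20) are omitted for brevity):

import numpy as np
from scipy.optimize import minimize

def moments(z):                         # placeholder returning (mu_f, sigma_f)
    mu = 1.0 / z[0] + 1.0 / z[1]
    sigma = 0.1 / z[0]**2 + 0.02 / z[1]
    return mu, sigma

bounds = [(0.2, 5.0)] * 2
mu_opt = minimize(lambda z: moments(z)[0], [1, 1], bounds=bounds).fun
sg_opt = minimize(lambda z: moments(z)[1], [1, 1], bounds=bounds).fun

for xi in np.linspace(0.1, 0.9, 5):     # sweep of the weighting factor
    F = lambda z: (1 - xi) * moments(z)[0] / mu_opt + xi * moments(z)[1] / sg_opt
    res = minimize(F, [1, 1], bounds=bounds)   # Eq. (21) as scalar objective
    print(xi, *moments(res.x))          # one point in the (mu, sigma) plane per xi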
The stochastic optimization problem involves the calculation of the mean µ and the
standard deviation σ of the objective function fo (z) = f [u(z), z], and of the constraints
in terms of the design variables z. Gradient based optimization techniques require in
addition availability of sensitivity expressions for the above quantities with respect to
the design variables z. Focusing on the implicit dependence via the displacement u and
neglecting the second-order terms in Eq. (16), the design sensitivity of the approximated
mean objective follows to

$$\frac{\partial\mu_f}{\partial z_k} = \left.\frac{df}{du}\right|_\mu \frac{\partial\mu_u}{\partial z_k} \qquad (22)$$

and from Eq. (17) for the variance,

$$\frac{\partial\sigma_f^2}{\partial z_k} = \left.\frac{df}{du}\right|_\mu \frac{\partial\Sigma_u}{\partial z_k} \left.\frac{df}{du}\right|_\mu^t + 2\left.\frac{df}{du}\right|_\mu \Sigma_u \left.\frac{d^2 f}{du\,du^t}\right|_\mu \frac{\partial\mu_u}{\partial z_k} \qquad (23)$$

3 Random response
Execution of the optimization task eventually demands the mean and the covariances of the response quantities, and possibly also their design sensitivities, that is, the derivatives with respect to the design variables. To this end, let the displacement u of the
deformable system be stated as a function of deterministic parameters arranged in the
vector b and of random parameters constituting the vector α:

u = u(b, α)

The design variables may be part of b and/or α, the generalized input to the system.
Expanding u to the second order about the mean of α, such that α = µα + Δα while b is silent, gives

$$u_2 = u^0 + \Delta u + \frac{1}{2}\,\Delta^2 u \qquad (24)$$
The above distinguishes between zeroth-order, first-order and second-order terms:

$$u^0 = u(b,\mu_\alpha), \qquad \Delta u = \left.\frac{\partial u}{\partial\alpha}\right|_\mu (\alpha-\mu_\alpha) = \sum_{i=1}^{q} \left.\frac{\partial u}{\partial\alpha_i}\right|_\mu (\alpha_i-\mu_{\alpha_i}), \qquad \Delta^2 u = \sum_{i,j=1}^{q} \left.\frac{\partial^2 u}{\partial\alpha_i\,\partial\alpha_j}\right|_\mu (\alpha_i-\mu_{\alpha_i})(\alpha_j-\mu_{\alpha_j}) \qquad (25)$$

The second-order approximate of the mean displacement thus becomes

$$\mu_{u_2} = u(b,\mu_\alpha) + \frac{1}{2}\sum_{i,j=1}^{q} \left.\frac{\partial^2 u}{\partial\alpha_i\,\partial\alpha_j}\right|_\mu \sigma_{\alpha_i\alpha_j} = \mu_{u_0} + \frac{1}{2}\,\Delta^2\mu_u \qquad (26)$$
2

and the first-order approximation to the covariance matrix is

$$\Sigma_{u_1} = \left.\frac{\partial u}{\partial\alpha}\right|_\mu \Sigma_\alpha \left.\frac{\partial u}{\partial\alpha}\right|_\mu^t \qquad (27)$$

If α = {y z} is composed of two sets y and z, where y comprises random input parameters other than the design ones z, and the two sets are statistically independent, then the second-order term in Eq. (26) reads

$$\Delta^2\mu_u = \sum_{i,j=1}^{r} \left.\frac{\partial^2 u}{\partial y_i\,\partial y_j}\right|_\mu \sigma_{y_i y_j} + \sum_{k,l=1}^{p} \left.\frac{\partial^2 u}{\partial z_k\,\partial z_l}\right|_\mu \sigma_{z_k z_l} \qquad (28)$$

and Eq. (27),

$$\Sigma_{u_1} = \left.\frac{\partial u}{\partial y}\right|_\mu \Sigma_y \left.\frac{\partial u}{\partial y}\right|_\mu^t + \left.\frac{\partial u}{\partial z}\right|_\mu \Sigma_z \left.\frac{\partial u}{\partial z}\right|_\mu^t \qquad (29)$$

The design sensitivity of the mean response vector, Eq. (26), is obtained by differentiation with respect to the design variables zk. Approximately:

$$\frac{\partial\mu_u}{\partial z_k} \cong \frac{\partial\mu_{u_0}}{\partial z_k} = \frac{\partial u(b,\mu_\alpha)}{\partial z_k} \qquad (30)$$

For the covariance matrix of the response, Eq. (27):

$$\frac{\partial\Sigma_{u_1}}{\partial z_k} = \left.\frac{\partial u}{\partial\alpha}\right|_\mu \frac{\partial\Sigma_\alpha}{\partial z_k} \left.\frac{\partial u}{\partial\alpha}\right|_\mu^t + 2\left[\left.\frac{\partial u}{\partial\alpha}\right|_\mu \Sigma_\alpha\, \frac{\partial}{\partial z_k}\!\left(\left.\frac{\partial u}{\partial\alpha}\right|_\mu\right)^{\!t}\right]_S \qquad (31)$$
On the right, the subscript S refers to the symmetric part of the matrix expression in the parentheses. Recall that in the expressions for µu and Σu random design variables are represented by mean value and variance resp. covariance.
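A compact numerical rendering of the first-order transformation, Eq. (27), reads as follows (Python; the vector-valued response function and all input moments are assumed for illustration, and the derivative operator is formed by finite differences):

import numpy as np

def u(alpha):                           # placeholder response function u(alpha)
    return np.array([alpha[0] * alpha[1], alpha[0] + alpha[1]**2])

mu_a = np.array([2.0, 3.0])
Sigma_a = np.diag([0.04, 0.09])         # independent random input
h, q = 1e-6, mu_a.size

J = np.empty((2, q))                    # du/dalpha at the mean, column by column
for j in range(q):
    e = np.zeros(q); e[j] = h
    J[:, j] = (u(mu_a + e) - u(mu_a - e)) / (2 * h)

Sigma_u = J @ Sigma_a @ J.T             # Eq. (27)
print(Sigma_u)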
Usually, a displacement function is not available explicitly. In quasistatic deforma-
tion the response of the system is governed by the condition of equilibrium between
the applied forces and the induced stresses. In finite element terms

S(σ, X) = P(t, X) (32)

The vector X = o X + u comprises the coordinates of the mesh nodal points in the dis-
placed state, o X defines the original position. The stress in the interior is symbolized
by σ, the stress resultants at the nodal points are collected in the vector S(σ, X). They
are in equilibrium with the forces applied at the homologous positions, arranged in
the vector P(t, X), a function of time t and possibly depending on the actual geometry.
The stress σ as a consequence of the deformation depends on the material law appli-
cable under the circumstances. This completes the sequence of operations that defines
the response displacement u as an implicit function of the input parameters.

4 Elastic structures

4.1 Linear response


In addition to the nodal point coordinates, structural systems require the input of mem-
ber dimensions, which will be assumed collected in the array A in what follows. When
the displacement does not change appreciably the geometry, the static equilibrium of
the structure is established for the initial configuration o X. The stress resultants in the
elastic system are

S(σ, o A, o X) = K(κ, o A, o X)u (33)

with K the stiffness matrix of the structure and κ that of the elastic material. In elasticity
the stress σ is a function of the elastic constants in the matrix κ and of the strain that
derives from the displacements in u. The entities in o X and κ may be deterministic,
classified as elements of the vector b, or random, allocated to the vector α. The applied
forces in P(t, o X) are taken at fixed level, but they may be fluctuating depending on
parameters in α.
For the elastic structure the equilibrium condition, Eq. (32), becomes

K(b, α)u = P(b, α) (34)

and defines the displacement u as a function of the parameter sets b and α. Then, the
approximation to the mean vector and covariance matrix of the response displacement
are obtained as follows. Eq. (34) written in the form

Kµ µu0 = Pµ (35)
determines the zeroth-order approximate µu0 of the mean displacement. The quantities

Kµ = K(b, µα ), Pµ = P(b, µα ) (36)

refer to mean input.


Differentiation of Eq. (34) with respect to the random variables αj yields the first partial derivatives of the displacement

$$K_\mu \left.\frac{\partial u}{\partial\alpha_j}\right|_\mu = \left.\left(\frac{\partial P}{\partial\alpha_j} - \frac{\partial K}{\partial\alpha_j}\,u\right)\right|_\mu, \qquad j = 1,\ldots,q \qquad (37)$$

All terms are evaluated at mean input and u = µu0 on the right-hand side. The solution of Eq. (37) extends over q right-hand vectors, each associated with a single random variable, to furnish the transformation operator in Eq. (27) for the displacement covariance matrix

$$\left.\frac{\partial u}{\partial\alpha}\right|_\mu = \left.\left[\frac{\partial u}{\partial\alpha_1}\;\;\frac{\partial u}{\partial\alpha_2}\;\;\cdots\;\;\frac{\partial u}{\partial\alpha_q}\right]\right|_\mu$$

Differentiating once more Eq. (34) gives the second partial derivatives of u with respect to the variables in α. The second-order term to the mean displacement in Eq. (26) follows then from

$$K_\mu\,\Delta^2\mu_u = K_\mu \sum_{i,j=1}^{q} \left.\frac{\partial^2 u}{\partial\alpha_i\,\partial\alpha_j}\right|_\mu \sigma_{\alpha_i\alpha_j} = \sum_{i,j=1}^{q} \left.\left(\frac{\partial^2 P}{\partial\alpha_i\,\partial\alpha_j} - \frac{\partial^2 K}{\partial\alpha_i\,\partial\alpha_j}\,u - 2\,\frac{\partial K}{\partial\alpha_i}\frac{\partial u}{\partial\alpha_j}\right)\right|_\mu \sigma_{\alpha_i\alpha_j} \qquad (38)$$

The coefficient matrix in Eqs. (35) to (38) is in all cases the stiffness matrix Kµ of the elastic structure for mean (nominal) input. The analysis encompasses two matrix solutions with single vectors on the right-hand side, Eqs. (35) and (38), and one solution with q vectors on the right-hand side, equal in number to the random input variables, Eq. (37).
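The structure of the computation may be illustrated on a miniature example (Python; the two-spring stiffness function and the input statistics are assumed, and the stiffness derivatives are taken by finite differences rather than assembled analytically as in the chapter):

import numpy as np

def K_of(alpha):                        # stiffness matrix from two random moduli
    k1, k2 = alpha
    return np.array([[k1 + k2, -k2], [-k2, k2]])

mu_alpha = np.array([100.0, 80.0])
Sigma_alpha = np.diag([25.0, 16.0])
P = np.array([0.0, 1.0])                # deterministic load

K_mu = K_of(mu_alpha)
u0 = np.linalg.solve(K_mu, P)           # Eq. (35): zeroth-order mean response

h = 1e-6
dud = np.empty((2, 2))
for j in range(2):
    e = np.zeros(2); e[j] = h
    dK = (K_of(mu_alpha + e) - K_of(mu_alpha - e)) / (2 * h)
    dud[:, j] = np.linalg.solve(K_mu, -dK @ u0)   # Eq. (37) with dP/dalpha_j = 0

Sigma_u = dud @ Sigma_alpha @ dud.T     # Eq. (27)
print(u0, Sigma_u)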
The design sensitivities of the displacement mean vector µu and of the covariance matrix Σu, Eq. (30) and Eq. (31), require differentiation of Eqs. (35) and (37) with respect to the design parameters zk. This gives for the mean

$$K_\mu\,\frac{\partial\mu_{u_0}}{\partial z_k} = \frac{\partial P_\mu}{\partial z_k} - \frac{\partial K_\mu}{\partial z_k}\,\mu_{u_0} \qquad (39)$$

and for the covariance transformation

$$K_\mu\,\frac{\partial}{\partial z_k}\!\left(\left.\frac{\partial u}{\partial\alpha_j}\right|_\mu\right) = \frac{\partial}{\partial z_k}\!\left[\left.\left(\frac{\partial P}{\partial\alpha_j} - \frac{\partial K}{\partial\alpha_j}\,u\right)\right|_\mu\right] - \frac{\partial K_\mu}{\partial z_k}\left.\frac{\partial u}{\partial\alpha_j}\right|_\mu, \qquad j = 1,\ldots,q \qquad (40)$$

The latter equation is solved for q right-hand side vectors to furnish the entire matrix operator needed in Eq. (31):

$$\frac{\partial}{\partial z_k}\!\left(\left.\frac{\partial u}{\partial\alpha}\right|_\mu\right) = \frac{\partial}{\partial z_k}\!\left[\frac{\partial u}{\partial\alpha_1}\;\;\frac{\partial u}{\partial\alpha_2}\;\;\cdots\;\;\frac{\partial u}{\partial\alpha_q}\right]_\mu$$

4.2 Large displacements


If the structure undergoes large displacements the equilibrium condition must be stated
at the displaced geometry X = o X + u; changes in member dimensions are negligible
under the small strain assumption (Doltsinis 2003). So Eq. (32) assumes here the
nonlinear form

S(σ, o A, o X + u) = P(t, o X + u) (41)

Numerical solution of nonlinear systems relies on iterative techniques applied to the


residual vector R = P − S which is a function of the displacement u and depends on
the input parameters. When the displacement complies with equilibrium the residual
function vanishes. At fixed time, in terms of random parameters α and deterministic
parameters b for the input

R(b, α, u) = P − S = 0 (42)

This furnishes the zeroth-order equation for the mean displacement µu as

R(b, µα , µu0 ) = 0 (43)

The solution for the mean displacement µu0 is frequently obtained by the Newton-
Raphson iteration scheme

−(Gµ )n [(µu0 )n+1 − (µu0 )n ] = R[b, µα , (µu0 )n ] (44)

where the indices n, n + 1 refer to consecutive iterations and the coefficient matrix −G is the tangent operator

$$-G = -\frac{\partial R}{\partial u} = \frac{\partial S}{\partial u} - \frac{\partial P}{\partial u} = K_T - K_L \qquad (45)$$
The contribution KT, the tangential stiffness matrix, emanates from straining and kinematic effects on the stress resultants S(σ, X). Symbolically,

$$K_T = \frac{\partial S}{\partial u} = \frac{\partial S}{\partial\sigma}\frac{d\sigma}{du} + \frac{\partial S}{\partial X} = K_M + K_G \qquad (46)$$
The matrix KM projects the material constitutive properties onto the structural level; the matrix KG is known as the geometric stiffness matrix of the structure. The load correction matrix KL accounts for geometry-dependent applied forces:

$$K_L = \frac{\partial P}{\partial u} = \frac{\partial P}{\partial X} \qquad (47)$$
The tangent operator is taken here at the mean values of α and u. It replaces
the elastic stiffness matrix of the linear problem. When the convergent solution of the
zeroth-order equation is achieved, the operator is available for the first-order and the
second-order approximations to follow.
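For a scalar impression of the iteration scheme, Eq. (44), consider the following sketch (Python; the softening-spring internal force S(u) and the load level are assumed toy data, not from the chapter):

def S(u):                               # internal force of a softening spring (assumed)
    return 100.0 * u - 5.0 * u**3

P = 60.0                                # applied load
u = 0.0
for n in range(20):
    R = P - S(u)                        # residual at iterate n
    if abs(R) < 1e-10:
        break
    minus_G = 100.0 - 15.0 * u**2       # -G = K_T = dS/du, Eq. (45) with K_L = 0
    u += R / minus_G                    # Eq. (44): -G (u_{n+1} - u_n) = R
print(u, S(u))                          # converged displacement, S(u) = P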
The first-order equations are obtained from Eq. (42) by differentiation:

$$-G_\mu \left.\frac{\partial u}{\partial\alpha_j}\right|_\mu = \left.\frac{\partial R}{\partial\alpha_j}\right|_\mu, \qquad j = 1,\ldots,q \qquad (48)$$

The transformation operator for the displacement covariance matrix in Eq. (27)
requires solution of Eq. (48) for the q right-hand vectors associated with the single ran-
dom variables. The equation for the second-order term to the mean displacement relies
on the second derivatives of u with respect to the random variables in α. With reference
to Eq. (26)

$$-G_\mu\,\Delta^2\mu_u = -G_\mu \sum_{i,j=1}^{q} \left.\frac{\partial^2 u}{\partial\alpha_i\,\partial\alpha_j}\right|_\mu \sigma_{\alpha_i\alpha_j} = \sum_{i,j=1}^{q} \left.\left(\frac{\partial^2 R}{\partial\alpha_i\,\partial\alpha_j} + \frac{dG}{d\alpha_i}\frac{\partial u}{\partial\alpha_j}\right)\right|_\mu \sigma_{\alpha_i\alpha_j} \qquad (49)$$

The total derivative on the right-hand side of the equation includes the implicit depen-
dence of G(b, α, u) on the random parameters α via the displacements u; it is computed
by finite differences.
The equations for the nonlinear case are structured as Eqs. (35), (37) and (38)
appertaining to linearity, the tangent operator G replacing here the stiffness matrix K
of the linear elastic system. The derivatives on the structural level are actually assembled from individual finite element contributions.
Regarding the sensitivity with respect to the design variables z, Eq. (43) gives for the
mean displacement

$$-G_\mu\,\frac{\partial\mu_{u_0}}{\partial z_k} = \left.\frac{\partial R}{\partial z_k}\right|_u \qquad (50)$$

the differentiation of the residual vector R = P − S on the right-hand side to be performed at u = const. The design sensitivity of the covariance matrix of the response displacement, Eq. (31), needs the derivative of the operator ∂u/∂α with respect to z. From Eq. (48),

$$-G_\mu\,\frac{\partial}{\partial z_k}\!\left(\left.\frac{\partial u}{\partial\alpha_j}\right|_\mu\right) = \frac{\partial}{\partial z_k}\!\left(\left.\frac{\partial R}{\partial\alpha_j}\right|_\mu\right) + \frac{\partial G_\mu}{\partial z_k}\left.\frac{\partial u}{\partial\alpha_j}\right|_\mu, \qquad j = 1,\ldots,q \qquad (51)$$

Obviously, Eqs. (50) and (51) represent generalized forms of Eqs. (39) and (40).
4.3 Nonlinear dynamics


Incorporation of inertia effects complements Eq. (32) for the quasistatic deformation
of the finite element system to the equation of motion

Mü + S(u) = P(t, u) (52)

The vector ü = ∂2 u/∂t 2 comprises the acceleration at the mesh nodal points,
M denotes the mass matrix. Damping forces induced by the velocity u̇ = ∂u/∂t are
not included in Eq. (52).
Approximate integration (Doltsinis 1999, 2003) links the kinematic quantities, acceleration ü, velocity u̇ and displacement u, such that at the end of an increment in time τ = ᵇt − ᵃt:

$${}^b\dot u = {}^a\dot u + \tau\,(c_1\,{}^a\ddot u + c_2\,{}^b\ddot u), \qquad {}^b u = {}^a u + \tau\,{}^a\dot u + \tau^2\,(c_3\,{}^a\ddot u + c_4\,{}^b\ddot u) \qquad (53)$$

The parameters c1 , c2 ; c3 , c4 control the performance of the numerical integration. They


can be specified as to reproduce time stepping schemes known in the literature (Tamma
et al. 2000). Utilization of Eq. (53) in Eq. (52) stated at current t = b t leaves an equation
for the acceleration ü = b ü, all kinematic quantities at past stage t = a t known. Explicit
schemes (c2 = c4 = 0) ensure linearity of the incremental problem.
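One explicit step of Eq. (53) may be sketched as follows (Python; the single-degree-of-freedom data and the parameter choice c1 = 1, c3 = 1/2 with c2 = c4 = 0 are assumed for illustration):

m, k, tau = 1.0, 400.0, 0.005           # SDOF mass, stiffness, time increment (assumed)
c1, c2, c3, c4 = 1.0, 0.0, 0.5, 0.0     # explicit scheme: c2 = c4 = 0

u, v = 0.01, 0.0                        # displacement and velocity at time a_t
a = -k * u / m                          # acceleration from M*a + S(u) = P, with P = 0

u_b = u + tau * v + tau**2 * c3 * a     # Eq. (53); c4 = 0 keeps the step explicit
a_b = -k * u_b / m                      # equilibrium, Eq. (52), at time b_t
v_b = v + tau * (c1 * a + c2 * a_b)     # Eq. (53), velocity update
print(u_b, v_b, a_b)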
The dynamic counterpart of the stochastic Eq. (42) at instant t = b t reads

R(b, α, b u) − M b ü = 0 (54)

The random parameters α affect the acceleration and induce fluctuations in velocity and displacement, resp. in the deformed geometry. For the mean values Eq. (53) gives

$${}^b\mu_{\dot u} = {}^a\mu_{\dot u} + \tau\left[c_1\,({}^a\mu_{\ddot u}) + c_2\,({}^b\mu_{\ddot u})\right], \qquad {}^b\mu_u = {}^a\mu_u + \tau\,({}^a\mu_{\dot u}) + \tau^2\left[c_3\,({}^a\mu_{\ddot u}) + c_4\,({}^b\mu_{\ddot u})\right] \qquad (55)$$

The respective operators that enter the first-order covariance transformation, Eq. (27), derive to

$$\frac{\partial\,{}^b\dot u}{\partial\alpha} = \frac{\partial\,{}^a\dot u}{\partial\alpha} + \tau\left[c_1\,\frac{\partial\,{}^a\ddot u}{\partial\alpha} + c_2\,\frac{\partial\,{}^b\ddot u}{\partial\alpha}\right], \qquad \frac{\partial\,{}^b u}{\partial\alpha} = \frac{\partial\,{}^a u}{\partial\alpha} + \tau\,\frac{\partial\,{}^a\dot u}{\partial\alpha} + \tau^2\left[c_3\,\frac{\partial\,{}^a\ddot u}{\partial\alpha} + c_4\,\frac{\partial\,{}^b\ddot u}{\partial\alpha}\right] \qquad (56)$$

Thus the task reduces to the stochastic analysis of Eq. (54) for the acceleration.
Solving the zeroth-order equation

$$R(b,\mu_\alpha,{}^b\mu_{u_0}) - M\,{}^b\mu_{\ddot u_0} = 0 \qquad (57)$$

for ${}^b\mu_{\ddot u_0}$ by the iteration scheme of Eq. (44) introduces the tangent operator

$$-G_D = \frac{\partial}{\partial\,{}^b\ddot u}\left[M\,{}^b\ddot u - R(b,\alpha,{}^b u)\right] = M - \tau^2 c_4\,G \qquad (58)$$

with the matrix G defined by Eq. (45).


Differentiation of Eq. (54) with respect to the random parameters αj furnishes

$$M\,\frac{\partial\,{}^b\ddot u}{\partial\alpha_j} + \frac{\partial M}{\partial\alpha_j}\,{}^b\ddot u = \frac{\partial R}{\partial\alpha_j} + \frac{\partial R}{\partial u}\,\frac{\partial\,{}^b u}{\partial\alpha_j} \qquad (59)$$

Therefrom the first-order stochastic equations for ${}^b\ddot u$ are deduced in conjunction with Eq. (56) for ∂ᵇu/∂αj as

$$-(G_D)_\mu \left.\frac{\partial\,{}^b\ddot u}{\partial\alpha_j}\right|_\mu = \left[\frac{\partial R}{\partial\alpha_j} + G\left(\frac{\partial\,{}^b u}{\partial\alpha_j}\right)_{\!H} - \frac{\partial M}{\partial\alpha_j}\,{}^b\ddot u\right]_\mu, \qquad j = 1,\ldots,q \qquad (60)$$

On the right-hand side of the equation, the subscript H refers to the historical contri-
butions (t = a t) to the partial derivative ∂b u/∂α included on the left-hand side of the
equation. The derivation of higher-order stochastic terms and of sensitivity expressions
is straightforward. Their reproduction is suppressed here for typographical brevity.

5 Path dependence – elastoplasticity


If the association of stress and strain is not unique but depends on the loading sequence, the material constitutive law is incremental in nature – as in elastoplasticity – specified by a momentary stiffness matrix κ. On the structural level this requires incrementation of the loading process and accumulation of the ensuing displacement (Doltsinis 1999). For the time step ᵃt → ᵇt:

$${}^b u = {}^a u + \Delta u, \qquad {}^b S = S({}^a\sigma + \Delta\sigma,\; {}^a X + \Delta u,\; {}^o A) \qquad (61)$$

The displacement increment Δu is governed by the equilibrium condition, Eq. (32), stated at instant t = ᵇt, the end of the current interval:

$${}^b S = S({}^a\sigma,\;{}^a X,\;{}^o A,\;\kappa,\;\Delta u) = P({}^b t,\;{}^a X + \Delta u) = {}^b P \qquad (62)$$

which may be contrasted to Eq. (41), the equilibrium condition for the elastic case.
Usually the incremental approach is employed in elasticity as well for various reasons,
comprising the algorithmic treatment of nonlinearities and the interest in the response
throughout the course of the loading programme (Doltsinis 2003).
The incremental counterpart of Eq. (42) in terms of the random parameters α and the deterministic parameters b is

$${}^b R(b,\alpha,\Delta u) = {}^b P - {}^b S = 0 \qquad (63)$$
The equation for the zeroth-order approximation $\mu_{\Delta u_0}$ to the mean increment,

$${}^b R(b,\mu_\alpha,\mu_{\Delta u_0}) = 0 \qquad (64)$$

may be solved by an application of the iteration scheme of Eq. (44). The appertaining
matrix operator

$$-G = -\frac{\partial R}{\partial(\Delta u)} = \frac{\partial S}{\partial(\Delta u)} - \frac{\partial P}{\partial(\Delta u)} = \bar K_T - K_L \qquad (65)$$

is set up for the state at instant t = ᵇt. The tangent stiffness

$${}^b\bar K_T = \frac{\partial\,{}^b S(b,\alpha,\Delta u)}{\partial(\Delta u)} = \frac{dS({}^a\sigma + \Delta\sigma,\;{}^a X + \Delta u)}{d(\Delta u)} = \frac{d\,{}^b S}{d\,{}^b u} \qquad (66)$$

is structured as the matrix KT of Eq. (46), but with the momentary elastoplastic material stiffness matrix in place of the elastic κ.
Differentiation of Eq. (63) supplies the equations for the first-order terms:

$$-G_\mu \left.\frac{\partial(\Delta u)}{\partial\alpha_j}\right|_\mu = \left.\frac{\partial\,{}^b R}{\partial\alpha_j}\right|_\mu, \qquad j = 1,\ldots,q \qquad (67)$$

The equation for the second-order approximation $\Delta^2\mu_{\Delta u}$ to the mean incremental displacement follows analogously to Eq. (49):

$$-G_\mu\,\Delta^2\mu_{\Delta u} = -G_\mu \sum_{i,j=1}^{q} \left.\frac{\partial^2(\Delta u)}{\partial\alpha_i\,\partial\alpha_j}\right|_\mu \sigma_{\alpha_i\alpha_j} = \sum_{i,j=1}^{q} \left.\left(\frac{\partial^2({}^b R)}{\partial\alpha_i\,\partial\alpha_j} + \frac{dG}{d\alpha_i}\frac{\partial(\Delta u)}{\partial\alpha_j}\right)\right|_\mu \sigma_{\alpha_i\alpha_j} \qquad (68)$$

The total derivative on the right-hand side of the equation accounts for the additional dependence of G(b, α, Δu) on the random parameters α via the incremental displacements Δu. It is evaluated by a finite difference scheme.
The mean value of the displacement increment approximated to the second order,

$$\mu_{\Delta u_2} = \mu_{\Delta u_0} + \frac{1}{2}\,\Delta^2\mu_{\Delta u} \qquad (69)$$

requires solution of Eqs. (64) and (68). The accumulated mean displacement at instant t = ᵇt follows as

$${}^b\mu_u = {}^a\mu_u + \mu_{\Delta u} \qquad (70)$$
The first-order approximate of the displacement covariance matrix for ᵇu = ᵃu + Δu reads

$${}^b\Sigma_{u_1} = \left.\left(\frac{\partial\,{}^a u}{\partial\alpha} + \frac{\partial(\Delta u)}{\partial\alpha}\right)\right|_\mu \Sigma_\alpha \left.\left(\frac{\partial\,{}^a u}{\partial\alpha} + \frac{\partial(\Delta u)}{\partial\alpha}\right)\right|_\mu^t \qquad (71)$$

It requires q solutions of Eq. (67), which determine the respective columns of the derivative matrix ∂(Δu)/∂α. The historical term ∂ᵃu/∂α results from past incremental accumulation. Analysis of Eq. (71) reveals the composition of the covariance matrix:

$${}^b\Sigma_{u_1} = {}^a\Sigma_{u_1} + 2\left[\left.\frac{\partial\,{}^a u}{\partial\alpha}\right|_\mu \Sigma_\alpha \left.\frac{\partial(\Delta u)}{\partial\alpha}\right|_\mu^t\right]_S + \Sigma_{\Delta u_1} \qquad (72)$$

The index S points to the symmetric part of the matrix expression within the parentheses.
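The bookkeeping of the incremental scheme, Eqs. (70)–(71), is indicated by the following skeleton (Python; both solver calls are placeholders standing in for the finite element machinery, with invented return values):

import numpy as np

def mean_increment(load):               # placeholder for the solution of Eq. (64)
    return 0.01 * load * np.ones(2)

def increment_derivative(load):         # placeholder for the q solutions of Eq. (67)
    return 0.001 * load * np.ones((2, 3))

Sigma_alpha = np.diag([1.0, 4.0, 0.25])
mu_u = np.zeros(2)                      # accumulated mean displacement
dud = np.zeros((2, 3))                  # accumulated operator du/dalpha

for load in np.linspace(0.1, 1.0, 10):  # loading programme in ten increments
    mu_u += mean_increment(load)        # Eq. (70)
    dud += increment_derivative(load)   # historical term plus incremental change
Sigma_u = dud @ Sigma_alpha @ dud.T     # Eq. (71)
print(mu_u, Sigma_u)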
The sensitivity of the mean displacement with respect to the design variable zk is

$$\frac{\partial\,{}^b\mu_u}{\partial z_k} = \frac{\partial\,{}^a\mu_u}{\partial z_k} + \frac{\partial\mu_{\Delta u}}{\partial z_k} \qquad (73)$$

From Eq. (64) by differentiation,

$$-G_\mu\,\frac{\partial\mu_{\Delta u_0}}{\partial z_k} = \left.\frac{\partial\,{}^b R_\mu}{\partial z_k}\right|_{\Delta u} \qquad (74)$$

which furnishes an approximation to the advancement of the sensitivity in Eq. (73) within the increment. The design sensitivity of the covariance matrix of the ensuing displacements as from Eq. (31) reads here

$$\frac{\partial\,{}^b\Sigma_{u_1}}{\partial z_k} = \left.\frac{\partial\,{}^b u}{\partial\alpha}\right|_\mu \frac{\partial\Sigma_\alpha}{\partial z_k} \left.\frac{\partial\,{}^b u}{\partial\alpha}\right|_\mu^t + 2\left[\left.\frac{\partial\,{}^b u}{\partial\alpha}\right|_\mu \Sigma_\alpha\, \frac{\partial}{\partial z_k}\!\left(\left.\frac{\partial\,{}^b u}{\partial\alpha}\right|_\mu\right)^{\!t}\right]_S \qquad (75)$$

Evaluation of the above expression requires determination of the derivative matrix

$$\frac{\partial}{\partial z_k}\!\left(\left.\frac{\partial\,{}^b u}{\partial\alpha}\right|_\mu\right) = \frac{\partial}{\partial z_k}\!\left(\left.\frac{\partial\,{}^a u}{\partial\alpha}\right|_\mu\right) + \frac{\partial}{\partial z_k}\!\left(\left.\frac{\partial(\Delta u)}{\partial\alpha}\right|_\mu\right) \qquad (76)$$

specifically of the incremental change on the right-hand side. From Eq. (67) one obtains for the respective columns by differentiation,

$$-G_\mu\,\frac{\partial}{\partial z_k}\!\left(\left.\frac{\partial(\Delta u)}{\partial\alpha_j}\right|_\mu\right) = \frac{\partial}{\partial z_k}\!\left(\left.\frac{\partial\,{}^b R}{\partial\alpha_j}\right|_\mu\right) + \frac{\partial G_\mu}{\partial z_k}\left.\frac{\partial(\Delta u)}{\partial\alpha_j}\right|_\mu, \qquad j = 1,\ldots,q \qquad (77)$$
Computation requires formation of the q right-hand side vectors and solution with the coefficient matrix −Gµ set up solely with mean values at t = ᵇt.
Thus far, fluctuations of the incremental response due to the appearance of ᵃσ and ᵃX in Eq. (62) have been disregarded. These quantities are represented by their mean values, which simplifies the stochastic analysis. For a rigorous treatment the argument of the function for the residual vector in Eq. (63) is complemented to read

$${}^b R(b,\alpha,{}^a\sigma,{}^a X,\Delta u) = 0 \qquad (78)$$

This does not modify the determination of the mean displacement increment to the zeroth order by Eq. (64), except for the explicit appearance here of ᵃµσ and ᵃµX. But the first-order equation for the covariance operator, Eq. (67), now becomes

$$-G_\mu \left.\frac{\partial(\Delta u)}{\partial\alpha_j}\right|_\mu = \left.\left(\frac{\partial\,{}^b R}{\partial\alpha_j} - \frac{\partial\,{}^b S}{\partial\sigma}\frac{\partial\,{}^a\sigma}{\partial\alpha_j} + \frac{\partial\,{}^b R}{\partial X}\frac{\partial\,{}^a X}{\partial\alpha_j}\right)\right|_\mu, \qquad j = 1,\ldots,q \qquad (79)$$

On the right-hand side, the stress term in the middle can be computed by utilizing the stress resultants module S(σ, X) with modified arguments: S(∂ᵃσ/∂αj, ᵇX). The operator to the last term on the right is identified as

$$\frac{\partial R}{\partial X} = \frac{\partial P}{\partial X} - \frac{\partial S}{\partial X} = K_L - K_G \qquad (80)$$

with the matrices KL and KG defined by Eqs. (46), (47). The incremental computation requires, in addition, the updates

$$\frac{\partial\,{}^a\sigma}{\partial\alpha_j} \Leftarrow \frac{\partial\,{}^a\sigma}{\partial\alpha_j} + \frac{\partial(\Delta\sigma)}{\partial\alpha_j}, \qquad \frac{\partial\,{}^a X}{\partial\alpha_j} \Leftarrow \frac{\partial\,{}^a X}{\partial\alpha_j} + \frac{\partial(\Delta u)}{\partial\alpha_j} \qquad (81)$$

6 Applications

6.1 Performance of the incremental approach


The incrementation procedure is exemplified for the planar 10-bar cantilever truss
depicted in Figure 18.8. The problem exhibits geometrical and material nonlinearity.
The two equally increasing forces P3y, P5y acting vertically at nodes no. 3 and no. 5 of the truss induce large displacements. The bar members are assumed of elastic–plastic material with yield stress 250.0, the relationship between the stress σ and the strain ε being

$$d\sigma = \begin{cases} 1.0\times 10^4\,d\varepsilon & \text{elastic range}\\ (250.0/|\varepsilon|)\,d\varepsilon & \text{elastic–plastic} \end{cases}$$

The mass density of the material is taken as ϱ = 1.0.


The horizontal position x4 of node no. 4, the elastic moduli grouped as EI (bars 1, 2), EII (bars 3, 4), EIII (bars 5, 6), EIV (bars 7–10) and the cross-section areas A of the bar members are considered as random variables characterized by mean value µ and variance σ² or coefficient of variation σ/µ.
Figure 18.8 Planar 10-bar cantilever truss (bay width and height 360; loads P3y, P5y at nodes 3 and 5).

Figure 18.9 Mean and standard deviation of nodal displacement v5 versus load magnitude: present approach and Monte Carlo sampling (MCS).

The mean values and variances for the distinct parameters are specified as µx4 = 360.0, σ²x4 = 400.0; µE = 1.0 × 10⁴ for all groups, σ²E = 9.0 × 10⁴ for groups I, II, III and σ²E = 1.0 × 10⁶ for group IV; µAi = 5.0 (i = 1, 2, . . . , 10), σAi/µAi = 0.05 (i = 1, 2, . . . , 6), σAi/µAi = 0.1 (i = 7, . . . , 10).
The performance of the incremental approximation to the nonlinear stochastic problem is discussed with reference to Figures 18.9–18.11. Figure 18.9 shows the
second-order mean and the first-order standard deviation of the vertical displacement
v5 of node no.5 as a function of the magnitude of the loading, which is increased by
10 equidistant incremental steps. The results of the incremental Taylor-series approx-
imation are compared with results from synthetic Monte Carlo sampling with 3000
realizations of the random input. It is seen from the figure that the mean values agree
well; the difference in the standard deviation may arise because the present, simplified approach neglects past scatter in stress and geometry.
Figure 18.10 Mean and standard deviation of maximum member stress versus load magnitude: present approach and Monte Carlo sampling (MCS).

Figure 18.11 Sensitivity of mean and standard deviation of displacement v5 versus load magnitude: present approach and Monte Carlo sampling (MCS).

Figure 18.10 refers to the evolution of the maximum member stress during the course of loading. The figure shows
that both techniques yield here practically the same results for mean value as well as
for standard deviation.
The design pays attention to the vertical displacement v5 of node no.5. The sen-
sitivity of mean value and standard deviation of this quantity with respect to the
cross-section area of bar no.5 are plotted in Figure 18.11 versus the magnitude of
the loading. The sensitivities obtained from Monte Carlo simulations are based on finite difference approximations. In both Figure 18.11 and Figure 18.9, moderate differences from the simplified incremental stochastic computation appear after a number of incremental steps and are seen to accumulate slowly.
The design objective in this example problem is to minimize the vertical displace-
ment v5 at node no.5, at loading P3y = P5y = 1000.0. The design variables are the
mean cross-section areas of the bars, restricted by 0.05 ≤ µAi ≤ 10.0 (i = 1, 2, . . . , 10).
Table 18.1 Optimal results for 10-bar truss.

            Init. (Determ.)   ξ = 0    ξ = 0.25   ξ = 0.5   ξ = 0.75   ξ = 1.0
A1          4.00  (9.14)      9.54     8.34       7.15      6.73       6.19
A2          4.00  (0.05)      0.05     0.05       0.58      0.60       0.81
A3          4.00  (8.83)      8.02     7.83       7.98      7.89       7.76
A4          4.00  (6.05)      4.92     4.82       4.21      3.61       3.13
A5          5.00  (0.05)      0.31     0.27       0.05      0.05       0.05
A6          5.00  (0.05)      0.05     0.05       0.54      0.61       0.78
A7          4.00  (4.07)      4.80     4.93       5.54      5.65       5.75
A8          4.00  (7.61)      6.90     7.30       6.85      7.26       7.21
A9          4.00  (6.54)      7.42     7.96       7.46      7.51       7.45
A10         4.00  (0.05)      0.05     0.06       1.01      1.15       1.70
µf          92.35 (56.30)     59.30    59.62      61.68     62.66      64.51
σf          6.13  (3.53)      3.76     3.55       3.38      3.30       3.27
µW (×10⁴)   2.16  (1.80)      1.80     1.80       1.80      1.80       1.80

The structural weight constraint µW ≤ 1.8 × 104 and the member stress constraints
µ|σi | + 3σσi ≤ 400 (i = 1, 2, . . . , 10) are to be observed.
The optimal solutions obtained with different values of the weighting factor ξ in
the design desirability function are listed in Table 18.1. The value ξ = 0 is associated
with the stochastic mean value minimization, ξ = 1.0 appertains to the pure variance
minimization problem. The stochastic mean value design (ξ = 0) exhibits a standard
deviation of the objective σf = σv5 = 3.76 which reduces to σv5 = 3.27 for the most
robust design (ξ = 1). At the same time the mean value increases from µv5 = 59.30
to µv5 = 64.51, respectively. Increasing the weighting factor ξ is found to diminish
the variability of the observed performance measure at the penalty of an increasing
mean value. Inspection of the optimum solutions for ξ = 0 and ξ = 1 in Table 18.1 may
suggest different design topologies if the bar members with minor cross-section area
are discarded. In fact, according to the mean value optimization, node no. 6 (upper corner on the right) and the adjacent bar elements could be eliminated; the robust design
suggests that only the vertical bar 5 (middle of the truss) is unnecessary. Thereby it
becomes evident that the more robust design requires additional structural members.
The deterministic optimum (nominal, i.e. mean input values; no randomness) is equivalent to a mean value optimization with the zeroth-order approximation only if the constraints are free of variances. In the present case, the deterministic result violates the stress constraints. For instance, the mean and standard deviation of the stress in bar 7 are 375.3 and 40.0, respectively, such that the stochastic stress measure 375.3 + 3 × 40.0 = 495.3 exceeds the allowed value of 400.

6.2 Structural compliance optimization of a 25-bar space truss


The space truss under investigation resembles a power transmission tower with topology as shown in Figure 18.12. The geometry of the truss is defined by the nodal point coordinates in Table 18.2. The task to be performed is the minimization of the structural compliance, the work performed by the applied forces on the displacements, which defines the objective, the performance function f = Pᵗu.
Figure 18.12 Twenty-five-bar truss (node and bar numbering; global axes X, Y, Z).

Table 18.2 Nodal coordinates of 25-bar truss.

Node        x          y          z
1         −37.5        0.0      200.0
2          37.5        0.0      200.0
3         −37.5       37.5      100.0
4          37.5       37.5      100.0
5          37.5      −37.5      100.0
6         −37.5      −37.5      100.0
7        −100.0      100.0        0.0
8         100.0      100.0        0.0
9         100.0     −100.0        0.0
10       −100.0     −100.0        0.0

The design variables are the
cross-section areas of the 25 bars which are reduced to six independent variables by
allocating equivalent members to the groups shown in Table 18.3. For each of the six
groups, the common cross-section and the elastic modulus are random quantities with
mean value µ and standard deviation σ resp. coefficient of variation (COV = σ/µ) as
given in Table 18.4. The elastic range of the structural material is limited by the yield
stress σs = Eεs with εs = 0.003. The stress–strain relationship is

$$d\sigma = \begin{cases} E\,d\varepsilon & \text{elastic range}\\ E\,(\varepsilon_s/|\varepsilon|)\,d\varepsilon & \text{elastic–plastic} \end{cases}$$

The mass density of the material is ϱ = 0.1.
Table 18.3 Group membership for 25-bar truss.

Group number Bar members

I 1
II 2,3,4,5
III 6,7,8,9
IV 10,11,12,13
V 14,15,16,17,18,19,20,21
VI 22,23,24,25

Table 18.4 Random variables for the nonlinear 25-bar truss.

No.     Variable    Mean        Std. deviation    COV
1–5     EI–EV       1.0 × 10⁷   2.0 × 10⁵         /
6       EVI         1.0 × 10⁷   1.5 × 10⁶         /
7       P3x         5.0 × 10³   5.0 × 10²         /
8       P6x         5.0 × 10³   5.0 × 10²         /
9–14    AI–AVI      /           /                 0.05

Table 18.5 Optimal solution for the nonlinear 25-bar truss.

            Init. (Determ.)   ξ = 0    ξ = 0.25   ξ = 0.5   ξ = 0.75   ξ = 1
AI          2.500 (1.956)     0.050    1.879      1.822     1.483      2.116
AII         2.500 (2.594)     3.608    2.093      1.798     1.546      1.929
AIII        2.500 (3.766)     3.692    3.468      3.374     3.463      3.333
AIV         2.500 (1.247)     0.794    0.920      0.979     1.060      1.409
AV          2.500 (1.138)     1.315    1.312      1.366     1.368      1.282
AVI         2.500 (6.286)     5.409    6.758      6.948     7.122      6.800
µf (×10⁵)   6.460 (4.344)     4.322    4.374      4.424     4.430      4.435
σf (×10⁵)   0.639 (0.293)     0.316    0.272      0.269     0.265      0.261
µW (×10²)   8.268 (8.489)     8.500    8.500      8.500     8.500      8.500

The loading induces geometrical nonlinearities. It consists of forces P1x = P1y = P2x =
P2y = −1.0 × 105 with fixed magnitude imposed at nodes no.1, no.2 along
the x- and the y-direction, and of randomly fluctuating forces P3x , P6x acting
at nodes no.3, no.6 along the x-direction with mean value 5.0 × 103 (Table
18.4). The structural weight is constrained by µW ≤ 850, the member stress by
µ|σi | + 3σσi ≤ 4.0 × 104 (i = 1, 2, . . . , 25). The bounds for the design variables are
0.05 ≤ Aj ≤ 10.0 (j = I, . . . , VI).
Design optimization based on the desirability function F(µf , σf , ξ) is employed in
conjunction with the incremental stochastic analysis. The initial design, the determin-
istic optimum and the optimal solutions for different values of the weighting factor ξ
are listed in Table 18.5. The deterministic optimum refers to nominal, or mean input.
It is equivalent to mean value optimization (ξ = 0) when the mean is approximated to
the zeroth order. An augmented factor ξ puts weight on the minimization of the standard deviation of the objective while the significance of the mean value diminishes.
Figure 18.13 Pareto set for the non-linear 25-bar truss: pairs of standard deviation σ (×10⁵) and mean µ (×10⁵) of the objective that minimize the desirability function.

Figure 18.14 Antenna structure (example).

It is seen from Table 18.5 that the robust solutions (ξ > 0) reduce the standard deviation
of the objective by 14–18% as compared to ξ = 0. The constraints posed are satis-
fied throughout. Mean values and standard deviations of the objective appertaining to
the optimum solutions that minimize the desirability function at distinct values of the
weighting factor ξ, form the Pareto set depicted in Figure 18.13.

6.3 Antenna structure under large displacements


The antenna (radius 2.8 m, height 2.5 m, Figure 18.14) is considered under quasi-static
wind loading. Thereby, the elastic structure undergoes large displacements which cause
shape distortion and affect the pointing accuracy of the reflection surface. The aim of
the design is to minimize displacements and to ensure robustness against fluctuating
input.
Figure 18.15 Layout of the antenna structure model and sideload (q = 6000 N/m²).

Table 18.6 Optimal solution for the antenna structure.

Design variable     Lower   Upper   Initial   ξ = 0.0   ξ = 0.5   ξ = 1.0
A1 (×10⁻⁶ m²)       10.00   25.00   15.00     13.30     25.00     25.00
A2 (×10⁻⁶ m²)       10.00   25.00   15.00     13.62     18.47     19.15
A3 (×10⁻⁶ m²)       10.00   25.00   15.00     12.53     18.63     18.08
t (×10⁻³ m)         2.00    3.50    2.80      3.02      2.68      2.65
µW (kg)             /       350.0   331.5     350.0     350.0     350.0
µf (×10⁻³ m)        /       /       102.7     87.28     90.56     91.27
σf (×10⁻³ m)        /       /       7.4       6.9       5.9       5.8

The finite element model of the simplified antenna structure consists of a complex
truss framework in three-dimensional space, which is covered by a membrane skin
(Figure 18.15). The wind loading is modelled as distributed surface force that acts
sidewards with a magnitude of 6000 N per square meter of projection area.
The bars forming the truss structure are collected into five groups according to their
position and orientation: the circumferential, radial and skew members supporting the
skin on the reflection surface, short and long bars at the bottom part. In the model the
skin cover consists of four peripheral regions. The uncertain parameters accounted for
are those having major effects. They are considered as uncorrelated random variables
that comprise: the five elastic moduli of the grouped bar members, the elastic modulus
of the skin material, the skin thickness in each of the four regions of the cover. The
mean value of the elastic moduli is µE = 7.1 × 10¹⁰ N/m²; the coefficient of variation is σ/µ = 0.1 for all random quantities.
The design variables are the skin thickness t and the cross-section areas A1 , A2 , A3 of
the bars belonging to the circumferential, the radial and the skew group at the reflection
part of the antenna, respectively. The design objective is to minimize the largest radial
displacement at the upper edge of the antenna structure under the constraint of total
structural weight µW ≤ 350.0 kg and restriction of the design variables (Table 18.6).
The optimum solutions obtained for different values of the weighting factor ξ are
listed in Table 18.6. Optimization with ξ = 0.5 reduces the variability of the initial
design and enhances the robustness markedly beyond the result of mean value min-
imization (ξ = 0). The further improvement with ξ = 1.0 is not significant, however.

7 Conclusions
Structural analysis and design accounting for random input data necessitate employment of computational procedures which are different from deterministic methods but build on them. The prevalent approaches to stochastic structural analysis can be
classified either as synthetic, statistical techniques like the Monte Carlo sampling,
or as nonstatistical, analytical methods based on Taylor series expansion. The former
involve sampling of the input statistics, function evaluation and synthesis of the output
statistics. The latter methods use functional expansion presuming smoothness and dif-
ferentiability. By its nature, the Taylor approach is restricted to the analysis of systems
with moderate parameter variability; it provides the analyst with a powerful tool, how-
ever. The computational effort is determined by the number of random input variables
in contrast to the number of sampling units to be evaluated until statistical convergence
in the Monte Carlo technique. In turn, the latter supplies the complete output statistics
for the sample, while Taylor expansion describes random input and output up to the
second statistical moment regardless of the actual distribution. Methods of stochastic
analysis are surveyed in (Schuëller 2001).
Robust performance in service as considered here has to be contrasted to the relia-
bility task which concerns failure. In the design issue the Taylor approach allows for
the derivation of sensitivity expressions useful in utilizing gradient based optimiza-
tion algorithms. Optimizing for mean performance does not differ from the analogous
deterministic task, in principle, but robustness against uncertain factors observes also
the random variability of the performance as measured by the standard deviation.
Thereby a two-objective problem is posed, but standard algorithms are applicable in
conjunction with the desirability function, a scalar substitute that compromises the
demand for both best mean performance and least variability. While the desirability
function can be defined in various ways, adjustment of robustness is a property inher-
ent in the system. Given the input scatter, the variability of the output can be influenced
only by the input level. If the input level has no effect, there is no issue of robustness.
The argument is elucidated by considering a performance function as follows

f (z, α) = a + bt z + zt Bz + ct α + zt Cα (82)

with design variables in the vector z and random input in the vector α. This gives for
the variance

$$\sigma_f^2 \cong \left.\frac{\partial f}{\partial\alpha}\right|_\mu \Sigma_\alpha \left.\frac{\partial f}{\partial\alpha}\right|_\mu^t = (c + Cz)^t\,\Sigma_\alpha\,(c + Cz) \qquad (83)$$

the matrix C assumed symmetric. It turns out that the last term in Eq. (82) is decisive for
robustness. Accordingly, empirical approximations of the performance as a function
of the design variables should explore interaction with the random input. In general
terms, designing for robustness requires that the variance of the performance function
is sensitive to the design variables.
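A numerical illustration (Python, all values assumed): with the quadratic performance of Eq. (82), the variance of Eq. (83) depends on the design only through the interaction term zᵗCα, so that a suitable choice of z can suppress the scatter; with C = 0 the two designs below would be equally robust.

import numpy as np

c = np.array([1.0, 0.5])
C = np.array([[0.8, 0.2], [0.2, 0.4]])  # symmetric interaction matrix (assumed)
Sigma_a = np.diag([0.09, 0.04])

def var_f(z):                           # Eq. (83)
    g = c + C @ z
    return g @ Sigma_a @ g

print(var_f(np.array([0.0, 0.0])))      # variance of a reference design
print(var_f(np.array([-1.0, -0.5])))    # markedly reduced by the interaction term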
In this chapter, stochastic analysis has been based on second-order expansion about
the mean of the random input variables. The mean response to the second order is of
relevance when considering a series of products. For assessing single events the response
for nominal input, the zeroth-order mean, appears a suitable reference in conjunction
with the scatter about this level. Scatter is characterized by the covariance matrix,
represented by the first-order approximation. The unified methodology worked out
encompasses linear elastic structures, geometric and material nonlinearity; path depen-
dence implies incrementation of the loading process. The associated algorithm extends
standard finite element procedures. It supplies mean vector and covariance matrix
of the response variables, the displacements, and evaluates design sensitivity expres-
sions. The results serve as input to conventional optimization that minimizes the
desirability function, a compromise between mean value and variability of the perfor-
mance measure. The importance put on each criterion has to be specified in advance
by the analyst. The designer will benefit from a diversity of importance settings,
however.
Numerical results obtained with the proposed analytic approach agree well with
synthetic (Monte Carlo) sampling. Apart from the applicability of the technique the
comparisons indicate superiority with respect to the computational effort. This should
not divert from the particularities of the method, however, which is local in nature.
Large input scatter as well as non-smooth response, for instance, necessitate employ-
ment of alternative techniques, like the statistical sampling. The latter proves useful in
exploring a wider range of parameter variation and is not restricted by discontinuities.
At an early stage, exploration of the design space will be more important than abso-
lute optimization to follow in the subsequent specification of details. Referring to the
exposition in (Doltsinis 2003) a design package should allow access to both the global
Monte Carlo technique and the local Taylor approach.
The development of the presented method relies on the availability of discrete ran-
dom variables. Continuous fluctuations within the domain like material properties or
geometrical dimensions have not been a subject. Such conditions define random fields
which can be described as

\[ \alpha(x) = \mu_\alpha(x) + \beta(x) \tag{84} \]

and raise the task of discretized representation in the context of finite element simu-
lations. In Eq. (84) the spatially fluctuating part β(x) = α(x) − µα (x) with zero mean
defines the randomness of α(x). This description proves particularly useful for random
fields with a constant mean µα = µα (x) like the linear elastic properties of homoge-
neous materials. Incorporation of random fields in finite element expressions can be
effected in various manners. The mid-point method, for instance, is based on the field value at the mid-point of the finite element; the local averaging technique represents the field by its element average; and the weighted integral method employs weighted integrals over the individual element domains. For a brief outline with further references, see
(Der Kiureghian and Zhang 1999).
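As a hypothetical illustration of the mid-point method mentioned above, the sketch below replaces the field by one random variable per finite element, evaluated at the element mid-point; the mean and covariance functions of the field are assumed given.

```python
import numpy as np

def midpoint_discretization(midpoints, mean_fn, cov_fn):
    """One random variable per element, alpha_e = alpha(x_mid).
    Returns the mean vector and covariance matrix of these variables."""
    mu = np.array([mean_fn(x) for x in midpoints])
    Sigma = np.array([[cov_fn(xi, xj) for xj in midpoints] for xi in midpoints])
    return mu, Sigma

# Example: homogeneous field with constant mean and exponential covariance.
mu, Sigma = midpoint_discretization(
    midpoints=np.linspace(0.5, 9.5, 10),
    mean_fn=lambda x: 210.0e9,
    cov_fn=lambda x, y: (0.1 * 210.0e9)**2 * np.exp(-abs(x - y) / 2.0))
```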

Acknowledgment
The author is indebted to Mrs. Knapp Christiansen for linguistic assistance and for
the preparation of the manuscript in LaTeX.

References

Breipohl, A.M. 1970. Probabilistic Systems Analysis. Wiley, New York.


Der Kiureghian, A. & Zhang, Y. 1999. Space-variant finite element reliability analysis, Comput.
Methods Appl. Mech. Engrg. 168:173–183.
Doltsinis, I. 1999. Elements of Plasticity – Theory and Computation, WIT Press, Southampton.
Doltsinis, I. 2003. Inelastic deformation processes with random parameters – methods of analysis
and design. Comput. Methods Appl. Mech. Engrg. 192:2405–2423.
Doltsinis, I. 2003. Large Deformation Processes of Solids – From Fundamentals to Computer
Simulation and Engineering Applications, WIT Press, Southampton.
Doltsinis, I. & Kang, Z. 2004. Robust design of structures using optimization methods. Comput.
Methods Appl. Mech. Engrg. 193:2221–2237.
Doltsinis, I. & Rodic, T. 1999. Process design and sensitivity analysis in metal forming. Int. J.
Numer. Meth. Engrg. 45:661–692.
Doltsinis, I. (ed.) 1999. Stochastic Analysis of Multivariate Systems in Computational Mechanics
and Engineering, CIMNE, Barcelona.
Doltsinis, I., Kang, Z. & Cheng, G. 2005. Robust design of non-linear structures using
optimization methods. Comput. Methods Appl. Mech. Engrg. 194:1779–1795.
Hurtado, J.E. 2004. Structural reliability – Statistical learning perspectives, Springer-Verlag,
Berlin Heidelberg.
Kang, Z. 2005. Robust Design Optimization of Structures under Uncertainties, Doctoral Thesis,
University of Stuttgart.
Kleiber, M. & Hien, T.D. 1992. The Stochastic Finite Element Method, Wiley, Chichester.
Lawrence, C., Zhou, J.L. & Tits, A.L. 2007. User’s guide for CFSQP version 2.5. Available
from http://www.aemdesign.com.
Lee, K.H. & Park, G.J. 2001. Robust optimization considering tolerances of design variables,
Computers and Structures 79:77–86.
Luo, X. & Grandhi, R.V. 1997. ASTROS for reliability-based multidisciplinary structural
analysis and optimization. Computers and Structures 62:737–745.
Myers, R.H. & Montgomery, D.C. 1995. Response Surface Methodology, Wiley, New York.
Rencher, A.C. 1995. Methods of Multivariate Analysis, Wiley, New York.
Schuëller, G.I. 2001. Computational stochastic mechanics–recent advances, Computers and
Structures 79:2225–2234.
Stadler, W. 1984. Multicriteria optimization in mechanics (A Survey). Appl. Mech. Rev. 37:
277–286.
Tamma, K.K., Zhou, X. & Sha, D. 2000. The time dimension: a theory towards the evolution,
classification, characterization and design of computational algorithms for transient/dynamic
applications. Archives of Computational Methods in Engineering 7:67–290.
Chapter 19

Info-gap robust design of passively controlled structures with load and model uncertainties

Izuru Takewaki
Kyoto University, Kyoto, Japan

Yakov Ben-Haim
Technion, Haifa, Israel

ABSTRACT: A new structural design concept is developed which incorporates uncertainties in both the load and the structural parameters. Info-gap models of uncertainty (non-probabilistic uncertainty models) are used to represent uncertainties in the Fourier amplitude spectrum of the load (the input ground acceleration) and in parameters of the vibration model of the structure. Since non-probabilistic uncertainties are prevalent in many situations, this chapter shows that it is necessary to satisfy critical performance requirements (rather than to optimize performance), and to maximize the robustness to uncertainty. Earthquake input energy to passively controlled structures is introduced as a new measure of structural performance. While the structural properties and performance of ordinary structural systems are well understood through vast experience and extensive databases, those of control devices added to such systems are not necessarily as reliable. It is therefore reasonable to treat the damping coefficients of the control devices as uncertain. The design implications of the robust-satisficing approach are demonstrated with these passively controlled structures.

1 Introduction
Load and structural model uncertainties are two major sources of actual uncertainties
in the design of civil, mechanical and aerospace structures. While simultaneous con-
sideration of both the load and structural model uncertainties is very important and
challenging, only a limited number of publications can be found on this subject (Cherng
and Wen 1994, Ghanem and Spanos 1991, Igusa and Der Kiureghian 1988, Jensen and
Iwan 1992, Jensen 2000, Katafygiotis and Papadimitriou 1996). Because civil engi-
neering structures are not mass-produced and the occurrence rate of large earthquakes
and other severe disturbances is very low, the probabilistic representation of the effect
of these disturbances on structural systems seems to be difficult in most cases.
The critical excitation method is one of the promising strategies for overcoming dif-
ficulties in modeling the non-probabilistic load uncertainty (Drenick 1970, Shinozuka
1970, Ben-Haim and Elishakoff 1990, Takewaki 2001a, b, 2002a, b, 2004, 2006,
Westermo 1985). In most of these critical excitation methods except Westermo (1985)
and Takewaki (2004, 2006), deformation or displacement parameters were treated as
response performance functions defining the criticality of the loads. In this chapter, the
earthquake input energy to passively controlled structures is introduced as a new mea-
sure of structural performance. This is because some control devices have limitations on

energy dissipation capacity and modeling. While the structural properties and performance of ordinary structural systems are well understood through extensive experience and databases, those of control devices added to such systems are not necessarily as reliable. In this situation, it is reasonable to consider uncertainties in the damping coefficients of the control devices and to use the earthquake input energy to the passively controlled structure as a new measure of structural performance.
The purpose of this chapter is to propose a new structural design concept which
incorporates uncertainties in both the load and the structural parameters. For that
purpose, it is necessary to identify the critical load (excitation) and the corresponding
critical set of structural model parameters. It is clear that the critical load (excitation)
depends on the structural model parameters and it is extremely difficult to deal with
load and structural model parameter uncertainties simultaneously. Info-gap models of
uncertainty (non-probabilistic uncertainty models) by Ben-Haim (1996, 2001, 2005,
2006) are used to represent uncertainties in the Fourier amplitude spectrum of the load
(input ground acceleration) and in parameters of the vibration model of the structure.

2 Info-gap uncertainty analysis


As an example, let us consider that damping coefficients ci are very uncertain and can
be expressed in terms of the nominal values c̃i and the unknown uncertainty level α as
shown in Figure 19.1 (Takewaki and Ben-Haim, 2005).
   
\[ C(\alpha, \tilde{c}) = \left\{ c : \left| \frac{c_i - \tilde{c}_i}{\tilde{c}_i} \right| \le \alpha, \; i = 1, \ldots, N \right\}, \quad \alpha \ge 0 \tag{1} \]

The info-gap model C(α, c̃) is not a single set of damping coefficients, but rather
an unbounded family of nested sets of coefficients. Let F(ω, c, k) denote the ‘energy
transfer function’ defined in the following section. The energy transfer function is a
function of the damping coefficients ci and the following info-gap model, which is
an unbounded family of nested sets of functions, may be introduced in terms of the
nominal function F̃ corresponding to the nominal damping coefficients c̃i .
   
\[ \mathcal{F}(\alpha, \tilde{F}) = \left\{ F(\omega, c, k) : \left| \frac{c_i - \tilde{c}_i}{\tilde{c}_i} \right| \le \alpha, \; i = 1, \ldots, N \right\}, \quad \alpha \ge 0 \tag{2} \]

Figure 19.1 Uncertain damping coefficient with unknown horizon of uncertainty α about the nominal value c̃i.



The following family of sets of functions may also be considered for the definition
of the info-gap uncertainty model.

\[ \mathcal{F}^{*}(\alpha, \tilde{F}) = \left\{ F(\omega, k) : |F(\omega, k) - \tilde{F}(\omega, k)| \le \alpha \right\}, \quad \alpha \ge 0 \tag{3} \]

\[ \mathcal{F}^{**}(\alpha, \tilde{F}) = \left\{ F(\omega, k) : |F(\omega, k) - \tilde{F}(\omega, k)| \le \alpha \psi(\omega) \right\}, \quad \alpha \ge 0 \tag{4} \]

2.1 Info-gap robustness function


The info-gap robustness is the greatest horizon of uncertainty, α, up to which the
performance function f (c, k) does not exceed a critical value, fC . Let us define the
following info-gap robustness function corresponding to the info-gap uncertainty
model represented by equation (1).
 
\[ \hat{\alpha}(k, f_C) = \max \left\{ \alpha : \max_{c \in C(\alpha, \tilde{c})} f(c, k) \le f_C \right\} \tag{5} \]

Another info-gap robustness function corresponding to the info-gap uncertainty


model by equation (2) may be introduced by
 
\[ \hat{\alpha}(k, f_C) = \max \left\{ \alpha : \max_{F \in \mathcal{F}(\alpha, \tilde{F})} f(c, k) \le f_C \right\} \tag{6} \]

Let fC0 = f (c̃, k). Then one can show that α̂(k, fC0 ) = 0, as shown in Figure 19.2. We
define α̂(k, fC ) = 0 if fC ≤ fC0 (see Figure 19.2). The definitions in equations (5) and
(6) imply that the robustness is the maximum level of the structural model parameter
uncertainty, α, satisfying the performance requirement f (c, k) ≤ fC for all admissible
variation of the structural model parameter represented by equation (1) or (2).
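Because the sets C(α, c̃) are nested, the inner maximum in equation (5) is non-decreasing in α, so the robustness can be evaluated by bisection. The sketch below is a minimal illustration, assuming a user-supplied routine worst_case(α) that returns the maximum of f(c, k) over C(α, c̃) (for the interval model of equation (1) this typically means checking the endpoints c̃i(1 ± α)).

```python
def robustness(worst_case, f_C, alpha_max=10.0, tol=1e-6):
    """Largest alpha with worst_case(alpha) <= f_C, found by bisection;
    returns 0 when f_C <= f_C0 = worst_case(0), as in Figure 19.2."""
    if worst_case(0.0) > f_C:
        return 0.0
    lo, hi = 0.0, alpha_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst_case(mid) <= f_C:
            lo = mid
        else:
            hi = mid
    return lo
```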

Figure 19.2 Info-gap robustness function α̂ with respect to the design requirement fC.



Figure 19.3 Free-body diagram for defining the input energy.

3 Earthquake input energy to SDOF system


Much work has accumulated on the topics of the earthquake input energy since the
work by Housner (1959). In contrast to most of the previous works (Akiyama 1985),
the earthquake input energy is formulated here in the frequency domain (Page 1952;
Lyon 1975; Ordaz et al. 2003) to facilitate the derivation of a bound of the earthquake
input energy.
Consider a damped linear single-degree-of-freedom (SDOF) system of mass m, stiffness k and damping coefficient c. Let Ω = √(k/m), h = c/(2Ωm) and x denote the
undamped natural circular frequency, the damping ratio and the displacement of
the mass relative to the ground, respectively. The time derivative is denoted by over-
dot. The input energy to the SDOF system by a uni-directional ground acceleration
üg (t) = a(t) from t = 0 to t = t0 (end of input) can be defined by the work of the ground
on the structural system and is expressed by
\[ E_I = \int_0^{t_0} m (\ddot{u}_g + \ddot{x}) \dot{u}_g \, dt \tag{7} \]

The term −m(üg + ẍ) with minus sign indicates the inertial force on the mass and is
equal to the sum of the restoring force kx and the damping force cẋ in the system as
shown in Figure 19.3. Integration by parts of equation (7) provides
\[ E_I = \int_0^{t_0} m(\ddot{x} + \ddot{u}_g)\dot{u}_g \, dt = \int_0^{t_0} m\ddot{x}\dot{u}_g \, dt + \left[ \tfrac{1}{2} m\dot{u}_g^2 \right]_0^{t_0} = \left[ m\dot{x}\dot{u}_g \right]_0^{t_0} - \int_0^{t_0} m\dot{x}\ddot{u}_g \, dt + \left[ \tfrac{1}{2} m\dot{u}_g^2 \right]_0^{t_0} \tag{8} \]

If ẋ = 0 at t = 0 and u̇g = 0 at t = 0 and t = t0 , the input energy can be reduced to the


following form.
\[ E_I = - \int_0^{t_0} m \ddot{u}_g \dot{x} \, dt \tag{9} \]

For example, consider the recorded ground motion of El Centro NS 1940 (Imperial
Valley) shown in Figure 19.4. The time history of the input energy per unit mass is
shown in Figure 19.5. This was computed using equation (9) by regarding t0 as t.
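A minimal sketch of such a computation is given below; it is not the original implementation. The SDOF response is integrated with an average-acceleration Newmark scheme, the energy integral of equation (9) is accumulated with a simple rectangle rule, and the default values of Ω and h are merely illustrative.

```python
import numpy as np

def input_energy_history(a, dt, Omega=8.7, h=0.04):
    """E_I(t)/m from Eq. (9) with t0 replaced by t, for a sampled
    ground acceleration a(t); x'' + 2 h Omega x' + Omega^2 x = -a(t)."""
    x = v = 0.0
    acc = -a[0]
    E = np.zeros(len(a))
    for k in range(1, len(a)):
        # average-acceleration Newmark step (gamma = 1/2, beta = 1/4)
        v_pred = v + 0.5 * dt * acc
        x_pred = x + dt * v + 0.25 * dt**2 * acc
        acc = (-a[k] - 2*h*Omega*v_pred - Omega**2 * x_pred) / \
              (1.0 + h*Omega*dt + 0.25 * Omega**2 * dt**2)
        v = v_pred + 0.5 * dt * acc
        x = x_pred + 0.25 * dt**2 * acc
        E[k] = E[k-1] - a[k] * v * dt      # accumulate -int a x' dt
    return E
```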

Figure 19.4 Ground motion of El Centro NS 1940 (Imperial Valley).


Figure 19.5 Time history of input energy under El Centro NS 1940 (Imperial Valley).

It is known (Page 1952; Lyon 1975; Ordaz et al. 2003; Takewaki 2004, 2006)
that the input energy expressed by equation (9) can also be expressed in the frequency
domain.

\[
\begin{aligned}
E_I/m &= -\int_{-\infty}^{\infty} \dot{x}\, a \, dt
= -\int_{-\infty}^{\infty} \left\{ \frac{1}{2\pi} \int_{-\infty}^{\infty} \dot{X} e^{i\omega t} \, d\omega \right\} a \, dt \\
&= -\frac{1}{2\pi} \int_{-\infty}^{\infty} \left\{ \int_{-\infty}^{\infty} a e^{i\omega t} \, dt \right\} H_V(\omega; \Omega, h) A(\omega) \, d\omega \\
&= -\frac{1}{2\pi} \int_{-\infty}^{\infty} A(-\omega) H_V(\omega; \Omega, h) A(\omega) \, d\omega \\
&= \int_0^{\infty} |A(\omega)|^2 \left\{ -\mathrm{Re}[H_V(\omega; \Omega, h)]/\pi \right\} d\omega
\equiv \int_0^{\infty} |A(\omega)|^2 F(\omega) \, d\omega
\end{aligned}
\tag{10}
\]

where HV(ω; Ω, h) is the velocity transfer function defined by Ẋ(ω) = HV(ω; Ω, h)A(ω) and F(ω) = −Re[HV(ω; Ω, h)]/π. The functions Ẋ and A(ω) are the Fourier transforms

Figure 19.6 Fourier amplitude spectrum of the ground acceleration of El Centro NS 1940 (Imperial Valley).

of ẋ and üg(t) = a(t), respectively. The symbol i denotes the imaginary unit. HV(ω; Ω, h) can be expressed by

\[ H_V(\omega; \Omega, h) = \frac{-i\omega}{\Omega^2 - \omega^2 + 2 i h \Omega \omega} \tag{11} \]

Equation (10) indicates that the earthquake input energy to damped linear elastic
SDOF systems does not depend on the phase of input motions and this fact is well
known (Page 1952, Lyon 1975, Ordaz et al. 2003; Takewaki 2004).
Figure 19.6 shows the Fourier amplitude spectrum |A(ω)| of El Centro NS 1940.
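As an illustrative numerical counterpart of equations (10) and (11) (not from the chapter; the FFT scaling by dt approximates the continuous Fourier transform, and the sign convention is immaterial since only |A(ω)|² enters), the input energy per unit mass can be evaluated directly from a sampled record:

```python
import numpy as np

def input_energy_per_mass(a, dt, Omega, h):
    """E_I/m from Eq. (10): integral over w >= 0 of |A(w)|^2 F(w)."""
    A = dt * np.fft.rfft(a)                        # approximates A(w)
    w = 2.0 * np.pi * np.fft.rfftfreq(len(a), dt)  # non-negative frequencies
    HV = -1j * w / (Omega**2 - w**2 + 2j * h * Omega * w)   # Eq. (11)
    F = -np.real(HV) / np.pi
    return np.sum(np.abs(A)**2 * F) * (w[1] - w[0])
```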

4 Earthquake input energy to MDOF system


Consider next a damped linear elastic multi-degree-of-freedom (MDOF) shear building
model of mass matrix [M] subjected to a uni-directional horizontal ground acceleration
üg (t) = a(t). The present method can be applied to both proportionally damped and
non-proportionally damped structures. Let {x} denote a set of the nodal horizontal dis-
placements relative to the ground. The time derivative is denoted by over-dot. The input
energy to the MDOF system by the ground motion from t = 0 to t = t0 (end of input)
can be defined by the work of the ground on the MDOF system and is expressed by

\[ E_I = \int_0^{t_0} \{1\}^{T} [M] \left( \{1\} \ddot{u}_g + \{\ddot{x}\} \right) \dot{u}_g \, dt \tag{12} \]

where {1} = {1 ⋯ 1}T. The term {1}T[M]({1}üg + {ẍ}) is the negative of the sum of the horizontal inertial forces acting on the system, as shown in Figure 19.7.
Integration by parts of equation (12) provides

\[ E_I = \left[ \tfrac{1}{2} \{1\}^{T}[M]\{1\}\dot{u}_g^2 \right]_0^{t_0} + \left[ \{\dot{x}\}^{T}[M]\{1\}\dot{u}_g \right]_0^{t_0} - \int_0^{t_0} \{\dot{x}\}^{T}[M]\{1\}\ddot{u}_g \, dt \tag{13} \]

Figure 19.7 Free-body diagram for defining the input energy to the MDOF model.

If {ẋ} = {0} at t = 0 and u̇g = 0 at t = 0 and t = t0 , the input energy can be reduced
to the following form:
\[ E_I = - \int_0^{t_0} \{\dot{x}\}^{T} [M] \{1\} \ddot{u}_g \, dt \tag{14} \]

The input energy can also be expressed in the frequency domain as in the SDOF
system. Let {Ẋ} denote the Fourier transform of {ẋ}. Application of the Fourier inverse
transformation of the relative nodal velocities {ẋ} to equation (14) leads to
\[
\begin{aligned}
E_I &= -\int_{-\infty}^{\infty} \left\{ \frac{1}{2\pi} \int_{-\infty}^{\infty} \{\dot{X}\}^{T} e^{i\omega t} \, d\omega \right\} [M]\{1\}\ddot{u}_g \, dt \\
&= -\frac{1}{2\pi} \int_{-\infty}^{\infty} \{\dot{X}\}^{T}[M]\{1\} \left\{ \int_{-\infty}^{\infty} \ddot{u}_g e^{i\omega t} \, dt \right\} d\omega \\
&= -\frac{1}{2\pi} \int_{-\infty}^{\infty} \{\dot{X}\}^{T}[M]\{1\} A(-\omega) \, d\omega
\end{aligned}
\tag{15}
\]

A(ω) is the Fourier transform of ground acceleration üg (t) = a(t) and the symbol i
denotes the imaginary unit.
From the Fourier transform of the equations of motion, the Fourier transform Ẋ(ω)
of the nodal velocities can be expressed by

\[ \{\dot{X}(\omega)\} = -i\omega \left( -\omega^2 [M] + i\omega [C] + [K] \right)^{-1} [M] \{1\} A(\omega) \tag{16} \]



After the substitution of equation (16) into equation (15), the input energy may be
computed by
\[ E_I = \int_0^{\infty} F_M(\omega) |A(\omega)|^2 \, d\omega \tag{17} \]

where

\[ F_M(\omega) = \mathrm{Re}\!\left[ i\omega \{1\}^{T} [M]^{T} [Y(\omega)] [M] \{1\} \right] / \pi \tag{18a} \]

\[ [Y(\omega)] = \left( -\omega^2 [M] + i\omega [C] + [K] \right)^{-1} \tag{18b} \]
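Equations (18a)–(18b) translate directly into code; the following minimal sketch (matrix assembly is assumed done elsewhere) evaluates FM at a single circular frequency:

```python
import numpy as np

def F_M(w, M, C, K):
    """Energy transfer function of Eqs. (18a)-(18b) at frequency w."""
    ones = np.ones(M.shape[0])
    Y = np.linalg.inv(-w**2 * M + 1j * w * C + K)      # Eq. (18b)
    return np.real(1j * w * (ones @ M.T @ Y @ M @ ones)) / np.pi   # Eq. (18a)
```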

5 Critical excitation problem


It is shown in this section that a critical excitation method for the earthquake input
energy can provide upper bounds on earthquake input energy. Westermo (1985) has
discussed a similar problem for the maximum input energy to an SDOF system sub-
jected to external forces. His solution is restrictive because its form includes the velocity response quantity, which contains the solution itself implicitly. A more general
solution procedure will be presented here.
The capacity of ground motions is often defined in terms of the time integral of
squared ground acceleration (Arias 1970; Housner and Jennings 1975). This quantity
is well known as the Arias intensity measure, up to a difference in the coefficient.
The constraint on this quantity can be expressed by
\[ \int_{-\infty}^{\infty} a(t)^2 \, dt = \frac{1}{\pi} \int_0^{\infty} |A(\omega)|^2 \, d\omega = \bar{C}_A \tag{19} \]

where C̄A is the specified value of the time integral of the squared ground acceleration. It is
also clear that the maximum value of the Fourier amplitude spectrum of input ground
acceleration is finite. The infinite spectrum may correspond to a perfect harmonic func-
tion or that multiplied by an exponential function (Drenick 1970) which is unrealistic
as an actual ground motion. The constraint on this property may be described by

\[ |A(\omega)| \le \bar{A} \quad (\bar{A}: \text{specified value}) \tag{20} \]

The critical excitation problem for the MDOF system may be stated as follows:
Find |A(ω)| that maximizes the earthquake input energy, equation (17), subject to
the constraints (19) and (20) on ground acceleration.
It is clear from the work (Takewaki 2001a, b, 2002b) on power spectral density functions that, if Ā is infinite, |A(ω)|² turns out to be a Dirac delta function with a non-zero value at the point maximizing F(ω). On the other hand, if Ā is finite, |A(ω)|² yields a rectangular function attaining Ā² over a certain range. The band-width of this frequency range is Δω = πC̄A/Ā². The position of the rectangular function, i.e. its lower and upper limits ωL and ωU, can be computed by maximizing Ā² ∫_{ωL}^{ωU} F(ω) dω, noting that ωU − ωL = Δω. It can be shown that a good and simple approximation is (ωU + ωL)/2 = Ω. The essential feature of the solution procedure presented

Figure 19.8 Schematic diagram of the solution procedure for the critical excitation problem.

in this section is shown in Figure 19.8. It is interesting to note that Westermo’s periodic
solution (Westermo 1985) may correspond to the case of infinite Ā.
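Numerically, the rectangle position can be found by sliding a window of width Δω over a sampled F(ω) and keeping the position with the largest integral, as in the following sketch (a moving sum on a uniform frequency grid; names illustrative):

```python
import numpy as np

def critical_band(F, w, C_A, A_bar):
    """Lower/upper limits of the critical rectangle of height A_bar^2 and
    width pi*C_A/A_bar**2, plus the resulting input-energy bound."""
    dw = w[1] - w[0]
    width = max(1, int(round(np.pi * C_A / A_bar**2 / dw)))
    window = np.convolve(F, np.ones(width), mode='valid') * dw
    i = int(np.argmax(window))
    return w[i], w[i] + width * dw, A_bar**2 * window[i]
```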

6 Info-gap robust design for load and model uncertainties

6.1 Info-gap models of load uncertainty


Consider an uncertainty model of load which is expressed in terms of a Fourier ampli-
tude spectrum of the input acceleration. Let à and αs denote the nominal Fourier
amplitude spectrum and its uncertainty level. An info-gap model of load A(αs , Ã) for
αs ≥ 0 is introduced to represent uncertainty in the Fourier amplitude spectrum of the
input acceleration. The info-gap model of load may be defined by

\[ \mathcal{A}\!\left[ \alpha_s, \tilde{A}^2(\omega; \Delta\tilde{\omega}, \bar{C}_A) \right] = \left\{ |A^2(\omega)| = s^{*} \tilde{A}^2(\omega; \Delta\tilde{\omega}/s^{*}, \bar{C}_A) : s^{*} = s/\tilde{s}, \; \left| \frac{s - \tilde{s}}{\tilde{s}} \right| \le \alpha_s \right\}, \quad \alpha_s \ge 0 \tag{21} \]

In the past Takewaki and Ben-Haim (2005) proposed a similar info-gap model for
a power spectral density function.
The graphical expression of A(αs , Ã2 ) can be found in Figure 19.9. Note that the
quantity CA is related to the power or intensity of the input acceleration and is assumed
to be constant. This leads to the constant area of the critical rectangular function of
the squared Fourier amplitude spectrum. As the amplitude changes uncertainly, the
band-width varies correspondingly.
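The constant-power scaling behind equation (21) amounts to the following two-line sketch (names illustrative): raising the level of the squared spectrum by s* shrinks the band-width by the same factor, so the area, and hence the power C̄A, is preserved.

```python
def scaled_rectangle(s_star, A2_tilde, dw_tilde):
    """Height and band-width of the perturbed rectangle in Eq. (21);
    their product (the area, i.e. pi * C_A) is independent of s_star."""
    return s_star * A2_tilde, dw_tilde / s_star
```

For instance, with the nominal values quoted later in Section 7 (Ã = 2.91 m/s, Δω̃ = 4.21 rad/s), s* = 1.2 raises the level Ã² by 20% and narrows the band to 4.21/1.2 ≈ 3.51 rad/s.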
In order to explain the physical meaning of variation of the squared Fourier ampli-
tude spectrum shown in Figure 19.9, let us consider two waves as shown in Figures
19.10 and 19.11. These two waves have the same acceleration power CA . While Figure
19.10 represents a short-duration ground motion (near-field ground motion), Figure
19.11 presents a long-duration ground motion and simulates approximately a far-field
ground motion. Figure 19.12 is the Fourier amplitude spectrum of the wave shown in
Figure 19.10 and Figure 19.13 is the Fourier amplitude spectrum of the wave shown
in Figure 19.11. From these figures, it may be said that a smaller-level and wider-range

Figure 19.9 Variation of the critical rectangular function of the squared Fourier amplitude spectrum of the input acceleration.

Figure 19.10 Short-duration motion.

Figure 19.11 Long-duration motion.

squared Fourier amplitude spectrum corresponds to a variation toward a short-duration ground motion (near-field ground motion), while a larger-level and narrower-range squared Fourier amplitude spectrum corresponds to a variation toward a long-duration ground motion (far-field ground motion).

6.2 Robustness function


The info-gap model for uncertainty in the dynamic model is F(αm , F̃) for αm ≥ 0 in
compliance with the definition (2). It should be noted that two different uncertainty

Figure 19.12 Fourier amplitude spectrum of the short-duration motion.


Figure 19.13 Fourier amplitude spectrum of the long-duration motion.

parameters αm and αs are used. The parameter αs for load uncertainty has been
introduced just above and αm for model uncertainty is defined here.
Let f(A, F, k) denote the performance measure based on the earthquake input energy, for the Fourier amplitude spectrum A, the energy transfer function F and the design
k. As in the definition of the robustness in equations (5) and (6), the performance
requirement may be expressed by

\[ f(A, F, k) \le f_C \tag{22} \]

The info-gap robustness function can then be introduced as a measure of robustness


for model uncertainty for a given load spectral uncertainty level αs .
\[ \hat{\alpha}_m(k, f_C, \alpha_s) = \max \left\{ \alpha_m : \max_{\substack{F \in \mathcal{F}(\alpha_m, \tilde{F}) \\ A \in \mathcal{A}(\alpha_s, \tilde{A}^2)}} f(A, F, k) \le f_C \right\} \tag{23} \]

7 Numerical examples
Consider a six-degree-of-freedom mass-spring-dashpot system, equivalent to a six-
story shear building model as shown in Figure 19.14(a). This system has a uniform

Figure 19.14 Six-story shear building model: (a) bare frame; (b) frame with an added damper in the first story; (c) frame with an added damper in the third story; (d) frame with an added damper in the sixth story.

Figure 19.15 Structural damping coefficient.

structural damping, 3.76 × 10^5 N·s/m, as shown in Figure 19.15. This structural damping corresponds to a damping ratio of 0.04 in the fundamental vibration mode. Each element has the same mass mi = 32 × 10^3 kg and every spring has the same stiffness 3.76 × 10^7 N/m. The undamped fundamental natural period of the model is T1 = 0.72 s. An added viscous damper for passive control is installed in the first,
third or sixth story as shown in Figures 19.14(b)–(d). The magnitude of the added
viscous damper is shown in Figure 19.16. The nominal damping coefficient c̃d of the
added viscous damper is ten times the damping coefficient of the structural damping
in the same story. The uncertain damping coefficient cd of the added viscous damper

Figure 19.16 Added viscous damping coefficient: (a) first-story allocation; (b) third-story allocation; (c) sixth-story allocation.

is expressed by cd = c̃d(1 ± 0.5αm), where αm is the unknown horizon of uncertainty in the model coefficients. The uncertainty of the load is assumed to be expressed by the variation s = s̃(1 ± αs) of the squared rectangular Fourier amplitude spectrum of the input ground acceleration, where αs is the unknown horizon of uncertainty in the load. The power of the input defined by Eq. (19) does not vary and is given by C̄A = 11.4 m²/s³. The nominal level of the rectangular Fourier amplitude spectrum is Ã = 2.91 m/s and its nominal band-width is Δω̃ = 4.21 rad/s.
The worst case (critical case), up to uncertainties αm and αS , can be obtained from
cd = c̃d (1 − 0.5αm ) and s = s̃(1 + αS ). Although the problem of finding the worst case
is very complicated in general (Kanno and Takewaki 2007), the present case is almost
self-evident. This enables one to discuss the info-gap robustness function directly with
respect to the input energy performance.
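Given a routine evaluating the worst-case input energy at this critical corner, the robustness curves in the figures that follow can be traced by a simple grid search, as in the illustrative outline below (energy(αm, αs) is a user-supplied function combining the analyses of Sections 4–6; it is not part of the original study):

```python
import numpy as np

def robustness_curve(energy, f_C, alpha_s, am_grid=np.linspace(0.0, 1.0, 201)):
    """Largest alpha_m on the grid whose worst-case input energy
    energy(alpha_m, alpha_s) does not exceed the requirement f_C."""
    feasible = [am for am in am_grid if energy(am, alpha_s) <= f_C]
    return feasible[-1] if feasible else 0.0
```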

Figure 19.17 Energy transfer functions FM(ω) defined by equation (17) for three models: one with an added damper in the first story, one in the third story and one in the sixth story.

Figure 19.17 shows the energy transfer functions FM (ω) defined by equation (17) for
three models, one with an added damper in the first story, one in the third story and
the other in the sixth story. It can be observed that the energy transfer function FM(ω)
of a passively controlled mass-spring-damper system with an added damper near the
fixed support (base) is smaller than that of the system with an added damper near its
tip. This means that the allocation of passive dampers into lower stories is effective in
reducing the earthquake input energy.
Figure 19.18(a) illustrates the plot of the info-gap robustness function α̂m versus
the specified limit value of the earthquake input energy for the load spectral uncer-
tainty αs = 0.0. Figures (b), (c) and (d) illustrate the plots of α̂m for the load spectral
uncertainties αs = 0.1, 0.3, 0.5. Comparing these figures we see that robustness to
model-uncertainty, α̂m , decreases as load-uncertainty, αS , increases. It can be observed
that a passively controlled mass-spring-damper system with an added damper near the
fixed support is ‘more robust’ than that with an added damper near its tip in all the
cases of the load spectral uncertainties.
Figure 19.19(a) shows the plot of the info-gap robustness function α̂m , of the model
with an added damper in the first story, versus the specified limit value of the earth-
quake input energy for various levels of load uncertainties. It can be understood that,
as the level of load uncertainty increases, the info-gap robustness function α̂m gets
smaller, i.e. less robust for variation of the structural parameter. Figures 19.19(b) and
(c) illustrate the plots of info-gap robustness function α̂m of the models with an added
damper in the third and sixth stories, respectively, with respect to the specified value of
the earthquake input energy for various levels of load uncertainties. A similar tendency
to Figure 19.19(a) can be observed.

Figure 19.18 Plot of the info-gap robustness function α̂m versus the specified limit value of the earthquake input energy for various load spectral uncertainties: (a) αs = 0.0; (b) αs = 0.1; (c) αs = 0.3; (d) αs = 0.5.

Figure 19.20 presents the plot of the info-gap robustness function α̂m with respect to
the level of the load spectral uncertainty αs for the model with an added damper in the
first story. From this figure, the designer can understand the effect of the load spectral
uncertainty αs on the info-gap robustness function. It is also interesting to note that
the info-gap robustness function α̂m and the level of the load spectral uncertainty αs
introduce a new trade-off relationship.

8 Conclusions
This chapter has developed a new methodology for design of passively controlled
mass-spring-damper systems subject to severe uncertainties in both the loads and the
structural models. The following are the main results of this chapter.

Figure 19.19 Plot of the info-gap robustness function α̂m versus the specified limit value of the earthquake input energy for various levels of load uncertainty: (a) model with an added damper in the first story; (b) model with an added damper in the third story; (c) model with an added damper in the sixth story.

(1) The earthquake input energy is an appropriate measure for evaluating the per-
formance level of passively controlled structures. A critical excitation problem
can be stated for the earthquake input energy as a criticality measure. The crit-
ical excitations depend upon the dynamic properties of the passively controlled
mass-spring-damper systems and it is necessary to deal with load and structural
model uncertainties simultaneously.
(2) Info-gap uncertainty models are very useful in describing both the load and struc-
tural model uncertainties. Determination of the critical states in the load and
structural parameters is an essential step to the investigation on the robustness
of the passively controlled mass-spring-damper systems.

Figure 19.20 Plot of the info-gap robustness function α̂m with respect to the level of the load spectral uncertainty αs for various requirements on the earthquake input energy, EI = 4.0 × 10^6, 6.0 × 10^6, 8.0 × 10^6 N·m (first-story damping model).

(3) A passively controlled mass-spring-damper system with an added damper near


the fixed support is more robust than that with an added damper near its tip.
The added robustness is evaluated quantitatively.
(4) The simultaneous consideration of the load and structural model uncertain-
ties introduces a new trade-off. The robustness to structural model uncertainty
increases as the uncertainty level of the load gets smaller.

Acknowledgements
Part of this chapter was written while one author (YBH) was a fellow of the Japan
Society for the Promotion of Science, at the University of Tokyo and Kyoto University.
The support of the JSPS is gratefully acknowledged. Part of this work was supported by
the Kajima Foundation and JSPS (2006). This support is also gratefully acknowledged.

References

Akiyama, H. 1985. Earthquake Resistant Limit-State Design for Buildings. University of Tokyo
Press, Tokyo, Japan.
Arias, A. 1970. A measure of earthquake intensity. In Seismic Design for Nuclear Power Plants,
R.J. Hansen (ed.), The MIT Press, Cambridge, MA, 438–469.
Ben-Haim, Y. 1996. Robust reliability in the mechanical sciences, Springer-Verlag, Berlin.
Ben-Haim, Y. 2005. Info-gap decision theory for engineering design, in Engineering Design
Reliability Handbook, E. Nikolaidis, D. Ghiocel & S. Singhal (eds), CRC Press.
Ben-Haim, Y. 2006. Info-gap decision theory: decisions under severe uncertainty, 2nd edition,
Academic Press, London.
Ben-Haim, Y. & Elishakoff, I. 1990. Convex Models of Uncertainty in Applied Mechanics.
Elsevier Science Publishers, Amsterdam.

Cherng, R.H. & Wen, Y.K. 1994. Reliability of uncertain nonlinear trusses under random
excitation, I., II. J. of Engineering Mechanics, ASCE 120(4):733–757.
Drenick, R.F. 1970. Model-free design of aseismic structures. J. Engrg. Mech. Div., ASCE
96(EM4):483–493.
Ghanem, R.G. & Spanos, P.D. 1991. Stochastic Finite Elements: A spectral approach. Springer,
Berlin.
Housner, G.W. 1959. Behavior of structures during earthquakes. Journal of the Engineering
Mechanics Division, ASCE 85(4):109–129.
Housner, G.W. & Jennings, P.C. 1975. The capacity of extreme earthquake motions to damage
structures. Structural and geotechnical mechanics: A volume honoring N.M. Newmark edited
by W.J. Hall, 102–116, Prentice-Hall Englewood Cliff, NJ.
Igusa, T. & Der Kiureghian, A. 1988. Response of uncertain systems to stochastic excitations.
J. Eng. Mech., ASCE 114(5):812–832.
Jensen, H. 2000. On the structural synthesis of uncertain systems subjected to environmental
loads. Structural and Multidisciplinary Optimization 20:37–48.
Jensen, H. & Iwan, W.D. 1992. Response of systems with uncertain parameters to stochastic
excitations. J. Eng. Mech., ASCE 118:1012–1025.
Kanno, Y. & Takewaki, I. 2007. Worst-case plastic limit analysis of trusses under uncertain
loads via mixed 0–1 programming. Journal of Mechanics of Materials and Structures 2(2):
247–273.
Koyluoglu, H.U., Cakmak, A.S. & Nielsen, S.R.K. 1995. Interval algebra to deal with pattern
loading and structural uncertainties. J. Eng. Mech., ASCE 121(11):1149–1157.
Lyon, R.H. 1975. Statistical energy analysis of dynamical systems, The MIT Press,
Cambridge, MA.
Ordaz, M., Huerta, B. & Reinoso, E. 2003. Exact computation of input-energy spec-
tra from Fourier amplitude spectra. Earthquake Engineering and Structural Dynamics 32:
597–605.
Page, C.H. 1952. Instantaneous power spectra. Journal of Applied Physics 23(1):103–106.
Qiu, Z. & Wang, X. 2003. Comparison of dynamic response of structures with uncertain-
but-bounded parameters using non-probabilistic interval analysis method and probabilistic
approach. Int. J. Solids and Structures 40:5423–5439.
Shinozuka, M. 1970. Maximum structural response to seismic excitations. J. Engrg. Mech. Div.,
ASCE 96(EM5):729–738.
Takewaki, I. 2001a. A new method for nonstationary random critical excitation. Earthquake
Engineering and Structural Dynamics 30(4):519–535.
Takewaki, I. 2001b. Probabilistic critical excitation for MDOF elastic-plastic structures on
compliant ground. Earthquake Engineering and Structural Dynamics 30(9):1345–1360.
Takewaki, I. 2002a. Critical excitation method for robust design: A review. Journal of Structural
Engineering, ASCE 128(5):665–672.
Takewaki, I. 2002b. Robust building stiffness design for variable critical excitations. Journal of
Structural Engineering, ASCE 128(12):1565–1574.
Takewaki, I. 2004. Bound of earthquake input energy. Journal of Structural Engineering, ASCE
130(9):1289–1297.
Takewaki, I. & Ben-Haim, Y. 2005. Info-gap robust design with load and model uncertainties.
Journal of Sound and Vibration 288(3):551–570.
Takewaki, I. 2006. Critical Excitation Methods in Earthquake Engineering. Elsevier Science
Publishers, Amsterdam.
Uang, C.M. & Bertero, V.V. 1990. Evaluation of seismic energy in structures. Earthquake
Engineering and Structural Dynamics 19:77–90.
Westermo, B.D. 1985. The critical excitation and response of simple dynamic systems. Journal
of Sound and Vibration 100(2):233–242.
Chapter 20

Genetic algorithms in structural optimum design using convex models of uncertainty

Sara Ganzerli & Paul De Palma
Gonzaga University, Spokane, WA, USA

ABSTRACT: This chapter focuses on the use of convex models of uncertainty with genetic
algorithms for optimal structural design. The chapter comprises five sections. Section 1
is a literature review of convex models and their application to optimal structural design and
other engineering fields. Section 2 explores the use of convex models to deal with uncertainties
as an alternative to the more traditional probabilistic approach. In this section the superposition
method to implement the uniform bound convex model is illustrated. Section 3 underlines the
benefits of incorporating genetic algorithms in optimal structural design. Section 4 presents
applications of convex models to optimal structural design and Section 5 suggests new avenues
for research and application of convex models. This chapter grows from research conducted since
2000 by the Gonzaga University Center for Evolutionary Algorithms. A preliminary version was
published by Millpress as (Ganzerli et al. 2003).

1 Literature survey on convex models of uncertainty applied to optimal structural design
The book, “Convex Models of Uncertainty in Applied Mechanics’’ by Ben-Haim and
Elishakoff, established the foundation for the convex model theory and application in
1990. Since then, much work has been done on convex models. Together with proba-
bility and fuzzy sets, convex models can be considered part of the uncertainty triangle
(Elishakoff 1995). Convex models are especially useful for problems where data on
the uncertain parameters is scarce, as in the case of severe uncertainties. One of the
most widely used of the convex models is the uniform bound model. Here the convex
set that encloses the uncertain parameters has the shape of a rectangle in two dimen-
sions. The uniform bound convex model has been implemented in structural design
using a technique called “anti-optimization’’ (Elishakoff et al. 1994). This technique
requires that a complete optimization routine be invoked for each cycle of the algo-
rithm that minimizes the cost function. The nested routine maximizes the effects due
to the uncertain parameters on the structure. That is, it generates the worst-case sce-
nario. This process is computationally quite expensive. In addition, it requires that the
constraints, i.e. stresses and displacements, be written as an explicit function of the
design variables, i.e. the cross-sectional areas.
Another approach, called the superposition method, has been proposed by (Ganzerli
and Pantelides 2000). This method allows one to account for a large number of
uncertainties in structures with many members. It eliminates the nested optimization

and does not require that the structural response be written as a function of the
uncertain parameters. The superposition method is in the applications presented in
Section 4.
Ben-Haim et al. (1996) proved the efficiency of convex models in identifying the
worst-case scenario due to multiple uncertainties. Perhaps surprisingly, for structures
subjected to uncertain static loads, the maximum structural response due to the uncer-
tain parameters cannot be identified simply by increasing each of the loads to their
maximum value. However, the convex model design is robust against constraint vio-
lations for any load condition within the established bounds. Convex models have
efficiently been applied in the study of thin-walled stiffened composite panels that are
highly sensitive to geometrical imperfections. In (Elseifi et al. 1998), a comparison
between the convex models and a Monte Carlo simulation led to similar results but
with an effort and cost reduction favoring the convex models. Recently, convex mod-
els have been employed in several fields of engineering. Tonon et al. (2001) illustrate
hybrid systems that combine the three methods available to deal with uncertainties:
probability, fuzzy sets, and convex models. Optimization of laminated composites
considering uncertainty using convex models is the focus of three studies: (Kim and
Sin 2001; Cho and Rhee 2003; Cho and Rhee 2004). Attoh-Okine (2002) examines
the convex model method for pavement design where layer coefficients are uncertain.
Spletzer and Taylor (2003) suggest convex models as a new approach to the multi-robot
localization problem. Recently, attention has been given to convex models and inter-
val analysis as comparable methods to deal with uncertainty (Qiu 2003 and Qiu and
Wang 2006).
An extension of convex models to decision-making theory has been recently pre-
sented in two books by (Ben-Haim 1996 and 2001). Ben-Haim showed that for the
sake of robustness, the uncertainty must be allowed to vary with no imposition of
fixed bounds. In addition, he presents design curves that, plotting robustness against
uncertainty and structural cost, illustrate the tradeoffs. Design curves include an array
of possible solutions to a design problem and constitute a useful tool for the designer,
an area explored in (Ganzerli et al. 2005). Robustness for trusses is presented in (Au
et al. 2003) and (Kanno and Takewaki 2006a and 2006b).
Traditional optimization techniques encompass: (1) mathematical programming,
(2) optimality criteria, (3) approximation methods, and (4) steepest descent methods.
Optimal structural design has been implemented for many years (Kirsch 1981). In
traditional optimization, the domain is searched using the gradient of the function to
be optimized, the objective function. A problem with this method is that, in order to calculate the gradient, the function must be continuous and differentiable.
Genetic algorithms (GA), loosely based on Darwin’s theory of natural selection,
are not subject to this limitation. First proposed by John Holland of the University of
Michigan in 1975, they were not widely used until one of his students (Goldberg 1989)
showed that they could help solve difficult problems. Since then, genetic algorithms
have been employed in many science and engineering fields to successfully solve opti-
mization problems. The literature on the theory of practice of GA is extensive. A focal
point is the annual Genetic and Evolutionary Computation Conference (GECCO). GA
represent a step forward in the area of optimization, because they do not require the
gradient of the function to be optimized. Thus, they are effective in solving complex
problems with multiple objective functions that are discontinuous.

In structural design, genetic algorithms have been used extensively in the past decades
to study minimum volume problems. In particular, many chapters deal with the opti-
mization of trusses. Rajeev and Krishnamoorthy (1997) examined the use of GA for
discrete optimization of truss structures. Keeping in mind that practical problems often
involve discrete variables, they have developed a simple GA. Here the algorithm assigns
a penalty to structures that violate constraints, say, the known structural response of
a member of a given material to a specified static load. Rajeev and Krishnamoorthy
present the complete optimization history of a three bar truss. The method is then
applied to larger trusses composed of up to 160 members.
Optimization of large trusses using traditional algorithms was presented by (Schmit
and Lai 1994). Ghasemi et al. (1999) have considered the optimization of trusses
with both continuous and discrete variables. They demonstrate the efficiency of GA
in solving large two-dimensional trusses. They present a solved example for a 940-
member truss.
The literature includes very few examples of chapters that combine convex models
with genetic algorithms. Cho and Rhee (2003 and 2004) have studied the layup opti-
mization for free edge strength using GA for the optimal design and convex models to
deal with uncertainties. Ganzerli et al. (2002 and 2003) have implemented GACON,
a GA-based optimization routine, combined with the uniform bound convex model to
deal with load uncertainties. Combining genetic algorithms with convex models shows
considerable promise. It is an area open to investigation.

2 Uniform bound convex model


Convex models of uncertainty are a non-probabilistic method. Convex models are
appealing because they are easy to use and do not depend on knowledge of the statistical
distribution of the uncertain parameter values. They are especially useful when an
info-gap situation arises, i.e. when severe uncertainties must be handled.
In this section convex models are employed in the structural design of trusses subject
to an uncertain static load condition. The uniform bound convex model was chosen
because of its easy implementation. The variation of uncertain parameters from their
nominal values is required to be bounded by a convex set, also called the convex
domain. The uniform bound convex set is represented by a rectangle in the plane,
when only two uncertain parameters are present. The convex domain can be general-
ized to three dimensions when it assumes the shape of a parallelepiped, also known
as “box’’. Furthermore, if a generic n-number of uncertain parameters is present,
the convex domain is a multidimensional “box’’. To illustrate the implementation of
the uniform bound convex model, let us consider a three bar truss represented in
Figure 20.1.
For the sake of simplicity and without loss of generality, only two uncertain param-
eters are handled. These are the static loads P1 and P2 . The three bar truss has
two degrees of freedom x1 and x2 . Superscripts U, L, and 0 will denote the upper,
lower, and nominal values of the loads respectively. Hereafter, the subscript i will
indicate the number of degrees of freedom, j the number of truss members, and n
the number of uncertain parameters. The loads may vary from their nominal values Pn^0 by a fraction βn of those values. So, for example, load 1 at its minimum is designated

Figure 20.1 Three-bar truss.

Figure 20.2 The convex domain.

as P1L . Therefore, the convex set can be defined as follows for the three-bar truss of
Figure 20.1:

 
\[ C_P = \left\{ (P_1, P_2) : P_1^0 (1 - \beta_1) \le P_1 \le P_1^0 (1 + \beta_1), \;\; P_2^0 (1 - \beta_2) \le P_2 \le P_2^0 (1 + \beta_2) \right\} \tag{1} \]

In general terms it is possible to express the domain as

\[ P_n^0 (1 - \beta_n) \le P_n \le P_n^0 (1 + \beta_n) \tag{2} \]

The representation of this convex set is given in Figure 20.2. P1 is the x coordinate,
P2 the y. The nominal values, the points of no uncertainty, are at the origin.
It is clear that the range of possible values that can be assumed by either P1 or P2 is
within the box. Nevertheless, it has been demonstrated by (Elishakoff et al. 1994) that
the worst effect due to the uncertainties must be sought on the convex hull, i.e. the rectangle's perimeter. For the particular case of a linear problem, the search can actually be limited to the vertices of the convex set, where the uncertain parameters
assume their maximum and minimum values.
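For a linear problem this vertex search is trivial to code; the sketch below (hypothetical names) enumerates the 2^n corners of the uniform-bound box, which is practical when the number of uncertain loads n is small.

```python
from itertools import product
import numpy as np

def worst_response(response, P0, beta):
    """Maximum of a scalar response over the 2^n corners of the box
    P0*(1 - beta) <= P <= P0*(1 + beta); response(P) -> float."""
    corners = product(*[(p * (1 - b), p * (1 + b)) for p, b in zip(P0, beta)])
    return max(response(np.array(c)) for c in corners)
```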
The convex structural response can be conveniently found through a superposi-
tion method first proposed in (Ganzerli and Pantelides 2000). This method is simple
and can be used to handle many uncertain parameters and large structures whenever
superposition applies. In the structural analysis convex values for the displacements
and internal forces are used instead of the nominal ones. The convex portion is added
to the nominal values of the structural response (displacements, forces and stresses)
calculated when all the given nominal loads are acting on the structure. That is, the
convex values are superimposed on the nominal values. The convex portion is obtained by summing the nominal structural responses calculated with one load at a time, each multiplied by the percent of uncertainty for that load. The resulting expressions for the
convex displacement and force vectors are:

\[ x_{i,\mathrm{con}} = x_i(P_1^0, P_2^0) \pm \left\{ \beta_1 |x_i(P_1^0)| + \beta_2 |x_i(P_2^0)| \right\} \tag{3} \]

\[ F_{j,\mathrm{con}} = F_j(P_1^0, P_2^0) \pm \left\{ \beta_1 |F_j(P_1^0)| + \beta_2 |F_j(P_2^0)| \right\} \tag{4} \]

where i = number of degrees of freedom (one to eight for the 10-bar truss) and j = number of members (one to ten for the 10-bar truss); xi,con and Fj,con are the convex displacements and internal forces; xi(P1^0, P2^0) and Fj(P1^0, P2^0) are the nominal displacements and internal forces calculated by loading the structure with both P1^0 and P2^0; |xi(P1^0)| and |Fj(P1^0)| are the absolute values of the nominal displacements and internal forces calculated by loading the structure with only P1^0 (P2 = 0); |xi(P2^0)| and |Fj(P2^0)| are the corresponding values with only P2^0 (P1 = 0); and β1 and β2 are the percents of uncertainty for P1 and P2, respectively.
In Eqs. (3) and (4) the ± sign is in agreement with the sign of the first term. In
other words, if xi (P10 , P20 ) and Fj (P10 , P20 ) are positive, the plus/minus sign will turn into
a plus sign and vice versa. This guarantees that the nominal displacements and forces
are always increased when uncertainty is present. The worst-case scenario, due to the
uncertain parameters, is captured by the equations. Convex stresses can be obtained
directly from the convex internal forces just by dividing the latter by the member cross
sectional areas (Wang 1986). Figure 20.3 shows how the method of superposition is
implemented in the case of two uncertain parameters P1 and P2 .
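A minimal sketch of the superposition method follows (not the authors' code; solve is any linear-analysis routine returning a response vector for given loads). Three analyses, one with both loads and one with each load alone, suffice for two uncertain loads.

```python
import numpy as np

def convex_response(solve, P1, P2, beta1, beta2):
    """Convex response of Eqs. (3)-(4); the +/- sign follows the sign of
    the nominal response so that its magnitude is always increased."""
    r_nom = solve(P1, P2)
    spread = beta1 * np.abs(solve(P1, 0.0)) + beta2 * np.abs(solve(0.0, P2))
    return r_nom + np.sign(r_nom) * spread
```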
In sum, to account for uncertainties in the structural design, it is sufficient to substi-
tute the convex responses for the nominal cases in the structural analysis routine. One
important note is that, although the uniform bound convex model is implemented
here jointly with genetic algorithms, it can also be introduced in the conventional
(non-optimal) design of structures.
As mentioned, the superposition method is easy to implement, but it presents a few
drawbacks. The main one is that it can be applied only when the conditions to use
superposition are met. For example, it cannot handle nonlinear structures. Another
drawback is that the superposition method requires the solution of multiple structural
analyses, adding computational burden.

Figure 20.3 The convex displacements, xi,con, and forces, Fj,con, are obtained using superposition; β1 and β2 are the percents of uncertainty for P1 and P2, respectively.

3 Optimal structural design using genetic algorithms
Genetic algorithms have been a well-established optimization technique for a decade
(Haupt and Haupt 1998). They offer at least two advantages over calculus-based opti-
mization algorithms. First, in calculus-based methods the function to be optimized, known as the objective function, must be continuous (indeed differentiable). Since genetic algorithms are based loosely on the process of natural selection found in nature, they do not require a continuous objective function.
Though aspects of natural processes can be modeled using continuous functions, of
course, the GA works at a lower level, modeling not just behavior, but the evolutionary
processes that produces behavior. For example, GA loosely mimics differential repro-
duction in nature, one result of which is adaptation. Further, in structural design, the
design variables are often the member cross-sectional areas. Since these are manufac-
tured only in specific, discrete values, considering values other than these imposes an
unnecessary computational burden on the optimization process.
It is extremely difficult to determine if the solution to an optimization problem is truly
optimal or nearly optimal. In the case of volume minimization for a truss structure,
the problem space grows exponentially with the size of the truss. The true optimal
solution could be found performing an exhaustive search of the problem space. But in
practical terms, this is impossible. A problem whose optimal solution is theoretically
possible, but practically impossible, is called intractable. Another famous problem that
is classified as intractable is the traveling salesman problem. It can be stated like this:
given a set of cities with known distances between them, construct a tour of minimum
distance visiting each city once and returning to the origin. The traveling salesman
problem belongs to the class of NP-complete problems, among the most difficult to tackle from a computational point of view. Overbay et al. (2006) demonstrated that the truss
problem belongs to the NP-complete set. Heuristic techniques, such as GA, are the
only practical solution to intractable problems. Here is another reason to use GA in
optimal structural design. Although no solution found with GA is guaranteed to be
optimal, using GA helps find a nearly optimal solution. For all practical purposes, this
represents an improvement with respect to a solution that is not optimized.

In the following section, examples of structural optimization are presented.


The goal is to minimize the volume of a truss with fixed geometry and load condi-
tions. The design process sets constraints on the maximum stresses and displacements,
so that the structure’s safety and serviceability do not fall below a specified minimum.
The optimal design problem for a truss can be stated like this:

\[
\begin{aligned}
& \min\; f(A, P) \\
& \text{such that} \quad g_k(A, P) \le g_{k,\mathrm{allowable}} \quad (k = 1, \ldots, m)
\end{aligned}
\tag{5}
\]

where f = the objective function (the truss volume), expressed as a function of the cross-sectional areas A and the external loads P; A and P = design parameters; gk(A, P) = constraints; m = number of constraints to be satisfied by the optimal design; and gk,allowable = allowable value for constraint gk.
If gk (A, P) exceeds gk,allowable for any constraint, the particular configuration under
consideration is unfit.
As already stated, GA is an optimization method that mimics the natural selec-
tion process. An initial population of individuals (trusses) is randomly generated and
ranked based on desired characteristics. Genes are the truss’ cross-sectional areas. The
population can be ranked in a fitness hierarchy, where the fittest truss is the one whose
members have the least volume while still meeting the structural constraints. Those
trusses that do not meet the constraints are assigned a cost penalty, pushing them
lower in the fitness hierarchy.
After the initial population has been generated, individuals have to be paired so that
they can “mate’’ and produce the next generation. Since the strength of differential
reproduction consists in a parent passing characteristics to its offspring, GA next needs
to establish how these characteristics, these “genes,’’ will be inherited. This is referred
to as crossover. An important element is random mutation. A fixed percentage of the
genes present in the population are mutated. Mutation ensures that some alleles (gene
values) will be introduced that were not randomly generated at the beginning. This
reduces the chance that the algorithm will converge on a local minimum. The cycle is
repeated until an acceptable optimal solution is reached.
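The loop just described can be summarized in a short sketch (all parameters, the penalty weight and the selection scheme are illustrative, not those of the custom GA used here; volume and violation are user-supplied, and areas lists the admissible discrete cross sections):

```python
import numpy as np

rng = np.random.default_rng(0)

def run_ga(volume, violation, n_members, areas,
           pop_size=50, generations=200, p_mut=0.02):
    """Volume minimization with a penalty for constraint violations;
    the genes of each individual are member cross-sectional areas."""
    pop = rng.choice(areas, size=(pop_size, n_members))
    def cost(a):
        return volume(a) + 1.0e3 * violation(a)    # illustrative penalty
    for _ in range(generations):
        order = np.argsort([cost(a) for a in pop])
        parents = pop[order[:pop_size // 2]]       # keep the fitter half
        children = []
        while len(children) < pop_size:
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_members)       # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            mask = rng.random(n_members) < p_mut   # random mutation
            child[mask] = rng.choice(areas, size=int(mask.sum()))
            children.append(child)
        pop = np.array(children)
    return min(pop, key=cost)
```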
The genetic algorithm that produced the results presented in this chapter was custom-
designed, using object-oriented programming techniques. However, standard libraries
are available for the implementation of GA. Worth mentioning is GAlib, a collection
of GA routines that are available at no cost (Wall 1995).
In the next section examples are solved. A benchmark ten-bar truss is included
along with a 64-bar truss where the 64 cross-sectional areas are treated as independent
variables.

4 Applications including the use of convex models of uncertainty together with
genetic algorithms to optimize structural design
This section includes examples of truss optimal design considering both the nomi-
nal and convex model solutions. A 10-bar truss is presented first as a benchmark.
To demonstrate the efficacy of GA in handling large structures, a 64-bar truss is shown.

Figure 20.4 10-bar truss: geometry; loading conditions; degrees of freedom (members 1–10; loads P1 and P2 ; 9144 mm (360 in) panels).

In this example, all of the cross-sectional areas are independent variables. The examples
feature a nominal and a convex model solution, where uncertainties are accounted for
in the static loads. The solutions are obtained using both discrete optimization, i.e. integer
values, and continuous optimization, i.e. floating points. For comparison with the
literature, examples were solved using US units. However, SI units, or their conversion
factors, are given as well.

4.1 10-bar truss


The 10-bar truss of Figure 20.4 is a commonly used benchmark. The truss is composed
of aluminum, with a Young’s modulus of 68.9 GPa (10,000 ksi). The static loads P1 and
P2 have a nominal value of 444.8 kN (100 kip). The design variables, represented by
genes, are the cross-sectional areas of the ten members. The problem is to minimize
the total volume of the truss without violating any constraint. Constraints are imposed
on member stresses that may not exceed 172.4 MPa (25 ksi), except member 9, whose
stress may not exceed 517.1 MPa (75 ksi). The allowable limits for stresses are set for
both tension and compression. The search range for the design variables is limited to
between 64.5 mm2 (0.1 in2 ) and 6451.6 mm2 (10 in2 ).
The results obtained are presented in Table 20.1.
The 10-bar truss is solved providing three optimal designs. The first two consider
nominal loads and no uncertainties. Design 1, presented in Column 2, is obtained
using integers. Design 2, shown in Column 3, is achieved using floating points. The
ability of GA to perform discrete optimization, as well as continuous, is
significant in structural design. The integer solution converges faster than the floating
point one and can be used for a preliminary estimate of member sizes. Design 3,
presented in Column 4, differs from the others in that uncertainties are accounted for.

Table 20.1 10-bar truss. Resultsa.

Member   Design 1             Design 2              Design 3
         Nominal, integer     Nominal, floating     Convex, floating
         (square inches)      (square inches)       (square inches)
1 8 7.437 8.286
2 1 0.574 0.554
3 4 3.437 3.888
4 9 8.576 9.315
5 1 0.101 0.100
6 1 0.576 0.554
7 7 6.469 6.950
8 5 4.853 5.495
9 4 3.230 4.054
10 1 0.813 0.784
Volume (in3 ) 17 300 15 270 16 970
Volume (m3 ) 0.283 0.250 0.278
a Note: 1 in2 = 645.2 mm2 .

Each load, P1 and P2 , has an uncertainty of 10%, which means that βn in Eqs. 1 through
4 is equal to 0.1. It is clear that the convex model solution is more conservative than
the nominal one obtained with floating points. However, this sacrifice in cost results
in robustness against uncertainties.
Convex models are useful in identifying the worst-case scenario due to uncertainty
in the static load condition. It is reasonable to hypothesize that the worst-case scenario
could be obtained with a truss whose loads are increased by the percentage of uncer-
tainty of their nominal values. However, Elishakoff et al. (1994) and Ganzerli and
Pantelides (2000) have demonstrated that the design obtained using convex models
is the most robust against variations of the loads with respect to their nominal values.
Optimal designs obtained for both the convex model and the “assumed’’ worst load
condition, obtained by increasing or decreasing each load magnitude according to its
percentage of uncertainty, were compared. Whereas the convex model design does not
violate the stress constraints in any member for any load combination, the “assumed’’
worst case scenario presents a large number of violations, for the majority of load
combinations. This is true with respect to both stress and displacement constraints.
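This comparison can be reproduced schematically. For a uniform bound convex model with uncertainty level βn, each load Pn lies in the interval [(1 − βn)Pn, (1 + βn)Pn], and the “assumed’’ worst cases are the 2^n corners of that box; for linear static analysis the extreme responses over the box are attained at these corners. The sketch below (in Python; analyze and the response limits are hypothetical placeholders for the truss model) counts the corners at which a given design violates a constraint.

from itertools import product

def vertex_load_cases(nominal_loads, betas):
    """All 2**n corners of the uniform-bound box
    [(1 - b)*P, (1 + b)*P] around the nominal loads."""
    intervals = [((1 - b) * P, (1 + b) * P)
                 for P, b in zip(nominal_loads, betas)]
    return list(product(*intervals))

def count_corner_violations(design, nominal_loads, betas, analyze, limits):
    """Number of corner load cases at which any response of 'design'
    exceeds its allowable value; 'analyze' and 'limits' stand in for
    the truss model and the stress/displacement limits."""
    bad = 0
    for loads in vertex_load_cases(nominal_loads, betas):
        responses = analyze(design, loads)
        if any(abs(r) > lim for r, lim in zip(responses, limits)):
            bad += 1
    return bad

# 10-bar truss: two loads of 444.8 kN, each with 10% uncertainty
# corners = vertex_load_cases([444.8, 444.8], [0.1, 0.1])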

4.2 64-bar truss


A 64-bar truss is used to demonstrate the algorithm efficiency for large problems. See
Figure 20.5. The truss is composed of aluminum, with a Young’s modulus of 68.9 GPa
(10,000 ksi). Adjacent nodes are 5080 mm (200 in) apart in the horizontal and vertical
directions. Nodes 21, 22, 27, and 28 have no degrees of freedom, but all other nodes
have two. Two designs are proposed. In one case, the design variables represent the
member cross-sectional areas and constitute 64 independent variables. In the other
case, the 64 variables are linked so that they are reduced to only 19 independent
variables. The linking is shown in Figure 20.6. A set of members is represented by one
variable in this case, and each member in the set will have the same value. Set 1, for
example, includes members 1–3, 3–5, 2–4, and 4–6. The members are named using the
two end joint numbers. For example, the notation 1–3 refers to the member included
between nodes 1 and 3.

Figure 20.5 64-bar truss. Node numbering and loading conditions (LC1: 311.4 kN (70 kip); LC2: 444.8 kN (100 kip); LC3: 88.9 kN (20 kip)).
In addition, the truss is solved using both integer and floating point values.
Each design variable, A, has the following range:

• For integers: 645.2 mm2 (1 in2 ) ≤ A ≤ 20645.1 mm2 (32 in2 )


• For floating points: 645.2 mm2 (1 in2 ) ≤ A ≤ 12903.2 mm2 (20.0 in2 ).

The constraints on the truss are:

• Displacement: nodes 1 (vertical) and 9 (horizontal) are limited to 254 mm (10 in)
• Stresses: no member’s stress may exceed 172.4 MPa (25 ksi) in either tension or
compression.

Figure 20.6 64-bar truss. Design variables linking (the 64 members are grouped into 19 linked sets, numbered 1–19).

The loads are imposed simultaneously for both the nominal and convex cases. The
loads for both cases are:

• Loading condition 1 (LC1): 311.4 kN (70 kips) horizontally to the right on nodes
1 and 2
• Loading condition 2 (LC2): 444.8 kN (100 kips) vertically downwards on nodes
9 and 10
• Loading condition 3 (LC3): 88.9 kN (20 kips) vertically downwards on node 1 and
88.9 kN (20 kips) horizontally to the right on node 9.

For the convex case, the uncertainties are:

• LC1: ±20% (β1 = 0.2)


• LC2: ±10% (β2 = 0.1)
• LC3: ±10% (β3 = 0.1)

In summary, three variations are used on the trial runs, with two options for each
variation. Variations and options are:

• Value type (integer or floating point)


• Model (nominal or convex)
• Number of variables (64 when not linked or 19 when linked).

The results for the eight designs are shown in Table 20.2.
Column 1 contains the member names and the set of linked variables. Columns 2 to
5 show the integer solutions. Columns 2 and 3 are the nominal designs solved using
64 independent variables and 19 independent variables. Similarly, columns 4 and 5
display the results for the convex model design. Column 4 shows the values for 64
independent variables and Column 5 shows the design with 19 independent variables.
The last four columns are analogous to columns 2–5 but solved with floating points.
A ranking of the volumes obtained shows expected results. The integer results lead
to a higher volume than the floating points. The nominal case displays a lower volume
than the convex model case. The extra volume of the convex model designs is a structural cost
added to safeguard against uncertainties. The 64 independent variables give a more
detailed description of the design and show lower volumes with respect to the designs
solved with the linked variables. However, the computational cost is higher for the 64
independent variables, because the genetic algorithm converges at a slower rate.
The 64-bar truss was previously solved for the nominal case with linked variables by
Ghasemi et al. (1999). In this chapter, some variations in the application of the load
conditions were made. The authors have also solved the 64-bar truss with the same
criteria as Ghasemi et al. (1999) and have obtained results that are in agreement. It is
worth mentioning that increasing the independent variables
from 19 to 64 increases the complexity of the problem substantially. The examples
demonstrate the power and flexibility of the method for large structures under severe
uncertainties.

5 Suggestions for further studies


The implementation of convex models offers many research opportunities. Especially
interesting is the study of nonlinear structures, an area that has not yet received much
attention. An innovative approach to the convex model would be to study its appli-
cation for plasticity under uncertain loads. It would also be valuable to proceed with
the study of robustness, allowing the uncertainties to vary. Convex models are an
attractive alternative for the study of uncertainties, an area that is growing increasingly
important in engineering.

5.1 Convex models for plasticity
It is relevant to underline that the superposition method is just one of the options
available to implement convex models. At the same time, the uniform bound convex
model is one convex set among others. Using superposition would not permit the
solution of nonlinear structures. To overcome this problem, it is possible to use an
energy bound method suitable for the purpose of handling plasticity (Ben-Haim and
Elishakoff 1990; Pantelides and Tzan 1996). Nevertheless, the method has not been

Table 20.2 64-bar truss. Resultsa,b.

Member/      Integer results for areas (square inches)    Floating point results for areas (square inches)
linked set   Nominal              Convex                  Nominal              Convex
             Un-linked  Linked    Un-linked  Linked       Un-linked  Linked    Un-linked  Linked
             U-N-I      L-N-I     U-C-I      L-C-I        U-N-F      L-N-F     U-C-F      L-C-F

2-4/1 3 4 3 4 2.071 3.862 2.859 3.851


1-3/1 4 4 4 4 2.917 3.862 3.692 3.851
4-6/1 2 4 3 4 1.786 3.862 3.342 3.851
3-5/1 4 4 5 4 4.143 3.862 4.431 3.851
1-2/2 1 1 1 1 1.736 1.114 1.753 0.642
2-3/2 1 1 1 1 1.053 1.114 0.917 0.642
1-4/2 1 1 1 1 0.125 1.114 0.465 0.642
3-4/3 1 1 1 1 0.208 0.953 0.548 0.809
4-5/3 1 1 1 1 0.382 0.953 1.128 0.809
3-6/3 1 1 1 1 1.232 0.953 0.238 0.809
10-12/4 4 5 4 5 3.396 5.071 3.881 5.062
9-11/4 5 5 5 5 4.316 5.071 4.679 5.062
12-14/4 3 5 3 5 2.473 5.071 3.084 5.062
11-13/4 5 5 6 5 5.036 5.071 6.586 5.062
9-10/5 1 1 1 1 1.904 0.667 1.731 0.665
9-12/5 1 1 1 1 0.360 0.667 0.449 0.665
11-10/5 1 1 1 1 0.986 0.667 0.940 0.665
11-12/6 1 2 1 2 0.889 0.781 0.637 0.830
11-14/6 1 2 1 2 0.083 0.781 0.083 0.830
13-12/6 1 2 2 2 1.087 0.781 1.196 0.830
6-8/7 1 6 2 6 1.000 5.794 1.935 5.997
5-7/7 5 6 6 6 4.595 5.794 6.033 5.997
13-7/8 6 7 7 8 6.263 7.212 7.343 7.533
14-15/8 2 7 3 8 2.541 7.212 3.154 7.533
8-17/8 8 7 9 8 7.203 7.212 8.737 7.533
16-18/8 2 7 2 8 1.847 7.212 1.101 7.533
16-24/7 1 6 1 6 0.265 5.794 2.203 5.997
15-23/7 6 6 7 6 6.102 5.794 6.692 5.997
5-6/9 1 1 1 1 0.271 0.916 0.687 1.771
6-7/9 1 1 1 1 0.907 0.916 0.015 1.771
5-8/9 1 1 1 1 0.263 0.916 2.885 1.771
13-14/10 1 1 1 1 0.143 0.590 0.428 0.904
7-14/13 1 1 1 1 0.430 0.590 0.091 0.904
13-15/13 1 1 1 1 1.236 0.590 2.260 0.904
17-18/11 1 1 1 1 0.043 0.204 0.825 0.165
17-16/11 1 1 1 1 0.191 0.204 0.464 0.165
8-18/11 1 1 1 1 0.118 0.204 0.616 0.165
23-24/12 1 1 1 1 0.625 0.206 1.443 1.239
16-23/12 1 1 1 1 0.052 0.206 1.338 1.239
15-24/12 1 1 1 1 0.109 0.206 0.223 1.239
7-8/8 7 7 8 8 7.035 7.212 7.092 7.533
15-16/8 2 7 3 8 2.886 7.212 1.645 7.533
8-16/7 1 6 2 6 0.858 5.794 2.541 5.997

(Continued)

Table 20.2 (Continued)

Member/      Integer results for areas (square inches)    Floating point results for areas (square inches)
linked set   Nominal              Convex                  Nominal              Convex
             Un-linked  Linked    Un-linked  Linked       Un-linked  Linked    Un-linked  Linked
             U-N-I      L-N-I     U-C-I      L-C-I        U-N-F      L-N-F     U-C-F      L-C-F

7-15/7 6 6 7 6 5.393 5.794 6.071 5.997


8-15/13 1 1 1 1 0.553 0.423 2.705 0.051
7-16/13 1 1 1 1 0.010 0.423 0.026 0.051
17-19/14 7 7 8 7 7.312 7.076 8.815 7.436
18-20/14 2 7 2 7 2.022 7.076 1.814 7.436
19-21/14 7 7 8 7 7.326 7.076 8.850 7.436
20-22/14 3 7 3 7 1.433 7.076 1.035 7.436
19-18/15 1 1 1 1 0.116 0.241 0.048 0.280
17-20/15 2 1 2 1 0.128 0.241 0.160 0.280
19-20/16 2 1 1 1 0.174 0.223 0.434 0.096
21-20/16 1 1 1 1 0.267 0.223 0.160 0.096
19-22/16 1 1 1 1 0.152 0.223 0.137 0.096
24-26/17 1 5 1 6 0.121 5.670 1.297 5.557
23-25/17 6 5 7 6 6.315 5.670 6.943 5.557
26-28/17 1 5 1 6 0.060 5.670 0.418 5.557
25-27/17 6 5 7 6 6.088 5.670 7.094 5.557
24-25/18 1 2 1 1 0.391 0.259 0.612 0.748
23-26/18 1 2 1 1 0.194 0.259 0.587 0.748
25-26/19 1 1 1 1 0.918 0.222 0.322 0.832
26-27/19 1 1 1 1 0.405 0.222 1.676 0.832
25-28/19 1 1 1 1 0.534 0.222 0.090 0.832
Variable U-N-I L-N-I U-C-I L-C-I U-N-F L-N-F U-C-F L-C-F
Volume (in3 ) 31840 43090 35320 44520 25160 37970 31950 40460
Volume (m3 ) 0.522 0.706 0.579 0.730 0.412 0.622 0.524 0.663
a Note: 1 in2 = 645.2 mm2 .
b Note: In this Table the following symbols are used. U = unlinked (64 independent variables); L = linked
(19 independent variables); N = nominal case; C = convex model case; F = floating points; I = integer values.

fully explored. If convex models could be shown to be viable for nonlinear structures,
their reputation in the uncertainty arena would grow. As with other techniques for
handling uncertainty (probability methods, for example), nonlinear structures present
a higher degree of difficulty.

5.2 Robustness of structures


The convex model method requires that the uncertainties be bounded within a convex
set. Therefore, the percentage of uncertainty βn is fixed. To study different degrees of
uncertainty, several values of βn should be considered.
Allowing the percent of uncertainty to vary, a nested series of convex sets is obtained.
The larger the uncertainty, the larger is the set. Robustness expresses the greatest level

of uncertainty at which failure cannot occur. Therefore, it is advantageous to allow the
uncertainties to have a large variation from the nominal value without the collapse of
the structure. In other words, a large robustness is sought (Ben-Haim 2001). However,
to tolerate a large uncertainty, it is necessary to sacrifice the performance of the design.
One can think of the performance as the structural volume expressed as a function of
the cross-sectional areas and the load magnitudes. In optimal design, we would say
that the performance is the value of the objective function dependent upon the design
parameters and the uncertainties. The critical performance is the minimum level of
acceptable performance.
The robustness can be plotted versus the performance to obtain a design curve. It has
been demonstrated that the design curve is monotonic (Ben-Haim 1996). This implies
that there is a trade-off in deciding which point on the design curve is the “working
point,’’ i.e., the optimal design. The decision of where to choose the working point on
the design curve takes place during the design process, and it is dependent on the desired
robustness for the structure. Robust design of trusses has gained recent attention in
Au et al. (2003) and Kanno and Takewaki (2006a, 2006b) and is an area open to
further investigation.
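Numerically, the robustness of a fixed design can be read off by bisection: because the convex sets are nested, feasibility is monotone in βn, so the greatest safe uncertainty level is well defined. A minimal sketch, assuming a placeholder feasible_under(design, beta) that performs the worst-case (anti-optimization) feasibility check over the convex set of size beta:

def robustness(design, feasible_under, beta_max=1.0, tol=1e-3):
    """Greatest uncertainty level beta at which 'design' cannot fail
    anywhere in the convex set of size beta. Because the sets are nested,
    feasibility is monotone in beta and bisection applies.
    'feasible_under(design, beta)' is a placeholder for the worst-case
    (anti-optimization) feasibility check over that set."""
    if not feasible_under(design, 0.0):
        return None                       # infeasible even at nominal loads
    lo, hi = 0.0, beta_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible_under(design, mid):   # still safe: robustness >= mid
            lo = mid
        else:                             # fails somewhere in the set
            hi = mid
    return lo

# Plotting (performance, robustness) over a family of optimal designs
# traces the monotonic design curve discussed above.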

Acknowledgments
Funding for this project has been provided by the McDonald Work Award and the Gon-
zaga Research Council. The results for the optimal structural designs were obtained
by Matthew Burkhart, Andrew Burton, and Jared Smith, Gonzaga University alumni
of the Department of Computer Science. The authors would like to acknowledge the
other members of GUCEA (Gonzaga University Center for Evolutionary Algorithms)
who are currently collaborating with them on research. A special thank you goes to
Dr. Shannon Overbay for her involvement in the research on genetic algorithms.

References

Attoh-Okine, N.O. 2002. Uncertainty analysis in structural number determination in flexible


pavement design – A convex model approach. Construction and Building Materials 16(2):
67–71.
Au, F.T.K., Cheng, Y.S., Tham, L.G. & Zeng, G.W. 2003. Robust design of structures using
convex models. Computers and Structures 81(28–29):2611–2619.
Ben-Haim, Y. 1996. Robust reliability in the mechanical sciences. Berlin: Springer-Verlag.
Ben-Haim, Y. 2001. Information-gap decision theory: Decisions under severe uncertainty. New
York: Academic Press, Inc.
Ben-Haim, Y. & Elishakoff, I. 1990. Convex models of uncertainty in applied mechanics. New
York: Elsevier.
Ben-Haim, Y., Chen, G. & Soong, T.T. 1996. Maximum structural response using convex
models. ASCE J. Engineering Mechanics 122:325–333.
Cho, M. & Rhee, S.Y. 2003. Layup optimization considering free-edge strength and bounded
uncertainty of material properties. AIAA Journal 41(11):2274–2282.
Cho, M. & Rhee, S.Y. 2004. Optimization of laminates with free edges under bounded
uncertainty subject to extension, bending and twisting. International Journal of Solids and
Structures 41(1):227–245.
Elishakoff, I. 1995. An Idea on the uncertainty triangle. The Shock and Vibration Digest
22(10): 1–1.

Elishakoff, I., Haftka, R.T. & Fang, J. 1994. Structural design under bounded uncertainty
optimization with anti-optimization. Computers and Structures 53(6):1401–1405.
Elseifi, M.A., Gurdal, Z. & Nikolaidis, E. 1998. Convex and probabilistic models of
uncertainties in geometric imperfections of stiffened composite panels. In Anon (ed.),
AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference
and Exhibit and AIAA/ASME/AHS Adaptive Structures Forum, Long Beach, CA, USA, April
20–23 1998 Part 2 (of 4):1131–1140.
Ganzerli, S. & Pantelides, C.P. 2000. Optimum structural design via convex model superposi-
tion. Computers and Structures 74(6):639–647.
Ganzerli, S. & Burkhart, M.F. 2002. Genetic algorithms for optimal structural design using con-
vex models of uncertainties. In Spanos & Deodatis (eds), Fourth International Conference on
Computational Stochastic Mechanics (CSM4); Proc. Intern. Conf., Kerkyra (Corfu), Greece,
9–12 June 2002. Rotterdam: Millpress.
Ganzerli, S., De Palma, P., Stackle, P. & Brown, A. 2005. Info-gap uncertainty in structural
optimization via genetic algorithms. In G. Augusti, G.I. Schuëller & M. Ciampoli (eds), ICOS-
SAR’05, 9th International Conference on Structural Safety and Reliability; Proc. Intern. Conf.,
Rome, Italy, June 19–22, 2005. Rotterdam: Millpress: 2325–2330.
Ganzerli, S., DePalma, P., Smith, J.D. & Burkhart, M.F. 2003. Efficiency of genetic algo-
rithms for optimal structural design considering convex models of uncertainty. In Armen Der
Kiureghian, Samer Madanat & Juan M. Pestana (eds), Ninth International Conference on
Applications of Statistics and Probability in Civil Engineering (ICASP9); Proc. Intern. Conf., Berkeley, CA,
July 6–9, 2003. Rotterdam: Millpress: 1003–1010.
Ghasemi, M.R., Hinton, E. & Wood, R.D. 1999. Optimization of trusses using genetic
algorithms for discrete and continuous variables. Engineering Computations 16(3):272–301.
Goldberg, D.E. 1989. Genetic algorithms in search, optimization and machine learning. New
York: Addison-Wesley.
Gonzaga University Center for Evolutionary Algorithms (GUCEA), http://www.cs.gonzaga.edu/
gucea/
Haupt, R.L. & Haupt, S.E. 1998. Practical genetic algorithms. New York: John Wiley &
Sons, Inc.
Holland, J.H. 1975. Adaptation in natural and artificial systems. Ann Arbor: The University of
Michigan Press.
Kanno, Y. & Takewaki, I. 2006a. Robustness analysis of trusses with separable load and
structural uncertainties. International Journal of Solids and Structures 43(9):2646–2669.
Kanno, Y. & Takewaki, I. 2006b. Sequential semidefinite program for maximum robustness
design of structures under load uncertainty. Journal of Optimization Theory and Applications
130(2):265–287.
Kim, T.-U. & Sin, H.-C. 2001. Optimal design of composite laminated plates with the discrete-
ness in ply angles and uncertainty in material properties considered. Computers and Structures
79(29–30):2501–2509.
Kirsch, U. 1981. Optimum structural design. New York: McGraw-Hill, Inc.
Overbay, S., Ganzerli, S., De Palma, P., Stackle, P. & Brown, A. 2006. Trusses, NP-
Completeness, and Genetic Algorithms. In F.A. Charney, D.E. Grierson, M. Hoit &
J.M. Pestana (eds), 17th Analysis and Computation Specialty Conference; Proc. Conf., Saint
Louis, MO, May 18–20, 2006. Reston: ASCE Publications.
Pantelides, C.P. & Tzan, S.-R. 1996. Convex models for seismic design of structures – I.
Earthquake Eng. Struct. Dyn. 25(9):927–944.
Rajeev, S. & Krishnamoorthy, C.S. 1997. Genetic algorithms-based methodologies for design
optimization of trusses. ASCE J. of Structural Engineering 123(3):350–358.

Schmit, L.A. & Lai, Y.C. 1994. Structural optimization based on preconditioned conjugate
gradient analysis methods. International Journal for Numerical Methods in Engineering 37(6):
943–964.
Spletzer, J.R. & Taylor, J.C. 2003. A bounded uncertainty approach to multi-robot localization.
In Anon (ed.), IEEE International Conference on Intelligent Robots and Systems; Proc. Intern.
Conf., Las Vegas, October 27–31, 2003. Institute of Electrical and Electronics Engineers Inc.
Tonon, F., Bernardini, A. & Elishakoff, I. 2001. Hybrid analysis of uncertainty: Probability,
fuzziness and anti-optimization. Chaos, Solitons and Fractals 12(8):1403–1414.
Qiu, Z. 2003. Comparison of static response of structures using convex models and interval
analysis method. International Journal for Numerical Methods in Engineering 56(12):1735–1753.
Qiu, Z. & Wang, X. 2006. Interval analysis method and convex models for impulsive response of
structures with uncertain-but-bounded external loads. Acta Mechanica Sinica 22(3):265–276.
Wall, M. 1995. GAlib. A C++ library of genetic algorithm components. Available at
http://lancet.mit.edu/ga
Wang, C.K. 1986. Structural analysis on microcomputers. New York, NY: Macmillan.
Chapter 21

Metamodel-based computational
techniques for solving structural
optimization problems considering
uncertainties
Nikos D. Lagaros
National Technical University of Athens, Athens, Greece

Yiannis Tsompanakis
Technical University of Crete, Chania, Greece

Michalis Fragiadakis
University of Thessaly, Volos, Greece

Vagelis Plevris
National Technical University of Athens, Athens, Greece

Manolis Papadrakakis
National Technical University of Athens, Athens, Greece

ABSTRACT: Uncertainties are inherent in engineering problems due to various numerical
modeling “imperfections’’ and due to the inevitable scattering of the design parameters from
their nominal values. Under this perspective, there are two main optimal design formulations
that account for the probabilistic response of structural systems: Reliability-based Design Opti-
mization (RBDO) and Robust Design Optimization (RDO). In this work both types of problems
are briefly addressed and realistic engineering applications are presented. The optimization part
of the proposed probabilistic formulations is solved utilizing efficient evolutionary methods.
In both types of problems the probabilistic analysis is carried out with the Monte Carlo Sim-
ulation (MCS) method incorporating the Latin Hypercube Sampling (LHS) technique for the
reduction of the sample size. In order to achieve further improvement of the computational
efficiency a Neural Network (NN) is used to replace the time-consuming FE analyses required
by the MCS. Moreover, various sources of randomness that arise in structural systems are taken
into account in a “holistic’’ probabilistic perception by implementing a Reliability-based Robust
Design Optimization (RRDO) formulation, where additional probabilistic constraints are incor-
porated into the standard RDO formulation. The proposed RRDO problem is formulated as a
multi-criteria optimization problem using the non-dominant Cascade Evolutionary Algorithm
(CEA) combined with the weighted Tchebycheff metric.

1 Introduction
The basic engineering task during the development of any structural system is, among
others, to improve its performance in terms of constructional or life-cycle cost and
structural behaviour. Improvements can be achieved either by using design rules based

on the experience of the engineer, or in an automated manner by using optimization
methods that lead to optimum structural designs. Strictly speaking, optimal means
that for the formulation considered, no better solution exists. Taking into account the
complexity of a structural optimization problem it is obvious that finding the global
optimum solution is not an easy task. In real world applications, if uncertainties have
not been taken into account, the significance of the optimum solutions would be lim-
ited. This is because, although nearly perfect structural models can be simulated in a
computing environment, real world structures always have imperfections or deviations
from their nominal state defined by the design codes. The optimum that is obtained
through the numerical simulation is never materialized in an absolute way and as a
result a near optimal solution is always applied in practice. A formulation of a struc-
tural optimization problem that ignores the scattering of the various design parameters
is defined as a deterministic one. A numerically feasible optimum design, according
to the deterministic formulation, once applied in a real physical system, may lose its
feasibility due to the unavoidable dispersion of the values of structural parameters
(material properties, dimensions, loads, etc). This happens because the performance
of the applied design may be far worse than expected.
In order to account for the randomness of the most important parameters that affect
the simulation and the response of a structure, a different formulation of the optimiza-
tion problem based on stochastic analysis methodologies has to be used. The recent
developments in stochastic analysis methods (Schuëller 2005) have stimulated the
interest for the probabilistic optimum design of structures. Over the last decade efficient
probabilistic-based optimization formulations have been developed in order to account
for the various uncertainties that are involved in structural design. There are two dis-
tinguished design formulations that account for the probabilistic system response:
Robust Design Optimization (RDO) (see Messac and Ismail-Yahaya (2002), Jung and
Lee (2002), Doltsinis and Kang (2004), Lagaros and Papadrakakis (2006), among
others; a detailed literature overview of RDO problems can be found in the work of
Park et al. (2006)) and Reliability-based Design Optimization (RBDO) (see Frangopol
and Soares (2001), Agarwal and Renaud (2004), Tsompanakis and Papadrakakis
(2004), Youn et al. (2005), Agarwal and Renaud (2006), Ba-abbad et al. (2006),
among others). RDO methods primarily seek to minimize the influence
of stochastic variations on the nominal values of the design parameters. On the other
hand, the main goal of RBDO methods is to design for minimum weight/cost, which
satisfies the allowable probability of failure for certain limit state(s). In this study
three characteristic probabilistic optimization problems of realistic steel structures are
presented, in which efficient metamodels based on Neural Networks (NN) are incorpo-
rated in order to improve the computational efficiency of the proposed methodologies.
In all test examples considered, the randomness of loads, material properties, and
member dimensions is taken into consideration using the Monte Carlo Simulation
(MCS) method combined with Latin Hypercube Sampling (LHS). In order to deal
with the increased computational cost required, despite the use of the LHS technique,
by the MCS for lower limits of the probability of violation of the constraints, a NN-
based methodology is adopted for obtaining computationally inexpensive estimates of
the response required during the stochastic analysis. The use of NN is motivated by the
approximate concepts inherent in stochastic analysis and the time consuming repeated
analyses required for MCS. In each case a specially tailored NN is trained, utilizing

available information generated from selected conventional analyses. Subsequently,
the trained NN is used to predict quickly and accurately the output data for the next sets
of random variables. It appears that the use of a properly selected and trained NN can
eliminate any limitation on the sample size used for MCS and on the dimensionality
of the problem, due to the drastic reduction of the computing time required for the
repeated finite element analyses.
Firstly, the reliability-based sizing optimization of large-scale multistorey 3D frames
is investigated. The objective function is the weight of the structure while the con-
straints are both deterministic (stress and displacement limitations) and probabilistic
(the overall probability of failure of the structure). Randomness of loads, material
properties, and member geometry are taken into consideration in the reliability anal-
ysis using Monte Carlo simulation. The probability of failure of the frame structures
is determined via a limit elasto-plastic analysis. The optimization part is solved using
Evolution Strategies (ES), while the limit elasto-plastic analyses required during the
MCS are replaced by fast and accurate NN predictions.
Secondly, an efficient methodology is presented for performing RBDO of steel struc-
tures under seismic loading. Optimum earthquake-resistant design of structures using
probabilistic analysis and performance-based design criteria is an emerging field of
structural engineering. The modern conceptual approach of seismic structural design
constitutes the so-called Performance-based Earthquake Engineering or PBEE (for
details see the excellent book by (Bozorgnia and Bertero 2004)). An important ingredi-
ent of PBEE is structural reliability (Wen 2000): a straightforward consideration of all
uncertainties and variabilities that arise in structural design, construction and service-
ability in order to be able to calculate the level of confidence about the structure’s ability
to meet the desired performance goals. Due to the uncertain nature of the earthquake
loading, structural design is often based on design response spectra of the region of
interest and on some simplified assumptions on the structural behaviour under earth-
quake. In this test example the reliability-based sizing optimization of multistorey steel
frames under seismic loading is investigated, in which the optimization part of RBDO
is solved utilizing Evolution Strategies (ES) algorithm. The objective function is the
weight of the structure, while the constraints are both deterministic (stress and dis-
placement restrictions imposed by the design codes) and probabilistic (limitation on
the overall probability failure of the structure which is defined in terms of maximum
interstorey drift).
Finally, a hybrid Reliability-based Robust Design Optimization (RRDO) formula-
tion is presented, where probabilistic constraints are incorporated into the standard
RDO formulation. A similar RRDO formulation has been used in the work of Youn
and Choi (2004), where a performance moment integration method is proposed that
employs a numerical integration scheme for output response to estimate the product
quality loss. The proposed RRDO is formulated as a multi-criteria optimization prob-
lem using the non-dominant Cascade Evolutionary Algorithm (CEA) combined with
the weighted Tchebycheff metric. The main goal of this approach is to account for
the influence of probabilistic constraints in the framework of structural RDO prob-
lems, by comparing the RRDO formulation with the standard one. For this purpose,
a characteristic test example of a 3D steel truss is investigated, where the objective
functions considered in the RRDO formulation are the weight and the variance of the
response of the structure, represented by a characteristic node displacement. During

the optimization process each structural design is checked whether it satisfies the pro-
visions of the European design codes for steel structures (EC3 2003) with a prescribed
probability of violation.

2 Formulations of probabilistic structural optimization problems
Generally, in structural optimization problems the aim is to minimize the weight of
the structure under certain deterministic behavioural constraints usually imposed on
stresses and displacements. The significant developments of stochastic analysis meth-
ods have stimulated the interest for their application in structural design, resulting in
two main categories of probabilistic optimum design formulations: Reliability-based
Design Optimization (RBDO) and Robust Design Optimization (RDO). The main goal
of RBDO methods is to achieve increased safety levels of the structure with respect
to variations of the random design parameters, while RDO methods primarily seek
to minimize the influence of stochastic variations on the mean design of a structural
system. Since the aforementioned methods can be complementary to each other, hybrid
Reliability-based Robust Design Optimization (RRDO) formulations have also been
presented, where probabilistic constraints are incorporated into the standard RDO
formulation. There are also several other probabilistic optimization formulations, for
example those based on convex set models, evidence theory, possibility theory, etc,
which are described in other chapters of the present volume. In what follows, the
three aforementioned major types of stochastic optimization formulations are briefly
described.

2.1 Reliability-based design optimization


In reliability-based optimal design additional probabilistic constraints are imposed in
the standard deterministic formulation, in order to take into account various random
parameters and to ensure that the probability of failure for the whole structure or some
of its critical members is within acceptable limits. The probabilistic constraints enforce
the condition that the probability of exceeding a certain limit state’s threshold value
is smaller than a certain value (usually from 10−3 to 10−5 ). Under this perspective, a
discrete RBDO problem can be formulated in the following form:

min CIN (s, x)
subject to gj (s, x) ≤ 0, j = 1, . . . , m        (1)
           pf (s, x) ≤ pall

where CIN (s, x) is the objective function (i.e. the structural weight or the initial con-
struction cost) to be minimized, s (which can take values only from the given discrete
set Rd ) and x are the vectors of the design and random variables, respectively. Regard-
ing the constraints, gj (s, x) are the deterministic constraint functions and pf (s, x) is the
probability of failure of the design, which is bounded by an upper allowable probability
equal to pall . Most frequently, the deterministic constraints of the structure are the
member stresses and nodal displacements or interstorey drifts. Furthermore, due to

engineering practice demands, the members are divided into groups having the same
design variables. This linking of elements results in a trade-off between the use of more
material and the need of symmetry and uniformity of structures due to practical con-
siderations. Furthermore, it has to be taken into account that due to manufacturing
limitations the design variables are not continuous but discrete since cross-sections
belong to a certain set.

2.2 Robust design optimization


In practical applications, optimizing a single objective function, most often the mate-
rial weight or cost, cannot capture every aspect related to the performance of the
structure. Actually, in real world optimization problems, there are several conflicting
and usually incommensurable criteria that have to be dealt with simultaneously. Such
problems are called multi-objective or multi-criteria optimization problems. In addi-
tion, in the majority of cases the objective functions are conflicting and as a result
there exists no unique point which represents the optimum for all of them. Conse-
quently, the common optimality condition used in single-objective optimization must
be replaced by a “multi-collective’’ concept, the so-called Pareto optimum. Thus, in
the multi-criteria formulation of a robust design structural sizing optimization prob-
lem, implemented in this work, an additional objective function is considered which is
related to the influence of the random nature of the structural parameters on the per-
formance of the structure. The aim is to minimize both the weight and the variance of
the response of the structure. The mathematical formulation of the RDO problem is as
follows

min [CIN (s, x), StDevu (s, x)]T
subject to gj (s, x) ≤ 0, j = 1, . . . , k        (2)

where CIN (s, x) is the initial construction cost and StDevu (s, x) is the standard deviation
of the response that correspond to the two objectives to be minimized, s and x are the
vectors of the design and random variables respectively and gj (s, x) are the deterministic
constraint functions.

2.3 Reliability-based robust design optimization


In a combined RRDO formulation the constraint functions can also vary, due to the
random nature of the structural parameters. In the proposed RRDO formulation the
probability of violation of the constraints is taken into account as an additional con-
straint function. The mathematical formulation of the RRDO problem implemented
in this work is as follows

min [CIN (s, x), StDevu (s, x)]T
subject to gj (s, x) ≤ 0, j = 1, . . . , k        (3)
           pv,max (s, x) ≤ pall

where CIN (s, x) is the initial construction cost and StDevu (s, x) is the standard devia-
tion of the response that correspond to the two objectives to be minimized, s and x are

the vectors of the design and random variables respectively, gj (s, x) are the determin-
istic constraint functions, while pv,max (s, x) is the maximum probability of violation
among the k behavioural constraint functions, which is bounded by an upper allowable
probability equal to pall . In this study three types of deterministic behavioural con-
straints are imposed on the sizing optimization problem of the truss structure examined:
(i) stress, (ii) compression force (for buckling) and (iii) displacement constraints. On
the other hand, the employed probabilistic constraint enforces the condition that the
probabilities of violation of certain limit state functions are smaller than a certain value.
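In computational terms, all quantities entering Eq. (3) can be estimated from the same set of Monte Carlo samples for each candidate design. The sketch below is one possible arrangement, not the implementation used in this study; sample_random_variables, analyze and limits are hypothetical placeholders.

import numpy as np

def evaluate_rrdo(s, sample_random_variables, analyze, limits, n_samples=1000):
    """Estimate the quantities of Eq. (3) for a design vector s.
    'analyze(s, x)' is a placeholder returning (cost, response u,
    constraint values g) for one realization x of the random variables."""
    xs = sample_random_variables(n_samples)           # e.g. via MCS-LHS
    costs, us = np.empty(n_samples), np.empty(n_samples)
    viol = np.zeros((n_samples, len(limits)), dtype=bool)
    for j, x in enumerate(xs):
        cost, u, g = analyze(s, x)
        costs[j], us[j] = cost, u
        viol[j] = np.asarray(g) > np.asarray(limits)  # per-constraint check
    c_in = costs.mean()                               # objective 1: CIN
    stdev_u = us.std(ddof=1)                          # objective 2: StDevu
    pv_max = viol.mean(axis=0).max()                  # max violation probability
    return c_in, stdev_u, pv_max                      # feasible if pv_max <= pall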

3 Solving the optimization problem


As mentioned in the previous section, two types of optimization problems are encoun-
tered in the framework of this study: a single and a multi-objective one. Evolutionary
based algorithms are employed for tackling both of them. The two most widely used
optimization algorithms belonging to the class of evolutionary computation that imi-
tate nature by using biological methodologies are the Genetic Algorithms (GA) and
Evolution Strategies (ES). The ES method was initially introduced in the seventies for
mathematical optimization problems (see Schwefel 1981). In this work ES
are used as the optimization tool for addressing demanding probabilistic optimization
problems. Both GA and ES imitate biological evolution in nature and have three charac-
teristics that make them differ from mathematical optimization algorithms: (i) instead
of the usual deterministic operators, they use randomised operators, (ii) instead of
a single design point, they work simultaneously with a population of design points,
(iii) they can handle continuous, discrete and mixed optimization problems. The sec-
ond characteristic allows for a natural implementation of ES on parallel computing
environments (Papadrakakis et al. 1999).
Structural optimization problems have been mainly treated with mathematical pro-
gramming algorithms, such as the sequential quadratic programming (SQP) method,
which need gradient information. In structural optimization problems, and especially
when uncertainties are considered, the objective function and the constraints are
highly non-linear functions of the design variables; thus the computational
effort spent in gradient calculations is usually excessive. In studies by Papadrakakis
et al. (1999) and Lagaros et al. (2002), it was found that probabilistic search methods
are computationally more efficient than mathematical programming methods, even
though more optimization steps are required in order to reach the optimum. In the for-
mer case the optimization steps are computationally less expensive than in the latter
case since there is no need for gradient information.

3.1 Solving the single objective optimization problem


The absence of sensitivity analysis in evolutionary methods has even greater importance
in the case of probabilistic problems, since the calculation of the derivatives of the
reliability constraints is very time-consuming. Furthermore, these methodologies can
be considered, due to their random search, as global optimization methods because
they are capable of finding the global optimum, whereas mathematical programming
algorithms may be trapped in local optima. As can be seen in Flowchart 21.1,
the ES optimization procedure initiates with a set of parent vectors. If any of these
parent vectors gives an infeasible design, then it is modified until it becomes feasible.

1. Selection step: selection of si (i=1, 2, . . . , µ) parent design vectors
2. Analysis step: perform structural analysis (i=1, 2, . . . , µ)
3. Constraints check: all parents become feasible
4. Offspring generation: generate sj ( j=1, 2, . . . , λ) offspring design vectors
5. Analysis step: perform structural analysis ( j=1, 2, . . . , λ)
6. Constraints check: if satisfied continue, else go to step 4
7. Selection step: selection of the next generation parent design vectors
8. Convergence check: if satisfied stop, else go to step 4

Flowchart 21.1 The ES algorithm for single-objective optimization problems.
Subsequently, the offspring design vectors are generated and checked if they are in the
feasible region. According to the (µ+λ) selection scheme, in every generation the values
of the objective function of the parent and the offspring vectors are compared and the
worst vectors are rejected, while the remaining ones are considered to be the parent
vectors of the new generation. This procedure is repeated until the chosen termination
criterion is satisfied.
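A compact rendering of this (µ + λ) loop is given below; evaluate(s), which stands for the structural analysis and constraint checks of steps 2–6, as well as the mutation scheme and all parameter values, are illustrative assumptions rather than the actual ES implementation used here.

import numpy as np

def es_plus(evaluate, mu, lam, n_vars, lo, hi, sigma=0.1, generations=100):
    """(mu + lambda) ES sketch; 'evaluate(s)' is a placeholder returning
    (objective value, feasible flag) for a design vector s."""
    rng = np.random.default_rng()

    def random_feasible():
        while True:                              # steps 1-3: feasible parents
            s = rng.uniform(lo, hi, n_vars)
            f, ok = evaluate(s)
            if ok:
                return (s, f)

    parents = [random_feasible() for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        while len(offspring) < lam:              # steps 4-6: feasible offspring
            base = parents[rng.integers(mu)][0]
            s = np.clip(base + sigma * (hi - lo) * rng.standard_normal(n_vars),
                        lo, hi)
            f, ok = evaluate(s)
            if ok:
                offspring.append((s, f))
        pool = parents + offspring               # step 7: best mu of mu + lambda
        pool.sort(key=lambda sf: sf[1])
        parents = pool[:mu]
    return parents[0]                            # best (design, objective) found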

3.2 Solving the multi-objective optimization problem


A number of techniques have been developed in the past, that adequately deal with
the multi-objective optimization problem (Coello-Coello 2000, Mattson et al. 2004,
Marler and Arora 2004). The multi-objective algorithm employed in this work belongs
to the hybrid methods, where an evolutionary algorithm is combined with a scalar-
izing function. In general, when using scalarizing functions, locally Pareto optimal
solutions are obtained. Global Pareto optimality can be guaranteed only when the
objective functions are convex (or quasi-convex) and the feasible region is convex.
For non-convex cases, such as the majority of structural multi-objective
optimization problems, a global single objective optimizer is required. In this work
the non-dominant Cascade Evolutionary Algorithm using the augmented Tchebycheff
metric (CEATm) is employed for the solution of the Pareto optimization problem
at hand. This implementation was proposed by the authors in a previous work by
(Lagaros et al. 2005), where more details of the present implementation can be found.
The basic steps of the CEATm algorithm are outlined below in Flowchart 21.2, where
it is obvious that the CEATm optimization scheme can easily be applied in two parallel
computing levels, an external and an internal one. In addition, the multi-objective opti-
mization problem is converted into a series of single objective optimization problems,
where the solution of each subproblem can be performed concurrently.
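The scalarizing function at the heart of this scheme, the augmented weighted Tchebycheff metric, measures the weighted distance of an objective vector from the utopia (ideal) point, with a small augmentation term that excludes weakly Pareto optimal points. A minimal sketch of one common variant (the augmentation factor rho and the utopia point z_star are assumptions):

def tchebycheff(f, z_star, w, rho=0.01):
    """Augmented weighted Tchebycheff metric: the maximum weighted
    deviation of the objective vector f from the utopia point z_star,
    plus a small term that excludes weakly Pareto optimal solutions."""
    dev = [wj * abs(fj - zj) for fj, zj, wj in zip(f, z_star, w)]
    return max(dev) + rho * sum(dev)

# Each run i of the cascade minimizes, with its own weights w_i,
# tchebycheff([weight(s), stdev_u(s)], z_star, w_i) over feasible designs s.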

Independent run i, i=1, . . . , nrun
Generate/update the weight coefficients wi,j ( j=1, . . . , m) of the Tchebycheff metric.
CEATm LOOP
1. Initial generation:
1a. Generate sk (k=1, . . . , µ) vectors
1b. Structural analysis step
1c. Evaluation of the Tchebycheff metric
1d. Constraint check: if satisfied k=k+1 else k=k. Go to step 1a
2. Global non-dominant search: Check if global generation is accomplished. If yes, then
non-dominant search is performed, else wait until global generation is accomplished.
3. New generation:
3a. Generate sl (l=1, . . . , λ) vectors
3b. Structural analysis step
3c. Evaluation of the Tchebycheff metric
3d. Constraint check: if satisfied l=l+1 else l=l. Go to step 3a
4. Selection step: selection of the next generation parents according to (µ + λ) or (µ, λ) scheme
5. Global non-dominant search: Check if global generation is accomplished. If yes, then
non-dominant search is performed, else wait until global generation is accomplished.
6. Convergence check: If satisfied stop, else go to step 3
END OF CEATm LOOP
End of Independent run i

Flowchart 21.2 The CEATm algorithm for multi-objective optimization problems.

4 Probabilistic analysis using Monte Carlo simulation

The reliability of a structure or its probability of failure is an important factor in the
design procedure since it quantifies the probability under which a structure will fulfill
its design requirements. Structural reliability analysis, or probabilistic analysis, is a tool
that assists the design engineer to take into account all possible uncertainties during
the design, construction phases and lifetime of a structure in order to calculate its
probability of failure pf , or probability of a limit state violation pviol . In structural
reliability analysis problems, the probability of violating a limit state function G(x),
with violation occurring when G(x) ≥ 0, can be written as

pviol = ∫G(x)≥0 fx (x) dx        (4)

where x = [x1 , x2 , . . . , xM ]T is the vector of the random structural parameters and fx (x)
denotes their joint probability density function.
In probabilistic analysis of structures the Monte Carlo Simulation (MCS) method is
very popular and particularly applicable when an analytical solution is not attainable.
This is mainly the case in problems of complex nature with a large number of random
variables where all other probabilistic analysis methods are not applicable. Despite its
simplicity, the MCS method has the capability of handling practically every possible case
regardless of its complexity; it requires, though, excessive computational effort. In
order to improve the computational efficiency of MCS, various techniques have been
proposed.

Since MCS is based on the law of large numbers (N → ∞), an unbiased estimator of
the probability of violation is given by

pviol = (1/N) ∑j=1,...,N I(xj )        (5)

in which xj is the j-th vector of the random structural parameters, and I(xj ) is an
indicator function defined as

I(xj ) = 1 if G(xj ) ≥ 0
I(xj ) = 0 if G(xj ) < 0        (6)

In order to estimate pviol an adequate number of N independent random samples are


produced. The value of the violation function is computed for each random sample xj
and the Monte Carlo estimation of pviol is given in terms of sample mean by

pviol ≈ NH /N        (7)

where NH is the number of simulations in which the limit state is violated (G(xj ) ≥ 0)
and N is the total number of simulations.
In general, a vast number of simulations have to be performed in order to achieve
great accuracy, especially for low values of probability of failure. In an effort to
reduce the excessive computation cost of crude MCS using purely random sampling
methodologies, which is considered as the drawback of the method, various sampling
reduction techniques have been proposed. Among them are the importance sam-
pling, adaptive sampling technique, stratified sampling, antithetic variate technique,
conditional expectation technique, and Latin Hypercube Sampling (LHS), which was
introduced by MacKay et al. (1979). Although LHS is generally recognized as one of
the most efficient sample-size reduction techniques, it has been proven efficient only
when relatively large probabilities of violation are to be calculated, or when statistical
quantities like the mean value and the standard deviation are required.
In most other cases MCS-LHS performs like the crude MCS (Owen 1997).
In the LHS method, the range of probable values for each random variable is divided
into M non-overlapping segments of equal probability of occurrence. Thus, the whole
parameter space, consisting of N parameters, is partitioned into M^N cells. Then the
random sample generation is performed by choosing M cells from the M^N space with
respect to the density of each interval, and the cell number of each random sample
is calculated. The cell number indicates the segment number that the sample belongs
to with respect to each of the parameters. Using LHS technique, sampling is realized
independently, whereas, matching of random samples is performed either randomly,
or in a restricted manner. All necessary random samples are produced and they are
accepted only if they do not agree with any previous combination of the segment
numbers. The advantage of the LHS approach is that the random samples are generated
from all the ranges of possible values, thus giving a more thorough insight into the
tails of the probability distributions.
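Both ingredients, the LHS sample generation and the estimator of Eqs. (5)–(7), are easy to sketch. In the fragment below each variable's range is split into equal-probability segments, one value is drawn per segment, and the segment order is shuffled independently per variable; the standard normal marginals are an assumption chosen for illustration.

import numpy as np
from scipy.stats import norm

def lhs_standard_normal(n_samples, n_vars, seed=None):
    """LHS sample: each variable's probability range is split into
    n_samples equal segments, one value is drawn per segment, and the
    segment-to-sample matching is randomized per variable."""
    rng = np.random.default_rng(seed)
    u = np.empty((n_samples, n_vars))
    for k in range(n_vars):
        segments = (np.arange(n_samples) + rng.uniform(size=n_samples)) / n_samples
        u[:, k] = rng.permutation(segments)      # random matching
    return norm.ppf(u)                           # assumed normal marginals

def p_violation(G, samples):
    """Eqs. (5)-(7): fraction of samples for which G(x) >= 0."""
    return sum(1 for x in samples if G(x) >= 0) / len(samples)

# Example: x = lhs_standard_normal(1000, 3)
# p = p_violation(lambda xi: xi.sum() - 4.0, x)  # toy limit state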

5 NN-based MCS for stochastic analysis


Over the last ten years artificial intelligence techniques like neural networks (NNs)
have emerged as a powerful tool that could be used to replace time consum-
ing procedures in many engineering applications (Lagaros and Tsompanakis 2006),
(Tsompanakis et al. 2007). Some of the fields where NNs have been successfully
applied are: pattern recognition, regression (function approximation/fitting), optimiza-
tion, nonlinear system modelling, identification, damage assessment, etc. Function
approximation involves approximating the underlying relationship from a given finite
input-output data set. Feed-forward NNs, such as multi-layer perceptrons (MLP) and
radial basis function networks, have been widely used as an alternative approach
to function approximation since they provide a generic functional representation
and have been shown to be capable of approximating any continuous function
with acceptable accuracy. A trained neural network presents some distinct advan-
tages over the numerical computing paradigm. It provides a rapid mapping of a
given input into the desired output quantities, thereby enhancing the efficiency of
the structural analysis process. This major advantage of a trained NN over the conventional
procedure, under the provision that the predicted results fall within acceptable tol-
erances, leads to results that can be produced in a few clock cycles, representing
orders of magnitude less computational effort than the conventional computational
process.
In this work the application of NNs is focused on the simulation (i.e. probabilistic
analysis of structures) of demanding computational problems of probabilistic mechan-
ics. Many sources of uncertainty (material, geometry, loads, etc) are inherent in
structural design and functioning. Probabilistic analysis of structures leads to safety
measures that a design engineer has to take into account due to the aforementioned
uncertainties. Probabilistic analysis problems, especially when earthquake loadings
are considered, are highly computationally intensive tasks since in order to calculate
the structural behaviour under seismic loads a large number of numerical analyses
are required. In general, soft computing techniques are used in order to reduce the
aforementioned computational cost. The aim of the present study is to train a neural
network to provide computationally inexpensive estimates of analysis outputs required
during the MCS process.
In the present work the ability of neural networks to predict characteristic mea-
sures that quantify the response of a structure considering uncertainties is presented.
This objective comprises the following tasks: (i) select the proper training set, (ii) find
suitable network architecture, and (iii) perform the training/testing of the neural net-
work. The learning algorithm, which was employed for the training, is the well-known
Back-Propagation (BP) algorithm (Rumelhart and McClelland 1986). An important
factor governing the success of the learning procedure of NN architecture is the selec-
tion of the training set. A sufficient number of input data properly distributed in the
design space together with the output data resulting from complete structural analyses
are needed for the BP algorithm in order to provide satisfactory results. Overload-
ing the network with unnecessarily similar information results in overtraining without
increasing the accuracy of the predictions. The required training patterns are generated
randomly using the LHS technique, where a parametric study is performed for defining
the size of the training set for the efficient training of NN. The basic NN configuration

employed for all the test cases examined in this study is selected to have one hidden
layer, as shown in Figure 21.1.

Figure 21.1 Typical neural network configuration (input layer, hidden layer, output layer).
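For orientation, a network of the kind shown in Figure 21.1 can be written out explicitly. The sketch below is a bare-bones one-hidden-layer MLP trained by back-propagation on mean squared error; the tanh activation, the omission of bias terms, and all sizes and rates are simplifying assumptions, not the configuration used in the study.

import numpy as np

class OneHiddenLayerNN:
    """Bare-bones MLP (input -> hidden -> output, biases omitted)
    trained with back-propagation on mean squared error."""

    def __init__(self, n_in, n_hidden, n_out, seed=None):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), (n_hidden, n_out))

    def predict(self, X):
        self.H = np.tanh(X @ self.W1)        # hidden activations
        return self.H @ self.W2              # linear output layer

    def train(self, X, Y, lr=0.01, epochs=2000):
        for _ in range(epochs):
            err = self.predict(X) - Y                         # forward pass
            dW2 = self.H.T @ err                              # output-layer gradient
            dW1 = X.T @ ((err @ self.W2.T) * (1.0 - self.H**2))
            self.W2 -= lr * dW2 / len(X)                      # BP weight updates
            self.W1 -= lr * dW1 / len(X)

# Train on (random variables -> response) pairs from a set of FE analyses,
# then call predict() inside the MCS loop in place of the FE solver.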

6 Numerical results

6.1 RBDO of steel 3D frames under static loading using elasto-plastic analysis
Firstly, the reliability-based sizing optimization of multistorey 3D frame structures
under static loading is investigated. The objective function is the weight of the structure
while the constraints are both deterministic (stress and displacement limitations) and
probabilistic (the overall probability of failure of the structure). Randomness of loads,
material properties, and member geometry are taken into consideration in reliability
analysis using the MCS method. The probability of failure of the frame structures is
determined via a limit elasto-plastic analysis. The optimization part is solved using ES
and two methodologies combining evolution strategies and neural networks (ES-NN)
are examined.
In the first one, a trained NN utilizing information generated from a number of
properly selected design vectors, computed by conventional finite element and reliabil-
ity analyses, is used to perform both deterministic and probabilistic constraints checks
during the optimization process. The data obtained from these analyses are processed
in order to obtain the necessary input and output pairs which are subsequently used
for training the NN. The trained NN is then applied to predict the response of the
structure in terms of deterministic and probabilistic constraints checks due to dif-
ferent sets of design variables. The NN training is considered successful when the
predicted values resemble closely the corresponding values of the conventional anal-
yses which are considered exact. In the second methodology the limit elasto-plastic

analyses required during the MCS are replaced by NN predictions of the structural
behaviour up to collapse. For every MCS that is required in order to perform the
probabilistic constraints check, a NN is trained utilizing available information gener-
ated from selected conventional elasto-plastic analyses. The limit state analysis data
are processed to obtain input and output pairs, which are used for training the NN.
The trained NN is then used to predict the critical load factor due to different sets
of basic random variables. A fully connected network, as shown in Figure 21.1,
is used.

6.1.1 Reliability-based structural optimization using MCS, ES and NN


In reliability analysis of elasto-plastic structures using MCS the computed critical load
factors are compared to the corresponding external loading leading to the computa-
tion of the probability of structural failure. The probabilistic constraints enforce the
condition that the probability of a local failure of the system or the global system
failure is smaller than a certain value (i.e. 10−5 to 10−3 ). In this work the overall
probability of failure of the structure, as a result of limit elasto-plastic analyses, is
taken as the global reliability constraint. The probabilistic design variables are cho-
sen to be the cross-sectional dimensions of the structural members and the material
properties (E, σy ).
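The following minimal sketch illustrates this comparison; critical_load_factor stands in for the limit elasto-plastic analysis and samples for realizations of the basic random variables, both hypothetical names introduced here for illustration only.

    import numpy as np

    def failure_probability(samples, critical_load_factor):
        # failure occurs when the critical load factor of a sampled
        # realization falls below the applied external loading (factor 1.0)
        lam = np.array([critical_load_factor(x) for x in samples])
        return float(np.mean(lam < 1.0))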
MCS requires a number of limit elasto-plastic analyses that can be dealt with independently and concurrently. This allows the natural implementation of the MCS method in a parallel computing environment as well. The most straightforward parallel implemen-
tation of the MCS method is to assign one limit elasto-plastic analysis to every processor
without any need of inter-processor communication during the analysis phase. In the
present study the parallel computations were performed on a Silicon Graphics Power
Challenge shared memory computer where the number (p) of activated processors is
equal to the number of the parent or offspring design vectors since µ = λ.
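A sketch of such an embarrassingly parallel Monte Carlo loop is given below, using Python's standard multiprocessing pool in place of the shared-memory machine of the study; limit_analysis is a hypothetical, picklable analysis function.

    from multiprocessing import Pool

    def parallel_failure_probability(samples, limit_analysis, n_procs=10):
        # one limit elasto-plastic analysis per worker; no inter-process
        # communication is needed during the analysis phase
        with Pool(n_procs) as pool:
            lam = pool.map(limit_analysis, samples)
        return sum(1 for v in lam if v < 1.0) / len(samples)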

6.1.2 NN used for deterministic and probabilistic constraints check
In this methodology, a trained NN utilizing information generated from a number of
properly selected design vectors is used to perform both the deterministic and proba-
bilistic constraints checks during the optimization process. After the selection of the
suitable NN architecture the training procedure is performed using a number (M) of
data sets, in order to obtain the I/O pairs needed for the NN training. The trained NN
is then applied to predict the response of the structure in terms of deterministic and
probabilistic constraint checks due to different sets of design variables.
The combined ES-NN optimization procedure is performed in two phases. The
first phase includes the training set selection, the corresponding structural analysis
and MCS for each training set required to obtain the necessary I/O data for the NN
training, and finally the training and testing of a suitable NN configuration. The sec-
ond phase is the ES optimization stage where the trained NN is used to predict the
response of the structure in terms of the deterministic and probabilistic constraint
checks due to different sets of design variables. This algorithm is summarized in
Flowchart 21.3.

• NN training phase:
1. Training set selection step: select M input patterns
2. Deterministic constraints check: perform the check for each input pattern vector
3. Monte Carlo Simulation step: perform MCS for each input pattern vector
4. Probabilistic constraints check: perform the check for each input pattern vector
5. Training step: training of the NN
6. Testing step: test the trained NN

• ES-NN optimization phase:


1. Parents Initialization
2. NN (deterministic-probabilistic) constraints check: all parents become feasible
3. Offspring generation
4. NN (deterministic-probabilistic) constraints check: if satisfied continue, else go to step 3
5. Parents’ selection step
6. Convergence check

Flowchart 21.3 The ES-NN1 methodology.
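A compact sketch of the optimization phase of Flowchart 21.3 follows; mutate, nn_constraints_ok and weight are hypothetical helpers standing for the ES recombination/mutation operators, the trained-NN constraint checks and the objective function, respectively.

    def es_nn1(parents, mutate, nn_constraints_ok, weight, n_generations=100):
        mu = len(parents)                    # (mu + lambda)-ES with lambda = mu
        for _ in range(n_generations):
            offspring = []
            while len(offspring) < mu:
                child = mutate(parents)
                if nn_constraints_ok(child):  # NN deterministic + probabilistic checks
                    offspring.append(child)
            # deterministic (mu + lambda) selection on the structural weight
            parents = sorted(parents + offspring, key=weight)[:mu]
        return min(parents, key=weight)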

6.1.3 NN prediction of the critical load in structural failure


In the second methodology the limit elasto-plastic analyses required during the MCS
are now replaced by NN predictions of the structural behaviour up to collapse. For
every MCS an NN is trained utilizing available information generated from selected
conventional elasto-plastic analyses. The limit state analysis data is processed to obtain
input and output pairs, which are used for training the NN. The trained NN is
then used to predict the critical load factor due to different sets of basic random
variables.
At each ES cycle (generation) a number of MCS are carried out. In order to replace the
time consuming limit elasto-plastic analyses by predicted results obtained with a trained
NN, a training procedure is performed based on the data collected from a number of
conventional limit elasto-plastic analyses. After the training phase is concluded the
trained NN predictions replace the conventional limit elasto-plastic analyses, for the
current design. This algorithm is summarized in Flowchart 21.4.
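A minimal sketch of this second (NN2) idea is given below, with scikit-learn's MLPRegressor standing in for the back-propagation network of the study; X_train/lam_train would come from the small number of conventional limit analyses and X_mc from the Monte Carlo sampler, all hypothetical names used for illustration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def nn2_failure_probability(X_train, lam_train, X_mc):
        # train on a few conventional limit elasto-plastic analyses ...
        surrogate = MLPRegressor(hidden_layer_sizes=(7,), max_iter=5000, random_state=0)
        surrogate.fit(X_train, lam_train)
        # ... then run the (large) Monte Carlo loop on the cheap predictions
        lam_hat = surrogate.predict(X_mc)
        return float(np.mean(lam_hat < 1.0))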

6.1.4 Twenty-storey space frame RBDO example


A characteristic 3D building frame, shown in Figure 21.2, has been tested in order to
illustrate the efficiency of the proposed methodologies for reliability-based sizing opti-
mization problems. The cross section of each member of the space frame considered
is assumed to be a W-shape and for each structural member one design variable is
allocated corresponding to a member of the W-shape data base. The objective func-
tion is the weight of the structure. The deterministic constraints are imposed on the
interstorey drifts and for each group of structural members, on the maximum stresses
due to axial forces and bending moments. The values of allowable axial and bend-
ing stresses are Fa = 150 MPa and Fb = 165 MPa, respectively, whereas the allowable
interstorey drift is restricted to 1.5% of the height of each storey.
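These deterministic checks amount to simple comparisons once the analysis results are available; a sketch follows, in which the stress arrays and drifts are hypothetical analysis outputs (stresses in MPa).

    def deterministic_ok(axial_stresses, bending_stresses, drifts, storey_height,
                         Fa=150.0, Fb=165.0):
        # allowable axial/bending stresses in MPa; drift limited to 1.5% of storey height
        return (max(axial_stresses) <= Fa
                and max(bending_stresses) <= Fb
                and max(drifts) <= 0.015 * storey_height)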

1. Parents Initialization
2. Deterministic constraints check: all parents become feasible
3. Monte Carlo Simulation step:
3a. Selection of the NN training set
3b. NN training for the limit load
3c. NN testing
3d. Perform MCS using NN
4. Probabilistic constraints check: all parents become feasible
5. Offspring generation
6. Deterministic constraints check: if satisfied continue, else go to step 5
7. Monte Carlo Simulation step:
7a. Selection of the NN training set
7b. NN training for the limit load
7c. NN testing
7d. Perform MCS using NN
8. Probabilistic constraints check: if satisfied continue, else go to step 5
9. Parents’ selection step
10. Convergence check

Flowchart 21.4 The ES-NN2 methodology.

The probabilistic constraint is imposed on the probability of structural collapse due to successive formation of plastic hinges and is set to pall = 0.001. The probability
of failure caused by uncertainties related to material properties, geometry and loads
of the structures is estimated using MCS with the LHS technique. External loads,
yield stresses, elastic moduli and the dimensions of the cross-sections of the structural
members are considered to be random variables. The loads follow a log-normal prob-
ability density function, while random variables associated with material properties
and cross-section dimensions follow a normal probability density function.
The twenty-storey space frame shown in Figure 21.2 consists of 1,020 members with
2,400 degrees of freedom. This example is selected in order to show the efficiency of
the proposed methodologies in relatively large-scale RBDO problems. The basic load
of the structure is a uniform vertical load of 4.78 kPa at each storey and a horizontal
pressure of 0.956 kPa acting on the x-z face of the frame. The members of the frame
are divided into eleven groups, as shown in Figure 21.2, and the total number of
design variables is eleven. The deterministic constraints are twenty-three, two for the
stresses of each element group and one for the interstorey drift. The type of probability
density functions, mean values, and variances of the random parameters are shown in
Table 21.1. A typical load-displacement curve of a node in the top-floor is depicted in
Figure 21.3, corresponding to the following design variables: 14WF176, 14WF158,
14WF142, 14WF127, 12WF106, 12WF85, 10WF60, 8WF31, 12WF27, 16WF36,
16WF36.
For this test case the (µ + λ)-ES approach is used with µ = λ = 10, while a sample size
of 500 and 1,000 simulations is taken for the MCS-LHS. Table 21.2 depicts the perfor-
mance of the optimization procedure for this test case. As can be seen, the probability
of failure corresponding to the optimum computed by the deterministic optimiza-
tion procedure is much larger than the specified value of 10−3 . For this example the

Figure 21.2 Description of the twenty-storey frame: plan view and front elevation, with the members divided into groups 1–11.

Table 21.1 Characteristics of the random variables.

Random variable Probability density function (pdf ) Mean value Standard deviation (σ)

E N 200 0.10E
σy N 25.0 0.10σy
Design variables N si 0.1si
Loads Log-N 5.2 0.2

Figure 21.3 Load-displacement curve (load factor versus x-displacement of node 1 at the top storey, in inches).

Table 21.2 Performance of the methods.

Optimization procedure    ES Gens.  pf**            Optimum weight (kN)  Sequential time (h)  Parallel time (h): p = 5 / p = 10 / p = 20
DBO                       83        0.197 × 10^0    6,771                2.0                  0.7 / 0.3 / 0.3
RBDO (500 siml.)          126       0.103 × 10^−2   9,114                141.0                28.4 / 14.1 / 7.1
RBDO-NN1 (500 siml.)      129       0.102 × 10^−2   9,121                34.5                 7.2 / 3.5 / 1.8
RBDO-NN2 (500 siml.)      126       0.103 × 10^−2   9,114                15.8                 3.3 / 1.7 / 0.9
RBDO (1,000 siml.)        120       0.103 × 10^−2   9,156                250.3                50.1 / 25.1 / 12.6
RBDO-NN1 (1,000 siml.)    127       0.101 × 10^−2   9,172                68.5                 13.8 / 6.9 / 3.5
RBDO-NN2*                 122       0.97 × 10^−3    9,255                17.0                 4.1 / 2.2 / 1.2

* For 100,000 simulations.
** For each final design and with 100,000 simulations using the NN2 scheme.

increase in optimum weight achieved, when probabilistic constraints are considered, is approximately 26% of the deterministic one, as can be observed in Table 21.2.
In Table 21.2 showing the results of the test example, DBO stands for the conven-
tional Deterministic-based Optimization approach, RBDO stands for the conventional
Reliability-based Optimization approach, while RBDO-NNi corresponds to the
proposed Reliability-based Optimization with NN incorporating algorithm i (i = 1, 2).
For the application of the RBDO-NN1 methodology the number of NN input units
is equal to the number of design variables. Consequently, the NN configuration used in
this case has one hidden layer with 15 nodes resulting in an 11-15-1 NN architecture
which is used for all runs. The training set consists of 200 training patterns capturing

the full range of possible designs. For the application of the RBDO-NN2 methodology
the number of NN input units is equal to the number of random variables, whereas
one output unit is needed corresponding to the critical load factor. Consequently the
NN configuration with one hidden layer results in a 3-7-1 NN architecture which is
used for all runs. The number of conventional step-by-step limit analysis calculations
performed for the training of NN is 60 corresponding to different groups of random
variables properly selected from the random field. As can be seen in Table 21.2 the
proposed RBDO-NN2 optimization scheme manages to achieve the optimum weight in
one tenth of the CPU time required by the conventional RBDO procedure in sequential
computing implementation. Table 21.2 also depicts the performance of the proposed
methodologies in a straightforward parallel mode, with 5, 10 or 20 processors in which
5, 10 or 20 Monte Carlo simulations are performed independently and concurrently. It
can be seen that the parallel versions of RBDO, RBDO-NN1 and RBDO-NN2 reached
near-perfect speedup irrespective of the number of processors used.
The aim of the proposed RBDO procedure was threefold: to reach an optimized design with controlled safety margins with regard to various model uncertainties, to minimize the weight of the structure, and to reduce substantially the required computational effort. The solution of realistic RBDO problems in structural
mechanics is an extremely computationally intensive task. In the test example con-
sidered in this study the conventional RBDO procedure was found over seventy times
more expensive than the corresponding deterministic optimization procedure. The goal
of decreasing the computational cost by one order of magnitude in sequential mode
was achieved using: (i) NN predictions to perform both deterministic and probabilistic
constraints check, or (ii) NN predictions to perform the structural analyses involved
in MCS. Furthermore, the achieved reduction in computational time was almost two
orders of magnitude in parallel mode with the proposed NN methodologies.

6.2 RBDO of structures under seismic loading


In this section the reliability-based sizing optimization of multistorey framed structures
under earthquake loading is investigated. The discrete RBDO problem is formulated
in the form of Eq. (1), where CIN (s, x) is the initial construction cost to be minimized,
s and x are the vectors of the design and random variables respectively, gj (s) are the
deterministic stress and displacement constraints. The overall probability of failure
of the structure, as a result of multi-modal response spectrum analysis, is taken as
the global reliability constraint. Failure is detected when the maximum interstorey drift exceeds a threshold value, here taken as 4% of the storey height: the probability p(θ10/50 > θall) that the drift θ10/50 for the 10/50 hazard level exceeds the allowable drift θall is bounded by an upper allowable probability pall. For
rigid frames with W-shape cross sections as in this study, the design constraints were
taken from the design requirements specified by Eurocode 3 (2003) and Eurocode 8
(2004).
During the solution of the optimization problems a number of MCS runs are carried
out for each different set of design variables. In order to replace the time consuming
FE analyses by predicted results obtained with a trained NN, a training procedure
is performed based on the data collected from a number of previously performed FE

analyses. After the training phase is concluded the NN predictions replace all conven-
tional FE analyses, for the current design. For the selection of the suitable training pairs,
the sample space for each random variable is divided into equally spaced distances.
The central points within the intervals are used as inputs for the FE analyses.
The random variables considered are the cross-sectional dimensions of the struc-
tural members, the material properties (E, σy ) and the loading conditions. Under the
proposed approach the FE analyses required during the MCS are replaced by NN pre-
dictions of the structural response. For every design a NN is trained utilizing available
information generated from selected conventional FE analyses. The trained NN is then
used to predict the structural response for different sets of random variables depending
on the type of problem examined.

6.2.1 Earthquake loading of steel frames
In Eurocodes earthquake loading is taken as a random action, therefore it must be
considered for the structural design with the following loading combination:

Sd = Σj Gkj “+’’ Ed “+’’ Σi ψ2i Qki    (8)

where “+’’ implies “to be combined with’’, Σ implies “the combined effect of’’, Gkj
denotes the characteristic value of the permanent action j, Ed is the design value of the
seismic action, and Qki refers to the characteristic value of the variable action i, while
ψ2i is the combination coefficient for quasi permanent value of the variable action i,
here taken as 0.30. Design code checks are implemented in the optimization algorithm
as constraints. Each structural member should be checked for actions that correspond
to the most severe load combination obtained from Eq. (8) and the permanent load
combination:

Sd = 1.35 Σj Gkj “+’’ 1.50 Σi Qki    (9)

It should be pointed out that the seismic action is obtained from the elastic spectrum
reduced by the behaviour factor q. This is done because the structure is expected
to absorb energy by deforming inelastically. Maximum values for the q-factor are
suggested by design codes and vary according to the material and the type of the
structural system. For the framed steel structures considered in this study q = 4.0.
The most common approach for the definition of the seismic input is the use of
design code response spectra, a general approach that is easy to implement. However,
if higher precision is required, the use of spectra derived from natural earthquake
records is more appropriate. Since significant dispersion on the structural response
due to the use of different natural records has been observed, these spectra must be
scaled to the same desired earthquake intensity. The most commonly applied scaling
procedure is based on the peak ground acceleration (PGA). Dynamic analysis of simple
frames is most frequently performed using the multi-modal response spectrum analysis,
which is based on the mode superposition approach and is briefly described in the next
section.
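Scaling a record to a target PGA is a one-line operation; a sketch follows, assuming the accelerogram is available as a numeric array (function and argument names are illustrative only).

    import numpy as np

    def scale_to_pga(accelerogram, target_pga):
        # multiply the record so that its peak absolute acceleration equals the target
        return np.asarray(accelerogram) * (target_pga / np.max(np.abs(accelerogram)))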

6.2.2 Dynamic analysis using Multi-modal Response Spectrum


In general, the equations of equilibrium for a finite element system in motion can be
written in the usual form

Mü(t) + C u̇(t) + Ku(t) = R(t) (10)

where M, C, and K are the mass, damping and stiffness matrices; R(t) is the external
load vector, while u(t), u̇(t) and ü(t) are the displacement, velocity, and acceleration
vectors of the finite element assemblage, respectively. A design approach based on the
Multi-modal Response Spectrum (MmRS) analysis, which, in turn, is based on the
mode superposition approach, has been used in the present study.
The MmRS method is based on a simplification of the mode superposition approach
with the aim to avoid time history analyses which are required by both the direct inte-
gration and mode superposition approaches. In the case of the multi-modal response
spectrum analysis Eq. (10) is modified according to the modal superposition approach
to a system of independent equations

M̄i ÿi(t) + C̄i ẏi(t) + K̄i yi(t) = R̄i(t)    (11)

where

M̄i = Φiᵀ M Φi ,  C̄i = Φiᵀ C Φi ,  K̄i = Φiᵀ K Φi  and  R̄i(t) = Φiᵀ R(t)    (12)

are the generalized values of the corresponding matrices and the loading vector, while Φi is the i-th eigenmode shape matrix. According to the modal superposition approach
the system of N differential equations, which are coupled with the off-diagonal terms
in the mass, damping and stiffness matrices, is transformed to a set of N independent
normal-coordinate equations. The dynamic response can therefore be obtained by solv-
ing separately for the response of each normal (modal) coordinate and by superposing
the response in the original coordinates.
In the MmRS analysis a number of different formulas have been proposed to
obtain reasonable estimates of the maximum response based on the spectral values
without performing time history analyses for a considerable number of transformed
dynamic equations. The simplest and most popular one is the Square Root of Sum of
Squares (SRSS) of the modal responses. According to this estimate the maximum total
displacement is approximated by

umax = ( Σi=1..N (ui,max)² )^(1/2) ,   ui,max = Φi yi,max    (13)

where ui,max is the maximum displacement vector corresponding to the i-th eigenmode.
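A minimal sketch of the SRSS combination of Eq. (13) is given below; Phi holds the retained mode shapes column-wise and y_max the corresponding modal spectral maxima, both hypothetical inputs.

    import numpy as np

    def srss_max_displacement(Phi, y_max):
        # Phi: (n_dof, n_modes) mode shapes; y_max: (n_modes,) modal maxima
        U = Phi * np.asarray(y_max)          # column i is u_i,max = Phi_i * y_i,max
        return np.sqrt((U**2).sum(axis=1))   # SRSS over the modes, per dof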
Table 21.3 List of natural accelerograms.

Earthquake name (Date)  Site\Soil conditions  Orientation  Ms*  PGA (g)  PGV (cm/sec)  a/v (1/sec)

1 Victoria Mexico (06.09.80) Cerro Prieto\Alluvium 45 6.4 0.62 31.57 19.30


2 Kobe (16.01.95) Kobe\Rock 0 6.95 0.82 81.30 9.91
3 Imperial Valley (19.05.40) El Centro Array\CWB: D, USGS: C 180 7.2 0.31 29.80 10.32
4 Duzce (12.11.99) Bolu\CWB: D, USGS: C 90 7.3 0.82 62.10 12.99
5 San Fernando (09.02.1971) Pacoima dam\Rock 164 6.61 1.22 112.49 10.69
6 Gazli (17.05.1976) Karakyr, CWB:A 90 7.3 0.72 71.56 9.83
7 Friuli (06.05.1976) Bercis\CWB: B 0 6.5 0.03 1.33 21.17
8 Aigion (17.05.90) OTE building\Stiff soil 90 4.64 0.20 9.76 20.00
9 Central California (25.04.54) Hollister City Hall\CWB: D, USGS: C 271 – 0.05 3.90 12.77
10 Alkyonides (24.02.81) Korinthos OTE building\Soft soil 90 6.69 0.31 22.70 13.34
11 Northridge (17.01.94) Jensen filter Plant\CWB: D, USGS: C 292 6.7 0.59 99.10 5.86
12 Athens (07. 09.99) Sepolia (Metro Station)\Unknown 0 5.6 0.24 17.89 13.32
13 Cape Mendocino (25.04.92) Petrolia\CWB: D, USGS: C 90 7.1 0.66 89.72 7.24
14 Erzincan, Turkey (13.03.92) Erzincan East-East Comp\CWB: D, USGS: S 270 6.9 0.49 64.28 7.56
15 Kalamata (13.09.86) Kalamata, Prefecture\Stiff soil 0 5.75 0.21 32.90 6.41
16 Iran (16.09.78) Tabas\CWB: S 0 7.4 0.85 121.40 6.89
17 Loma Prieta1 (18.10.89) Hollister Diff Array\CWB: D 255 7.1 0.28 35.60 7.69
18 Loma Prieta2 (18.10.89) Coyote Lke dam\CWB: D 285 7.1 0.48 39.70 11.95
19 Mammoth Lakes (27.05.80) McGee Creek\CWB: D 0 5 0.33 8.55 37.29
20 Irpinia, Italy (23.11.80) Sturno\Unknown 270 6.5 0.36 52.70 6.66

* Ms: Surface-wave magnitude.



Figure 21.4 RBDO test example – geometry and member grouping (element groups 1–5) of the four-bay, three-storey frame.

6.2.3 Three-storey plane frame under earthquake loads RBDO example


One test example has been considered in the present study in order to illustrate the effi-
ciency of the proposed methodology for reliability-based sizing optimization problems
under earthquake loading. This test example is a four-bay, three-storey moment resist-
ing plane frame shown in Figure 21.4. The frame has been previously studied by Gupta
and Krawinkler (2000), where a detailed description of the structure is given. The
frame consists of rigid moment connections and fixed supports. Each bay has a span
of 9.15 m (30 ft), while each storey is 3.96 m (13 ft) high. The permanent action consid-
ered is equal to 5 kN/m2 while the variable action is equal to 2 kN/m2 , both distributed
along the beams. The frame is considered to be part of a 3D structure where each frame
is 4.5 m (15 ft) apart. The median spectrum used for the determination of the base shear
corresponds to a peak ground acceleration of 0.32 g. Structural members are divided
into five groups, as shown in Figure 21.4, corresponding to the five design variables of
a discrete structural optimization problem. The cross-sections are W-shape beam and
column sections available from manuals of the American Institute of Steel Construction
(AISC). The objective function is the weight of the structure, to be minimized.
In this study a suite of twenty natural accelerograms, shown in Table 21.3, is used. It
can be seen that each record corresponds to different earthquake magnitudes and soil
conditions. The records of this suite comprise a wide range of PGA and peak acceler-
ation over peak displacement ratio (a/v) values. The latter parameter is considered to
describe the damage potential of the earthquake more reliably than PGA. The records
are scaled to the same PGA and their response spectra that are subsequently derived
are shown in Figure 21.5. It has been observed that the response spectra follow the
lognormal distribution. Therefore the median spectrum x̂, also shown in Figure 21.5,
and the standard deviation δ are calculated from the above suite of spectra using the
following expressions:

x̂ = exp( (1/n) Σi=1..n ln(Rdi(T)) )    (14)

Figure 21.5 Natural record response spectra and their median (spectral acceleration (g) versus period T (sec)).

Table 21.4 Characteristics of the random variables.

Random variable  Probability density function  Mean value                 Standard deviation
E                N                             2.1 × 10^5 MPa             0.10E
σy               N                             235 MPa                    0.10σy
Seismic load     Log-N                         Median spectrum (Eq. 14)   δ (Eq. 15)

δ = [ Σi=1..n ( ln(Rdi(T)) − ln(x̂) )² / (n − 1) ]^(1/2)    (15)

where Rdi (T) is the response spectrum value for period equal to T of the i-th record
(i = 1, . . . , n, where n = 20 in this study). For a given period value, the acceleration Rd
is obtained as a random variable following the log-normal distribution with its mean
value equal to x̂ and the standard deviation equal to δ.
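Eqs (14) and (15) translate directly into a few lines of code; a sketch follows, with Rd a hypothetical (n_records × n_periods) array of the scaled spectral ordinates.

    import numpy as np

    def lognormal_spectrum_stats(Rd):
        logs = np.log(Rd)
        x_hat = np.exp(logs.mean(axis=0))                                   # Eq. (14)
        delta = np.sqrt(((logs - np.log(x_hat))**2).sum(axis=0)
                        / (Rd.shape[0] - 1))                                # Eq. (15)
        return x_hat, delta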
The deterministic constraints are related to stress and displacement constraints for
steel frames according to Eurocodes. The probabilistic constraint is imposed on the
probability of structural collapse which is set equal to pall = 0.001. The probability
of failure caused by uncertainties related to seismic loads and material properties of
the structure is estimated using MCS with the LHS technique. The earthquake ground
motion parameter, as described in Eq. (14), the yield stress and the elastic modulus
are considered to be random variables. The type of probability density functions,
mean values, and variances of the random parameters are shown in Table 21.4. The
seismic action follows a log-normal probability density function, while the rest of

Table 21.5 Performance of the methods.

Optimization procedure     ES cycles  pf      Time sequential (h)  Time parallel (p = 5) (h)
DBO                        157        0.0932  0.3                  0.08
RBDO-MCS (5,000 siml.)     65         0.0008  557.3                140.1
RBDO-LHS (1,000 siml.)     72         0.001   149.1                40.6
RBDO-NN (100,000 siml.)    68         0.0009  42.1                 16.2

the random variables follow a normal probability density function. For more details
on probabilistic formulations of uncertainties the reader is referred to JCSS (2001)
guidelines.
For this test case the (µ + λ)-ES approach is used with µ = λ = 5 (equal to the number
of design variables), while a sample size of 5000 simulations is taken. Table 21.5
depicts the performance of the optimization procedure for this test case. As can be seen, the probability of failure corresponding to the optimum computed by the deterministic optimization procedure is much larger than the specified value of 10−3, and is thus unacceptable. On the other hand, the increase in safety also results in a significant increase in optimum weight. When probabilistic constraints are considered the weight increase is approximately 26% compared to the deterministic one, from 125.3 to 167.4 tn. Furthermore, the computing times are considerably larger in the case of RBDO; however, the use of NN, as well as parallel computing, drastically reduces the excessive computational cost of the process.
As far as the NN implementation is concerned, it was performed in a similar manner
as the ES-NN1 algorithm that was described in the previous section. The NN configu-
ration used has the typical architecture shown in Figure 21.1. It consists of three layers:
one input, one hidden, and one output layer with varying number of nodes per layer.
After an initial investigation on the optimum number of hidden layers and their nodes,
one hidden layer was used having 10 nodes. The input data of the NN are the eleven
random variables (two for each of the five element groups plus the seismic coefficient),
while the output is one, i.e. the maximum interstorey drift value, which defines the
limit-state violation. Thus, the NN configuration that was used was the following: 11-10-1. The training-testing set of the NN consisted of two hundred input/output
pairs, twenty of which were used for testing the generalization capabilities of the
trained NN. The application of NN reduces the computing time to a fraction of that required for the conventional FE analyses. In addition, it does not affect the accuracy of the MCS method; in fact it can increase it, since the fast NN approximations allow the use of a much greater sample size.

6.3 Hybrid RRDO 3D truss test example


For the purposes of this study a 3D steel truss structure has also been considered. For
this test example, two objective functions have been taken into account, the initial
construction cost and the standard deviation of a characteristic node displacement

representing the response of the structure. Two sets of constraints are enforced, deter-
ministic constraints on stresses, element buckling and displacements imposed by the
European design codes and probabilistic ones. Furthermore, due to manufacturing
limitations the design variables are not continuous but discrete since cross-sections
belong to a certain predefined set provided by the manufacturers. The discrete design
variables are treated in the same way as in single optimum design problems using the
discrete evolution strategies. The design variables considered are the dimensions of the
members of the structure taken from the Circular Hollow Section (CHS) table of the
Eurocode. The random variables related to the cross-sectional dimensions, for both
test examples, are two per design variable: the external diameter D and the thickness t
of the circular hollow section. Apart from the cross-sectional dimensions of the struc-
tural members, the material properties (modulus of elasticity E and yield stress σy ) and
the lateral loads have also been considered as random variables. The robustness of
the constraints is also considered using the overall probability of maximum violation
of the behavioural constraints, as a result of the variation of the uncertain structural
parameters.
The test example considered is the 3D truss tower shown in Figures 21.6(a) to 21.6(c). The
height of the truss tower is 128 m, while its basis is a rectangle of side 17.07 m. The FE
model consists of 324 nodes and 1254 elements which are divided into 12 groups that
play the role of the design variables. The applied loading consists of: (i) self weight
(dead load), (ii) live loads and (iii) wind actions according to Eurocode 1 (2003).
The type of probability density function, the mean value, and the variance of the
random variables are shown in Table 21.6.
In the present implementation an investigation is performed on the ability of the NN
to predict the required data for the evolution of the RRDO process. The inputs of the
NN correspond to the random variables, while the outputs are the characteristic node
displacement and the maximum displacement, stress and compression force required
for the calculation of the probability of violation. The appropriate selection of I/O training data constitutes one of the most important parts of the NN training. The number of training patterns is not the only concern; the distribution of the samples is also of great importance. Having chosen the NN architecture and trained the neural network,
the probability of violation and the standard deviation of the response can be obtained
in orders of magnitude less computing time.
The previously described multi-criteria optimization (CEATm) algorithm employed
is denoted as CEATm(µ + λ)nruns,csteps where µ, λ are the number of the parent and off-
spring vectors used in the ES optimization strategy, nruns is the number of independent
CEA runs and csteps is the number of cascade stages employed. The basic steps inside
an independent run of the multi-objective algorithm when the NN is embedded in the
optimization process, as adopted in this test case, are described in Flowchart 21.5.
For the solution of the multi-objective optimization problem in question the
non-dominant CEATm(µ + λ)nrun,csteps optimization scheme was employed, where
µ = λ = 5, nrun = 10 and csteps = 3. The resultant Pareto front curves for the RDO

Independent run do i = 1, nrun


CEATm LOOP
1. Initial generation:
do while sk not feasible k = 1, µ
Generate sk (k=1,… , µ) vectors
Analysis step
Evaluation of the Tchebycheff metric
Deterministic constraints check: if satisfied continue else regenerate sk design
Monte Carlo Simulation step:
Selection of the NN training set
NN training for the limit load
NN Monte Carlo Simulations
Probabilistic constraint check
end do
2. Global non-dominant search: Check if global generation is accomplished. If yes, then
non-dominant search is performed, else wait until global generation is accomplished.
3. New generation:
do while sl not feasible, l = 1, . . . , λ
Generate sl (l = 1, . . . , λ) vectors
Analysis step
Evaluation of the Tchebycheff metric
Deterministic constraints check: if satisfied continue else regenerate sl design
Monte Carlo Simulation step:
Selection of the NN training set
NN training for the limit load
NN Monte Carlo Simulations
Probabilistic constraint check
end do
4. Selection step: selection of the next generation parents according to (µ + λ) or
(µ, λ) scheme
5. Global non-dominant search: Check if global generation is accomplished. If yes, then
non-dominant search is performed, else wait until global generation is accomplished.
6. Convergence check: If satisfied stop, else go to step 3
END OF CEATm LOOP
End do of Independent run

Flowchart 21.5 The CEATm algorithm combined with NN.
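The global non-dominant search of steps 2 and 5 can be sketched as a plain Pareto filter over the two minimized criteria (weight and standard deviation of the response); the quadratic-time illustration below is a sketch only, not the actual CEATm implementation.

    def nondominated(designs):
        # designs: list of (weight, std_dev) tuples, both criteria to be minimized
        front = []
        for a in designs:
            dominated = any(b[0] <= a[0] and b[1] <= a[1] and b != a for b in designs)
            if not dominated:
                front.append(a)
        return front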

Table 21.6 3D truss tower example: Characteristics of the random variables.

Random variable  Description         Probability density function  Mean value (µ)  Standard deviation (σ)  σ/µ      95% interval of values
E (kN/m²)        Young's modulus     Normal                        2.10E+08        1.50E+07                7.14%    (1.81E+08, 2.39E+08)
σy (kN/m²)       Allowable stress    Normal                        355,000         35,500                  10.00%   (2.85E+05, 4.25E+05)
F (kN)           Horizontal loading  Normal                        Fµ              0.4 Fµ                  40.00%   (0.216 Fµ, 1.784 Fµ)
D                CHS diameter        Normal                        d_i*            0.02 d_i                2%       (0.9608 d_i, 1.0392 d_i)
t                CHS thickness       Normal                        t_i*            0.02 t_i                2%       (0.9608 t_i, 1.0392 t_i)

* Taken from the Circular Hollow Section (CHS) table of the Eurocode, for every design.

Figure 21.6 3D truss tower example: (a) 3D view, (b) side view, (c) top view (height 128.01 m, base side 17.07 m; the characteristic node lies at the top of the tower).

formulations are depicted in Figure 21.7, with the structural weight on the horizontal
axis and the standard deviation of the characteristic node displacement on the vertical
axis. The displacement in the x-direction of the top node is selected as the characteris-
tic one (Figure 21.6c). As can be seen in Figure 21.7 the trend on the influence of the
probabilistic constraint is similar to that of the first example, where the Pareto front
curves coincide in different parts.
Four different formulations of the RDO problem have been considered in this
study: (i) the standard RDO formulation, (ii) RRDO with allowable probability
equal to 2% denoted as RRDO_2%, (iii) RRDO with allowable probability equal
to 0.1% denoted as RRDO_0.1% and (iv) RRDO with allowable probability equal
to 0.01% denoted as RRDO_0.01%. As can be seen in Figure 21.7 the presence of
the probabilistic constraint influences the Pareto curves near the DBO area (designs
Ai, i = 1, . . . , 4) of the Pareto front, where the weight of the structure is the dominant
criterion. On the contrary, the four Pareto front curves almost coincide at the areas

Figure 21.7 3D truss tower example: Comparison of the Pareto front curves (weight (kN) versus standard deviation of the characteristic displacement (m)) for the RDO, RRDO_2%, RRDO_0.1% and RRDO_0.01% formulations; the marked designs are A1 (3874.7, 4.9E−02), A2 (4231.2, 5.0E−02), A3 (4314.6, 4.9E−02), A4 (4497.3, 5.1E−02) and B1 (4997.3, 3.0E−02), B2 (4964.7, 3.0E−02), B3 (4982.8, 2.9E−02), B4 (4983.7, 2.9E−02).

where the importance of the second criterion (standard deviation of the response)
increases.
The performance of the NN prediction is depicted in Figures 21.8a to 21.8d, where
the prediction of the characteristic displacement, the maximum displacement, the max-
imum compressive force and the maximum tensile force are shown, respectively. Three
different training sets, of size 100, 200 and 500, respectively, randomly generated using LHS, have been examined, while 50 patterns have been used for testing. As can be
seen in Figure 21.8, 100 samples are enough for efficiently training the NN. The MCS
sample sizes used in this test example are 10,000, 100,000 and 500,000. In the RDO
and RRDO_2% formulations a sample size of 10,000 simulations has been used, while
in RRDO_0.1% and RRDO_0.01% a sample size of 100,000 and 500,000 simulations
has been employed. The different formulations and consequently the different sample
sizes lead to a significantly different computing cost. In order to reduce the increased
computing cost, especially of the last two formulations, a neural network formulation
has been applied.
The NN configuration implemented in this example has one hidden layer with 50
nodes resulting in a 27-50-4 NN architecture (see Figure 21.1), which is used for all
runs. The computing cost is depicted in Table 21.7, where the conventional and the
corresponding NN computing times are reported. It has to be mentioned that the
denoted basic computing costs for the RRDO_0.1% and RRDO_0.01% formulations
are estimations due to the excessive computing cost required for these two cases. It
can be seen that the NN-based methodology requires up to four orders of magnitude
less computing time compared to the conventional one.

Table 21.7 3D truss tower example: Computing times.

Formulation  No of simulations  Basic time (h)  NN time (h)
RDO          10,000             5.33E+01        5.87E−01
RRDO_2%      10,000             5.42E+01        5.96E−01
RRDO_0.1%    100,000            5.59E+02*       6.15E−01
RRDO_0.01%   500,000            2.79E+03*       6.14E−01

* Estimated.

Figure 21.8 3D truss tower: Performance of NN (real versus predicted values, for training sets of 100, 200 and 500 patterns) with respect to the number of the training patterns: (a) characteristic displacement, (b) maximum displacement, (c) maximum compressive force, and (d) maximum tensile force.

7 Conclusions
In most cases the optimum design of structures is based on nominal values of the design
parameters and is focused on the satisfaction of the deterministically defined design
code provisions. The deterministic optimum is not always a “safe’’ design, since there are many random factors that affect the design, e.g. the manufacturing and the performance of a structure during its lifetime. In order to find a “real’’ optimum the designer has to take into account all the relevant random variables. To alleviate this deficiency, two types of formulations have been proposed in the past: RBDO and RDO. In the present
work, apart from presenting successful RBDO applications, the combined RRDO is

also proposed, where probabilistic constraints are incorporated into the robust design
optimization formulation.
In the examined RBDO formulations, under static or dynamic loads, the reliability
analysis of the structure has to be performed in order to determine its optimum design
taking into account a desired level of probability of structural failure. Only after form-
ing and solving this RBDO problem, even with additional cost in weight and computing
time, can a “global’’ and realistic optimum structural design be found. The aim of the
proposed RBDO procedure is to increase the safety margins of the optimized structures under various uncertainties, while at the same time minimizing their weight and reducing substantially the required computational effort. The solution of realistic RBDO
problems in structural mechanics is an extremely computationally intensive task. As
can be observed from the numerical results, the computational cost for the solution
of realistic RBDO problems is orders of magnitude larger than the corresponding cost
for a deterministic optimization problem. Due to the size and complexity of RBDO
problems, a non-conventional, stochastic evolutionary optimization method – such as
ES – appears to be a suitable choice.
In a similar manner, in order to implement the hybrid RRDO formulation, structural
reliability analyses for every candidate design have to be performed for the evaluation
of the probability of violation. Depending on the value of the allowable probability
of violation, different sample sizes are employed in order to calculate with sufficient
accuracy the statistical quantities under consideration i.e. the standard deviation of the
response and the probability of violation of the constraints. The Pareto front curves
obtained for the presented RRDO formulation and the RDO formulation appear to be
different when the weight objective function is predominant, while they approach each
other in the areas of the Pareto fronts where the significance of the standard deviation
of the response criterion increases. In other words, for the same standard deviation
value, the optimum weight achieved by the RRDO formulation is larger than the
corresponding weight achieved by the conventional RDO approach. Furthermore, it
was observed that the presence of the standard deviation as an objective function forces
the RDO formulation to produce results very close to those obtained by the RRDO
formulation close to the right end of the Pareto front curve.
Concluding, the aim of this work was twofold: to examine the influence of the
probabilistic parameters and constraints in structural optimization, and to deal with
computationally demanding tasks in probabilistic mechanics. The computational effort
involved in the conventional MCS becomes excessive in large-scale problems, espe-
cially when earthquake loading is considered, due to the enormous sample size and the
computing time required for each Monte Carlo run. Although the LHS technique has
been implemented for improving the computational efficiency of the MCS method, the
computational cost remains excessive, making the solution of large-scale probabilistic
optimization problems computationally intractable. Thus, a neural network assisted
methodology has been proposed in order to obtain the structural response results
required during the Monte Carlo simulations inexpensively. The achieved reduction
in computational time was several orders of magnitude compared to the conventional procedure, making tractable the optimization of real-world structures under probabilistic constraints. The use of NN can practically eliminate any limitation on the scale of
the problem and the sample size used for MCS without deteriorating the accuracy of
the results.

References

Agarwal, H. & Renaud, J.E. 2006. A new decoupled framework for reliability based design
optimization. AIAA J. 44(7):1524–1531.
Agarwal, H. & Renaud, J.E. 2004. Reliability based design optimization using response surfaces
in application to multidisciplinary systems. Eng. Optim. J. 36(3):291–311.
Ba-abbad, M.A., Nikolaidis, E. & Kapania, R.K. 2006. New approach for system reliability-
based design optimization. AIAA J. 44(5):1087–1096.
Bozorgnia, Y. & Bertero, V.V. (eds) 2004. Earthquake engineering: from engineering seismology
to performance-based engineering. CRC Press Publications.
Coello-Coello, C.A. 2000. An updated survey of GA-based multi-objective optimization
techniques. ACM Computing Surveys 32(2):109–143.
Doltsinis, I. & Kang, Z. 2004. Robust design of structures using optimization methods. Comput.
Methods Appl. Mech. Engrg. 193:2221–2237.
EC1. 2003. Eurocode 1 : Basis of design and actions on structures – Part 2–4: Actions on
Structures – Wind actions. CEN, ENV 1991-2-4, European Committee for Standardization,
Brussels.
EC3. 2003. Eurocode 3: Design of steel structures, Part 1.1: General rules for buildings. CEN,
ENV 1993-1-1/1992, European Committee for Standardization, Brussels.
EC8. 2004. Eurocode 8: Design of structures for earthquake resistance – Part 1. European
standard. CEN-ENV-1998-1, European Committee for Standardization, Brussels.
Frangopol, D.M. & Soares, C.G. 2001. Reliability-oriented optimal structural design. J. Reliab. Eng. Syst. Saf. 73(3):195–306.
Gupta, A. & Krawinkler, H. 2000. Behaviour of ductile SMRFs at various seismic hazard levels.
ASCE Journal of Structural Engineering 126(1):98–107.
JCSS. 2001. Probabilistic Model Code. Joint Committee on Structural Safety, http://www.jcss.
ethz.ch [accessed 15 September 2006].
Jung, D.H. & Lee, B.C. 2002. Development of a simple and efficient method for robust
optimization. Int. J. Numer. Meth. Engng. 53:2201–2215.
Lagaros, N.D. & Papadrakakis, M. 2006. Robust seismic design optimization of steel structures,
J. Struct. Multidisc. Optim. Available on-line, Doi: 10.1007/s00158-006-0047-5.
Lagaros, N.D. & Tsompanakis, Y. (eds) 2006. Intelligent computational paradigms in
earthquake engineering. Idea Publishers.
Lagaros, N.D., Papadrakakis, M. & Kokossalakis, G. 2002. Structural optimization using
evolutionary algorithms. Computers & Structures 80(7–8):571–587.
Lagaros, N.D., Plevris, V. & Papadrakakis, M. 2005. Multi-objective design optimiza-
tion using cascade evolutionary computations. Comput. Methods Appl. Mech. Engrg.
194(30–33):3496–3515.
Marler, R.T. & Arora, J.S. 2004. Survey of Multi-objective Optimization Methods for
Engineering. J. Struct. Multidisc. Optim. 26(6):369–395.
Mattson, C.A., Mullur, A.A. & Messac, A. 2004. Smart pareto filter: Obtaining a minimal
representation of multiobjective design space. Eng. Optim. J. 36(6):721–740.
McKay, M.D., Beckman, R.J. & Conover, W.J. 1979. A comparison of three methods for select-
ing values of input variables in the analysis of output from a computer code. Technometrics
21(2):239–245.
Messac, A. & Ismail-Yahaya, A. 2002. Multiobjective robust design using physical program-
ming. J. Struct. Multidisc. Optim. 23:357–371.
Owen, A.B. 1997. Monte Carlo variance of scrambled net quadrature. SIAM J. Num. Anal.
34(5):1884–1910.
Papadrakakis, M., Tsompanakis, Y. & Lagaros, N.D. 1999. Structural shape optimization using
evolution strategies. Eng. Optim. J. 31:515–540.

Park, G.-J., Lee, T.-H., Lee, K.H. & Hwang, K.-H. 2006. Robust design: An overview. AIAA J.
44(1):181–191.
Rumelhart, D.E. & McClelland, J.L. 1986. Parallel Distributed Processing, Volume 1:
Foundations. The MIT Press, Cambridge.
Schuëller, G.I. 2005. Special Issue on Computational Methods in Stochastic Mechanics and
Reliability Analysis. Comput. Methods Appl. Mech. Engrg. 194(12–16):1251–1795.
Schwefel, H.P. 1981. Numerical optimization for computer models. Wiley & Sons,
Chichester, UK.
Tsompanakis, Y. & Papadrakakis, M. 2004. Large-scale reliability-based structural optimiza-
tion. J. Struct. Multidisc. Optim. 26:1–12.
Tsompanakis, Y., Lagaros, N.D. & Stavroulakis, G. 2007. Soft computing techniques in parame-
ter identification and probabilistic seismic analyses of structures, J. Advances in Eng. Software
(in press).
Wen, Y.K. 2000. Reliability and performance-based design. Proceedings of the 8th ASCE Spe-
cialty Conference on Probabilistic Mechanics and Structural Reliability, University of Notre
Dame, Indiana, USA, 24–26 July.
Youn, B.D. & Choi, K.K. 2004. The performance moment integration method for reliability-
based robust design optimization. Proceedings of the ASME Design Engineering Technical
Conference, Salt Lake City, Utah, USA, September 28–October 2.
Youn, B.D., Choi, K.K. & Du, L. 2005. Enriched performance measure approach for reliability-
based design optimization. AIAA J. 43(4):874–884.
References

AASHTO, American Association of State Highway and Transportation Officials 1992. Standard
specifications for highway bridges. Washington, D.C.: American Association of State Highway
and Transportation Officials. 15th edition.
Abramowitz, M. & Stegun, I.A. 1972. Handbook of mathematical functions. 10th ed.
New York: Dover.
Adams, B.M., Eldred, M.S. & Wittwer, J.W. 2006. Reliability-based design optimiza-
tion for shape design of compliant micro-electro-mechanical systems. In Proceedings of
the 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Number
AIAA-2006-7000, September 6–8, Portsmouth, VA.
Adams, B.M., Bichon, B.J., Eldred, M.S., Carnes, B., Copps, K.D., Neckels, D.C., Hopkins,
M.M., Notz, P.K., Subia, S.R. & Wittwer, J.W. 2006. Solution-verified reliability analysis and
design of bistable mems using error estimation and adaptivity. Technical Report SAND2006-
6286, Sandia National Laboratories, 2006, October, Albuquerque, NM.
Adelman, H. & Haftka, R. 1986. Sensitivity analysis of discrete structural systems. AIAA
Journal 24:823–832.
Agarwal, H. & Renaud, J.E. 2004. Reliability based design optimization using response surfaces
in application to multidisciplinary systems. Eng. Optim. J. 36(3):291–311.
Agarwal, H. & Renaud, J. 2006. A new decoupled framework for reliability based design
optimization. AIAA Journal 44(7):1524–1531.
Agarwal, H., Renaud, J.E., Lee, J.C. & Watson, L.T. 2004. A unilevel method for reliability-
based design optimization. In Proceedings of the 45th AIAA/ASME/ASCE/AHS/ASC Struc-
tures, Structural Dynamics, and Materials Conference, Number AIAA-2004-2029, April
19–22, Palm Springs, CA.
Agarwal, H., Renaud, J.E., Preston, E.L. & Padmanabhan, D. 2004. Uncertainty Quantification
Using Evidence Theory in Multidisciplinary Design Optimization. Reliability Engineering and
System Safety 85:281–294.
Agmon, N., Alhassid, Y. & Levine, R.D. 1979. An algorithm for finding the distribution of
maximal entropy. Journal of Computational Physics 30:250–258.
Aitchison, J. & Dunsmore, I.R. 1975. Statistical Prediction Analysis. Cambridge University
Press, Cambridge.
Akgul, F. & Frangopol, D.M. 2003. Probabilistic analysis of bridge networks based on sys-
tem reliability and Monte Carlo simulation. In Applications of Statistics and Probability
in Civil Engineering. A. Der Kiureghian, S. Madanat, & J.M. Pestana (eds), Rotterdam,
The Netherlands, pp. 1633–1637. Millpress.
Akiyama, H. 1985. Earthquake Resistant Limit-State Design for Buildings. University of Tokyo
Press, Tokyo, Japan.
Akpan, U.O., Rushton, P.A. & Koko, T.S. 2002. Fuzzy Probabilistic Assessment of the Impact
of Corrosion on Fatigue of Aircraft Structures, Paper AIAA-2002-1640.
Allen, J.J. 2005. Micro Electro Mechanical System Design. Boca Raton: Taylor and Francis.
600 References

Allen, M. & Maute, K. 2004. Reliability-based design optimization of aeroelastic structures.


Structural and Multidisciplinary Optimization 27(4):228–242.
Allen, M., Raulli, M., Maute, K. & Frangopol, D. 2004. Reliability-based analysis and design
optimization of electrostatically actuated MEMS. Computers and Structures 82(13–14):
1007–1020.
Ambartzumian, R., Der Kiureghian, A., Ohaniana, V. & Sukiasiana, H. 1998. Multinor-
mal probability by sequential conditioned importance sampling: Theory and application.
Probabilistic Engineering Mechanics 13(4):299–308.
Ananthasuresh, G.K., Kota, S. & Gianchandani, Y. 1994. A methodical approach to the design
of compliant micromechanisms. In Proc. IEEE Solid-State Sensor and Actuator Workshop,
Hilton Head Island, SC, pp. 189–192.
Ang, A.H.-S. & De Leon, D. 1997. Determination of optimal target reliabilities for design and
upgrading of structures. Structural Safety 19:91–103.
Ang, H.-S.A. & Lee, J.-C. 2001. Cost optimal design of R/C buildings. Reliability Engineering
and System Safety 73:233–238.
Ang, H.-S. & Tang, W.H. 1975. Probabilistic concepts in engineering planning and design, Vol. I
and II, Wiley.
Ang, A.H.-S. & Tang, W.H. 1984. Probability Concepts in Engineering Planning and Design,
Volume II; Decision, Risk, and Reliability, New York, John Wiley & Sons.
Aoues, Y. & Chateauneuf, A. 2007. Reliability-based optimization of structural sys-
tems by adaptive target safety application to RC frames. Structural Safety. Article in
Press.
Arias, A. 1970. A measure of earthquake intensity. In Seismic Design for Nuclear Power Plants,
R.J. Hansen (ed.), The MIT Press, Cambridge, MA, 438–469.
Atkinson, G.M. & Silva, W. 2000. Stochastic modeling of California ground motions. Bulletin
of the Seismological Society of America 90(2):255–274.
Attoh-Okine, N.O. 2002. Uncertainty analysis in structural number determination in
flexible pavement design – A convex model approach. Construction and Building Materials
16(2):67–71.
Au, S.K. 2005. Reliability-based design sensitivity by efficient simulation. Computers and
Structures 83:1048–1061.
Au, F.T.K., Cheng, Y.S., Tham, L.G. & Zeng, G.W. 2003. Robust design of structures using
convex models. Computers and Structures 81(28–29):2611–2619.
Au, S.K. & Beck, J.L. 1999. A new adaptive importance sampling scheme. Structural Safety
21:135–158.
Au, S.-K. & Beck, J.L. 2001. First excursion probabilities for linear systems by very efficient
importance sampling. Probabilistic Engineering Mechanics 16(3):193–207.
Au, S.K. & Beck, J.L. 2001. Estimation of small failure probabilities in high dimensions by
subset simulation. Probabilistic Engineering Mechanics 16(4):263–277.
Au, S.K. & Beck, J.L. 2003. Importance sampling in high dimensions. Structural Safety
25(2):139–163.
Au, S.K. & Beck, J.L. 2003. Subset simulation and its applications to seismic risk based on
dynamic analysis. Journal of Engineering Mechanics 129(8):901–917.
Augusti, G., Ciampoli, M. & Frangopol, D.M. 1998. Optimal planning of retrofitting
interventions on bridges in a highway network. Engineering Structures 20(11):933–939.
Elsevier.
Austrell, P.-E., Dahblom, O., Lindemann, J., Olsson, A., Olsson, K.-G., Persson, K.,
Petersson, H., Ristinmaa, M., Sandberg, G. & Wernbergk, P.-A. 1999. Structural Mechanics,
LTH, Sweden. CALFEM: A finite element toolbox to MATLAB, Version 3.3. http://www.
byggmek.lth.se/Calfem/index.htm.
References 601

Ayhan, H., Lim’on-Robles, J. & Wortman, M.A. 1999. An approach for computing tight
numerical bounds on renewal functions. IEEE Transactions on Reliability 48(2):182–188.
Ba-abbad, M.A., Nikolaidis, E. & Kapania, R.K. 2006. New approach for system reliability-
based design optimization. AIAA J. 44(5):1087–1096.
Bae, H.-R., Grandhi, R.V. & Canfield, R.A. 2004. An Approximation Approach for Uncer-
tainty Quantification Using Evidence Theory. Reliability Engineering and System Safety
86:215–225.
Bae, H.-R., Grandhi, R.V. & Canfield, R.A. 2004. Epistemic Uncertainty Quantification Tech-
niques Including Evidence Theory for Large-Scale Structures. Computers and Structures
82:1101–1112.
Barlow, R.E. & Proschan, F. 1965. Mathematical Theory of Reliability. New York: John
Wiley & Sons.
Barlow, R.E. & Proschan, F. 1975. Statistical Theory of Reliability and Life Testing: Probabilistic
Models. New York: Holt, Rinehart and Winston.
Barndorff-Nielsen, O. & Cox, D.R. 1979. Edgeworth and saddle-point approximations with
statistical applications. Journal of the Royal Statistical Society 41:279–312.
Beck, J.L., Chan, E., Irfanoglu, A. & Papadimitriou, C. 1999. Multi-criteria optimal structural
design under uncertainty. Earthquake Engineering and Structural Dynamics 28(7):741–761.
Beck, J.L. & Katafygiotis, L.S. 1998. Updating models and their uncertainties. I: Bayesian
statistical framework. Journal of Engineering Mechanics 124(4):455–461.
Bendsøe, M.P. 1989. Optimal shape design as a material distribution problem. Structural Optimization 1:193–202.
Bendsøe, M.P. & Kikuchi, N. 1988. Generating optimal topologies in structural design using a homogenization method. Comput. Methods Appl. Mech. Engrg. 71:197–224.
Bendsøe, M.P. & Sigmund, O. 1989. Topology Optimization: Theory, Methods and Applications. Springer-Verlag, Berlin.
Ben-Haim, Y. 1985. The Assay of Spatially Random Material. Dordrecht: D. Reidel Publishing
Company.
Ben-Haim, Y. 1996. Robust Reliability in the Mechanical Sciences. Berlin: Springer Verlag.
Ben-Haim, Y. 2001. Information-gap decision theory: Decisions under severe uncertainty.
New York: Academic Press, Inc.
Ben-Haim, Y. 2004. Uncertainty, probability and information-gaps. Reliability Engineering and
System Safety 85:249–266.
Ben-Haim, Y. 2005. Value at risk with Info-gap uncertainty. Journal of Risk Finance 6:388–403.
Ben-Haim, Y. 2005. Info-gap decision theory for engineering design. In Engineering Design
Reliability Handbook, E. Nikolaidis, D. Ghiocel & S. Singhal (eds), CRC Press.
Ben-Haim, Y. 2006. Information-gap Decision Theory: Decisions Under Severe Uncertainty.
2nd edition. London: Academic Press.
Ben-Haim, Y. & Elishakoff, I. 1990. Convex Models of Uncertainty in Applied Mechanics. New
York: Elsevier.
Ben-Haim, Y. & Elishakoff, I. 1996. Convex models of uncertainty in applied mechanics.
New York: Elsevier.
Ben-Haim, Y., Chen, G. & Soong, T.T. 1996. Maximum structural response using convex
models. ASCE J. Engineering Mechanics 122:325–333.
Benjamin, J.R. & Cornell, C.A. 1970. Probability, Statistics and Decision for Civil Engineers.
McGraw-Hill.
Bennett, J. & Botkin, M. 1985. Structural shape optimization with geometric description and
adaptive mesh refinement. AIAA Journal 23:458–464.
Ben-Tal, A. & Nemirovski, A. 2001. Lectures on Modern Convex Optimization: Analysis,
Algorithms, and Engineering Applications. Philadelphia: SIAM.
Ben-Tal, A. & Nemirovski, A. 2002. Robust optimization – methodology and applications.
Mathematical Programming B92:453–480.
Berens, A.P., Hovey, P.W. & Skinn, D.A. 1991. Risk Analysis for Aging Aircraft Fleets, Air
Force Wright Lab Report, WL-TR-91-3066, Vol. 1.
Bichon, B.J., Eldred, M.S., Swiler, L.P., Mahadevan, S. & McFarland, J.M. 2007. Multimodal
reliability assessment for complex engineering applications using efficient global optimiza-
tion. In Proceedings of the 9th AIAA Non-Deterministic Approaches Conference, Number
AIAA-2007-1946, April 23–26, Honolulu, HI.
Bjerager, P. 1988. Probability integration by directional simulation. Journal of Engineering
Mechanics 114(8):1288–1302.
Bjerager, P. 1990. On computational methods for structural reliability analysis. Structural Safety
9(2):79–96.
Bjerager, P. 1991. Methods for structural reliability computation. In Reliability problems:
general principles and applications in mechanics of solid and structures. F. Casciati (ed.),
New York: Springer-Verlag.
Boore, D.M. & Joyner, W.B. 1997. Site amplifications for generic rock sites. Bulletin of the
Seismological Society of America 87:327–341.
Boore, D.M. 2003. Simulation of ground motion using the stochastic method. Pure and Applied
Geophysics 160:635–676.
Box, G.E.P. & Cox, D.R. 1964. An analysis of transformations. J. Royal Stat. Soc. 26:211–252.
Boyd, S. & Vandenberghe, L. 2004. Convex Optimization. Cambridge: Cambridge
University Press.
Bozorgnia, Y. & Bertero, V.V. (eds) 2004. Earthquake engineering: from engineering seismology
to performance-based engineering. CRC Press Publications.
Breipohl, A.M. 1970. Probabilistic Systems Analysis, Wiley, New York.
Breitung, K. 1984. Asymptotic approximation for multi-normal integrals. ASCE Journal of
Engineering Mechanics 110(3):357–366.
Brill, E.D. 1979. The use of optimization models in public-sector planning. Management Science
25(5):413–422.
Brown, M. & Proschan, F. 1983. Imperfect repair. Journal of Applied Probability 20:851–859.
Bucher, C.G. 1988. Adaptive Sampling – An Iterative Fast Monte Carlo Procedure, Structural
Safety 5:119–126.
Bucher, C.G. & Bourgund, U. 1990. A fast and efficient response surface approach for structural
reliability problems. Structural Safety 7:57–66.
Bucher, C. & Frangopol, D.M. 2006. Optimization of lifetime maintenance strategies for deteri-
orating structures considering probabilities of violating safety, condition, and cost thresholds.
Probabilistic Engineering Mechanics 21(1):1–8. Elsevier.
Burke, J.V. 1991. Calmness and exact penalization. SIAM J. Control and Optimization
29(2):493–497.
Burks, A. 1970. Essays on Cellular Automata, Chapter Von Neumann’s self-reproducing
automata, pp. 3–64. University of Illinois Press.
Calafiore, G. & El Ghaoui, L. 2004. Ellipsoidal bounds for uncertain linear equations and
dynamical systems. Automatica 40:773–787.
Chan, K.Y., Kokkolaras, M., Papalambros, P.Y., Skerlos, S.J. & Mourelatos, Z. 2004. Propa-
gation of uncertainty in optimal design of multilevel systems: Piston-ring/cylinder-liner case
study. In Proceedings of the SAE World Congress, Detroit, Michigan, March 8–11. Paper No.
2004-01-1559.
Chandu, S. & Grandhi, R. 1995. General purpose procedure for reliability based structural
optimization under parametric uncertainties. Advances in Engineering Software 23:7–14.
Chateauneuf, A. & Noret, E. 2005. System reliability-based optimization of redundant timber
trusses. In Reliability and optimization of structural systems, J.D. Sørensen, (ed.), Proceedings
of the IFIP WG7.5 Working conference on reliability and optimization of structural systems,
Aalborg, Denmark.
Cheah, P.K., Fraser, D.A.S. & Reid, N. 1993. Some alternatives to Edgeworth. Canadian Journal
of Statistics 21:131–138.
Chen, L. & Rao, S.S. 1997. Fuzzy finite element approach for the vibration analysis of
imprecisely defined systems. Finite Elements in Analysis and Design 27:69–83.
Chen, S., Lian, H. & Yang, X. 2002. Interval static displacement analysis for structures
with interval parameters. International Journal for Numerical Methods in Engineering
53:393–407.
Chen, X. & Lind, N.C. 1983. Fast probability integration by three-parameter normal tail
approximation. Struct. Saf. 1:269–276.
Chen, X., Hasselman, T.K. & Neill, D.J. 1997. Reliability based structural design optimiza-
tion for practical applications. In Proceedings of the 38th AIAA/ASME/ASCE/AHS/ASC
structures, structural dynamics and material conference, Kissimmee, Florida, AIAA-97-1403.
Cheng, G., Xu, L. & Jiang, L. 2006. A sequential approximate programming strategy for
reliability-based structural optimization. Computers and Structures. Article in Press.
Cherng, R.H. & Wen, Y.K. 1994. Reliability of uncertain nonlinear trusses under random excitation, I and II. J. of Engineering Mechanics ASCE 120(4):733–757.
Chernousko, F.L. 1999. What is ellipsoidal modelling and how to use it for control and state
estimation? In Whys and Hows in Uncertainty Modelling, I. Elishakoff (ed.), pp. 127–188.
Wien: Springer-Verlag.
Ching, J. & Hsieh, Y.-H. 2007. Local estimation of failure probability function and its con-
fidence interval with maximum entropy principle. Probabilistic Engineering Mechanics 22:
39–49.
Ching, J. & Hsu, W.-C. 2006. Transforming reliability limit state constraints into deterministic
limit state constraints. Structural Safety. Article in Press.
Cho, M. & Rhee, S.Y. 2003. Layup optimization considering free-edge strength and bounded
uncertainty of material properties. AIAA Journal 41(11):2274–2282.
Cho, M. & Rhee, S.Y. 2004. Optimization of laminates with free edges under bounded
uncertainty subject to extension, bending and twisting. International Journal of Solids and
Structures 41(1):227–245.
Choi, K.K. & Youn, B.D. 2002. On probabilistic approaches for reliability-based design optimization. In 9th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and
Optimization, September 4–6, Atlanta, GA, USA.
Choi, K.K., Youn, B.D. & Yang, R. 2001. Moving least square method for reliability-
based design optimization. In Proceedings of the Fourth World Congress of Structural and
Multidisciplinary Optimization (WCSMO-4), June 4–8, Dalian, China.
Choi, K.K., Du, L. & Youn, B.D. 2004. A New Fuzzy Analysis Method for Possibility-Based
Design Optimization. In 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization
Conference, AIAA 2004-4585, Albany, NY.
Christian, J.T. & Baecher, G.B. 1998. Point-estimate method and numerical quadrature. Journal
of Geotechnical and Geoenvironmental Engineering 125:779–786.
Clarke, F. 1983. Optimization and nonsmooth analysis. New York: John Wiley & Sons.
Coello-Coello, C.A. 2000. An updated survey of GA-based multi-objective optimization
techniques. ACM Computing Surveys 32(2):109–143.
Cox, D.R. 1962. Renewal Theory. Monographs on Applied Probability and Statistics. London:
Chapman & Hall.
Cox, D.R. & Isham, V. 1980. Point Processes. Monographs on Applied Probability and
Statistics. London: Chapman & Hall.
Cramer, H. & Leadbetter, M.R. 1967. Stationary and Related Stochastic Processes. New York:
John Wiley & Sons.
Creveling, C.M. 1997. Tolerance design: A handbook for developing optimal specifications.
Cambridge, MA: Addison-Wesley.
Cunha, S.B., De Souza, A.P.F., Nicolleti, E.S.M. & Aguiar, L.D. 2006. A Risk-Based Inspec-
tion Methodology to Optimize Pipeline In-Line Inspection Programs. Journal of Pipeline
Integrity, 5: Q2.
Dailey, R. 1989. Eigenvector derivatives with repeated eigenvalues. AIAA Journal 27:
486–491.
Daniels, H.E. 1954. Saddlepoint approximations in statistics. Annals of Mathematical Statistics
25:631–650.
Deak, I. 1980. Three digit accurate multiple normal probabilities. Numerische Mathematik
35:369–380.
Deqing, Y., Yunkang, S., Zhengxing, L. & Huanchun, S. 2000. Topology optimization design
of continuum structures under stress and displacement constraints. Applied Mathematics and
Mechanics 21:1–26.
Der Kiureghian, A. 1996. Structural reliability methods for seismic safety assessment: a review.
Engineering Structures 18(6):412–424.
Der Kiureghian, A. 2005. First- and Second-Order Reliability Methods, Chapter 14 In
Engineering Design Reliability Handbook, E. Nikolaidis, D.M. Ghiocel & S. Singhal (eds),
CRC Press, Boca Raton, FL.
Der Kiureghian, A. & Dakessian, T. 1998. Multiple Design Points in First and Second-order
Reliability. Structural Safety 20:37–50.
Der Kiureghian, A. & Liu, P.L. 1986. Structural reliability under incomplete probability
information. J. Eng. Mech. ASCE 112(1):85–104.
Der Kiureghian, A., Lin, H.-Z. & Hwang, S.-J. 1987. Second-order reliability approximations.
ASCE Journal of Engineering Mechanics 113(8):1208–1225.
Der Kiureghian, A. & Polak, E. 1988. Reliability-based optimal design: a decoupled approach.
In Reliability and optimization of structural systems. A.S. Nowak (ed.), Proceedings of the
8th IFIP WG7.5 Working Conference on reliability and optimization of structural systems,
Chelsea, MI, USA: Book Crafters. pp. 197–205.
Der Kiureghian, A. & Zhang, Y. 1999. Space-variant finite element reliability analysis. Comput.
Methods Appl. Mech. Engrg. 168:173–183.
D’Errico, J.R. & Zaino, N.A. 1988. Statistical tolerancing using a modification of Taguchi’s
method. Technometrics 30(4):397–405.
Diaz, A. & Kikuchi, N. 1992. Solution to shape and topology eigenvalue optimization problems
using a homogenization method. International Journal for Numerical Methods in Engineering
35:1487–1502.
Ditlevsen, O. 1979. Narrow reliability bounds for structural systems. Journal of Structural
Mechanics 7:453–472.
Ditlevsen, O. & Madsen, H.O. 1996. Structural reliability methods. New York, New York:
Wiley.
Ditlevsen, O., Bjerager, P., Olesen, R. & Hasofer, A.M. 1989. Directional Simulation in Gaussian
Space. Probabilistic Engineering Mechanics 3(4):207–217.
Ditlevsen, O., Olesen, R. & Mohr, G. 1987. Solution of a class of load combination problems
by directional simulation. Structural Safety 4:95–109.
Doltsinis, I. 1999. Elements of plasticity – Theory and computation. WIT Press, Southampton.
Doltsinis, I. (ed.), 1999. Stochastic Analysis of Multivariate Systems in Computational
Mechanics and Engineering. CIMNE, Barcelona.
Doltsinis, I. 2003. Inelastic deformation processes with random parameters – methods of analysis
and design. Comput. Methods Appl. Mech. Engrg. 192:2405–2423.
Doltsinis, I. 2003. Large deformation processes of solids – From fundamentals to computer
simulation and engineering applications. WIT Press, Southampton.
Doltsinis, I. & Kang, Z. 2004. Robust design of structures using optimization methods. Comput.
Methods Appl. Mech. Engrg. 193:2221–2237.
Doltsinis, I. & Rodic T. 1999. Process design and sensitivity analysis in metal forming. Int. J.
Numer. Meth. Engrg. 45:661–692.
Doltsinis, I., Kang, Z. & Cheng, G. 2005. Robust design of non-linear structures using
optimization methods. Comput. Methods Appl. Mech. Engrg. 194:1779–1795.
Drenick, R.F. 1970. Model-free design of aseismic structures. J. Engrg. Mech. Div. ASCE
96(EM4):483–493.
Drezner, Z. & Erkut, E. 1995. Solving the continuous p-dispersion problem using nonlinear
programming. The Journal of the Operational Research Society 46(4):516–520.
Du, X. & Chen, W. 2000. An Integrated Methodology for Uncertainty Propagation and
Management in Simulation-Based Systems Design. AIAA Journal 38(8):1471–1478.
Du, X. & Chen, W. 2002. Sequential optimization and reliability assessment method for effi-
cient probabilistic design. In ASME Design engineering technical conferences and computers
and information in engineering conference, DETC2002/DAC-34127, Montreal, Canada.
Du, X. & Chen, W. 2004. Sequential optimization and reliability assessment for efficient
probabilistic design. ASME Journal of Mechanical Design 126(2):225–233.
Du, X., Sudjianto, A. & Huang, B. 2005. Reliability-Based Design with a Mixture of Random
and Interval Variables. ASME Journal of Mechanical Design 127:1068–1076.
Dubois, D. & Prade, H. 1988. Possibility Theory. New York: Plenum Press.
Dunnett, C.W. & Sobel, M. 1955. Approximations to the probability integral and certain percentage points of a multivariate analogue of Student’s t-distribution. Biometrika 42:
258–260.
Duysinx, P. 1997. Layout optimization: A mathematical programming approach. Technical
Report DCAMM report No. 540, University of Liege.
EC1. 2003. Eurocode 1: Basis of design and actions on structures – Part 2–4: Actions on
Structures – Wind actions. CEN, ENV 1991-2-4, European Committee for Standardization,
Brussels.
EC3. 2003. Eurocode 3: Design of steel structures, Part 1–1: General rules for buildings. CEN,
ENV 1993-1-1/1992, European Committee for Standardization, Brussels.
EC5. 2005. Eurocode 5: Design of timber structures; Part 1–1: general rules and rules for
buildings. EN 1995-1-1, European Committee for Standardization, Brussels.
EC8. 2004. Eurocode 8: Design of structures for earthquake resistance – Part 1. European
standard. CEN-ENV-1998-1, European Committee for Standardization, Brussels.
Eldred, M., Giunta, A. & Collis, S. 2004. Second-order corrections for surrogate-based opti-
mization with model hierarchies. In AIAA 2004-44570, 10th AIAA/ISSMO Multidisciplinary
Analysis and Optimization Conference, 30 August–1 September, Albany, NY.
Eldred, M.S. & Bichon, B.J. 2006. Second-order reliability formulations in DAKOTA/UQ. In
Proceedings of the 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and
Materials Conference, Number AIAA-2006-1828, May 1–4, Newport, RI.
Eldred, M.S. & Dunlavy, D.M. 2006. Formulations for surrogate-based optimization with data
fit, multifidelity, and reduced-order models. In Proceedings of the 11th AIAA/ISSMO Mul-
tidisciplinary Analysis and Optimization Conference, Number AIAA-2006-7117, September
6–8, Portsmouth, VA.
Eldred, M.S., Adams, B.M., Copps, K.D., Carnes, B., Notz, P.K., Hopkins, M.M. &
Wittwer, J.W. 2007. Solution-verified reliability analysis and design of compliant micro-
electro-mechanical systems. In Proceedings of the 9th AIAA Non-Deterministic Approaches
Conference, Number AIAA-2007-1934, April 23–26, Honolulu, HI.
Eldred, M.S., Agarwal, H., Perez, V.M., Wojtkiewicz, S.F. Jr. & Renaud, J.E. 2007. Investigation
of reliability method formulations in DAKOTA/UQ. Structure & Infrastructure Engineering:
Maintenance, Management, Life-Cycle Design & Performance. To appear.
Eldred, M.S., Brown, S.L., Adams, B.M., Dunlavy, D.M., Gay, D.M., Swiler, L.P., Giunta, A.A.,
Hart, W.E., Watson, J.-P., Eddy, J.P., Griffin, J.D., Hough, P.D., Kolda, T.G., Martinez-
Canales, M.L. & Williams, P.J. 2006. DAKOTA, a multilevel parallel object-oriented
framework for design optimization, parameter estimation, uncertainty quantification, and
sensitivity analysis: Version 4.0 users manual. Technical Report SAND2006-6337, Sandia
National Laboratories, Albuquerque, NM.
Eldred, M.S., Giunta, A.A., Wojtkiewicz, S.F. Jr. & Trucano, T.G. 2002. Formulations for
surrogate-based optimization under uncertainty. In Proceedings of the 9th AIAA/ISSMO
Symposium on Multidisciplinary Analysis and Optimization, Number AIAA-2002-5585,
September 4–6, Atlanta, GA.
Elishakoff, I. 1991. Essay on reliability index, probabilistic interpretation of safety factor and
convex models of uncertainty. In Reliability Problems: General principles and Applications
in Mechanics of Solids and Structures. F. Casciati & J.B. Roberts (eds), pp. 237–271. Wien:
Springer-Verlag.
Elishakoff, I. 1995. An idea on the uncertainty triangle. The Shock and Vibration Digest
22(10):1–1.
Elishakoff, I. 1999. Are probabilistic and anti-optimization approaches compatible? In Whys
and Hows in Uncertainty Modelling, I. Elishakoff (ed.), pp. 263–355. Wien: Springer-
Verlag.
Elishakoff, I. 2005. Safety Factors and Reliability: Friends or Foes? New York: Kluwer.
Elishakoff, I., Haftka, R.T. & Fang, J. 1994. Structural design under bounded uncertainty – Optimization with anti-optimization. Computers and Structures 53(6):1401–1405.
Elishakoff, I. & Ren, Y. 2003. Finite Element Methods for Structures with Large Stochastic
Variations. Oxford: Oxford University Press.
Elseifi, M.A., Gurdal, Z. & Nikolaidis, E. 1998. Convex and probabilistic models of
uncertainties in geometric imperfections of stiffened composite panels. In AIAA/ASME/
ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference and Exhibit and
AIAA/ASME/AHS Adaptive Structures Forum, Anon (ed.), Long Beach, CA, USA, April 20–23,
Part 2 (of 4):1131–1140.
Enevoldsen, I. 1994. Reliability-based optimization as an information tool. Mech. Struct. &
Mach 22:117–135.
Enevoldsen, I. 1994. Sensitivity Analysis of a Reliability-Based Optimal Solution. ASCE Journal
of Engineering Mechanics 120(1):198–205.
Enevoldsen, I. & Sørensen, J. 1992. Optimization algorithms for calculation of the joint
design point in parallel systems. Structural and Multidisciplinary Optimization 4(2):
121–127.
Enevoldsen, I. & Sørensen, J.D. 1993. Reliability-based optimization of series systems of parallel
systems. ASCE Journal of Structural Engineering 119(4):1069–1084.
Enevoldsen, I. & Sørensen, J. 1994. Reliability-based optimization as an information tool.
Mechanics of Structures & Machines 22(1):117–135.
Enevoldsen, I. & Sørensen, J. 1994. Reliability-based optimization in structural engineering.
Structural Safety 15:169–196.
Enevoldsen, S. & Sørensen, J.D. 1998. A Probabilistic Model for Chloride-Ingress and Initiation
of Corrosion in Reinforced Concrete Structures. Structural Safety 20:69–89.
Engels, H. 1980. Numerical quadrature and cubature. New York: Academic Press.
Engelund, S. 1997. Probabilistic models and computational methods for chloride ingress in
concrete. Ph.D. Thesis, Dept. of Building Technology and Structural Engineering, Aalborg
University.
Engelund, S. & Sørensen, J.D. 1998. A Probabilistic Model for Chloride-ingress and Initiation
of Corrosion in Reinforced Concrete Structures. Structural Safety 20:69–89.
Enright, M.P. & Wu, Y.-T. 1999. Probabilistic Fatigue Life Sensitivity Analysis of Titanium
Rotors. In Proceedings of the AIAA 41st SDM Conference, Atlanta, GA.
Er, G.K. 1998. A method for multi-parameter PDF estimation of random variables.
Structural Safety 20:25–36.
Eschenauer, H.A. & Olhoff, N. 2001. Topology optimization of continuum structures:
A review. Applied Mechanics Reviews 54:331–390.
Estes, A.C. & Frangopol, D.M. 1999. Repair optimization of highway bridges using system
reliability approach. Journal of Structural Engineering ASCE 125(7):766–775.
Estes, A.C. & Frangopol, D.M. 2001. Minimum expected cost-oriented optimal mainte-
nance planning for deteriorating structures: Application to concrete bridge decks, Reliability
Engineering & System Safety 73(3):281–291.
Estes, A.C. & Frangopol, D.M. 2005. Life-cycle evaluation and condition assessment of struc-
tures, Chapter 36 In Structural Engineering Handbook, 2nd Edition, W.-F. Chen & E.M. Lui
(eds), CRC Press, 36-1 to 36-51.
Evans, D.H. 1972. An application of numerical integration techniques to statistical tolerancing,
III – general distributions. Technometrics 14:23–35.
Everett, R.A. Jr. 2002. Crack-Growth Characteristics of Fixed and Rotary Wing Aircraft. In 6th
Joint FAA/DoD/NASA Aging Aircraft Conference.
Faber, M.H., Engelund, S., Sørensen, J.D. & Bloch, A. 2000. Simplified and generic risk based
inspection planning. Proc. OMAE2000, New Orleans.
Fadel, G.M., Riley, M.F. & Barthelemy, J.-F.M. 1990. Two point exponential approximation
method for structural optimization. Structural Optimization 2(2):117–124.
Faravelli, L. 1989. Response surface approach for reliability analysis. ASCE Journal of
Engineering Mechanics 115(12):2763–2781.
Feng, Y.S. & Moses, F. 1986. A method of structural optimization based on structural system
reliability. J. Struct. Mech. 14:437–453.
Field, R. 2002. Numerical methods to estimate the coefficients of the polynomial chaos
expansion. In 15th Engineering Mechanics Conference, Columbia University, NY. ASCE.
Field, R., Red-Horse, J. & Paez, T. 2000. A nondeterministic shock and vibration application
using polynomial chaos expansions. In 8th ASCE Joint Specialty Conference on Probabilistic
Mechanics and Structural Reliability, South Bend, IN.
Fiessler, B., Neumann, H. & Rackwitz, R. 1979. Quadratic limit states in structural reliability.
ASCE Journal of Engineering Mechanics 105(4):661–676.
Forth, S.C., Everett, R.A. Jr. & Newman, J.A. 2002. A Novel Approach to Rotorcraft Damage
Tolerance. 6th Joint FAA/DoD/NASA Aging Aircraft Conference.
Fox, B. 1966. Age replacement with discounting. Operations Research 14(3):533–537.
Frangopol, D.M. 1984. A reliability-based optimization technique for automatic plastic design,
Computer Methods in Applied Mechanics and Engineering 44:105–117.
Frangopol, D.M. 1984. Interactive reliability-based structural optimization. Computers and
Structures 19(4):559–563.
Frangopol, D.M. 1985. Multicriteria reliability-based structural optimization. Structural Safety
3(1):23–28.
Frangopol, D.M. 1985. Sensitivity of reliability-based optimum design, Journal of Structural
Engineering ASCE 111(8):1703–1721.
Frangopol, D.M. 1985. Structural optimization using reliability concepts. Journal of Structural
Engineering ASCE 111(11):2288–2301.
Frangopol, D.M. 1990. System reliability in structural analysis, design, and optimization,
Structural Safety (Special Issue), D.M. Frangopol (guest ed.), 7(2–4):83–309.
Frangopol, D.M. 1995. Reliability-based structural design. Chapter 16 In Probabilistic Struc-
tural Mechanics Handbook, C.R. Sundararajan (ed.), pp. 352–387. New York: Chapman &
Hall.
Frangopol, D.M. 1997. How to incorporate reliability in structural optimization, Chapter 11
In ASCE Manual on Engineering Practice No. 90: Guide to Structural Optimization,
J.S. Arora (ed.), ASCE, New York, 211–235.
Frangopol, D.M. 1998. Probabilistic structural optimization. Progress in Structural Engineering
and Materials 1(2):223–230.
Frangopol, D.M. (ed.) 1998. Optimal Performance of Civil Infrastructure Systems ASCE (ISBN
0-7844-0315-5), Reston, Virginia, 222 pages.
Frangopol, D.M. (ed.) 1999. Case Studies in Optimal Design and Maintenance Planning of
Civil Infrastructure Systems ASCE (ISBN 0-7844-0420-8), Reston, Virginia, 272 pages.
Frangopol, D.M. 1999. Life-cycle cost analysis for bridges. In Bridge safety and reliability.
ASCE, Reston, Virginia, 210–236.
Frangopol, D.M. 2000. Advances in life-cycle reliability-based technology for design and main-
tenance of structural systems. In Computational mechanics for the twenty-first century.
Edinburgh: Saxe-Coburg Publishers, pp. 257–270.
Frangopol, D.M. & Cheng, F.Y. (eds). 1997. Advances in Structural Optimization, ASCE (ISBN
0-7844-0221-3), New York, 225 pages.
Frangopol, D.M. & Corotis, R.B. 1994. Reliability-based structural system assessment, design
and optimization. Structural Safety (Special Issue), D.M. Frangopol & R.B. Corotis. (guest
eds), Elsevier, 16(1+2):1–174.
Frangopol, D.M. & Corotis, R. 1996. Reliability-based structural system optimization: State-of-the-art versus state-of-the-practice. In Analysis and Computation: Proceedings of the Twelfth
Conference held in Conjunction with Structures Congress XIV, Cheng (ed.), pp. 67–78.
Frangopol, D.M. & Estes, A.C. 1999. Optimum lifetime planning of bridge inspection and
repair programs, Structural Engineering International, Journal of IABSE 9(3):219–223.
Frangopol, D.M. & Furuta, H. (eds) 2001. Life-Cycle Cost Analysis and Design of Civil
Infrastructure Systems, ASCE (ISBN 0-7844-0571-9), Reston, Virginia, 336 pages.
Frangopol, D.M. & Guedes Soares, C. 2001. Reliability oriented optimal structural design, Reli-
ability Engineering & Systems Safety (Special Issue), D.M. Frangopol & C. Guedes Soares,
(guest eds), 73(3):195–306.
Frangopol, D.M. & Hendawi, S. 1994. Incorporation of corrosion effects in reliability-based
optimization of composite hybrid plate girders. Structural Safety 16(1+2):145–169.
Frangopol, D.M. & Liu, M. 2006. Multiobjective optimization of risk-based maintenance and
life-cycle cost of civil infrastructure. System Modeling and Optimization (text of Plenary Lec-
ture) E. Ceragioli, A. Dontchev, H. Furuta, K. Marti, & L. Pandolfi (eds), Boston, Springer,
123–136.
Frangopol, D.M. & Liu, M. 2007. Maintenance and management of civil infrastructure based on
condition, safety, optimization, and life-cycle cost. Structure and Infrastructure Engineering
3(1):29–41.
Frangopol, D.M. & Maute, K. 2003. Life-cycle reliability-based optimization of civil and
aerospace structures. Computers & Structures 81(7):397–410 (invited review article).
Frangopol, D.M. & Maute, K. 2004. Reliability-Based Optimization of Civil and Aerospace
Structural Systems. Engineering Design Reliability Handbook. Chapter 24, CRC Press, Boca
Raton, Florida.
Frangopol, D.M. & Maute, K. 2005. Reliability-based optimization of civil and aerospace
structural systems, Chapter 24 In Engineering Design Reliability Handbook, E. Nikolaidis,
D.M. Ghiocel & S. Singhal (eds), CRC Press, Boca Raton, 24-1 to 24-32.
Frangopol, D.M. & Moses, F. 1994. Reliability-based structural optimization, Chapter 13 In
Advances in Design Optimization, H. Adeli (ed), Chapman & Hall, London, pp. 492–570.
Frangopol, D.M. & Neves, L.C. 2004. Probabilistic maintenance and optimization strategies for
deteriorating civil infrastructures, Chapter 14 In Progress in Computational Structures Tech-
nology, B.H.V. Topping & C.A. Mota Soares (eds), Saxe-Coburg Publ., Stirling, Scotland, pp.
353–377.
Frangopol, D.M. & Rondal, J. 1977. Reliability-based structural optimization. Het Ingenieurs-
blad, Acta Technica Belgica, Anvers, Belgium, 46(7):189–195.
Frangopol, D.M. & Rondal, J. 1978. Optimum probabilistic design of structures (in French),
Annales Institut Technique du Batiment et des Travaux Publics, No. 363, Serie: Theories et
Methodes de Calcul, France, 218:23–30.
Frangopol, D.M., Brühwiler, E., Faber, M.H. & Adey, B. (eds) 2004. Life-Cycle Perfor-
mance of Deteriorating Structures: Assessment, Design and Management, ASCE (ISBN
0-7844-0707-X), Reston, Virginia, 456 pages.
Frangopol, D.M., Corotis, R.B. & Rackwitz, R. (eds) 1997. Reliability and Optimization of
Structural Systems, Pergamon (ISBN 0-08-042826-6), Elsevier, 363 pages.
Frangopol, D.M., Estes, A.C., Augusti, G. & Ciampoli, M. 1997. Optimal bridge management
based on lifetime reliability and life-cycle cost, Chapter 8 In Optimal Performance of Civil
Infrastructure Systems, D.M. Frangopol (ed), ASCE, New York, 98–115.
Frangopol, D.M., Gharaibeh, E.S., Kong, J.S. & Miyake, M. 2000. Optimal network-level
bridge maintenance planning based on minimum expected cost, Journal of the Transporta-
tion Research Board, Transportation Research Record, 1696(2), National Academy Press,
26–33.
Frangopol, D.M., Klisinski, M. & Iizuka, M. 1991. Optimization of damage-tolerant structural
systems. Computers and Structures 40(5):1085–1095.
Frangopol, D.M., Lin, K-Y. & Estes, A.C. 1997. Life-cycle cost design of deteriorating structures.
Journal of Structural Engineering ASCE 123(10):1390–1401.
Frangopol, D.M., Miyake, M., Kong, J.S., Gharaibeh, E.S. & Estes, A.C. 2002. Reliability-
and cost-oriented optimal bridge maintenance planning. Chapter 10 In Recent Advances in
Optimal Structural Design, S. Burns (ed.), ASCE, Reston, Virginia, pp. 257–270.
Fu, G. & Frangopol, D.M. 1990. Balancing weight, system reliability and redundancy in a
multiobjective optimization framework. Structural Safety 7(2–4):165–175.
Fu, G. & Frangopol, D.M. 1990. Reliability-based vector optimization of structural systems.
Journal of Structural Engineering ASCE 116(8):2143–2161.
Fujimoto, Y., Itagaki, H., Itoh, S., Asada, H. & Shinozuka, M. 1989. Bayesian Reliability
Analysis of Structures with Multiple Components. Proceedings ICOSSAR 89, pp. 2143–2146.
Fujita, M., Schall, G. & Rackwitz, R. 1989. Adaptive Reliability Based Inspection Strategies for
Structures Subject to Fatigue. Proceedings ICOSSAR 89, pp. 1619–1626.
Furuta, H., Kameda, T., Nakahara, K., Takahashi, Y. & Frangopol, D.M. 2006. Optimal
bridge maintenance planning using improved multi-objective genetic algorithm, Structure and
Infrastructure Engineering 2(1):33–41.
Gamerman, D. 1997. Markov Chain Monte Carlo. Chapman & Hall.
Ganzerli, S., De Palma, P., Stackle, P. & Brown, A. 2005. Info-gap uncertainty in structural optimization via genetic algorithms. In Proceedings of the 9th International Conference on Structural Safety and Reliability (ICOSSAR05), G. Augusti, G.I. Schuëller & M. Ciampoli (eds), Rome, Italy, June 19–22. Rotterdam: Millpress, pp. 2325–2330.
Ganzerli, S., DePalma, P., Smith, J.D. & Burkhart, M.F. 2003. Efficiency of genetic algorithms for optimal structural design considering convex models of uncertainty. In Proceedings of the 9th International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP9), A. Der Kiureghian, S. Madanat & J.M. Pestana (eds), Berkeley, CA, July 6–9. Rotterdam: Millpress, pp. 1003–1010.
Ganzerli, S. & Burkhart, M.F. 2002. Genetic algorithms for optimal structural design using
convex models of uncertainties. In Proceedings of the 4th International Conference on Com-
putational Stochastic Mechanics (CSM4); P. Spanos & G. Deodatis (eds), Kerkyra (Corfu),
Greece, 9–12 June. Rotterdam: Millpress.
Ganzerli, S. & Pantelides, C.P. 2000. Optimum structural design via convex model superposi-
tion. Computers and Structures 74(6):639–647.
Gasser, M. & Schuëller, G.I. 1997. Reliability-based optimization of structural systems.
Mathematical Methods of Operations Research 46:287–307.
Gasser, M. & Schuëller, G.I. 1998. Some basic principles in reliability-based optimization (RBO)
of structures and mechanical components. In Stochastic programming methods and technical
applications, K. Marti & P. Kall (eds), Lecture Notes in Economics and Mathematical Systems, Vol. 458, Springer-Verlag, Berlin, Germany.
Gayton, N., Mohamed-Chateauneuf, A., Sørensen, J.D., Pendola, M. & Lemaire, M. 2004.
Calibration methods for reliability-based design codes. Structural Safety 26(1):91–121.
Gentle, J.E. 1998. Random Number Generation and Monte Carlo Methods. Springer-Verlag,
New York.
Genz, A. 1992. Numerical computation of multivariate normal probabilities. Journal of
Computational and Graphical Statistics 1:141–149.
Ghanem, R.G. & Spanos, P.D. 1991. Stochastic Finite Elements: A Spectral Approach. Springer,
Berlin.
Ghasemi, M.R., Hinton, E. & Wood, R.D. 1999. Optimization of trusses using genetic
algorithms for discrete and continuous variables. Engineering Computations 16(3):272–301.
Gill, P.E., Murray, W., Saunders, M.A. & Wright, M.H. 1998. User’s guide for NPSOL 5.0: A Fortran package for nonlinear programming. Technical Report SOL 86-1, System Optimization
Laboratory, Stanford University, Stanford, CA.
Gill, P.E., Murray, W. & Wright, M.H. 1981. Practical Optimization. Academic Press.
Giunta, A.A. & Eldred, M.S. 2000. Implementation of a trust region model management strategy
in the DAKOTA optimization toolkit. In Proceedings of the 8th AIAA/USAF/NASA/ISSMO
Symposium on Multidisciplinary Analysis and Optimization, Number AIAA-2000-4935,
September 6–8, Long Beach, CA.
Glasserman, P. & Yao, D.D. 1992. Some guidelines and guarantees for common random
numbers. Management Science 38:884–908.
Goldberg, D.E. 1989. Genetic algorithms in search, optimization and machine learning.
New York: Addison-Wesley.
Gollwitzer, S. & Rackwitz, R. 1983. Equivalent components in first-order system reliability.
Reliability Engineering 5:99–115.
Gollwitzer, S. & Rackwitz, R. 1988. An efficient numerical solution to the multinormal integral.
Probabilistic Engineering Mechanics 3(2):98–101.
Goulet, C.A., Haselton, C.B., Mitrani-Reiser, J., Beck, J.L., Deierlein, G., Porter, K.A. &
Stewart, J.P. 2007. Evaluation of the seismic performance of a code-conforming reinforced-concrete frame building – From seismic hazard to collapse safety and economic losses.
Earthquake Engineering and Structural Dynamics. Article in Press.
Grandhi, R. & Wang, L. 1998. Reliability-based structural optimization using improved
two-point adaptive nonlinear approximations. Finite Elements in Analysis and Design 35–48.
Greenwood, W.H. & Chase, K.W. 1990. Root sum squares tolerance analysis with nonlinear
problems. ASME Journal of Engineering for Industry 112:382–384.
Gu, X., Renaud, J.E. & Batill, S.M. 1998. An Investigation of Multidisciplinary Design
Subject to Uncertainties. In 7th AIAA/USAF/NASA/ISSMO Multidisciplinary Analysis &
Optimization Symposium, St. Louis, Missouri.
Guan, X.L. & Melchers, R. 2001. Effect of response surface parameter variation on structural
reliability estimates. Structural Safety 23:429–444.
GUCEA, 2007. Gonzaga University Center for Evolutionary Algorithms (GUCEA), http://www.
cs.gonzaga.edu/gucea/.
Gupta, A. & Krawinkler, H. 2000. Behavior of ductile SMRFs at various seismic hazard levels.
ASCE Journal of Structural Engineering 126(1):98–107.
Gurdal, Z. & Tatting, B. 2000. Cellular automata for design of truss structures with linear and
nonlinear response. In Proceedings of the 41st AIAA Structures, Structural Dynamics, and
Materials Conference, Number 2000-1580, April 3–6, Atlanta, Georgia.
Haftka, R.T. & Kamat, M.P. 1985. Elements of Structural Optimization. Martinus Nijhoff, The
Hague.
Haftka, R.T., Gurdal, Z. & Kamat, M.P. 1990. Elements of Structural Optimization. Dordrecht:
Kluwer Academic Publishers.
Hahn, G.J. & Shapiro, S.S. 1967. Statistical Models in Engineering. John Wiley & Sons:
New York.
Haimes, Y.Y., Tarvainen, K., Shima, T. & Thadathil, J. 1990. Hierarchical Multiobjective
Analysis of Large-Scale Systems. Hemisphere Publishing Corporation, pp. 41–42.
Hajela, P. & Kim, B. 2001. On the use of energy minimization for CA based analysis in elasticity.
Struct. Multidisc. Optim. 23:24–33.
Haldar, A. & Mahadevan, S. 2001. Probability, Reliability and Statistical Methods in
Engineering Design. Wiley.
Hamming, R.W. 1973. Numerical Methods for Scientists and Engineers. New York: Dover
Publications.
Hansen, E. & Walster, G.W. 2004. Global Optimization using Interval Analysis. New York:
Marcel Dekker, Inc.
Harbitz, A. 1986. An Efficient Sampling Method for Probability of Failure Calculation.
Structural Safety 3:109–115.
Harkness, H.H., Fleming, M., Moran, B. & Belytschko, T. 1994. Fatigue Reliability With In-
Service Inspections. FAA/NASA International Symposium on Advanced Structural Integrity
Methods for Airframe Durability and Damage Tolerance.
Harr, M. 1989. Probabilistic estimates for multivariate analysis. Applied Mathematical
Modelling 13:313–318.
Hasofer, A. 1974. Design for infrequent overloads. Earthquake Engineering and Structural
Dynamics 2(4).
Hasofer, A.M. & Lind, N.C. 1974. Exact and invariant second-moment code format. ASCE Journal
of Engineering Mechanics 100(1):111–121.
Hasofer, A.M. & Rackwitz, R. 2000. Time-dependent models for code optimization. In Pro-
ceedings of the 8th International Conference on Applications of Statistics and Probability
(ICASP8), R.E. Melchers & M.G. Stewart (eds), Sydney, Australia, December 1999, Volume
1, Rotterdam/Brookfield, pp. 151–158. CERRA: A.A. Balkema.
Haug, E.J. & Choi, K.K. 1982. Systematic occurrence of repeated eigenvalues in structural
optimization. Journal of Optimization Theory and Applications 38:251–274.
Haupt, R.L. & Haupt, S.E. 1998. Practical Genetic Algorithms. New York: John Wiley & Sons.
He, L. & Polak, E. 1990. Effective diagonalization strategies for the solution of a class of optimal
design problems. IEEE Transactions on Automatic Control 35(3):258–267.
Helmberg, C. 2002. Semidefinite programming. European Journal of Operational Research
137:461–482.
Hernandez, S. 1990. Métodos de diseño óptimo de estructuras. Madrid: Colegio de Ingenieros
de Caminos, Canales y Puertos.
Hisada, T. & Nakagiri, S. 1981. Stochastic finite element method developed for structural safety
and reliability. In Proceedings of the Third International Conference on Structural Safety and
Reliability, pp. 395–408. Rotterdam: Elsevier.
Hohenbichler, M. & Rackwitz, R. 1981. Non-normal dependent vectors in structural safety.
ASCE Journal of the Engineering Mechanics Division 107(6):1227–1249.
Hohenbichler, M. & Rackwitz, R. 1983. First-order concepts in system reliability. Structural
Safety 1(3):177–188.
Hohenbichler, M. & Rackwitz, R. 1986. Sensitivity and importance measures in structural
reliability. Civil Eng. Syst. 3:203–209.
Hohenbichler, M. & Rackwitz, R. 1988. Improvement of Second-order Reliability Estimates by
Importance Sampling. J. Eng. Mech. ASCE 114(12):2195–2199.
Holicky, M. & Markova, J. 2003. Reliability analysis of impacts due to road vehicles. In Appli-
cations of Statistics and Probability in Civil Engineering, A. Der Kiureghian, S. Madanat &
J.M. Pestana (eds), Rotterdam, Netherlands, pp. 1645–1650. Millpress.
Holland, J.H. 1975. Adaptation in natural and artificial systems. Ann Arbor: The University of
Michigan Press.
Hong, H.P. 1998. An efficient point estimate method for probabilistic analysis. Reliability
Engineering and System Safety 59:261–267.
Hong, H.P. 1999. Simple approximations for improving second-order reliability estimates.
J. Eng. Mech. ASCE 125(5):592–595.
Hong, H.P. 1996. Point-estimate moment-based reliability analysis. Civil Engineering Systems
13(4):281–294.
Hong, H.P., Escobar, J.A. & Gomez, R. 1998. Probabilistic assessment of the seismic response of structural asymmetric models. In Proceedings of the Tenth European Conference
on Earthquake Engineering, Paris, 1998, Rotterdam. Balkema.
Housner, G.W. 1959. Behavior of structures during earthquakes. Journal of the Engineering
Mechanics Division ASCE 85(4):109–129.
Housner, G.W. & Jennings, P.C. 1975. The capacity of extreme earthquake motions to damage
structures. Structural and geotechnical mechanics: A volume honoring N.M. Newmark, W.J. Hall (ed.), pp. 102–116, Prentice-Hall, Englewood Cliffs, NJ.
Huh, J.S., Kim, K.H., Kang, D.W., Gweon, D.G. & Kwak, B.M. 2006. Performance evaluation
of precision nanopositioning devices caused by uncertainties due to tolerances using function
approximation moment method. Review of Scientific Instruments 77:015103.
Hurtado, J.E. 2001. Neural networks in stochastic mechanics. Archives of Computational
Methods in Engineering 8:303–342.
Hurtado, J.E. 2004. An examination of methods for approximating implicit limit state functions
from the viewpoint of statistical learning theory. Structural Safety 26:271–293.
Hurtado, J.E. 2004. Structural Reliability. Statistical Learning Perspectives. Heidelberg:
Springer.
Hurtado, J.E. 2006. Optimal reliability-based design using support vector machines and arti-
ficial life algorithms. In Intelligent Computational Paradigms in Earthquake Engineering,
N.D. Lagaros & Y. Tsompanakis (eds), Hershey: Idea Group Inc.
Hurtado, J.E. 2007. Filtered importance sampling with support vector margin: a powerful
method for structural reliability analysis. Structural Safety 29:2–15.
Hurtado, J.E. & Alvarez, D.A. 2001. Neural network-based reliability analysis: A comparative
study. Computer Methods in Applied Mechanics and Engineering 191:113–132.
Hurtado, J.E. & Barbat, A. 1998. Fourier-based maximum entropy method in stochastic
dynamics. Structural Safety 20:221–235.
Igusa, T. & Wan, Z. 2003. Response surface methods for optimization under uncertainty. In
Proceedings of the 9th International Conference on Applications of Statistics and Probability,
A. Der Kiureghian, S. Madanat & J. Pestana (eds), San Francisco, California.
Igusa, T. & Der Kiureghian, A. 1988. Response of uncertain systems to stochastic excitations.
J. Eng. Mech. ASCE 114(5):812–832.
Inou, N., Shimotai, N. & Uesugi, T. 1994. Cellular automaton generating topological struc-
tures. In 2nd European Conference on Smart Structures and Materials, Number 2361-08, October 1994, Glasgow, United Kingdom, pp. 47–50.
Inou, N., Uesugi, T., Iwasaki, A. & Ujihashi, S. 1998. Self-organization of mechanical structure
by cellular automata. Fracture and Strength of Solids 145(9):1115–1120.
ISO 19902. 2001. Petroleum and natural gas industries – Fixed steel offshore structures.
Itoh, Y. & Liu, C. 1999. Multiobjective optimization of bridge deck maintenance. In Case Studies in Optimal Design and Maintenance Planning of Civil Infrastructure Systems, D.M. Frangopol
(ed.), ASCE, Reston, Virginia.
Iwan, W.D. & Cifuentes, A.O. 1986. A model for system identification of degrading
structures. International Journal of Earthquake Engineering and Structural Dynamics
14:877–890.
Jaynes, E.T. 1957. Information Theory and Statistical Mechanics. The Physical Review 106:
620–630.
Jaynes, E.T. 2003. Probability theory: the logic of science. Cambridge, UK: Cambridge
University Press.
JCSS. 2001. Probabilistic Model Code. Joint Committee on Structural Safety, http://www.jcss.
ethz.ch [accessed 15 September 2006].
Jensen, B.D., Parkinson, M.B., Kurabayashi, K., Howell, L.L. & Baker, M.S. 2001. Design opti-
mization of a fully-compliant bistable micro-mechanism. In Proc. 2001 ASME Intl. Mech.
Eng. Congress and Exposition, New York, NY.
Jensen, H. 2000. On the structural synthesis of uncertain systems subjected to environmental
loads. Structural and Multidisciplinary Optimization 20:37–48.
Jensen, H.A. 2005. Structural optimization of linear dynamical systems under stochastic excita-
tion: a moving reliability database approach. Computer Methods in Applied Mechanics and
Engineering 194(12–16):1757–1778.
Jensen, H. & Iwan, W.D. 1992. Response of systems with uncertain parameters to stochastic
excitations. J. Eng. Mech. ASCE 118(5):1012–1025.
Joanni, A.E. & Rackwitz, R. 2006. Stochastic dependencies in inspection, repair and
failure models. In Proceedings of the European Safety and Reliability Conference,
C. Guedes Soares & E. Zio (eds), Estoril, Portugal, London, pp. 531–537. Taylor &
Francis.
Johnson, N.L., Kotz, S. & Balakrishnan, N. 1994. Continuous Univariate Distributions, Vol.
1. New York: John Wiley and Sons.
Jones, D.R., Schonlau, M. & Welch, W.J. 1998. Efficient global optimization of expensive black-box
functions. Journal of Global Optimization 13(4):455–492.
Jones, D.R., Perttunen, C.D. & Stuckman, B.E. 1993. Lipschitzian Optimization Without the
Lipschitz Constant. Journal of Optimization Theory and Applications 73(1):157–181.
Jung, D.H. & Lee, B.C. 2002. Development of a simple and efficient method for robust
optimization. Int. J. Numer. Meth. Engng. 53:2201–2215.
Kale, A., Haftka, R.T. & Sankar, B.V. 2007. Efficient Reliability Based Design and Inspection
of Stiffened Panels Against Fatigue. Journal of Aircraft.
Kang, Z. 2005. Robust Design Optimization of Structures under Uncertainties. Doctoral Thesis,
University of Stuttgart.
Kanno, Y. & Takewaki, I. 2006. Robustness analysis of trusses with separable load and structural
uncertainties. International Journal of Solids and Structures 43:2646–2669.
Kanno, Y. & Takewaki, I. 2006. Sequential semidefinite program for maximum robust-
ness design of structures under load uncertainties. Journal of Optimization Theory and
Applications 130:265–287.
Kanno, Y. & Takewaki, I. 2007. Worst-case plastic limit analysis of trusses under uncertain
loads via mixed 0-1 programming. Journal of Mechanics of Materials and Structures 2(2):
247–273.
Kanzow, C., Nagel, C., Kato, H. & Fukushima, M. 2005. Successive linearization meth-
ods for nonlinear semidefinite programs. Computational Optimization and Applications 31:
251–273.
Kapur, J.N. 1989. Maximum Entropy Models in Science and Engineering. New York:
John Wiley and Sons.
Karamchandani, A. 1990. New Methods in Systems Reliability, Ph.D. dissertation, Stanford
University.
Karamchandani, A. & Cornell, C.A. 1991. Adaptive Hybrid Conditional Expectation
Approaches for Reliability Estimation. Structural Safety 11:59–74.
Karamchandani, A. & Cornell, C.A. 1992. Sensitivity estimation within first and second order
reliability methods. Struct. Saf. 11:95–107.
Kaymaz, I. & Marti, K. 2006. Reliability-based design optimization for elastoplastic mechanical
structures. Computers and Structures. Article in Press.
Kemeny, D.C., Howell, L.L. & Magleby, S.P. 2002. Using compliant mechanisms to improve
manufacturability in MEMS. In Proc. 2002 ASME DETC, Number DETC2002/DFM-34178.
Kennedy, C.A. & Lennox, W.C. 2000. Solution to the practical problem of moments using non-
classical orthogonal polynomials with applications for probabilistic analysis. Probabilistic
Engineering Mechanics 15:371–379.
Kennedy, C.A. & Lennox, W.C. 2001. Moment operations on random variables, with
applications for probabilistic analysis. Probabilistic Engineering Mechanics 16: 253–259.
Kharitonov, V. 1997. Interval uncertainty structure: Conservative but simple. In Uncertainty:
Models and Measures, H. Gunther-Natke & Y. Ben-Haim (eds), pp. 231–243. Berlin:
Akademie Verlag.
Kharmanda, G. 2004. Two points of view for developing reliability-based design optimization.
NT2F4 (New Trends in Fatigue and Fracture IV), Aleppo, Syria, 10–12 May.
Kharmanda, G., Altonji, A. & El-Hami, A. 2006. Safest point method for reliability-based
design optimization of freely vibrating structures, 1st International Francophone Congress
for Advanced Mechanics, IFCAM01, Aleppo, Syria, 02–04 May.
Kharmanda, G., El-Hami, A. & Olhoff, N. 2004. Global Reliability-Based Design Optimization. In Frontiers in Global Optimization, C.A. Floudas (ed.), 255(20), Kluwer Academic
Publishers, January.
Kharmanda, G., Makhloufi, A. & El-Hami, A. 2007. Efficient computing time reduction for
reliability-based design optimization, Qualita, 20–22 March, Tangier, Morocco.
Kharmanda, G., Mohamed, A. & Lemaire, M. 2001. New hybrid formulation for reliability-
based optimization of structures, The 4th World Congress of Structural and Multidisciplinary
Optimization, WCSMO-4, Dalian, China, June 4–8.
Kharmanda, G., Mohamed, A. & Lemaire, M. 2002. Efficient reliability-based design opti-
mization using hybrid space with application to finite element analysis. Structural and
Multidisciplinary Optimization 24:233–245.
Kharmanda, G., Mohamed-Chateauneuf, A. & Lemaire, M. 2002. CAROD: Computer-Aided
Reliable and Optimal Design as a concurrent system for real structures. Journal of Computer
Aided Design and Computer Aided Manufacturing CAD/CAM 1(1):1–12.
Kharmanda, G., Mohamed, A. & Lemaire, M. 2003. Integration of reliability-based design
optimization within CAD and FE models. In Recent Advances in Integrated Design and
Manufacturing in Mechanical Engineering, Kluwer Academic Publishers.
Kharmanda, G., Olhoff, N., Mohamed, A. & Lemaire, M. 2004. Reliability-based topology
optimization. Structural and Multidisciplinary Optimization 26:295–307.
Kharmanda, G., Olhoff, N. & El-Hami, A. 2004. Recent Developments in Reliability-Based
Design Optimization (Keynote Lecture). In Computational Mechanics, Proc. Sixth World
Congress of Computational Mechanics (WCCM VI in conjunction with APCOM’04), Sept.
5–10. Beijing, China. Tsinghua University Press & Springer-Verlag.
Kharmanda, G., Olhoff, N. & El-Hami, A. 2004. Optimum values of structural safety fac-
tors for a predefined reliability level with extension to multiple limit states. Structural and
Multidisciplinary Optimization 27:421–434.
Kharmanda, G. & Olhoff, N. 2007. Extension of optimum safety factor method to non-
linear reliability-based design optimization. Journal of Structural and Multidisciplinary
Optimization (to appear).
Kim, H.M. 2001. Target Cascading in Optimal System Design. Ph.D. Thesis, University of
Michigan.
Kim, H.M., Kokkolaras, M., Louca, L.S., Delagrammatikas, G.J., Michelena, N.F.,
Filipi, Z.S., Papalambros, P.Y., Stein, J.L. & Assanis, D.N. 2002. Target cascading in vehicle
redesign: A class VI truck study. International Journal of Vehicle Design 29(3):1–27.
Kim, H.M., Michelena, N.F., Papalambros, P.Y. & Jiang, T. 2003. Target cascading in optimal
system design. ASME Journal of Mechanical Design 125(3):474–480.
Kim, H.M., Rideout, D.G., Papalambros, P.Y. & Stein, J.L. 2003. Analytical target cascading
in automotive vehicle design. ASME Journal of Mechanical Design 125(3):481–489.
Kim, T.-U. & Sin, H.-C. 2001. Optimal design of composite laminated plates with the discrete-
ness in ply angles and uncertainty in material properties considered. Computers and Structures
79(29–30):2501–2509.
Kirjner-Neto, C., Polak, E. & Der Kiureghian, A. 1998. An outer approximations approach to
reliability-based optimal design of structures. Journal Optim. Theory Appl. 98(1):1–16.
Kirsch, U. 1981. Optimum Structural Design. New York: McGraw-Hill, Inc.
Kirsch, U. 1993. Structural Optimization. Fundamentals and Applications. Heidelberg: Springer
Verlag.
Kirsch, U. 1999. Efficient, accurate reanalysis for structural optimization. AIAA Journal
37(12):1663–1669.
Kirsch, U. 2000. Combined approximations – a general reanalysis approach for structural
optimization. Structural and Multidisciplinary Optimization 20(2):97–106.
Kirsch, U. 2001. Exact and accurate solutions in the approximate reanalysis of structures. AIAA
Journal 39(11):2198–2205.
Kirsch, U. 2002. A unified reanalysis approach for structural analysis, design, and optimization.
Structural and Multidisciplinary Optimization 25(1):67–85.
Kirsch, U. 2003. Approximate vibration reanalysis of structures. AIAA Journal 41(3):
504–511.
Kita, E. & Toyoda, T. 2000. Structural design using cellular automata. Struct. Multidisc. Optim.
19:64–73.
Kleiber, M. & Hien, T.D. 1992. The Stochastic Finite Element Method. Chichester: John Wiley
and Sons.
Kleinmann, N.L., Spall, J.C. & Naiman, D.C. 1999. Simulation-based optimization
with stochastic approximation using common random numbers. Management Science
45(11):1570–1578.
Klir, G. 1997. Uncertainty theories, models and principles: An overview of personal views and
contributions. In Uncertainty: Models and Measures, H. Gunther-Natke & Y. Ben-Haim (eds),
pp. 27–43. Berlin: Akademie Verlag.
Klir, G.J. & Folger, T.A. 1988. Fuzzy Sets, Uncertainty, and Information. Prentice Hall.
Klir, G.J. & Yuan, B. 1995. Fuzzy Sets and Fuzzy Logic: Theory and Applications. Prentice Hall.
Koch, P.N., Yang, R.J. & Gu, L. 2004. Design for six sigma through robust optimization. Struct.
and Multidisc. Optim. 26:235–248.
Koch, P.N., Yang, R.J. & Gu, L. 2007. Design for six sigma through robust optimization.
Structural and Multidisciplinary Optimization. Article in Press.
Kocvara, M. 1997. Topology optimization with displacement constraints: a bilevel programming
approach. Struct. Optim. 14:256–263.
Kojima, M. & Tunçel, L. 2000. Cones of matrices and successive convex relaxations of
nonconvex sets. SIAM Journal on Optimization 10:750–778.
Kokkolaras, M., Fellini, R., Kim, H.M., Michelena, N.F. & Papalambros, P.Y. 2002. Exten-
sion of the target cascading formulation to the design of product families. Structural and
Multidisciplinary Optimization 24(4):293–301.
Kokkolaras, M., Louca, L.S., Delagrammatikas, G.J., Michelena, N.F., Filipi, Z.S.,
Papalambros, P.Y., Stein, J.L. & Assanis, D.N. 2004. Simulation-based optimal design of
heavy trucks by model-based decomposition: An extensive analytical target cascading case
study. International Journal of Heavy Vehicle Systems 11(3–4):402–432.
Kokkolaras, M., Mourelatos, Z.P. & Papalambros, P.Y. 2004. Design optimization of
hierarchically decomposed multilevel systems under uncertainty. In Proceedings of the
ASME Design Engineering Technical Conferences, Salt Lake City, Utah, Paper No.
DETC2004/DAC-57357.
Kokkolaras, M., Mourelatos, Z.P. & Papalambros, P.Y. 2006. Design optimization of hierarchi-
cally decomposed multilevel systems under uncertainty. ASME Journal of Mechanical Design
128(2):503–508.
Kokkolaras, M., Mourelatos, Z.P. & Papalambros, P.Y. 2006. Impact of uncertainty quantifica-
tion on design decisions for a hydraulic-hybrid powertrain engine. In Proceedings of the 47th
AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference,
Newport, Rhode Island, Paper No. AIAA-2006-2001.
Kolassa, J.E. 1997. Series Approximation Methods in Statistics. New York: Springer-Verlag.
Kong, J.S. & Frangopol, D.M. 2003. Life-cycle reliability-based maintenance cost optimization
of deteriorating structures with emphasis on bridges. Journal of Structural Engineering ASCE
129(6):818–828.
Kong, J.S. & Frangopol, D.M. 2004. Cost-reliability interaction in life-cycle cost optimization
of deteriorating structures. Journal of Structural Engineering ASCE 130(11): 1704–1712.
Kong, J.S. & Frangopol, D.M. 2005. Probabilistic optimization of aging structures considering
maintenance and failure costs. Journal of Structural Engineering ASCE 131(4):600–616.
Kosko, B. 1992. Neural Networks and Fuzzy Systems. Englewood Cliffs: Prentice Hall.
Koyluoglu, H.U. & Nielsen, S.R.K. 1994. New approximations for SORM integrals. Structural
Safety 13:235–246.
Koyluoglu, H.U., Cakmak, A.S. & Nielsen, S.R.K. 1995. Interval algebra to deal with pattern
loading and structural uncertainties. J. Eng. Mech. ASCE 121(11):1149–1157.
Kramer, S.L. 2003. Geotechnical Earthquake Engineering. Prentice Hall.
Krishnamurthy, T. 2003. Response Surface Approximation with Augmented and Compactly
Supported Radial Basis Functions. Proceedings of the AIAA 44th SDM Conference.
Kroon, I.B. 1994. Decision Theory Applied to Structural Engineering Problems. Ph.D. Thesis,
Dept. of Building Technology and Structural Engineering, Aalborg University.
Kuschel, N. & Rackwitz, R. 1997. Two basic problems in reliability-based structural
optimization. Mathematical Methods of Operations Research 46:309–333.
Kuschel, N. & Rackwitz, R. 1998. Structural optimization under time-variant reliability con-
straints. In Proceedings of the 8th IFIP WG 7.5 Working conference on Reliability and
Optimization of Structural Systems, edited by Nowak, University of Michigan, Ann Arbor,
Michigan, USA, pp. 27–38.
Kuschel, N. & Rackwitz, R. 2000. Optimal design under time-variant reliability constraints.
Structural Safety 22(2):113–127.
Kuschel, N. & Rackwitz, R. 2000. Time-variant reliability-based structural optimization using
SORM. Optimization 47(3/4):349–368.
Kuschel, N. & Rackwitz, R. 2000. A new approach for structural optimization of series system.
In: Proceedings of the 8th International conference on applications of statistics and probabil-
ity (ICASP) in Civil engineering reliability and risk analysis, R.E. Melchers & M.G. Stewart
(eds), Sydney, Australia, December 1999, (2):987–994.
Kushner, H.J. & Yin, G.G. 2003. Stochastic approximation and recursive algorithms and
applications. New York: Springer
Lagaros, N. & Papadrakakis, M. 2003. Soft computing methodologies for structural optimiza-
tion. Applied Soft Computing 3:283–300.
Lagaros, N.D. & Papadrakakis, M. 2006. Robust seismic design optimization of steel structures.
J. Struct. Multidisc. Optim. Available on-line, DOI: 10.1007/s00158-006-0047-5.
Lagaros, N.D., Papadrakakis, M. & Kokossalakis, G. 2002. Structural optimization using
evolutionary algorithms. Computers and Structures 80(7–8):571–589.
Lagaros, N.D., Plevris, V. & Papadrakakis, M. 2005. Multi-objective design optimiza-
tion using cascade evolutionary computations. Comput. Methods Appl. Mech. Engrg.
194(30–33):3496–3515.
Lagaros, N.D. & Tsompanakis, Y. (eds) 2006. Intelligent computational paradigms in
earthquake engineering. Idea Publishers.
Lange, K. 1999. Numerical Analysis for Statisticians. New York: Springer-Verlag.
Lawrence, C., Zhou, J.L. & Tits, A.L. 2007. User’s guide for CFSQP version 2.5. Available from
http://www.aemdesign.com.
Lee, K.H. & Park, G.J. 2001. Robust optimization considering tolerances of design variables.
Computers and Structures 79:77–86.
Lee, J.O., Yang, Y.O. & Ruy, W.S. 2002. A Comparative Study on Reliability Index and Target
Performance Based Probabilistic Structural Design Optimization. Computers and Structures
80:257–269.
Lee, T.W. & Kwak, B.M. 1987–88. A reliability-based optimal design using advanced first order
second moment method. Mechanics of Structures and Machines 15(4):523–542.
Lee, S.H. & Kwak, B.M. 2005. Reliability based design optimization using response sur-
face augmented moment method. Proceedings of 6th World Congress on Structural and
Multidisciplinary Optimization, Rio de Janeiro, Brazil.
Lee, S.H. & Kwak, B.M. 2006. Response surface augmented moment method for efficient
reliability analysis. Structural Safety 28:261–272.
Lee, W.J. & Woo, T.C. 1990. Tolerances: their analysis and synthesis. ASME Journal of
Engineering for Industry 112:113–121.
Legresley, P. & Alonso, J. 2001. Investigation of nonlinear projection for POD based reduced
order models for aerodynamics. In AIAA 2001-16737, 39th Aerospace Sciences Meeting &
Exhibit, January 8–11, Reno, NV.
Lemaire, M., in collaboration with Chateauneuf, A. & Mitteau, J.C. 2006. Structural Reliability.
ISTE, UK.
Leverant, G.R., Littlefield, D.L., McClung, R.C., Millwater, H.R. & Wu, Y.-T. 1997. A Proba-
bilistic Approach to Aircraft Turbine Rotor Material Design. The International Gas Turbine
& Aeroengine Congress & Exhibition, Paper No. 97-GT-22, Orlando, FL.
Lewis, K. & Mistree, F. 1997. Collaborative, Sequential and Isolated Decisions in Design.
Proceedings of ASME Design Engineering Technical Conferences, Paper# DETC1997/
DTM-3883.
Li, K.S. & Lumb, P. 1985. Reliability analysis by numerical integration and curve fitting.
Structural Safety 3:29–36.
Liang, J., Mourelatos, Z.P. & Tu, J. 2007. A Single-Loop Method for Reliability-Based
Design Optimization. In press International Journal of Product Development. Also, Proceed-
ings of ASME Design Engineering Technical Conferences, 2004, Paper # DETC2004/DAC-
57255.
Liang, J., Mourelatos, Z.P. & Nikolaidis, E. 2007. A Single-Loop Approach for System
Reliability-Based Design Optimization. ASME Journal of Mechanical Design (accepted).
Lin, K.-Y. & Frangopol, D.M. 1996. Reliability-based optimum design of reinforced concrete
girders. Structural Safety 18(2/3):239–258.
Lind, N.C. 1977. Reliability based structural codes, practical calibration. Safety of structures
under dynamic loading, Trondheim, Norway, pp. 149–160.
Lindley, D.V. 1976. Introduction to Probability and Statistics from a Bayesian Viewpoint, Vol.
1+2. Cambridge University Press, Cambridge.
Liu, H., Chen, W., Kokkolaras, M., Papalambros, P.Y. & Kim, H.M. 2006. Probabilistic ana-
lytical target cascading – a moment matching formulation for multilevel optimization under
uncertainty. ASME Journal of Mechanical Design 128(4):991–1000.
Liu, P.-L. & Der Kiureghian, A. 1991. Optimization algorithms for structural reliability.
Structural Safety 9:161–177.
Liu, M. & Frangopol, D.M. 2004. Optimal bridge maintenance planning based on probabilistic
performance prediction. Engineering Structures 26(7):991–1002.
Liu, M. & Frangopol, D.M. 2005. Maintenance planning of deteriorating bridges by using
multiobjective optimization. Transportation Research Record, Journal of the Transporta-
tion Research Board, CD 11-S, Transportation Research Board of the National Academies,
Washington, D.C., pp. 491–500.
Liu, M. & Frangopol, D.M. 2005. Balancing connectivity of deteriorating bridge networks
and long-term maintenance cost through optimization. Journal of Bridge Engineering ASCE
10(4):468–481.
Liu, M. & Frangopol, D.M. 2005. Bridge annual maintenance prioritization under uncer-
tainty by multiobjective combinatorial optimization. Computer Aided Civil and Infrastructure
Engineering 20(5): 343–353.
Liu, M. & Frangopol, D.M. 2005. Multiobjective maintenance planning optimization for
deteriorating bridges considering condition, safety and life-cycle cost. Journal of Structural
Engineering ASCE 131(5):833–842.
Liu, M. & Frangopol, D.M. 2006. Optimizing bridge network maintenance management under
uncertainty with conflicting criteria: Life-cycle maintenance, failure, and user costs. Journal
of Structural Engineering ASCE 132(11):1835–1845.
Liu, P.-L. & Kuo, C.-Y. 2003. Safety evaluation of the upper structure of bridge based on concrete
non-destructive tests. In Applications of Statistics and Probability in Civil Engineering, A. Der
Kiureghian, S. Madanat & J.M. Pestana (eds), Rotterdam, The Netherlands, pp. 1683–1688.
Millpress.
Liu, P.-L. & Der Kiureghian, A. 1986. Multivariate Distribution Models with
Prescribed Marginals and Covariances. Probabilistic Engineering Mechanics 1(2):
105–112.
Liu, W.K., Belytschko, T. & Lua, Y.J. 1995. Probabilistic finite element method. In Probabilistic
Structural Mechanics Handbook, C.R. Sundararajan (ed.), pp. 70–105. New York: Chapman
& Hall.
Lombardi, M. & Haftka, R.T. 1998. Anti-Optimization Technique for Structural Design under
Load Uncertainties. Computer Methods in Applied Mechanics and Engineering 157:19–31.
Louca, L.S., Kokkolaras, M., Delagrammatikas, G.J., Michelena, N.F., Filipi, Z.S.,
Papalambros, P.Y. & Assanis, D.N. 2002. Analytical target cascading for the design of an
advanced technology heavy truck. In Proceedings of the 2002 ASME International Mechanical
Engineering Congress and Exposition, New Orleans, LA. Paper No. IMECE-2002-32860.
Lugannani, R. & Rice, S. 1980. Saddle point approximation for the distribution of sums of
random variables. Advances in Applied Probability 12:475–490.
Luo, X. & Grandhi, R.V. 1997. ASTROS for reliability-based multidisciplinary structural
analysis and optimization. Computers and Structures 62:737–745.
Lyon, R.H. 1975. Statistical energy analysis of dynamical systems, The MIT Press,
Cambridge, MA.
Madsen, H.O., Krenk, S. & Lind, N.C. 1986. Methods of Structural Safety, Englewood Cliffs,
New Jersey: Prentice Hall.
Madsen, H.O. & Friis-Hansen, P. 1992. A comparison of some algorithms for reliability-based
structural optimization and sensitivity analysis. In Proceedings of the 4th IFIP WG 7.5 Work-
ing Conference on Reliability and Optimization of Structural Systems, R. Rackwitz & P. Thoft-
Christensen (eds), Munich, Germany, September 1991, Berlin, pp. 443–451. IFIP: Springer.
Madsen, H.O., Sørensen, J.D. & Olesen, R. 1989. Optimal Inspection Planning for Fatigue
Damage of Offshore Structures. Proceedings ICOSSAR 89, pp. 2099–2106.
Madsen, H.O. & Sørensen, J.D. 1990. Probability-Based Optimization of Fatigue Design
Inspection and Maintenance. Presented at Int. Symp. on Offshore Structures, University of
Glasgow.
Madsen, H.O., Skjong, R.K., Talin, A.G. & Kirkemo, F. 1987. Probabilistic Fatigue Crack
Growth Analysis of Offshore Structures, with Reliability Updating Through Inspection.
SNAME, Arlington, VA.
Marler, R.T. & Arora, J.S. 2004. Survey of Multi-objective Optimization Methods for
Engineering. J. Struct. Multidisc. Optim. 26(6):369–395.
Marsh, P.S. & Frangopol, D.M. 2007. Lifetime multi-objective optimization of cost and spacing
of corrosion rate sensors embedded in a deteriorating reinforced concrete bridge deck. Journal
of Structural Engineering ASCE 133(6):777–787.
Marti, K. 1996. Differentiation formulas for probability functions: the transformation method.
Mathematical Programming 75:201–220.
Marti, K. 2005. Stochastic Optimization Methods. Berlin: Springer.
Martin, J.D. & Simpson, T.W. 2005. Use of Kriging Models to Approximate Deterministic
Computer Models. AIAA Journal 43(4):853–863.
Masur, E. 1984. Optimal structural design under multiple eigenvalue constraints. International
Journal of Solids and Structures 20:117–120.
Mathworks, Inc. 2004. Matlab reference manual, Version 7.0. Natick, Massachusetts:
Mathworks, Inc.
Mathworks, Inc. 2007. www.mathworks.com.
Mattson, C.A., Mullur, A.A. & Messac, A. 2004. Smart Pareto filter: Obtaining a minimal
representation of multiobjective design space. Eng. Optim. J. 36(6):721–740.
Maute, K. & Frangopol, D.M. 2003. Reliability-based design of MEMS mechanisms by
topology optimization. Computers & Structures 81(8–11), K.J. Bathe 60th Anniversary Issue,
pp. 813–824.
McAllister, C.D. & Simpson, T.W. 2003. Multidisciplinary Robust Design Optimization of an
Internal Combustion Engine. ASME Journal of Mechanical Design 125(1):124–130.
McKay, M.D., Beckman, R.J. & Conover, W.J. 1979. A comparison of three methods for select-
ing values of input variables in the analysis of output from a computer code. Technometrics
21(2):239–245.
Mead, L.R. & Papanicolaou, N. 1984. Maximum entropy in the problem of moments. Journal
of Mathematical Physics 25:2404–2417.
Melchers, R.E. 1987. Structural Reliability: Analysis and Prediction, Wiley.
Melchers, R.E. 1989. Importance sampling in structural systems. Structural Safety 6(1):3–10.
Melchers, R.E. 2001. Structural Reliability Analysis and Prediction. John Wiley & Sons.
Merriam-Webster on-line. 2007. www.m-w.com, accessed April 2007.
Messac, A. & Ismail-Yahaya, A. 2002. Multiobjective robust design using physical program-
ming. J. Struct. Multidisc. Optim. 23:357–371.
Meza, J.C. 1994. OPT++: An object-oriented class library for nonlinear optimization. Technical
Report SAND94-8225, Sandia National Laboratories, March, Albuquerque, NM.
Michelena, N.F., Kim, H.M. & Papalambros, P.Y. 1999. A system partitioning and optimiza-
tion approach to target cascading. In Proceedings of the 12th International Conference on
Engineering Design, Munich, Germany.
Michelena, N.F., Louca, L., Kokkolaras, M., Lin, C.-C., Jung, D., Filipi, Z., Assanis, D.,
Papalambros, P.Y., Peng, H., Stein, J. & Feury, M. 2001. Design of an advanced heavy tac-
tical truck: A target cascading case study. SAE 2001 Transactions – Journal of Commercial
Vehicles. Also appeared in the Proceedings of the 2001 SAE International Truck and Bus
Meeting and Exhibition, Chicago, IL, Paper No. 2001-01-2793.
Michelena, N.F., Park, H. & Papalambros, P.Y. 2003. Convergence properties of analytical
target cascading. AIAA Journal 41(5):897–905.
Miller, A.C. & Rice, T.R. 1983. Discrete approximations of probability distributions. Manage-
ment Science 29:352–362.
Mills-Curran, W. 1988. Calculation of eigenvector derivatives for structures with repeated
eigenvalues. AIAA Journal 26(7):867–871.
Millwater, H.R., Fitch, S., Riha, D.S., Enright, M.P., Leverant, G.R., McClung, R.C., Kuhlman,
C.J., Chell, G.G. & Lee, Y.-D. 2000. A Probabilistically-Based Damage Tolerance Analysis
Computer Program for Hard Alpha Anomalies In Titanium Rotors. Proceedings of the 45th
ASME International Gas Turbine & Aeroengine Technical Congress, Munich, Germany.
Millwater, H.R., Wu, Y.-T., Cardinal, J.W. & Chell G.G. 1996. Application of Advanced Prob-
abilistic Fracture Mechanics to Life Evaluation of Turbine Rotor Blade Attachments. Journal
of Engineering for Gas Turbines and Power 118:394–398.
Mogami, K., Nishiwaki, S., Izui, K., Yoshimura, M. & Kogiso, N. 2006. Reliability-based struc-
tural optimization of frame structures for multiple failure criteria using topology optimization
techniques. Struct. Multidisp. Optim. 32(4):299–311.
Mohsine, A. 2006. Contribution à l’optimisation fiabiliste en dynamique des structures
mécaniques. Thèse de doctorat, INSA de Rouen, France. In French.
Mohsine, A., Kharmanda, G. & El-Hami, A. 2006. Improved hybrid method as a robust
tool for reliability-based design optimization, Structural and Multidisciplinary Optimization
32:203–213.
Moilanen, A. & Wintle, B.A. 2006. Uncertainty analysis favors selection of spatially aggregated
reserve structures. Biological Conservation 129:427–434.
Moore, R.E. 1966. Interval Analysis. Prentice-Hall.
Moré, J.J., Garbow, B.S. & Hillstrom, K.E. 1980. User guide for MINPACK-1. Argonne
National Labs Report ANL-80-74, Argonne, Illinois.
Moré, J.J. & Sørensen, D.C. 1983. Computing a Trust Region Step. SIAM Journal on Scientific
and Statistical Computing 3:553–572.
Mori, Y. & Ellingwood, B.R. 1993. Time-dependent system reliability analysis by adaptive
importance sampling. Structural Safety 12(1):59–73.
Moses, F. 1973. Design for Reliability: Concepts and Applications. John Wiley & Sons.
Moses, F. 1977. Structural System Reliability and Optimization. Comput. Struct. 7:283–290.
Moses, F. 1995. Probabilistic Analysis of Structural Systems. Probabilistic Structural Mechanics
Handbook: Theory and Industrial Applications, edited by C. Raj Sundararajan, Chapman &
Hall, 166–187.
Moses, F. 1997. Problems and prospects of reliability based optimization. Engineering Structures
19(4):293–301.
Mourelatos, Z.P. & Zhou, J. 2005. Reliability Estimation with Insufficient Data Based on
Possibility theory. AIAA Journal 43(8):1696–1705.
Mourelatos, Z.P. & Zhou, J. 2006. A Design Optimization Method using Evidence Theory.
ASME Journal of Mechanical Design 128(4):901–908.
Muhanna, R.L. & Mullen, R.L. 2001. Uncertainty in Mechanics Problems – Interval-Based
Approach. Journal of Engineering Mechanics 127(6):557–566.
Mullen, R.L. & Muhanna, R.L. 1999. Bounds of Structural Response for all Possible Loadings.
ASCE Journal of Structural Engineering 125(1):98–106.
Murotsu, Y. & Shao, S. 1989. Optimum shape design of truss structures based on reliability.
Struct. Multidisc. Optim. 2(2):65–76.
Murotsu, Y., Kishi, M., Okada, H., Yonezawa, M. & Taguchi, K. 1984. Probabilistically opti-
mum design of frame structures. Proc. 11th IFIP Conf. on System Modeling and Optimization,
Springer-Verlag, pp. 545–554.
Murotsu, Y., Shao, S. & Watanabe, A. 1994. An approach to reliability-based optimization of
redundant structures. Structural Safety 16:133–143.
Muscolino, G. 1993. Response of linear and non-linear structural systems under Gaussian or
non-Gaussian filtered input. In Dynamic Motion: Chaotic and Stochastic Behaviour, F. Casciati
(ed.), pp. 203–299. Wien: Springer-Verlag.
Myers, R.H. & Montgomery, D.C. 1995. Response surface methodology: process and product
optimization using designed experiments. New York: John Wiley & Sons.
Nakamura, H., Miyamoto, A. & Kawamura, K. 2000. Optimization of bridge maintenance
strategies using GA and IA techniques. In Reliability and Optimization of Structural Systems,
Proceedings IFIP WG 7.5, A.S. Nowak & M.M. Szerszen (eds), Ann Arbor, Michigan.
Nakib, R. & Frangopol, D.M. 1990. RBSA and RBSA-OPT: Two computer programs for
structural system reliability analysis and optimization. Computers and Structures 36(1):
13–27.
Nakib, R. & Frangopol, D.M. 1990. Reliability-based structural optimization using interactive
graphics. Computers and Structures 37(1):27–34.
Neves, L.A.C., Frangopol, D.M. & Cruz, P.J.S. 2006. Probabilistic lifetime-oriented multi-
objective optimization of bridge maintenance: single maintenance type. Journal of Structural
Engineering ASCE 132(6):991–1005.
Neves, L.A.C., Frangopol, D.M. & Petcherdchoo, A. 2006. Probabilistic lifetime-oriented multi-
objective optimization of bridge maintenance: combination of maintenance types. Journal of
Structural Engineering ASCE 132(11):1821–1834.
Nie, J. & Ellingwood, B.R. 2005. Finite element-based structural reliability assessment using
efficient directional simulation. ASCE Journal of Engineering Mechanics 131(3):259–267.
Nieuwenhof, B. & Coyotte, J. 2002. A perturbation stochastic finite element method for
the time-harmonic analysis of structures with random mechanical properties. In 5th World
Congress on Computational Mechanics, Vienna, Austria.
Nikolaidis, E. & Burdisso, R. 1988. Reliability-based optimization: a safety index approach.
Computers and Structures 28(6):781–788.
Nikolaidis, E., Chen, S., Cudney, H., Haftka, R.T. & Rosca, R. 2004. Comparison of Probabil-
ity and Possibility for Design Against Catastrophic Failure Under Uncertainty. ASME Journal
of Mechanical Design 126:386–394.
Nikolaidis, E., Ghiocel, D.M. & Singhal, S. (eds) 2005. Engineering Design Reliability
Handbook, CRC Press, Boca Raton, FL.
Nurdin, H. 2002. Mathematical modeling of bias and uncertainty in accident risk assessment.
Mathematical Sciences, University of Twente, The Netherlands.
Oberkampf, W., Helton, J. & Sentz, K. 2001. Mathematical Representations of Uncer-
tainty. AIAA Non-Deterministic Approaches Forum, AIAA 2001-1645, Seattle, WA, April
16–19.
Oberkampf, W.L. & Helton, J. 2002. Investigation of Evidence Theory for Engineering
Applications. AIAA Non-Deterministic Approaches Forum, AIAA 2002-1569, Denver, CO.
Ohsaki, M., Fujisawa, K., Katoh, N. & Kanno, Y. 1999. Semi-definite programming for
topology optimization of truss under multiple eigenvalue constraints. Computer Methods
in Applied Mechanics and Engineering 180:203–217.
Ordaz, M. 1988. On the use of probability concentrations. Structural Safety 5:317–318.
Ordaz, M., Huerta, B. & Reinoso, E. 2003. Exact computation of input-energy spectra
from Fourier amplitude spectra. Earthquake Engineering and Structural Dynamics 32:
597–605.
Overbay, S., Ganzerli, S., De Palma, P., Stackle, P. & Brown, A. 2006. Trusses, NP-
Completeness, and Genetic Algorithms. In Proceedings of the 17th Analysis and Computation
Specialty Conference; Proc. Conf., F.A. Charney, D.E. Grierson, M. Hoit & J.M. Pestana (eds),
Saint Louis, MO, May 18–20. Reston: ASCE Publications.
Owen, A.B. 1997. Monte Carlo variance of scrambled net quadrature. SIAM J. Num. Anal.
34(5):1884–1910.
Page, C.H. 1952. Instantaneous power spectra. Journal of Applied Physics 23(1):103–106.
Palmberg, B., Blom, A.F. & Eggwertz, S. 1987. Probabilistic Damage Tolerance Analysis of Aircraft
Structures. In Probabilistic Fracture Mechanics and Reliability, J.W. Provan (ed.), Martinus
Nijhoff Publishers.
Pandey, M.D. 1998. An effective approximation to evaluate multinormal integrals. Structural
Safety 20(1):51–67.
Pandey, M.D. & Ariaratnam, S.T. 1996. Crossing rate analysis of non-Gaussian response of
linear systems. Journal of Engineering Mechanics 122:507–511.
Pantelides, C.P. & Tzan, S-R. 1996. Convex models for seismic design of structures – I.
Earthquake Eng. Struct. Dyn. 25(9):927–944.
Pantelides, C.P. & Ganzerli, S. 1998. Design of trusses under uncertain loads using convex
models. Journal of Structural Engineering (ASCE) 124:318–329.
Papadimitriou, C., Beck, J.L. & Katafygiotis, L.S. 2001. Updating robust reliability using
structural test data. Probabilistic Engineering Mechanics 16:103–113.
Papadrakakis, M., Lagaros, N. & Tsompanakis, Y. 1998. Structural optimization using
evolution strategies and neural networks. Computer Methods in Applied Mechanics and
Engineering 156:309–333.
Papadrakakis, M., Papadopoulos, V. & Lagaros, N. 1996. Structural reliability analysis of
elastic-plastic structures using neural networks and Monte Carlo simulation. Computer
Methods in Applied Mechanics and Engineering 136:145–163.
Papadrakakis, M., Tsompanakis, Y. & Lagaros, N.D. 1999. Structural shape optimization using
evolution strategies. Eng. Optim. J. 31:515–540.
Papalambros, P.Y. 2001. Analytical target cascading in product development. In Proceedings
of the 3rd ASMO UK/ISSMO Conference on Engineering Design Optimization, Harrogate,
North Yorkshire, England.
Papalambros, P.Y. & Wilde, D.J. 2000. Principles of Optimal Design: Modeling and Computa-
tion. 2nd Edition, Cambridge University Press.
Papoulis, A. 1991. Probability, Random Variables and Stochastic Processes. New York:
McGraw-Hill.
Park, G.-J., Lee, T.-H., Lee, K.H. & Hwang, K.-H. 2006. Robust design: An overview. AIAA J.
44(1):181–191.
Patel, N.M., Agarwal, H., Tovar, A. & Renaud, J. 2005. Reliability based topology opti-
mization using the hybrid cellular automaton method. In 1st AIAA Multidisciplinary Design
Optimization Specialist Conference, April 18–21 Austin, Texas.
Penmetsa, R.C. & Grandhi, R.V. 2002. Estimating Membership Response Function using
Surrogate Models. Paper AIAA 2002-1234.
Penmetsa, R.C. & Grandhi, R.V. 2002. Efficient Estimation of Structural Reliability for
Problems with Uncertain Intervals. Computers and Structures 80:1103–1112.
Pierce, S.G., Worden, K. & Manson, G. 2006. A novel information-gap technique to assess relia-
bility of neural network-based damage detection. Journal of Sound and Vibration 293:96–111.
Polak, E. 1997. Optimization. Algorithms and consistent approximations. New York:
Springer-Verlag.
Polak, E. & Royset, J.O. 2007. Efficient sample sizes in stochastic nonlinear programming.
J. Computational and Applied Mathematics. To appear.
Porter, K.A., Beck, J.L., Shaikhutdinov, R.V., Au, S.K., Mizukoshi, K., Miyamura, M.,
Ishida, H., Moroi, T., Tsukada, Y. & Masuda, M. 2004. Effect of seismic risk on lifetime
property values. Earthquake Spectra 20:1211–1237.
Powell, M.J.D. 1982. VMCWD: A FORTRAN Subroutine for Constrained Optimization.
Report DAMTP 1982/NA4, Cambridge University, England.
Powell, M.J.D. 1994. A direct search optimization method that models the objective and con-
straint functions by linear interpolation. In Proceedings of the 6th Workshop on Optimization
and Numerical Analysis, S. Gomez & J.-P. Hennart (eds), Oaxaca, Mexico, January 1992,
Dordrecht, pp. 51–67. Kluwer Academic Publishers.
Pradlwarter, H.J., Schuëller, G.I., Koutsourelakis, P.S. & Charmpis, D.C. 2007. Application of
line sampling simulation method to reliability benchmark problems. Structural Safety (Article
in Press).
Pshenichnyj, B.N. 1994. The Linearization Method for Constrained Optimization, Volume 22
of Computational Mathematics. Berlin: Springer.
Qiu, Z. 2003. Comparison of static response of structures using convex models and interval
analysis method. International Journal for Numerical Methods in Engineering 56(12):1735–1753.
Qiu, J. & Slocum, A.H. 2004. A curved-beam bistable mechanism. J. Microelectromech. Syst.
13(2):137–146.
Qiu, Z. & Wang, X. 2003. Comparison of dynamic response of structures with uncertain-
but-bounded parameters using non-probabilistic interval analysis method and probabilistic
approach. Int. J. Solids and Structures 40:5423–5439.
Qiu, Z. & Wang, X. 2006. Interval analysis method and convex models for impulsive response of
structures with uncertain-but-bounded external loads. Acta Mechanica Sinica 22(3):265–276.
Qu, X. & Haftka, R.T. 2003. Design under uncertainty using Monte Carlo simulation and
probabilistic sufficiency factor. In Proceedings of DETC'03 Conference, Chicago, IL, USA.
Rackwitz, R. 2000. Optimization – the basis of code making and reliability verification.
Structural Safety 22(1):27–60.
Rackwitz, R. 2001. Risk control and optimization for structural facilities. Proc. 20th IFIP TC7
Conf. on System modeling and optimization. Trier, Germany.
Rackwitz, R. 2001. Reliability Analysis – A Review and Some Perspectives. Structural Safety
23:365–395.
Rackwitz, R. 2002. Optimization and risk acceptability based on the Life Quality Index.
Structural Safety 24:297–331.
Rackwitz, R. & Fiessler, B. 1978. Structural reliability under combined random load sequences.
Comput. Struct. 9:489–494.
Rackwitz, R., Lentz, A. & Faber, M.H. 2005. Socio-economically sustainable civil engineering
infrastructures by optimization. Structural Safety 27(3):187–229.
Raiffa, H. & Schlaifer, R. 1961. Applied Statistical Decision Theory. Harvard University Press,
Cambridge, Mass.
Rajashekhar, M.R. & Ellingwood, B.R. 1993. A new look at the response surface approach for
reliability analysis. Structural Safety 12:205–220.
Rajeev, S. & Krishnamoorthy, C.S. 1997. Genetic algorithms-based methodologies for design
optimization of trusses. ASCE J. of Structural Engineering 123(3):350–358.
Rao, S.S. & Cao, L. 2002. Optimum Design of Mechanical Systems Involving Interval
Parameters. ASME Journal of Mechanical Design 124:465–472.
Rao, S.S. & Sawyer, J.P. 1995. A Fuzzy Finite Element Approach for the Analysis of Imprecisely
Defined Systems. AIAA Journal 33(12):2364–2370.
Ravindran, S. 1999. Proper orthogonal decomposition in optimal control of fluids. Technical
report, NASA TM-1999-209113.
Reid, N. 1988. Saddlepoint methods and statistical inference. Statistical Science 3:213–238.
Rencher, A.C. 1995. Methods of Multivariate Analysis. New York: John Wiley & Sons.
Robert, C.P. & Casella, G. 2004. Monte Carlo Statistical Methods (2nd edition). New York:
Springer.
Robinson, J. 1982. Saddlepoint approximations for permutation tests and confidence intervals.
Journal of the Royal Statistical Society Series B 44:91–101.
Rosen Group. 2004. Metal Loss Inspection Performance Specifications, www.Roseninspection.net,
Standard_CDP_POFspec_56_rev3.62.doc.
Rosenblatt, M. 1952. Remarks on a Multivariate Transformation. The Annals of Mathematical
Statistics 23(3):470–472.
Rosenblueth, E. 1975. Point estimates for probability moments. Proceedings of the National
Academy of Sciences of the USA 72:3812–3814.
Rosenblueth, E. 1976. Optimum Design for Infrequent Disturbances. ASCE Journal of the
Structural Division 102(9):1807–1825.
Rosenblueth, E. 1981. Two-point estimate in probabilities. Applied Mathematical Modeling
5:329–335.
Rosenblueth, E. & Mendoza, E. 1971. Reliability optimization in isostatic structures. ASCE
Journal of the Engineering Mechanics Division 97(6):1625–1642.
Ross, T.J. 1995. Fuzzy Logic with Engineering Applications. McGraw Hill.
Royset, J.O. & Polak, E. 2004. Implementable algorithm for stochastic programs using sample
average approximations. J. Optimization Theory and Application 122(1):157–184.
Royset, J.O. & Polak, E. 2004. Reliability-based optimal design using sample average
approximations. J. Probabilistic Engineering Mechanics 19(4):331–343.
Royset, J.O. & Polak, E. 2007. Extensions of stochastic optimization results from problems
with simple to problems with complex failure probability functions. J. Optimization Theory
and Application 133(1):1–18.
Royset, J.O., Der Kiureghian, A. & Polak, E. 2001. Reliability-based optimal design of series
structural systems. Journal of Engineering Mechanics 127(6):607–614.
Royset, J.O., Der Kiureghian, A. & Polak, E. 2001. Reliability-based optimal struc-
tural design by the decoupling approach. Reliability Engineering and System Safety 73:
213–221.
Royset, J.O., Der Kiureghian, A. & Polak, E. 2006. Optimal design with probabilistic objective
and constraints. J. Engineering Mechanics 132(1):107–118.
Rozvany, G.I.N. 1997. Topology Optimization in Structural Mechanics. Springer.
Rozvany, G.I.N., Bendsøe, M.P. & Kirsch, U. 1995. Optimality criteria: a basis for multidisci-
plinary optimization. Appl. Mech. Rev. 48:41–119.
Rubinstein, R. & Shapiro, A. 1993. Discrete Event Systems: Sensitivity Analysis and Stochastic
Optimization by the Score Function Method. New York: John Wiley & Sons.
Rumelhart, D.E. & McClelland, J.L. 1986. Parallel Distributed Processing, Volume 1:
Foundations. The MIT Press, Cambridge.
Ruszczynski, A. & Shapiro, A. 2003. Stochastic Programming. New York: Elsevier.
Sacks, J., Schiller, S.B. & Welch, W.J. 1989. Design for Computer Experiments. Technometrics,
31(1).
Schmit, L.A. & Lai, Y.C. 1994. Structural optimization based on preconditioned conjugate
gradient analysis methods. International Journal for Numerical Methods in Engineering
37(6):943–964.
Schittkowski, K. 1986. NLPQL: A FORTRAN Subroutine Solving Non-Linear Programming
Problems. Annals of Operations Research 5:485–500.
Schuëller, G.I. 1997. A state-of-the-art report on computational stochastic mechanics. Proba-
bilistic Engineering Mechanics 12(4):197–321.
Schuëller, G.I. 1998. Structural Reliability – Recent Advances. Proc. 7th ICOSSAR’97,
pp. 3–33.
Schuëller, G. 2001. Computational stochastic mechanics – recent advances. Computers &
Structures 79:2225–2234.
Schuëller, G.I. 2005. Special Issue on Computational Methods in Stochastic Mechan-
ics and Reliability Analysis. Comput. Methods Appl. Mech. Engrg. 194(12–16):
1251–1795.
References 625

Schuëller, G.I. & Stix, R. 1987. A critical appraisal of methods to determine failure probabilities.
Structural Safety 4:293–309.
Schwefel, H.P. 1981. Numerical optimization of computer models. Wiley & Sons,
Chichester, UK.
Sentz, K. & Ferson, S. 2002. Combination of Evidence in Dempster – Shafer Theory. Sandia
National Laboratories Report SAND2002-0835.
Seo, H.S. & Kwak, B.M. 2002. Efficient statistical tolerance analysis for general distribu-
tions using three-point information. International Journal of Production Research 40(4):
931–944.
Sexsmith, R.G. 1999. Probability-based safety analysis – value and drawbacks. Structural Safety
21:303–310.
Shannon, C.E. 1948. A Mathematical Theory of Communication. The Bell System Technical
Journal 27:379–423.
Shiao, M.C. 2006. Risk-Based Maintenance Optimization. Proceedings of the International
Conference on Structural Safety and Reliability.
Shiao, M.C. & Wu, Y.-T. 2004. An Efficient Simulation-Based Method for Probabilistic Dam-
age Tolerance Analysis With Maintenance Planning. Proceedings of the ASCE Specialty
Conference on Probabilistic Mechanics and Reliability.
Shinozuka, M. 1970. Maximum structural response to seismic excitations. J. Engrg. Mech. Div.
ASCE 96(EM5):729–738.
Shore, J.E. & Johnson, R.W. 1980. Axiomatic derivation of the principle of maximum entropy
and the principle of minimum cross-entropy. IEEE Transactions on Information Theory
26(1):26–37.
Skjong, R. 1985. Reliability-Based Optimization of Inspection Strategies. Proc. ICOSSAR’85
II:614–618.
Slotta, D., Tatting, B., Watson, L., Gurdal, Z. & Missoum, S. 2002. Convergence
analysis for cellular automata applied to truss design. Engineering Computations 19(8):
953–969.
Sobczyk, K. & Trebicki, J. 1990. Maximum entropy principle in stochastic dynamics.
Probabilistic Engineering Mechanics 5:102–110.
Sørensen, D.C. 1994. Minimization of a Large Scale Quadratic Function Subject to an Ellip-
soidal Constraint. Department of Computational and Applied Mathematics, Rice University.
Technical Report TR94-27.
Sørensen, J.D. & Tarp-Johansen, N.J. 2005. Reliability-based optimization and optimal reliabil-
ity level of offshore wind turbines. International Journal of Offshore and Polar Engineering
(IJOPE) 15(2):1–6.
Sørensen, J.D. & Tarp-Johansen, N.J. 2005. Optimal Structural Reliability of Offshore Wind
Turbines. CD-rom Proc. ICOSSAR’2005, Rome.
Sørensen, J.D. & Thoft-Christensen, P. 1988. Inspection Strategies for Concrete Bridges. Proc.
IFIP WG 7.5, Springer-Verlag 48:325–335.
Sørensen, J.D. & Thoft-Christensen, P. 1985. Structural optimization with reliability constraints.
Proc. 12th IFIP Conf. on System modeling and optimization. Springer-Verlag, pp. 876–885.
Sørensen, J.D. & Frangopol, D.M. (eds) 2006. Advances in Reliability and Optimization of
Structural Systems (ISBN 0-415-39901-7), Taylor & Francis Group plc, London, 308 pages.
Sørensen, J.D., Kroon, I.B. & Faber, M.H. 1994. Optimal reliability-based code calibration.
Structural Safety 15:197–208.
Sørensen, J.D., Thoft-Christensen, P., Siemaszko, A., Cardoso, J.M.B. & Santos, J.L.T. 1995.
Interactive reliability-based optimal design. Proc. 6th IFIP WG 7.5 Conf. On Reliability and
optimization of structural systems. Chapman & Hall, pp. 249–256.
Spall, J.C. 2003. Introduction to stochastic search and optimization. New York: Wiley-
Interscience.
Spletzer, J.R. & Taylor, J.C. 2003. A bounded uncertainty approach to multi-robot localiza-
tion. In IEEE International Conference on Intelligent Robots and Systems; Proc. Intern.
Conf., Anon (ed.), Las Vegas, October, 27–31. Institute of Electrical and Electronics
Engineers Inc.
Stadler, W. 1984. Multicriteria optimization in mechanics (A Survey). Appl. Mech. Rev. 37:
277–286.
Steihaug, T. 1983. The Conjugate Gradient Method and Trust Regions in Large Scale
Optimization. SIAM Journal on Numerical Analysis 20:626–637.
Streicher, H. & Rackwitz, R. 2002. Structural optimization – a one level approach. Proc.
Workshop on Reliability-based design and optimization – rbo02. IPPT, Warsaw.
Streicher, H. 2004. Zeitvariante zuverlässigkeitsorientierte Kosten-Nutzen-Optimierung
für Strukturen unter Berücksichtigung neuer Ansätze für Erneuerungs- und
Instandhaltungsmodelle. PhD dissertation, Technische Universität München, Munich,
Germany. In German.
Streicher, H. & Rackwitz, R. 2004. Time-variant reliability-oriented structural optimization and
a renewal model for life-cycle costing. Probabilistic Engineering Mechanics 19(1–2):171–183.
Streicher, H., Joanni, A. & Rackwitz, R. 2006. Cost-benefit optimization and target relia-
bility levels for existing, aging and maintained structures. Structural Safety. Accepted for
publication.
Sturm, J.F. 1999. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric
cones. Optimization Methods and Software 11/12:625–653.
Svanberg, K. 1987. The method of moving asymptotes – a new method for structural optimization.
Int. J. Numer. Meth. Engrg. 24:359–373.
Szidarovszky, F. & Bahill, A.T. 1992. Linear Systems Theory. Boca Raton: CRC Press.
Taflanidis, A.A. & Beck, J.L. 2007. Stochastic subset optimization for optimal reliability
problems. Journal of Probabilistic Engineering Mechanics (Article in Press).
Taflanidis, A.A. & Beck, J.L. 2007. Stochastic subset optimization for stochastic design.
ECCOMAS Thematic Conference on Computational Methods in Structural Dynamics and
Earthquake Engineering, Rethymno, Greece, 13–16 June.
Taguchi, G. 1978. Performance analysis design. International Journal of Production Research
16:521–530.
Takewaki, I. 2001. A new method for nonstationary random critical excitation. Earthquake
Engineering and Structural Dynamics 30(4):519–535.
Takewaki, I. 2001. Probabilistic critical excitation for MDOF elastic-plastic structures on
compliant ground. Earthquake Engineering and Structural Dynamics 30(9):1345–1360.
Takewaki, I. 2002. Critical excitation method for robust design: A review. Journal of Structural
Engineering ASCE 128(5):665–672.
Takewaki, I. 2002. Robust building stiffness design for variable critical excitations. Journal of
Structural Engineering ASCE 128(12):1565–1574.
Takewaki, I. 2004. Bound of Earthquake Input Energy. Journal of Structural Engineering ASCE
130(9):1289–1297.
Takewaki, I. 2006. Critical Excitation Methods in Earthquake Engineering. Elsevier Science
Publishers, Amsterdam.
Takewaki, I. & Ben-Haim, Y. 2005. Info-gap Robust Design with Load and Model Uncertainties.
Journal of Sound and Vibration 288(3):551–570.
Tamma, K.K., Zhou, X. & Sha, D. 2000. The time dimension: a theory towards the evolution,
classification, characterization and design of computational algorithms for transient/dynamic
applications. Archives of Computational Methods in Engineering 7:67–290.
Thanedar, P.B. & Kodiyalam, S. 1992. Structural Optimization Using Probabilistic Constraints.
Structural Optimization 4:236–240.
Thoft-Christensen, P. & Murotsu, Y. 1986. Application of Structural Systems Reliability Theory. Springer.
Thoft-Christensen, P. & Sørensen, J.D. 1987. Optimal Strategies for Inspection and Repair of
Structural Systems. Civil Engineering Systems 4:94–100.
Thomas, J., Dowell, E. & Hall, K. 2001. Three-dimensional transonic aeroelasticity using
proper orthogonal decomposition based reduced order models. In 42nd AIAA/ASME/ASCE/
AHS/ASC Structures, Structural Dynamics and Materials (SDM) Conference, April, Seattle,
WA, AIAA Paper 2001-1526.
Tonon, F., Bernardini, A. & Elishakoff, I. 2001. Hybrid analysis of uncertainty:
Probability, fuzziness and anti-optimization. Chaos, Solitons and Fractals 12(8):
1403–1414.
Torczon, V. & Trosset, M.W. 1998. Using approximations to accelerate engineering design opti-
mization. In Proceedings of the 7th AIAA/USAF/NASA/ISSMO Symp. on Multidisciplinary
Analysis and Optimization, AIAA Paper 98-4800, St. Louis, Missouri.
Torng, T.Y. & Yang, R.J. 1993. Robust structural system design using a system reliability-based
design optimization method. In Probabilistic Mechanics: Advances in structural reliability
methods, P.D. Spanos & Y.-T. Wu (eds), Springer-Verlag, NY:534–549.
Tovar, A. 2004. Bone Remodeling as a Hybrid Cellular Automaton Optimization Process. Ph.D.
Thesis, University of Notre Dame.
Tovar, A., Patel, N.M., Kaushik, A.K. & Renaud, J.E. 2007. Optimality conditions of the hybrid
cellular automata for structural optimization. AIAA Journal 45(3):673–683.
Tovar, A., Quevedo, W., Patel, N. & Renaud, J. 2006. Topology optimization with stress
and displacement constraints using the hybrid cellular automaton method. In Proceed-
ings of the 3rd European Conference on Computational Mechanics, June 5–8, Lisbon,
Portugal.
Trebicki, J. & Sobczyk, K. 1996. Maximum entropy principle and non-stationary distributions
of stochastic systems. Probabilistic Engineering Mechanics 11:169–178.
Tretiakov, G. 2002. Stochastic quasi-gradient algorithms for maximization of the probability
function. A new formula for the gradient of the probability function. In Stochastic
Optimization Techniques pp. 117–142. Springer, New York.
Tsompanakis, Y. & Papadrakakis, M. 2004. Large-scale reliability-based structural optimiza-
tion. J. Struct. Multidisc. Optim. 26:1–12.
Tsompanakis, Y., Lagaros, N.D. & Stavroulakis, G. 2007. Soft computing techniques in
parameter identification and probabilistic seismic analyses of structures. J. Advances in Eng.
Software (Article in Press).
Tu, J. & Jones, D.R. 2003. Variable Screening in Metamodel Design by Cross-Validated
Moving Least Squares Method. Proceedings 44th AIAA/ASME/ASCE/ AHS/ASC Structures,
Structural Dynamics and Materials Conference, AIAA-2003-1669, Norfolk, VA.
Tu, J., Choi, K.K. & Park, Y.H. 1999. A New Study on Reliability-Based Design Optimization.
ASME Journal of Mechanical Design 121:557–564.
Tu, J., Choi, K.K. & Park, Y.H. 2000. Design potential method for robust system parameter
design. AIAA Journal 39(4):667–677.
Uang, C.M. & Bertero, V.V. 1990. Evaluation of seismic energy in structures. Earthquake
Engineering and Structural Dynamics 19:77–90.
Uryasev, S. 1995. Derivatives of probability functions and some applications. Annals of
Operations Research 56:287–311.
Van Noortwijk, J.M. 2001. Cost-based criteria for obtaining optimal design decisions. In Pro-
ceedings of the 8th International Conference on Structural Safety and Reliability, Newport
Beach, CA, U.S.A., June.
Vandenberghe, L. & Boyd, S. 1996. Semidefinite programming. SIAM Review 38:49–95.
Volker, A.W.F., Dijkstra, F.H., Heerings, J.H.A.M. & Terpstra, S. 2004. Modeling of NDE
Reliability; Development of A POD-Generator. 16th WCNDT 2004 – World Conference
on NDT.
von Neumann, J. & Morgenstern, O. 1944. Theory of Games and Economic Behavior.
Princeton University Press.
Wall, M. 1995. GAlib: A C++ library of genetic algorithm components. Available at
http://lancet.mit.edu/ga.
Wang, C.K. 1986. Structural analysis on microcomputers. New York, NY: Macmillan.
Wang, G. 2003. Adaptive Response Surface Method Using Inherited Latin Hypercube Design
Points. ASME Journal of Mechanical Design 125:1–11.
Wang, L. & Grandhi R.V. 1994. Efficient safety index calculation for structural reliability
analysis. Comput. Struct. 52(1):103–111.
Wen, Y.K. 2000. Reliability and performance-based design. Proceedings of the 8th ASCE Spe-
cialty Conference on Probabilistic Mechanics and Structural Reliability, University of Notre
Dame, Indiana, USA, 24–26 July.
Westermo, B.D. 1985. The critical excitation and response of simple dynamic systems. Journal
of Sound and Vibration 100(2):233–242.
White, D.J. 1996. A heuristic approach to a weighted maxmin dispersion problem. IMA Journal
of Mathematics Applied in Business and Industry 7:219–231.
White, P., Barter, S. & Molent, L. 2002. Probabilistic Fracture Prediction Based On Aircraft
Specific Fatigue Test Data. 6th Joint FAA/DoD/NASA Aging Aircraft Conference.
Willcox, K. & Peraire, J. 2001. Balanced model reduction via the proper orthogonal decom-
position. In 15th AIAA Computational Fluid Dynamics Conference, June 11–14, Anaheim,
CA, AIAA 2001–2611.
Wittwer, J.W., Baker, M.S. & Howell, L.L. 2006. Robust design and model validation of
nonlinear compliant micromechanisms. J. Microelectromechanical Sys. 15(1). To appear.
Wojtkiewicz, S.F. Jr., Eldred, M.S., Field, R.V. Jr., Urbina, A. & Red-Horse, J.R. 2001. A toolkit
for uncertainty quantification in large computational engineering models. In Proceedings
of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials
Conference, Number AIAA-2001-1455, April 16–19, Seattle, WA.
Wolfram, S. 2002. A New Kind of Science. Wolfram Media.
Wolkowicz, H., Saigal, R. & Vandenberghe, L. (eds) 2000. Handbook of Semidefinite
Programming – Theory, Algorithms, and Applications. Dordrecht: Kluwer.
Wu, Y.-T. 1994. Computational methods for efficient structural reliability and reliability
sensitivity analysis. AIAA Journal 32(8):1717–1723.
Wu, Y.-T. & Wirsching, P.H. 1987. A new algorithm for structural reliability estimation. J. Eng.
Mech. ASCE 113:1319–1336.
Wu, Y.-T., Shin, Y., Sues, R. & Cesare, M. 2006. Probabilistic Function Evaluation System
(ProFES) for Reliability-Based Design. Structural Safety 28(1–2):164–195.
Wu, Y.-T., Millwater, H.R. & Cruse, T.A. 1990. An Advanced Probabilistic Structural Analysis
Method for Implicit Performance Functions. AIAA Journal 28(9):1663–1669.
Wu, Y.-T. & Mohanty, S. 2006. Variable Screening and Ranking Using Several Sampling Based
Sensitivity Measures. Journal of Reliability Engineering & System Safety 91(6):634–647.
Wu, Y.-T., Shiao, M., Shin, Y. & Stroud, W.J. 2005. Reliability-Based Damage Tol-
erance Methodology for Rotorcraft Structures. Transactions Journal of Materials and
Manufacturing.
Wu, Y.-T. & Shin, Y. 2004. Probabilistic Damage Tolerance Methodology For Reliability
Design And Inspection Optimization. Proceedings of the AIAA 45th SDM Conference, Paper
2004-01-0681.
Wu, Y.-T. & Shin, Y. 2005. Probabilistic Function Evaluation System for Maintenance
Optimization. Proceedings of the AIAA 46th SDM Conference.
Wu, Y.-T., Shin, Y., Sues, R. & Cesare, M. 2001. Safety factor based approach for probability-
based design optimization. In Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Struc-
tures, Structural Dynamics, and Materials Conference, Seattle, WA, USA, Paper no AIAA
2001-1522.
Wu, Y.-T., Enright, M.P. & Millwater, H.R. 2002. Probabilistic Methods for Design Assessment
of Reliability With Inspection. AIAA Journal 40(5):937–946.
Xiao, Q., Sues, R. & Rhodes, G. 1999. Computational strategies for reliability based
multidisciplinary optimization. Proceedings of the 13th ASCE EMD Conference.
Xie, Y.M. & Steven, G.P. 1997. Evolutionary Structural Optimization. Springer-Verlag, London.
Xiu, D., Lucor, D., Su, C.-H. & Karniadakis, G. 2002. Stochastic modelling of flow-structure
interactions using generalized polynomial chaos. Journal of Fluids Engineering 124:51–59.
Xu, S. & Grandhi, R.V. 1998. Effective two-point function approximation for design
optimization. AIAA J. 36(12):2269–2275.
Yager, R.R., Fedrizzi, M. & Kacprzyk, J. (eds) 1994. Advances in the Dempster – Shafer Theory
of Evidence. John Wiley & Sons, Inc.
Yang, R.J., Chuang, C., Gu, L. & Li, G. 2005. Experience with approximate reliability-
based optimization methods II: an exhaust system problem. Structural and Multidisciplinary
Optimization 29:488–497.
Yang, S.-I., Frangopol, D.M., Kawakami, Y. & Neves, L.C. 2006. The use of lifetime functions
in the optimization of interventions on existing bridges considering maintenance and failure
costs. Reliability Engineering & System Safety 91(6):698–705.
Yang, S.-I., Frangopol, D.M. & Neves, L.C. 2006. Optimum maintenance strategy for
deteriorating structures using lifetime functions. Engineering Structures 28(2):196–206.
Ye, K.Q., Li, W. & Sudjianto, A. 2000. Algorithmic Construction of Optimal Symmetric Latin
Hypercube Designs. Journal of Statistical Planning and Inference 90:145–159.
Yi, P., Cheng, G. & Jiang, L. 2006. A sequential approximate programming strategy for
performance-measure based probabilistic structural design optimization. Structural Safety.
Article in Press.
Youn, B.D., Kokkolaras, M., Mourelatos, Z.P., Papalambros, P.Y., Choi, K.K. & Gorsich, D.
2004. Uncertainty propagation techniques for probabilistic design of multilevel systems.
In Proceedings of the 10th AIAA/ISSMO Symposium on Multidisciplinary Analysis and
Optimization, Albany, New York, Paper No. AIAA-2004-4470.
Youn, B.D. & Choi, K.K. 2003. Hybrid Analysis Method for Reliability-Based Design
Optimization. ASME Journal of Mechanical Design 125(2):221–232.
Youn, B.D. & Choi, K.K. 2004. A new response surface methodology for reliability-based design
optimization. Computers and Structures 82:241–256.
Youn, B.D. & Choi, K.K. 2004. Selecting probabilistic approaches for reliability-based design
optimization. AIAA Journal 42(1):124–131.
Youn, B.D. & Choi, K.K. 2004. The performance moment integration method for reliability-
based robust design optimization. Proceedings of the ASME Design Engineering Technical
Conference, Salt Lake City, Utah, USA, September 28–October 2.
Youn, B.D., Choi, K.K. & Du, L. 2005. Enriched performance measure approach for
reliability-based design optimization. AIAA J. 43(4):874–884.
Yu, X., Chang, K. & Choi, K. 1998. Probabilistic structural durability prediction. AIAA Journal
36(4):628–637.
Yu, X., Choi, K. & Chang, K. 1997. A mixed design approach for probabilistic structural
durability. Journal of Structural Optimization 14(2–3):81–90.
Yu, X., Choi, K. & Chang, K. 1997. Reliability and durability based design sensitivity analysis
and optimization. Technical Report R97-01, Center for Computer Aided Design University
of Iowa.
Yuge, K. & Kikuchi, N. 1995. Optimization of a frame structure subjected to a plastic
deformation. Struct. Optim. 10:197–208.
Yuge, K., Iwai, N. & Kikuchi, N. 1999. Optimization of 2-d structures subjected to nonlinear
deformations using the homogenization method. Struct. Optim. 17(4):286–299.
Zadeh, L.A. 1965. Fuzzy Sets. Information and Control 8:338–353.
Zadeh, L.A. 1978. Fuzzy Sets as a Basis for a Theory of Possibility. Fuzzy Sets and Systems
1:3–28.
Zhang, W.-H., Beckers, P. & Fleury, C. 1995. A unified parametric design approach to
structural shape optimization. International Journal for Numerical Methods in Engineering
38:2283–2292.
Zhao, Y.G. & Ono, T. 2001. Moment methods for structural reliability. Structural Safety
23:47–75.
Zhou, J. & Mourelatos, Z.P. 2007. A Sequential Algorithm for Possibility-Based Design Opti-
mization. In press ASME Journal of Mechanical Design. Also, Proceedings of ASME Design
Engineering Technical Conferences, 2006, Paper# DETC2006-99232.
Zhou, M. & Rozvany, G.I.N. 1991. The COC algorithm, Part II: Topological, geometrical and
generalized shape optimization. Comp. Meth. Appl. Mech. Engrg. 89:309–336.
Zou, T., Mahadevan, S. & Sopory, A. 2004. A reliability-based design method using simu-
lation techniques and efficient optimization approach. ASME Design Engineering Technical
Conferences, Salt Lake City, Utah, DETC2004/DAC-57457.
Zou, T., Mahadevan, S. & Rebba, R. 2004. Computational efficiency in reliability-based opti-
mization. In Proceedings of the 9th ASCE Specialty Conference on Probabilistic Mechanics
and Structural Reliability, July 26–28, Albuquerque, NM.
Author index
Adams, B.M. 401
Agarwal, H. 281
Allen, M. 135
Aoues, Y. 217
Beck, J.L. 155
Ben-Haim, Y. 531
Bichon, B.J. 401
Chateauneuf, A. 3, 217
De Palma, P. 549
Doltsinis, I. 499
Eldred, M.S. 401
Fragiadakis, M. 567
Frangopol, D.M. 135
Ganzerli, S. 549
Huh, J.S. 57
Hurtado, J.E. 435
Joanni, A.E. 335
Kanno, Y. 471
Kharmanda, G. 189
Kokkolaras, M. 115
Kwak, B.M. 57
Lagaros, N.D. 567
Lee, S.H. 57
Liang, J. 87
Mahadevan, S. 401
Maute, K. 135
Mourelatos, Z.P. 87, 247
Nikolaidis, E. 87
Papadrakakis, M. 567
Papalambros, P.Y. 115
Patel, N.M. 281
Plevris, V. 567
Polak, E. 307
Rackwitz, R. 335
Renaud, J.E. 281
Royset, J.O. 307
Sørensen, J.D. 31
Taflanidis, A.A. 155
Takewaki, I. 471, 531
Tillotson, D. 281
Tovar, A. 281
Tsompanakis, Y. 567
Weickum, G. 135
Wu, Y.-T. 369
Zhou, J. 247
Subject index
Advanced Mean Value (AMV) 402, 406
Age-dependent Repairs 350
AMV+ 402, 406–407
AMV²+ 402, 406–407
Analytical Target Cascading 117, 121
Bi-level RBDO 413
Block repairs 353
Cascade Evolutionary Algorithm (CEA) 573, 590
Cellular automata 290
Chloride attack 362
Combined approximation 142
Common Random Numbers 168, 171, 180
Convergence 221, 223
Convex model 472, 549, 551, 554, 560
Cost function 5, 22
Cost-benefit optimization 34, 44, 343
Critical excitation 532, 538
DAKOTA 403
Decision theory 32
Decomposition-based Design 115
Design of Experiments 59
Desirability function 500, 509
Deterioration failure models 336–337
Deterministic Design Optimization 189, 199, 233
Earthquake input energy 534, 536, 546
Efficient global reliability analysis (EGRA) 411, 419
Elasticity 512
Entropy 444, 459
Evidence theory 248, 261
Evidence-based Design Optimization 261
Evolution Strategies (ES) 572
Failure-Region Sampling 378, 381
First-order reliability method (FORM) 146, 402, 408
Genetic Algorithms 554
Global reliability methods 411, 419
Hybrid Cellular Automaton method 282, 290–291
Importance Sampling 174, 180
Info-gap theory 471, 475, 532, 539, 546
Inspection and repair 349, 354
Inspection optimization 371, 386
Interval Analysis 127
Large displacements 514, 526
Latin Hypercube Sampling (LHS) 575
Life-cycle Cost 156, 175
Limit state 574
Maintenance Optimization 369
Maintenance strategies 348
Markov Chain Monte Carlo 381
Mean-Value First-Order Second-Moment (MVFOSM) 403
Microelectromechanical systems (MEMS) 422
Model Prediction 157, 159, 170
Moment estimation 448, 450
Monte Carlo Simulation (MCS) 307, 314, 574
Most Probable Failure Point 8, 15, 219, 222, 226
Multi-objective optimization 573
Nataf transformation 405
Neural networks (NN) 576
Nonlinear dynamics 516
Optimal inspection 36
Optimization 31, 36
Optimization methods 359
Optimization under uncertainty 116
Optimum Safety Factor 191, 199, 209
Pareto optima 509, 526, 573
Passively controlled structure 532, 542, 547
Pearson System 64
Performance measure approach (PMA) 402, 405
Performance-based Earthquake Engineering (PBEE) 569
Plasticity 517
Poissonian disturbances 347
Polynomial Chaos expansion 147
Possibility theory 248, 253
Possibility-based Design Optimization 249, 258
Probabilistic Damage Tolerance 370
Probabilistic Design 124
Probabilistic Ground Motion Model 176
Probabilistic transformation 221, 223, 226
Probability estimation 441
Probability of detection 371, 387
Quasiconvex optimization 473, 487
Reduced order model 137–138
Reliability 89–90
Reliability analysis 191–192
Reliability index 6, 18, 22
Reliability index approach (RIA) 402, 405
Reliability methods 403
Reliability versus robustness 509
Reliability-Based Design Optimization (RBDO) 34, 76, 149, 189, 199, 233, 283, 309, 412, 570
Reliability-Based Robust Design Optimization (RRDO) 571, 589
Reliability-Based Topology Optimization 282, 296
Renewal model 340, 344
Response Surface Method 68
Risk Sensitivity Analysis 386
Robust analysis 455, 459
Robust Design Optimization (RDO) 439, 507, 509, 571
Robustness function 471, 477, 485, 490, 533, 541
Safety factor 3, 10, 26, 226, 228
Sample average approximations 314
Sample-adjustment rules 308, 315–316
Second-order reliability method (SORM) 402, 408
Semidefinite programming 471, 473, 490
Sensitivity analysis 42, 139, 413
Sequential Optimization and Reliability Assessment (SORA) 92
Sequential PBDO 265
Sequential RBDO 415
Series systems 339, 357
Simultaneous-perturbation Stochastic Approximation 171, 180
Single-Loop RBDO approach 95
Statistical Moments 58–59, 76
Stochastic constraints 504, 509
Stochastic Design 157, 171
Stochastic Optimization 499
Stochastic Simulation 157, 173
Stochastic Subset Optimization 159, 178
Structural Dynamics 138
System Reliability 9, 21, 156, 159, 230–231
System Reliability-based Design Optimization 89
Target reliability 221, 229
Taylor series approximation 499, 503
Topology Optimization 286
Two-point adaptive nonlinearity approximation (TANA) 402, 406–407
Two-Stage Importance Sampling (TIS) 377
Uncertainty analysis 144
Uncertainty Propagation 122
Unified RDO-RRDO 456
Updating 342
Wind turbines 48