
Lecture Notes in Mechanical Engineering

Benoit Eynard
Vincenzo Nigrelli
Salvatore Massimo Oliveri
Guillermo Peris-Fajarnes
Sergio Rizzuti Editors

Advances on
Mechanics, Design
Engineering and
Manufacturing
Proceedings of the International Joint Conference
on Mechanics, Design Engineering & Advanced
Manufacturing (JCM 2016), 14–16 September,
2016, Catania, Italy
Lecture Notes in Mechanical Engineering
About this Series

Lecture Notes in Mechanical Engineering (LNME) publishes the latest
developments in Mechanical Engineering—quickly, informally and with high
quality. Original research reported in proceedings and post-proceedings represents
the core of LNME. Also considered for publication are monographs, contributed
volumes and lecture notes of exceptionally high quality and interest. Volumes
published in LNME embrace all aspects, subfields and new challenges of
mechanical engineering. Topics in the series include:

• Engineering Design
• Machinery and Machine Elements
• Mechanical Structures and Stress Analysis
• Automotive Engineering
• Engine Technology
• Aerospace Technology and Astronautics
• Nanotechnology and Microengineering
• Control, Robotics, Mechatronics
• MEMS
• Theoretical and Applied Mechanics
• Dynamical Systems, Control
• Fluid Mechanics
• Engineering Thermodynamics, Heat and Mass Transfer
• Manufacturing
• Precision Engineering, Instrumentation, Measurement
• Materials Engineering
• Tribology and Surface Technology

More information about this series at http://www.springer.com/series/11236


Benoit Eynard
Vincenzo Nigrelli
Salvatore Massimo Oliveri
Guillermo Peris-Fajarnes
Sergio Rizzuti
Editors

Advances on Mechanics,
Design Engineering
and Manufacturing
Proceedings of the International Joint
Conference on Mechanics, Design
Engineering & Advanced Manufacturing
(JCM 2016), 14–16 September, 2016,
Catania, Italy
Organizing Scientific Associations:

AIP-PRIMECA—Ateliers Inter-établissements de
Productique—Pôles de Ressources Informatiques
pour la MECAnique—France
ADM—Associazione nazionale Disegno e Metodi
dell’ingegneria industriale—Italy
INGEGRAF—Asociación Española de Ingeniería
Gráfica—Spain

Editors

Benoit Eynard
Université de Technologie de Compiègne
Compiègne, France

Vincenzo Nigrelli
Dipartimento di Ingegneria Chimica, Gestionale, Informatica, Meccanica
Università degli Studi di Palermo
Palermo, Italy

Salvatore Massimo Oliveri
Dipartimento di Ingegneria Elettrica, Elettronica e Informatica (DIEEI)
Università degli Studi di Catania
Catania, Italy

Guillermo Peris-Fajarnes
Universidad Politecnica de Valencia
Valencia, Spain

Sergio Rizzuti
Dipartimento di Ingegneria Meccanica, Energetica e Gestionale
Università della Calabria
Rende, Cosenza, Italy
ISSN 2195-4356 ISSN 2195-4364 (electronic)
Lecture Notes in Mechanical Engineering
ISBN 978-3-319-45780-2 ISBN 978-3-319-45781-9 (eBook)
DOI 10.1007/978-3-319-45781-9
Library of Congress Control Number: 2016950391

© Springer International Publishing AG 2017


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained
herein or for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface and Acknowledgements

The JCM Conference has reached its second edition, following JCM 2014 held in
Toulouse, France. The cycle of conferences started in 2003 with biennial editions
organized by ADM (Design and Methods of Industrial Engineering Society—Italy)
and INGEGRAF (Asociación Española de Ingeniería Gráfica—Spain). At the joint
conference held in Venice in June 2011 (IMProVe 2011), AIP-PRIMECA
(Ateliers Inter-établissements de Productique—Pôles de Ressources Informatiques
pour la MECAnique—France) also took part in the event as an organizer.
JCM 2016 was organized by the Rapid Prototyping and Geometric Modelling
Laboratory of the University of Catania (IT)—Department of Electronic, Electric
and Informatics Engineering (DIEEI).
JCM 2016 gathered researchers and industry experts in the domain of
“Interactive and Integrated Design and Manufacturing for Innovation” to dissemi-
nate their major and recent results, studies, implementations, tools and techniques at
an international level.
Overall, 404 authors were involved. JCM 2016 attracted 138 abstract
submissions, which resulted in 135 papers. Following a peer-review process,
123 papers were selected and accepted for presentation at the conference, in
podium or poster sessions. This valuable reviewing work was made possible by the
111 people involved in the process, coordinated by 24 track chairs, who provided
no fewer than two reviews per paper, for a total of 381 reviews.
The book is organized in several parts, each corresponding to a track of the
Conference. Each part is briefly introduced by the track chairs who oversaw its
review process.
We would like to personally thank all the people involved in the review process
for the strong commitment and expertise they demonstrated in this difficult,
time-consuming and very important task. We would also like to thank the members
of the Organizing Committee who made the conference possible, and in particular
Dr. Gaetano Sequenzia for his work in all phases of Conference organization and
management, his support to the Program Chair during the review process, and his
handling of communication with authors, invited speakers and sponsors.

Catania, Italy Salvatore Massimo Oliveri


Rende, Italy Sergio Rizzuti
Organization Committee

Conference Chair
Salvatore Massimo Oliveri, Univ. Catania

Conference Program Chair
Sergio Rizzuti, Univ. della Calabria

Conference Advisory Chairmen
Benoit Eynard, UT Compiègne
Xavier Fischer, ESTIA
Vincenzo Nigrelli, Univ. Palermo
Guillermo Peris-Fajarnes, Univ. Polit. Valencia

Scientific Committee
Angelo Oreste Andrisano, Univ. Modena e Reggio Emilia
Fabrizio Micari, Univ. Palermo
Fernando J. Aguilar, Univ. Almería
Pedro Álvarez, Univ. Oviedo
Agustín Arias, Univ. País Vasco
Sandro Barone, Univ. Pisa
Antonio Bello, Univ. Oviedo
Alain Bernard, Ecole Centrale Nantes
Jean-François Boujut, Grenoble INP
Daniel Brissaud, Grenoble INP
Fernando Brusola, Univ. Polit. Valencia

Enrique Burgos, Univ. País Vasco
Gianni Caligiana, Univ. Bologna
Monica Carfagni, Univ. Firenze
Antonio Carretero, Univ. Polit. Madrid
Pierre Castagna, Univ. Nantes
Patrick Charpentier, Univ. Lorraine
Vincent Cheutet, INSA Lyon
Gianmaria Concheri, Univ. Padova
Paolo Conti, Univ. Perugia
David Corbella, Univ. Polit. Madrid
Daniel Coutellier, ENSIAME
Alain Daidié, INSA Toulouse
Jean-Yves Dantan, ENSAM Metz
Beatriz Defez, Univ. Polit. Valencia
Paolo Di Stefano, Univ. L’Aquila
Emmanuel Duc, SIGMA Clermont
Alex Duffy, Univ. Strathclyde
Francisco Xavier Espinach, Univ. Girona
Georges Fadel, Clemson Univ.
Mercedes Farjas, Univ. Polit. Madrid
Jesùs Fèlez, Univ. Polit. Madrid
Gaspar Fernández, Univ. León
Livan Fratini, Univ. Palermo
Benoît Furet, Univ. Nantes
Mikel Garmendia, Univ. País Vasco
Philippe Girard, CNRS-IMS
Samuel Gomes, UTBM
Bernard Grabot, ENI Tarbes
Peter Hehenberger, Johannes Kepler University Linz
Francisco Hernández, Univ. Polit. Cataluña
Isidro Ladrón-de-Guevara, Univ. Málaga
Antonio Lanzotti, Univ. Napoli “Federico II”
Jesús López, Univ. Pública Navarra
Ferruccio Mandorli, Polit. Delle Marche
Mª Luisa Martínez-Muneta, Univ. Polit. Madrid
Christian Mascle, Polytechnique Montréal
Chris McMahon, Univ. of Bristol
Rochdi Merzouki, Univ. Lille
Rikardo Mínguez, Univ. País Vasco
Giuseppe Monno, Polit. Bari
Paz Morer, Univ. Navarra
Javier Muniozguren, Univ. País Vasco
Frédéric Noël, Grenoble INP
César Otero, Univ. Cantabria
Manuel Paredes, INSA Toulouse
Basilio Ramos, Univ. Burgos
Didier Rémond, INSA Lyon
Caterina Rizzi, Univ. Bergamo
Louis Rivest, ETS Montréal
José Ignacio Rojas-Sola, Univ. Jaén
Lionel Roucoules, ENSAM Aix
Carlos San-Antonio, Univ. Polit. Madrid
José Miguel Sánchez, Univ. Cantabria
Jacinto Santamaría-Peña, Univ. La Rioja
Félix Sanz-Adan, Univ. La Rioja
Irene Sentana, Univ. Alicante
Sébastien Thibaud, Univ. Franche Comté
Stefano Tornincasa, Polit. Torino
Christophe Tournier, ENS Cachan
Pedro Ubieto, Univ. Zaragoza
Mercedes Valiente Lopez, Univ. Polit. de Madrid
Jozsef Vancza, MTA SZTAKI
Bernard Yannou, CentraleSupélec
Eduardo Zurita, Univ. Santiago de Compostela

Additional Reviewers
Niccolò Becattini, Polit. Milano
Giovanni Berselli, Univ. Genova
Francesco Bianconi, Univ. Perugia
Elvio Bonisoli, Polit. Torino
Yuri Borgianni, Univ. Bolzano
Fabio Bruno, Univ. Calabria
Francesca Campana, Univ. Roma “La Sapienza”
Nicola Cappetti, Univ. Salerno
Alessandro Ceruti, Univ. Bologna
Giorgio Colombo, Polit. Milano
Filippo Cucinotta, Univ. Messina
Francesca De Crescenzio, Univ. Bologna
Luigi De Napoli, Univ. Calabria
Luca Di Angelo, Univ. L'Aquila
Francesco Ferrise, Polit. Milano
Stefano Filippi, Univ. Udine
Michele Fiorentino, Polit. Bari
Daniela Francia, Univ. Bologna
Rocco Furferi, Univ. Firenze
Salvatore Gerbino, Univ. Molise
Michele Germani, Polit. Marche
Lapo Governi, Univ. Firenze
Serena Graziosi, Polit. Milano
Tommaso Ingrassia, Univ. Palermo
Francesco Leali, Univ. Modena and Reggio Emilia
Antonio Mancuso, Univ. Palermo
Massimo Martorelli, Univ. Napoli “Federico II”
Maura Mengoni, Polit. Marche
Barbara Motyl, Univ. Udine
Maurizio Muzzupappa, Univ. Calabria
Alessandro Paoli, Univ. Pisa
Stanislao Patalano, Univ. Napoli “Federico II”
Marcello Pellicciari, Univ. Modena and Reggio Emilia
Margherita Peruzzini, Univ. Modena and Reggio Emilia
Roberto Raffaeli, Univ. eCampus
Armando Razionale, Univ. Pisa
Roberto Razzoli, Univ. Genova
Fabrizio Renno, Univ. Napoli “Federico II”
Francesco Rosa, Polit. Milano
Federico Rotini, Univ. Firenze
Davide Russo, Univ. Bergamo
Gianpaolo Savio, Univ. Padova
Domenico Speranza, Univ. Cassino
Davide Tumino, Univ. Enna Kore
Antonio E. Uva, Polit. Bari
Alberto Vergnano, Univ. Modena and Reggio Emilia
Enrico Vezzetti, Polit. Torino
Maria Grazia Violante, Polit. Torino

Organizing Committee
Salvatore Massimo Oliveri, Univ. Catania
Gaetano Sequenzia, Univ. Catania
Gabriele Fatuzzo, Univ. Catania
University of Catania (IT)—Department of Electronic, Electric and Informatics
Engineering—Rapid Prototyping and Geometric Modelling Laboratory
Viale Andrea Doria, 6, Building 3
95125, Catania, Italy
Contents

Part I Integrated Product and Process Design

Section 1.1 Innovative Design Methods


A Systematic Methodology for Engineered Object Design:
The P-To-V Model of Functional Innovation . . . . . . . . . . . . . . . . . . . . . . 5
Geoffrey S. Matthews
Influence of the evolutionary optimization parameters
on the optimal topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Tommaso Ingrassia, Antonio Mancuso and Giorgio Paladino
Design of structural parts for a racing solar car . . . . . . . . . . . . . . . . . . . 25
Esteban Betancur, Ricardo Mejía-Gutiérrez, Gilberto Osorio-Gómez
and Alejandro Arbelaez

Section 1.2 Integrated Product and Process Design


Some Hints for the Correct Use of the Taguchi Method
in Product Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Sergio Rizzuti and Luigi De Napoli
Neuro-separated meta-model of the scavenging process
in 2-Stroke Diesel engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Stéphanie Cagin and Xavier Fischer
Subassembly identification method based on CAD Data . . . . . . . . . . . . . 55
Imen Belhadj, Moez Trigui and Abdelmajid Benamara
Multi-objective conceptual design: an approach to make
cost-efficient the design for manufacturing and assembly
in the development of complex products . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Claudio Favi, Michele Germani and Marco Mandolini

Modeling of a three-axes MEMS gyroscope with feedforward
PI quadrature compensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
D. Marano, A. Cammarata, G. Fichera, R. Sinatra and D. Prati
A disassembly Sequence Planning Approach for maintenance . . . . . . . . 81
Maroua Kheder, Moez Trigui and Nizar Aifaoui
A comparative Life Cycle Assessment of utility poles manufactured
with different materials and dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Sandro Barone, Filippo Cucinotta and Felice Sfravara
Prevision of Complex System’s Compliance during
System Lifecycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
J-P. Gitto, M. Bosch-Mauchand, A. Ponchet Durupt, Z. Cherfi
and I. Guivarch
Framework definition for the design of a mobile
manufacturing system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Youssef Benama, Thecle Alix and Nicolas Perry
An automated manufacturing analysis of plastic parts
using faceted surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Jorge Manuel Mercado-Colmenero, José Angel Moya Muriana,
Miguel Angel Rubio-Paramio and Cristina Martín-Doñate
Applying sustainability in product development . . . . . . . . . . . . . . . . . . . . 129
Rosana Sanz, José Luis Santolaya and Enrique Lacasa
Towards a new collaborative framework supporting the design
process of industrial Product Service Systems . . . . . . . . . . . . . . . . . . . . . 139
Elaheh Maleki, Farouk Belkadi, Yicha Zhang and Alain Bernard
Information model for tracelinks building in early design stages . . . . . . 147
David Ríos-Zapata, Jérôme Pailhés and Ricardo Mejía-Gutiérrez

Section 1.3 Interactive Design


User-centered design of a Virtual Museum system: a case study . . . . . . 157
Loris Barbieri, Fabio Bruno, Fabrizio Mollo and Maurizio Muzzupappa
An integrated approach to customize the packaging of heritage
artefacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
G. Fatuzzo, G. Sequenzia, S.M. Oliveri, R. Barbagallo and M. Calì

Part II Product Manufacturing and Additive Manufacturing

Section 2.1 Additive Manufacturing


Extraction of features for combined additive manufacturing
and machining processes in a remanufacturing context . . . . . . . . . . . . . . 181
Van Thao Le, Henri Paris and Guillaume Mandil
Comparative Study for the Metrological Characterization
of Additive Manufacturing artefacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Charyar Mehdi-Souzani, Antonio Piratelli-Filho and Nabil Anwer
Flatness, circularity and cylindricity errors in 3D printed models
associated to size and position on the working plane . . . . . . . . . . . . . . . . 201
Massimo Martorelli, Salvatore Gerbino, Antonio Lanzotti,
Stanislao Patalano and Ferdinando Vitolo
Optimization of lattice structures for Additive Manufacturing
Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Gianpaolo Savio, Roberto Meneghello and Gianmaria Concheri
Standardisation Focus on Process Planning and Operations
Management for Additive Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . 223
Jinhua Xiao, Nabil Anwer, Alexandre Durupt, Julien Le Duigou
and Benoît Eynard
Comparison of some approaches to define a CAD model from
topological optimization in design for additive manufacturing . . . . . . . . 233
Pierre-Thomas Doutre, Elodie Morretton, Thanh Hoang Vo,
Philippe Marin, Franck Pourroy, Guy Prudhomme
and Frederic Vignat
Review of Shape Deviation Modeling for Additive Manufacturing . . . . . 241
Zuowei Zhu, Safa Keimasi, Nabil Anwer, Luc Mathieu
and Lihong Qiao
Design for Additive Manufacturing of a non-assembly robotic
mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
F. De Crescenzio and F. Lucchi
Process parameters influence in additive manufacturing . . . . . . . . . . . . . 261
T. Ingrassia, Vincenzo Nigrelli, V. Ricotta and C. Tartamella
Multi-scale surface characterization in additive manufacturing
using CT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Yann Quinsat, Claire Lartigue, Christopher A. Brown
and Lamine Hattali
Testing three techniques to elicit additive manufacturing
knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Christelle Grandvallet, Franck Pourroy, Guy Prudhomme
and Frédéric Vignat
Topological Optimization in Concept Design: starting approach
and a validation case study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Michele Bici, Giovanni B. Broggiato and Francesca Campana

Section 2.2 Advanced Manufacturing


Simulation of Laser-Sensor Digitizing for On-Machine
Part Inspection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Nguyen Duy Minh Phan, Yann Quinsat and Claire Lartigue
Tool/Material Interferences Sensibility to Process
and Tool Parameters in Vibration-Assisted Drilling . . . . . . . . . . . . . . . . . 313
Vivien Bonnot, Yann Landon and Stéphane Segonds
Implementation of a new method for robotic repair operations
on composite structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Elodie Paquet, Sébastien Garnier, Mathieu Ritou, Benoît Furet
and Vincent Desfontaines
CAD-CAM integration for 3D Hybrid Manufacturing . . . . . . . . . . . . . . . 329
Gianni Caligiana, Daniela Francia and Alfredo Liverani

Section 2.3 Experimental Methods in Product Development


Mechanical steering gear internal friction: effects on the drive feel
and development of an analytic experimental model
for its prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Giovanni Gritti, Franco Peverada, Stefano Orlandi, Marco Gadola,
Stefano Uberti, Daniel Chindamo, Matteo Romano
and Andrea Olivi
Design of an electric tool for underwater archaeological restoration
based on a user centred approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Loris Barbieri, Fabio Bruno, Luigi De Napoli, Alessandro Gallo
and Maurizio Muzzupappa
Analysis and comparison of Smart City initiatives . . . . . . . . . . . . . . . . . . 363
Aranzazu Fernández-Vázquez and Ignacio López-Forniés
Involving Autism Spectrum Disorder (ASD) affected people
in design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
Stefano Filippi and Daniela Barattin

Part III Engineering Methods in Medicine


Patient-specific 3D modelling of heart and cardiac structures
workflow: an overview of methodologies . . . . . . . . . . . . . . . . . . . . . . . . . . 387
Monica Carfagni and Francesca Uccheddu
A new method to capture the jaw movement . . . . . . . . . . . . . . . . . . . . . . 397
Lander Barrenetxea, Eneko Solaberrieta,
Mikel Iturrate and Jokin Gorozika
Computer Aided Engineering of Auxiliary Elements
for Enhanced Orthodontic Appliances . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Roberto Savignano, Sandro Barone, Alessandro Paoli
and Armando Viviano Razionale
Finite Element Analysis of TMJ Disks Stress Level
due to Orthodontic Eruption Guidance Appliances . . . . . . . . . . . . . . . . . 415
Paolo Neri, Sandro Barone, Alessandro Paoli
and Armando Razionale
TPMS for interactive modelling of trabecular scaffolds
for Bone Tissue Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
M. Fantini, M. Curto and F. De Crescenzio
Mechanical and Geometrical Properties Assessment
of Thermoplastic Materials for Biomedical Application . . . . . . . . . . . . . . 437
Sandro Barone, Alessandro Paoli, Paolo Neri, Armando Viviano Razionale
and Michele Giannese
The design of a knee prosthesis by Finite Element Analysis . . . . . . . . . . 447
Saúl Íñiguez-Macedo, Fátima Somovilla-Gómez, Rubén Lostado-Lorza,
Marina Corral-Bobadilla, María Ángeles Martínez-Calvo
and Félix Sanz-Adán
Design and Rapid Manufacturing of a customized foot orthosis:
a first methodological study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
M. Fantini, F. De Crescenzio, L. Brognara and N. Baldini
Influence of the metaphysis positioning in a new reverse shoulder
prosthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
T. Ingrassia, L. Nalbone, Vincenzo Nigrelli, D. Pisciotta
and V. Ricotta
Digital human models for gait analysis: experimental validation
of static force analysis tools under dynamic conditions . . . . . . . . . . . . . . 479
T. Caporaso, G. Di Gironimo, A. Tarallo, G. De Martino,
M. Di Ludovico and A. Lanzotti
Using the Finite Element Method to Determine the Influence
of Age, Height and Weight on the Vertebrae
and Ligaments of the Human Spine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
Fátima Somovilla-Gómez, Rubén Lostado-Lorza, Saúl Íñiguez-Macedo,
Marina Corral-Bobadilla, María Ángeles Martínez-Calvo
and Daniel Tobalina-Baldeon

Part IV Nautical, Aeronautics and Aerospace Design and Modelling
Numerical modelling of the cold expansion process in mechanical
stacked assemblies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
Victor Achard, Alain Daidie, Manuel Paredes and Clément Chirol
A preliminary method for the numerical prediction of the behavior
of air bubbles in the design of Air Cavity Ships . . . . . . . . . . . . . . . . . . . . 509
Filippo Cucinotta, Vincenzo Nigrelli and Felice Sfravara
Stiffness and slip laws for threaded fasteners subjected
to a transversal load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
Rémi Thanwerdas, Emmanuel Rodriguez and Alain Daidie
Refitting of an eco-friendly sailing yacht: numerical prediction
and experimental validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
A. Mancuso, G. Pitarresi, G.B. Trinca and D. Tumino
Geometric Parameterization Strategies for shape Optimization
Using RBF Mesh Morphing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
Ubaldo Cella, Corrado Groth and Marco Evangelos Biancolini
Sail Plan Parametric CAD Model for an A-Class Catamaran
Numerical Optimization Procedure Using Open Source Tools . . . . . . . . 547
Ubaldo Cella, Filippo Cucinotta and Felice Sfravara
A reverse engineering approach to measure the deformations
of a sailing yacht . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
Francesco Di Paola, Tommaso Ingrassia, Mauro Lo Brutto
and Antonio Mancuso
A novel design of cubic stiffness for a Nonlinear Energy Sink (NES)
based on conical spring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
Donghai Qiu, Sébastien Seguy and Manuel Paredes
Design of the stabilization control system of a high-speed craft . . . . . . . 575
Antonio Giallanza, Luigi Cannizzaro, Mario Porretto
and Giuseppe Marannano
Dynamic spinnaker performance through digital photogrammetry,
numerical analysis and experimental tests. . . . . . . . . . . . . . . . . . . . . . . . . 585
Michele Calì, Domenico Speranza and Massimo Martorelli
GA multi-objective and experimental optimization for a tail-sitter
small UAV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
Luca Piancastelli, Leonardo Frizziero and Marco Cremonini

Part V Computer Aided Design and Virtual Simulation

Section 5.1 Simulation and Virtual Approaches


An integrated approach to design an innovative motorcycle rear
suspension with eccentric mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
R. Barbagallo, G. Sequenzia, A. Cammarata and S.M. Oliveri
Design of Active Noise Control Systems for Pulse Noise . . . . . . . . . . . . . 621
Alessandro Lapini, Massimiliano Biagini, Francesco Borchi,
Monica Carfagni and Fabrizio Argenti
Disassembly Process Simulation in Virtual Reality Environment . . . . . . 631
Peter Mitrouchev, Cheng-gang Wang and Jing-tao Chen
Development of a methodology for performance analysis and synthesis
of control strategies of multi-robot pick & place applications . . . . . . . . . 639
Gaël Humbert, Minh Tu Pham, Xavier Brun, Mady Guillemot
and Didier Noterman
3D modelling of the mechanical actions of cutting: application
to milling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
Wadii Yousfi, Olivier Cahuc, Raynald Laheurte, Philippe Darnis
and Madalina Calamaz
Engineering methods and tools enabling reconfigurable
and adaptive robotic deburring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
Giovanni Berselli, Michele Gadaleta, Andrea Genovesi,
Marcello Pellicciari, Margherita Peruzzini
and Roberto Razzoli
Tolerances and uncertainties effects on interference fit
of automotive steel wheels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
Stefano Tornincasa, Elvio Bonisoli and Marco Brino
An effective model for the sliding contact forces in a multibody
environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
Michele Calì, Salvatore Massimo Oliveri, Gaetano Sequenzia
and Gabriele Fatuzzo
Systems engineering and hydroacoustic modelling applied
in simulation of hydraulic components . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
Arnaud Maillard, Eric Noppe, Benoît Eynard
and Xavier Carniel
Linde’s ice-making machine. An example of industrial archeology
study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
Belén Pérez Delgado, José R. Andrés Díaz, María L. García Ceballos
and Miguel A. Contreras López
Solder Joint Reliability: Thermo-mechanical analysis on Power Flat
Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
Alessandro Sitta, Michele Calabretta, Marco Renna
and Daniela Cavallaro

Section 5.2 Virtual and Augmented Reality


Virtual reality to assess visual impact in wind energy projects . . . . . . . . 719
Piedad Eliana Lizcano, Cristina Manchado, Valentin Gomez-Jauregui
and César Otero
Visual Aided Assembly of Scale Models with AR . . . . . . . . . . . . . . . . . . . 727
Alessandro Ceruti, Leonardo Frizziero and Alfredo Liverani

Section 5.3 Geometric Modelling and Analysis


Design and analysis of a spiral bevel gear. . . . . . . . . . . . . . . . . . . . . . . . . 739
Charly Lagresle, Jean-Pierre de Vaujany and Michèle Guingand
Three-dimensional face analysis via new geometrical descriptors . . . . . . 747
Federica Marcolin, Maria Grazia Violante, Sandro Moos, Enrico Vezzetti,
Stefano Tornincasa, Nicole Dagnes and Domenico Speranza
Agustin de Betancourt’s plunger lock: Approach to its geometric
modeling with Autodesk Inventor Professional . . . . . . . . . . . . . . . . . . . . . 757
José Ignacio Rojas-Sola and Eduardo De La Morena-De La Fuente
Designing a Stirling engine prototype . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
Fernando Fadon, Enrique Ceron, Delfin Silio and Laida Fadon
Design and analysis of tissue engineering scaffolds based on open
porous non-stochastic cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
R. Ambu and A.E. Morabito
Geometric Shape Optimization of Organic Solar Cells for Efficiency
Enhancement by Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
Grazia Lo Sciuto, Giacomo Capizzi, Salvatore Coco
and Raphael Shikler

Section 5.4 Reverse Engineering


A survey of methods to detect and represent the human symmetry
line from 3D scanned human back . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
Nicola Cappetti and Alessandro Naddeo
Semiautomatic Surface Reconstruction in Forging Dies . . . . . . . . . . . . . . 811
Rikardo Minguez, Olatz Etxaniz, Agustin Arias, Nestor Goikoetxea
and Inaki Zuazo
A RGB-D based instant body-scanning solution for compact box
installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
Rocco Furferi, Lapo Governi, Francesca Uccheddu and Yary Volpe
Machine Learning Techniques to address classification issues
in Reverse Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
Jonathan Dekhtiar, Alexandre Durupt, Dimitris Kiritsis,
Matthieu Bricogne, Harvey Rowson and Benoit Eynard
Recent strategies for 3D reconstruction using Reverse Engineering:
a bird’s eye view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
Francesco Buonamici, Monica Carfagni and Yary Volpe

Section 5.5 Product Data Exchange and Management


Data aggregation architecture “Smart-Hub” for heterogeneous
systems in industrial environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
Ahmed Ahmed, Lionel Roucoules, Rémy Gaudy and Bertrand Larat
Preparation of CAD model for collaborative design meetings:
proposition of a CAD add-on . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
Ahmad Al Khatib, Damien Fleche, Morad Mahdjoub,
Jean-Bernard Bluntzer and Jean-Claude Sagot
Applying PLM approach for supporting collaborations in medical
sector: case of prosthesis implantation. . . . . . . . . . . . . . . . . . . . . . . . . . . . 871
Thanh-Nghi Ngo, Farouk Belkadi and Alain Bernard

Section 5.6 Surveying, Mapping and GIS Techniques


3D Coastal Monitoring from very dense UAV-Based Photogrammetric
Point Clouds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
Fernando J. Aguilar, Ismael Fernández, Juan A. Casanova,
Francisco J. Ramos, Manuel A. Aguilar, José L. Blanco
and José C. Moreno

Section 5.7 Building Information Modelling


BiMov: BIM-Based Indoor Path Planning . . . . . . . . . . . . . . . . . . . . . . . . 891
Ahmed Hamieh, Dominique Deneux and Christian Tahon

Part VI Education and Representation Techniques

Section 6.1 Teaching Engineering Drawing


Best practices in teaching technical drawing: experiences
of collaboration in three Italian Universities . . . . . . . . . . . . . . . . . . . . . . . 905
Domenico Speranza, Gabriele Baronio, Barbara Motyl, Stefano Filippi
and Valerio Villa
Gamification in a Graphical Engineering course - Learning
by playing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 915
Valentín Gómez-Jáuregui, Cristina Manchado and César Otero
Reliable low-cost alternative for modeling and rendering 3D Objects
in Engineering Graphics Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 923
J. Santamaría-Peña, M. A. Benito-Martín, F. Sanz-Adán, D. Arancón
and M. A. Martinez-Calvo

Section 6.2 Teaching Product Design and Drawing History


How to teach interdisciplinary: case study for Product Design
in Assistive Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
G. Thomann, Fabio Morais and Christine Werba
Learning engineering drawing and design through the study
of machinery and tools from Malaga’s industrial heritage . . . . . . . . . . . 941
M. Carmen Ladrón de Guevara Muñoz, Francisco Montes Tubio,
E. Beatriz Blázquez Parra and Francisca Castillo Rueda
Developing students’ skills through real projects
and service learning methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
Anna Biedermann, Natalia Muñoz López and Ana Serrano Tierz
Integration of marketing activities in the mechanical design process . . . 961
Cristina Martin-Doñate, Fermín Lucena-Muñoz
and Javier Gallego-Alvarez

Section 6.3 Representation Techniques


Geometric locus associated with trihedra axonometric projections.
Intrinsic curve associated with the ellipse generated . . . . . . . . . . . . . . . . 973
Pedro Gonzaga, Faustino Gimena, Lázaro Gimena and Mikel Goñi

Pohlke Theorem: Demonstration and Graphical Solution . . . . . . . . . . . . 981
Faustino Gimena, Lázaro Gimena, Mikel Goñi and Pedro Gonzaga

Part VII Geometric Product Specification and Tolerancing

Section 7.1 Geometric Product Specification and Tolerancing


ISO Tolerancing of hyperstatic mechanical systems with deformation
control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
Oussama Rouetbi, Laurent Pierre, Bernard Anselmetti
and Henri Denoix
How to trace the significant information in tolerance analysis
with polytopes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1003
Vincent Delos, Denis Teissandier and Santiago Arroyave-Tobón
Integrated design method for optimal tolerance stack evaluation
for top class automotive chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
Davide Panari, Cristina Renzi, Alberto Vergnano, Enrico Bonazzi
and Francesco Leali
Development of virtual metrology laboratory based on skin model
shape simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
Xingyu Yan, Alex Ballu, Antoine Blanchard, Serge Mouton
and Halidou Niandou
Product model for Dimensioning, Tolerancing and Inspection . . . . . . . . 1033
L. Di Angelo, P. Di Stefano and A.E. Morabito

Section 7.2 Geometric and Functional Characterization of Products


Segmentation of secondary features from high-density acquired
surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
L. Di Angelo, P. Di Stefano and A.E. Morabito
Comparison of mode decomposition methods tested on simulated
surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053
Alex Ballu, Rui Gomes, Pedro Mimoso, Claudia Cristovao
and Nuno Correia
Analysis of deformations induced by manufacturing processes
of fine porcelain whiteware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1063
Luca Puggelli, Yary Volpe and Stefano Giurgola
Characterization of a Composite Material Reinforced
with Vulcanized Rubber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1073
D. Tobalina, F. Sanz-Adan, R. Lostado-Lorza, M. Martínez-Calvo,
J. Santamaría-Peña, I. Sanz-Peña and F. Somovilla-Gómez

Definition of geometry and graphics applications on existing
cosmetic packaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1083
Anna Maria Biedermann, Aranzazu Fernández-Vázquez
and María Elipe

Part VIII Innovative Design

Section 8.1 Knowledge Based Engineering


A design methodology to predict the product energy efficiency
through a configuration tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1097
Paolo Cicconi, Michele Germani, Daniele Landi
and Anna Costanza Russo
Design knowledge formalization to shorten the time
to generate offers for Engineer To Order products . . . . . . . . . . . . . . . . . 1107
Roberto Raffaeli, Andrea Savoretti and Michele Germani
Customer/Supplier Relationship: reducing Uncertainties
in Commercial Offers thanks to Readiness, Risk and Confidence
Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1115
A. Sylla, E. Vareilles, M. Aldanondo, T. Coudert, L. Geneste
and K. Kirytopoulos
Collaborative Design and Supervision Processes Meta-Model
for Rationale Capitalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1123
Widad Es-Soufi, Esma Yahia and Lionel Roucoules
Design Archetype of Gears for Knowledge Based Engineering . . . . . . . . 1131
Mariele Peroni, Alberto Vergnano, Francesco Leali
and Andrea Brentegani
The Role of Knowledge Based Engineering
in Product Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1141
Giorgio Colombo, Francesco Furini and Marco Rossoni

Section 8.2 Industrial Design and Ergonomics


Safety of Manufacturing Equipment: Methodology Based on a Work
Situation Model and Need Functional Analysis . . . . . . . . . . . . . . . . . . . . 1151
Mahenina Remiel Feno, Patrick Martin, Bruno Daille-Lefevre,
Alain Etienne, Jacques Marsot and Ali Siadat
Identifying sequence maps or locus to represent the genetic structure
or genome standard of styling DNA in automotive design . . . . . . . . . . . . 1159
Shahriman Zainal Abidin, Azlan Othman, Zafruddin Shamsuddin,
Zaidi Samsudin, Halim Hassan and Wan Asri Wan Mohamed

Generating a user manual in the early design phase
to guide the design activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1167
Xiaoguang Sun, Rémy Houssin, Jean Renaud and Mickaël Gardoni
Robust Ergonomic Optimization of Car Packaging in Virtual
Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
Antonio Lanzotti, Amalia Vanacore and Chiara Percuoco
Human-centred design of ergonomic workstations on interactive
digital mock-ups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1187
Margherita Peruzzini, Stefano Carassai, Marcello Pellicciari
and Angelo Oreste Andrisano
Ergonomic-driven redesign of existing work cells: the “Oerlikon
Friction System” case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197
Alessandro Naddeo, Mariarosaria Vallone, Nicola Cappetti,
Rosaria Califano and Fiorentino Di Napoli

Section 8.3 Image Processing and Analysis


Error control in UAV image acquisitions for 3D reconstruction
of extensive architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
Michele Calì, Salvatore Massimo Oliveri, Gabriele Fatuzzo
and Gaetano Sequenzia
Accurate 3D reconstruction of a rubber membrane inflated during
a Bulge Test to evaluate anisotropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221
Michele Calì and Fabio Lo Savio
B-Scan image analysis for position and shape defect definition
in plates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
Donatella Cerniglia, Tommaso Ingrassia, Vincenzo Nigrelli
and Michele Scafidi
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
Part I
Integrated Product and Process Design

Designing and developing a new product is a complex and multidisciplinary task.
In the case of very complex products, it can involve a large number of specialists
and equipment. In all cases, the main goals of the process must be to satisfy the
customer demands while preserving the company or project team performance. To
ensure the success of the process, a large number of methods, methodologies and
tools have been developed. These methodologies are subject to continuous
improvements and adaptations to specific cases. In this sense, the framework of
integrated product and process design was initially developed for big companies,
but in recent years its use in medium-size or even small-size companies has been
reported. Another fact that has been detected is the growing interest in proposing
greener products and more environmentally friendly processes.
Some of the papers presented in this chapter correspond to proposals to
adapt, enhance or present new methods, tools and methodologies for integrated
product and process design. Other papers present case studies that could help
increase the knowledge of, and the ease of implementing, similar processes in
other cases. All these articles could be of interest to researchers and practitioners
interested in increasing their knowledge of the state of the art of integrated
product and process design.

Francisco X. Espinach - Univ. Girona

Roberto Razzoli - Univ. Genova

Lionel Roucoules - ENSAM Aix


Section 1.1
Innovative Design Methods
A SYSTEMATIC METHODOLOGY FOR
ENGINEERED OBJECT DESIGN:
THE P-TO-V MODEL OF FUNCTIONAL
INNOVATION

Geoffrey S Matthews

ABC Optimal Ltd, Botley Mills, Southampton, SO30 2GB, United Kingdom
Geoffrey S Matthews. Tel.: +44 756 8589569.
E-mail address: geoffrey.s.matthews@abcoptimal.uk.com

Abstract

This paper seeks to establish the foundations of a methodology offering
practical guidance to aid the innovative design of Engineered Object functionality.
The methodology is set in a P-To-V framework. The concept of the framework is
borrowed from an earlier work, but constituent elements are new.
Much recent work focuses on different aspects of innovation. However, there
seems to be a gap for an overarching framework guiding the process of innovative
design but with a clear focus on the technical aspects of the object to be
engineered. In other words, ‘A Systematic Methodology For Engineered Object
Design’.
The term ‘Engineered Object’ rather than ‘Product’ has been used, to make the
scope as wide as possible. Three Innovation Groups are proposed – Elemental,
Application and Combination.
From a case study review, factors are identified which provided a ‘spark of
imagination’ leading to technical problem resolution. The term Influencing Factor
is defined along with the concept of Innovation Groups. The Influencing Factor
Matrix is generated to highlight patterns linking Innovation Group and Influencing
Factor(s).
The final step in the construction of the P-To-V Model is the generation of an
overarching Model Operating Chart, which aggregates the various elements of the
model.

Keywords: Design, Methodology, Model, Operating Chart

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_1

1 Introduction

1.1 Background

In ‘Winning At Innovation’(1), Trías De Bes, T. and Kotler P., propose a
model to enable a structured approach to navigate through the multitudinous
phases, steps and activities involved in innovation. Their work encompasses all
aspects from a company wide perspective. It does address technical elements by
reviewing established tools and techniques, but that is not the focus of the work.
Parraquez, P., 2015, in his thesis ‘A Networked Perspective on the Engineering
Design Process’(2), seeks to provide a framework to evaluate the efficiency and
effectiveness of the process, but the work is not intended to, and does not try to
address technical elements of the output from the process. Financial issues are
dealt with in the paper by Ripperda S, and Krause D., 2015, ‘Cost Prognosis of
Modular Product Structure Concepts’(3). Specific technical problems are dealt
with in theses such as the one by Gmeiner T., 2015, ‘Automatic Fixture Design
Based on Formal Knowledge Representation, Design Synthesis and
Verification’(4).
This paper however, constructs a model specifically related to the functional
aspects of design, which leads to an integrated approach to innovation.

1.2 Terminology

The title of this work contains the description ‘Engineered Object’ rather than
the more commonly occurring ‘Product’. The term ‘Product’ is mostly associated
with items which result from some kind of factory-based process and which are
often purchased directly by the consumer, e.g. a car, a washing machine, an electric
toothbrush. This scope is considered too narrow for the intended purposes
of the methodology. Is the composite material wing of an aircraft or the space
frame roof support of an exhibition hall a ‘Product’? It is with this in mind that the
term ‘Engineered Object’ has been used to make the scope as wide and
generalised as possible.
For the purposes of this paper, ‘Innovation’ is taken to mean developing
something new but which is based on, or has some linkage with what already
exists. To enable practical application of the methodology three groupings are
proposed, which are defined as follows:-
Elemental Innovation – The complete Engineered Object remains in its well
established form but there is an ‘internal’ change to an element of the object which
improves the function.

Application Innovation – The Engineered Object itself is not fundamentally
altered but its use and application is changed in terms of positioning or orientation.
Combination Innovation – The Engineered Object in this case is new but in
itself, has no innovative elements. It is rather the bringing together of various
existing elements in a new combination or aggregation which provides advantages
hitherto unavailable.
Innovative designs require a ‘spark of imagination’ and from a case study
review, the paper identifies various examples. To enable classification, further
reference and eventually use as tools in the process of innovative design, the
‘sparks of imagination’ are described using an adjective / noun format. These
descriptions are defined as Influencing Factors.
The Influencing Factor Matrix links Innovation Group and Influencing
Factor(s).

1.3 Model Structure: P-To-V

This section establishes the structure of the model. Passing reference is made to
typical phases of the design process and established innovation methodologies but
as these are well researched and documented subjects, the attention is brief. The
paper’s subject is the provision of a tool which can be used at various stages by all
of the participants in the innovation process.
Previous works have well illustrated the point that innovative design does not
proceed in an orderly time sequence and this paper sets out the interactions
between the roles in a multi-nodal display showing frequent reverse interchange of
ideas and information, but at all times coordinated by the Innovation Project
Leader.
The final step in the construction of the P-To-V Model is the generation of an
overarching Model Operating Chart. It is here that the various elements of the
model are aggregated.

1.4 Limitations and Further Work

This short section brings the paper to a conclusion and establishes next steps.

2 Influencing Factor Concept

2.1 Concept Origin

The idea for the use of ‘Influencing Factors’ originates in the author’s work
experience in the analysis and improvement of various processes – mainly
industrial but also administrative – using the Methods Time Measurement (MTM)
methodology(5) . The focus there is on the time required by human operatives to
perform certain tasks and the basis is analysis of the movements undertaken
(mainly) by the arms, hands and fingers. The time required is dependent on
several variables, e.g. distance, visibility of the reach-to point, frequency of the
motion, etc. Clearly it takes longer to reach 40 cm than it does to reach 10 cm.
These variables are called Influencing Factors.

2.2 Concept Application

The idea in this paper is to apply this Influencing Factor concept, albeit in a
modified form, to provide structure to the identification, evaluation and listing of
many varied and different events which have an impact on the initial phases of the
functional innovation process.

2.3 Out Of The Box

Within the context of a technical paper it is natural to align the analysis with
engineering characteristics – mass, force, elasticity, statistical validity, etc.
However, there is a deliberate attempt in this paper to seek examples of
Influencing Factors detached from the world of scientific method. Further
explanation will follow, but it is initially surprising to find the inspiration for an
innovative solution to a construction site challenge while preparing entertainment
for a child’s birthday party.

3 Brief Case Study Review

Space does not permit a full description of the case studies reviewed. The target
was to identify what type of events caused the initial ‘spark of imagination’ which
then led on to the development of innovative designs.

Of particular interest were examples where the initial spark was not found
through a ‘classic’ engineering procedure. One such example was a road sign
stating simply ‘Bridge May Be Icy’ – Figure 1 below, which led to conductive
concrete(10). A further example was an innovative method of providing the
temporary support work for the construction of a concrete dome – Figure 2 below.
This borrowed the idea of inflated bouncy castles used for children’s
entertainment purposes(6).

Fig. 1. Bridge May Be Icy

Fig. 2. Concrete Dome Support

A summary of the case studies along with the allocation to one or other of the
Influencing Factors is shown in Figure 3 below. References to the case study
sources are provided in the figure.

Case Study Summary

New Object Description | New Object Basis | Ref | Influencing Factor
Concrete Dome Formwork - inflated temporary dome using 1mm thick rubber based flexible fabric | Observation of 'Bouncy Castle' erection while preparing for children's entertainment activities | 6 | Family Activity
Low Cost Micro-Hydro Scheme (Peru) Intake Structure | Traditional masonry construction | 7 | Traditional Expertise
Rubik's Cube as Toy/Game | Designed as a teaching aid to explain spatial relationships | 8 | Alternative Perspective
BMC Mini | Transverse mounted engine / in-sump gearbox | 9 | Limited Space
Conductive Concrete | Addition of steel shavings and carbon particles to an otherwise standard concrete | 10 | Natural Observation
Bitumen Emulsion Binders - used for highway maintenance purposes | Short term viscosity reduction through emulsion technology | 11 | Changed Regulations
F1 Hydro-Pneumatic Suspension | Micro-Filter technology | 12 | Changed Regulations
Virtual Fencing for Cattle Control | Satellite technology linked to programmable cattle collars | 13 | Conference Proceedings
Dyson Dual Cyclone Household Vacuum Cleaner | Industrial cyclone extraction system | 14 | Adjacent Functionality
Sony Walkman Stereo Cassette Player | Transportable cassette recorder/player | 15 | Individual Requests
Higgs Boson Paper (Physics Letters) | Addition of model explanation | 16 | Rejected Ideas
High Pressure Gasoline Fuel Pump Inlet Valve | Spiral blade spring | 17 | Technical Reflection
Snowboard | Skateboard and surfboard | 18 | Group Input
Basestone Collaboration Tool | Linkage between office and construction site digital information via tablet app | 19 | Personal Frustration
Smartphone / Tablet | Mobile phone, digital camera, computer | 20 | New Technology

Fig. 3. Case Study summary

4 The P-To-V Model

4.1 Design and Innovation Methodologies

There are many proven design methodologies and models. Cross(22) notes in
ascending order of complexity, those proposed by French, Archer, Pahl and Beitz,
VDI (Verein Deutscher Ingenieure) and March. Bürdek(23) reproduces his
previously proposed feedback loop model, but with the explanatory comment that
‘the repertoire of methods to be applied depends on the complexity of the
problems posed’.
Methodologies for innovation are also well established. Brainstorming is
clearly a classical starting point with progression using techniques such as
Innovation Funneling, Technique Mapping and many others.
All of the above models seem to lack a linkage between the idea generation
activities and the roles / responsibilities of those involved in the process.
‘Winning At Innovation’(1), has the subtitle of ‘The A-To-F Model’. It
identifies six roles - Activators, Browsers, Creators, Developers, Executors,
Facilitators. Innovation is addressed at the strategic level, covering general
business aspects of the process. Some attention is given to Product Design but that
is not the focus of that work.
The A-To-F concept gave rise to the idea of a similar model based
methodology, but with more focus on the functional aspects of the Engineered
Object. Hence the P-To-V Model.

4.2 P-To-V Characteristics

The P-To-V Model has the following roles – Provokers, Quantifiers,
Researchers, Specifiers, Transformers, Utilisers, Validators.
The intended features of this model, though, are fluidity and flexibility. It should
not be applied by following common lines of demarcation between organizational
departments. The roles should be thought of as ‘task’ focused and not ‘function’
focused. Of course, such fluidity and flexibility, if not properly managed, would
lead to chaos and failure. This requires a certain amount of oversight and control.
The responsibility for this lies in the role of the Provoker, and here there must be
an element of continuity. At the strategic level, this person will be the sponsor for
any particular project and at the operational level the recommendation is the
appointment of an Innovation Project Leader (IPL).

4.3 Influencing Factors - Linkage to Roles

The next step is to connect the Innovation Groups, the Influencing Factors and
the Roles. In order to do that it is necessary to explore a little the function of the
Influencing Factors. They are not intended to be technical formulae giving a
definite and precise answer to a specific question. Rather, they are intended to be
Signposts suggesting where inspiration may be found. A potential but not
exhaustive linkage is provided in the Influencing Factor Matrix in Figure 4 below.

Influencing Factor Matrix

Innovation Group | Influencing Factor | Participant Roles (P Q R S T U V)
Application | Family Activity | x x x
Application | Traditional Expertise | x x x x
Application | Alternative Perspective | x x
Application | Limited Space | x x x
Elemental | Natural Observation | x x x
Elemental | Changed Regulations | x x x x x
Elemental | Conference Proceedings | x x
Elemental | Adjacent Functionality | x x x x
Elemental | Individual Requests | x x x x x
Elemental | Rejected Ideas | x x x
Elemental | Technical Reflection | x x x x
Combination | Group Input | x x x x
Combination | Personal Frustration | x x
Combination | New Technology | x x x x x

Fig. 4. Influencing Factor Matrix
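The Group-to-Factor rows of the matrix above can be held in a simple lookup table for experimentation. The sketch below is purely illustrative (it is not part of the published model; the role columns are omitted because their alignment is not preserved here, and the factor names are taken from Figures 3 and 4):

```python
# Illustrative lookup table (not part of the published model): the
# Innovation Group -> Influencing Factor rows of the matrix in Figure 4.
INFLUENCING_FACTOR_MATRIX = {
    "Application": ["Family Activity", "Traditional Expertise",
                    "Alternative Perspective", "Limited Space"],
    "Elemental": ["Natural Observation", "Changed Regulations",
                  "Conference Proceedings", "Adjacent Functionality",
                  "Individual Requests", "Rejected Ideas",
                  "Technical Reflection"],
    "Combination": ["Group Input", "Personal Frustration", "New Technology"],
}

def factors_for(group):
    """Return the Influencing Factors suggested for an Innovation Group."""
    return INFLUENCING_FACTOR_MATRIX[group]

print(factors_for("Combination"))
# → ['Group Input', 'Personal Frustration', 'New Technology']
```

A role holder discharging a new task could query such a table as the "short review of the Influencing Factors" that the Model Operating Chart recommends.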

4.4 Model Operating Chart (MOC)

The Model Operating Chart (MOC) can be seen in Figure 5 below. A short
explanatory description follows:-
Overall Layout – This is basically in circular format indicating that innovation
is an iterative process with several loops.
Provokers – The role of the Provoker lies outside the circle because this is an
‘oversight’ role rather than a ‘task’ role. Real-world practice demands that there is
a managerial role providing continuity and this is indicated by the existence of the
Innovation Project Leader who has a double function. Firstly, to be the
representative of the Provoker on a day-to-day, week-to-week basis, and secondly

to manage and co-ordinate the activities of the other role holders. This role
therefore sits at the center of the MOC.
Other Roles – These are located round the circumference of the circle, with
one-way arrows leading from one role to the next. These arrows indicate how
innovation projects should ideally (and on odd occasions do actually) flow.

Fig. 5. Model Operating Chart

Interface with IPL – It is seen that there is a two-way arrow connecting each of
the circumferential roles to the IPL. This recognizes two things. Firstly, that the
IPL has an overall co-ordination responsibility and secondly that the project
activities may not, and in fact often do not, flow in a laminar fashion. Turbulence
does occur and a Systematic Methodology needs to recognize that and have an
appropriate mechanism.
Innovation Group / Influencing Factor – This short loop provides a roadmap
suggesting that following each new task allocation, the person discharging the role
should undertake a short review of the Influencing Factors to aid the decision
about how the task should be discharged.

5 Further Work

The P-To-V Model is a work in progress. The next steps will be to extend the
range of Influencing Factors and add algorithmic analysis using, for example,
weightings against the various Influencing Factors depending on which type of
Innovation Group is relevant.
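The weighting analysis proposed here can be illustrated with a small sketch. It is entirely hypothetical: the paper proposes the analysis only as future work, and the weights, factor subset and scoring rule below are invented for illustration:

```python
# Hypothetical sketch of the proposed weighting analysis. The weight values
# and factor subsets are invented; the paper only outlines the idea of
# weighting Influencing Factors per Innovation Group as future work.
WEIGHTS = {
    "Elemental":   {"Natural Observation": 3, "Technical Reflection": 2},
    "Application": {"Family Activity": 3, "Limited Space": 1},
}

def ranked_factors(group):
    """Return the group's weighted Influencing Factors, highest weight first."""
    weights = WEIGHTS.get(group, {})
    return sorted(weights, key=weights.get, reverse=True)

print(ranked_factors("Elemental"))
# → ['Natural Observation', 'Technical Reflection']
```

The design choice is simply that a higher weight pushes a factor up the list, so each role holder sees the most promising signposts for the relevant Innovation Group first.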

Acknowledgments My thanks go to the following individuals who responded personally to
questions about ‘sparks of imagination’. Professor C Tuan – University of Nebraska-Lincoln, Ms
S Selvakumaran – Cambridge University, Ingenieur L Mancini, Magneti Marelli S.p.a.,
Professor T Waterhouse – Scotland’s Rural College.

References

1. Trías De Bes, T. and Kotler P., ‘Winning At Innovation – The A-to-F Model’, 2011,
Palgrave Macmillan, England
2. Parraquez, P., 2015, ‘A Networked Perspective on the Engineering Design Process’.
3. Ripperda S, and Krause D., 2015, ‘Cost Prognosis of Modular Product Structure
Concepts’ 20th International Conference on Engineering Design, ICED15, Mailand
(2015)
4. Gmeiner T., 2015, ‘Automatic Fixture Design Based on Formal Knowledge
Representation, Design Synthesis and Verification’.
5. Bokranz R. and Landau K., Handbuch Industrial Engineering -
Produktivitätsmanagement mit MTM, 2012, Schäffer-Poeschel Verlag für Wirtschaft-
Steuern-Recht
6. Priestly A. Engineering The Domes. Magazine of The Institution Of Civil Engineers,
2016, March, P28.
7. Selvakumaran S. Making low-cost micro-hydro schemes a sustainable reality.
Proceedings of The Institution Of Civil Engineers, Volume 165, Issue CE1, Paper
1100012.
8. Smith N. Classic Project. Magazine of The Institution Of Engineering And Technology,
2016, March, P95.
9. Bardsley G. Issigonis: The Official Biography, Icon Books, ISBN 1-84046-687-1.
10. Tuan, C. (2008). "Roca Spur Bridge: The Implementation of an Innovative De-icing
Technology." J. Cold Reg. Eng., 10.1061/(ASCE)0887381X
11. Heslop M.W. and Elborn M.J. Surface Treatment Engineering, Journal of The
Institution Of Highways And Transportation, 1986, Aug/Sept, P19.
12. Cross N. Design Thinking, London/New York, Bloomsbury Academic, P37
13. Umstätter C. The evolution of virtual fences: A review, Computers and Electronics in
Agriculture, Volume 75, Issue 1, January 2011, Pages 10–22.
14. Adair J. Effective Innovation, London, Pan Macmillan, P225
15. Cross N. Engineering Design Methods, Chichester, John Wiley and Sons, P208
16. Carroll S. The Particle At The End Of The Universe, London, Oneworld Publications,
P223.
17. Mancini L. Email-28 April 2016, Magneti Marelli S.p.a.
18. Schmidt M. Innovative Design Functions Only In Teams. VDI Nachrichten, 2011,
Nr43, S26.
19. Siljanovski A. Sharing Network. Magazine of The Institution Of Civil Engineers, 2016,
March, P46.
20. Norman D.A. The Design Of Everyday Things, New York, Basic Books, P265.
21. Norman D.A. The Design Of Everyday Things, New York, Basic Books, P279-280.
22. Cross N. Engineering Design Methods – Strategies For Product Design, Chichester,
John Wiley and Sons, P29-42.
23. Bürdek B.E. History, Theory and Practice of Product Design, Basel, Birkhäuser, P113.
Influence of the evolutionary optimization
parameters on the optimal topology

Tommaso Ingrassia a,*, Antonio Mancuso a, Giorgio Paladino a

a DICGIM, Università degli Studi di Palermo, viale delle Scienze, 90128 Palermo, Italy
* Corresponding author. Tel.: +3909123897263; E-mail address: tommaso.ingrassia@unipa.it

Abstract Topological optimization can be considered one of the most general
types of structural optimization. Among all known topological optimization
techniques, Evolutionary Structural Optimization represents one of the most
efficient and easiest to implement approaches. Evolutionary topological optimization
is based on a heuristic general principle which states that, by gradually removing
portions of inefficient material from an assigned domain, the resulting structure
will evolve towards an optimal configuration. Usually, the initial continuum
domain is divided into finite elements that may or may not be removed according
to the chosen efficiency criteria and other parameters like the speed of the
evolutionary process, the constraints on displacements and/or stresses, the desired
volume reduction, etc. All these variables may influence significantly the final
topology.
The main goal of this work is to study the influence of both the different
optimization parameters and the used efficiency criteria on the optimized
topology. In particular, two different evolutionary approaches, based on the von
Mises stress and the Strain Energy criteria, have been implemented and analyzed.
Both approaches have been deeply investigated by means of a systematic
simulation campaign aimed to better understand how the final topology can be
influenced by different optimization parameters (e.g. rejection ratio, evolutionary
rate, convergence criterion, etc..). A simple case study (a clamped beam) has been
developed and simulated and the related results have been compared. Despite the
object simplicity, it can be observed that the evolved topology is strictly related to
the selected parameters and criteria.

Keywords: Topology optimization, Evolutionary optimization, rejection ratio, FEM, efficiency criteria.

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_2

1 Introduction

The improvements in the design of structural components are often reached by an
iterative approach driven by the designer experience. Even if this represents a key
aspect of the design process, an approach that is completely based on experience,
usually, can lead to only marginal improvements and would take quite a long time.
A complementary approach is one that makes use of structural optimization methods
[1,2] to determine the optimal characteristics, topology and/or shape of an object.
In recent years, structural optimisation has developed considerably and interest
in its practical applications is growing steadily in many engineering fields [3-8].
Of course, improvements in information technology tools have strongly contributed
to the spread of numerical analysis methods, like FEM or BEM, which can be
effectively used during the optimization process of a structure. In the past, many
research activities related to optimization methods focused primarily on mathematical aspects of the
the optimization methods were focused primarily on mathematical aspects of the
problem, trying to adapt the available analytical and numerical methods to solve
particular structural problems. These kinds of problems, in fact, are quite difficult
to solve, as they involve non-convex functions with several variables (continuous
and discrete). Practical application of these optimization methods usually forces
the designer to simplify the problem, often dramatically, with a consequent loss of
reliability.
Therefore, in the engineering field, the need for new optimization procedures
(alternative to classic mathematical approaches) has arisen over the years. These
alternative approaches should maintain some generality and accuracy in the
description of real, complex problems, while leading to solutions reasonably
close to those considered rigorously optimal. Consequently, since the early
1990s, different new optimization methodologies based on numerical approaches
[3, 8, 9] have been proposed. In this scenario, Evolutionary Structural
Optimization (ESO) has become one of the most interesting and best-known
techniques [6, 10, 11]. Following the ESO approach, the optimal solution is sought
on the basis of heuristic rules. Unlike traditional methods, the evolutionary strategy
has shown a high degree of efficiency for different typologies of structural problems [11].
The solutions found using the ESO approach, however, might be influenced by the
chosen optimization parameters [10, 11]. Although several papers can be found in the
literature concerning the ESO approach, to the authors' knowledge, very little
information is available regarding the effect of the parameters on the optimal
solution.
This work investigates how the main control parameters used in an
evolutionary optimization process can affect the result. One of the main
advantages of the proposed approach concerns the comparison between two of the
most commonly used efficiency criteria. The goal is to provide useful guidelines
that can lead designers to obtain the best result for every (particular) optimization
problem.
Influence of the evolutionary optimization ... 17

2 Evolutionary Structural Optimization

The ESO method represents one of the most efficient and easily implemented
approaches. The working principle of the evolutionary technique is to
gradually eliminate inefficient material from an assigned domain. In this
way, the topology of the structure evolves toward an optimal configuration. The
initial domain is typically divided into Finite Elements (FE) and the removal of
material is based on particular efficiency criteria. An evolutionary optimization
procedure is generally structured as follows [12-14]. At first, the whole domain is
meshed using finite elements; then the boundary conditions (loads and constraints)
are imposed and a numerical FEM analysis is performed. As soon as the solution
is found, the obtained numerical results are sorted on the basis of the chosen
efficiency criterion (e.g. von Mises stress, strain energy, displacement, etc.). The
value of the chosen parameter for each finite element is then compared with a
reference value; if the FE value is lower than the reference one, the finite element
is removed. The reference value is usually a percentage of the maximum
parameter value found in the structure. As an example, if the von Mises stress
efficiency criterion is used, for each finite element the following inequality is
checked:

σ_j^VM ≤ RR_i · σ_max^VM    (1)

where:
- σ_j^VM is the von Mises stress of the j-th element;
- RR_i (with RR_0 < RR_i < RR_f) is the Rejection Ratio during the i-th iteration;
- RR_0 and RR_f are, respectively, the initial and final Rejection Ratios;
- σ_max^VM is the maximum value of the von Mises stress calculated in the structure at
the i-th iteration.
As soon as all elements that verify the inequality (1) during the i-th iteration have
been removed, a steady state is reached. Consequently, the rejection ratio must be
increased to further improve the structure. This can be done according to the
following formulation [12, 14]:

RR_{i+1} = RR_i + ER

where ER represents the Evolutionary Rate.
Then, a new FEM analysis is performed, the von Mises stress values are updated
and all the finite elements verifying the efficiency criterion (1) are removed. The
procedure is repeated recursively and stops as soon as the convergence criterion
[12, 15] is verified (e.g. when the final value of the rejection ratio, RR_f, is reached
or the Maximum Reduction of Volume, MRV, is obtained). The initial rejection
ratio is usually defined in the range 0 < RR_0 < 1% but, in some cases, higher values
can be considered to avoid having no elements to remove (when the inequality (1)
is not verified). The end values (initial and final) of the rejection ratio are usually
defined empirically, based on the experience of the designer. A suitable choice of
these values [11, 15] can ensure a progressive removal of the elements.
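The procedure described above (inequality (1) plus the rejection ratio update) can be condensed into a short loop. The sketch below is illustrative only: `stresses_fn` stands in for the FEM analysis described in the text, and the toy stress field at the end is not the clamped-beam case study.

```python
import numpy as np

def eso_optimize(stresses_fn, n_elems, rr0=0.01, rr_f=0.25, er=0.01, mrv=0.60):
    """Minimal ESO driver with the von Mises criterion.

    stresses_fn(active) stands in for the FEM solve: given the indices of
    the currently active elements, it returns their von Mises stresses.
    Elements satisfying inequality (1) are removed; at steady state the
    rejection ratio is increased by ER; the loop stops when RR_f or the
    Maximum Reduction of Volume (MRV) is reached.
    """
    active = np.arange(n_elems)
    rr = rr0
    while rr <= rr_f and active.size > (1.0 - mrv) * n_elems:
        sigma = stresses_fn(active)
        doomed = sigma <= rr * sigma.max()   # inequality (1)
        if not doomed.any():
            rr += er                         # steady state: RR_{i+1} = RR_i + ER
            continue
        active = active[~doomed]             # remove the inefficient elements
    return active

# Toy stand-in for the FEM solve: stress grows with element index, so the
# least-stressed (low-index) elements are progressively removed.
remaining = eso_optimize(lambda elems: (elems + 1).astype(float), n_elems=100)
```

With this toy stress field the loop terminates on RR_f; with a different field the MRV bound would stop it instead.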

3 Implementation of the procedure

In this study, two different efficiency criteria, based respectively on the von Mises
(VM) stress and the Strain Energy (SE) [9-11], were investigated. In the first
case, as described in the previous section, the removal of elements is based on the
von Mises stress value of each element, compared with a percentage of the
maximum stress value, σ_max^VM, calculated in the domain. Through this approach, a
structure with a homogeneous equivalent stress level can be obtained (uniform-strength
structure).
The approach based on the second efficiency criterion, instead, removes the
elements having the lowest values of strain energy.
Both optimization procedures have been implemented using the Ansys Parametric
Design Language (APDL) and the Ansys software as finite element code.
In order to ensure a more gradual evolutionary process, a new control parameter,
called RER (Removed Element Rate), has been introduced. The RER parameter
takes into account the number of elements removed at each iteration. In particular,
if, before reaching the steady state of the i-th iteration, the number of removed
elements exceeds the value RER, the iteration is interrupted, the rejection ratio is
updated and a new iteration starts. If the rejection ratio value is erroneously too
large, the use of the new parameter avoids removing too much material during a
single iteration and, consequently, ensures more accurate and reliable results.
Independently of the efficiency criterion, the optimization procedure is
structured [16, 17] as shown in Figure 1.
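The RER check can be sketched as a small helper. This is hypothetical code, not the authors' APDL implementation, and the choice of removing the least-stressed candidates first when the iteration is interrupted is an assumption of the sketch.

```python
import numpy as np

def remove_with_rer(active, sigma, rr, rer):
    """Apply inequality (1) but delete at most `rer` elements.

    Returns the surviving elements and a flag that is True when more
    candidates than RER were found: in that case the iteration is
    interrupted, the rejection ratio is updated and a new iteration
    starts, as described above.
    """
    candidates = np.where(sigma <= rr * sigma.max())[0]
    removed = candidates[np.argsort(sigma[candidates])][:rer]  # assumption: lowest stresses go first
    keep = np.ones(active.size, dtype=bool)
    keep[removed] = False
    return active[keep], candidates.size > rer

# Example: 10 elements with stresses 1..10 and a threshold of 0.5 * max
# give 5 candidates, but only RER = 3 are removed in this iteration.
active, sigma = np.arange(10), np.arange(1.0, 11.0)
survivors, interrupted = remove_with_rer(active, sigma, rr=0.5, rer=3)
```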

Fig. 1 Workflow of the implemented ESO procedure

4 Case Study

In order to better understand the influence of the described parameters on the final
topology, a clamped steel beam has been used as a case study. A vertical load of
100 N has been applied to the free end. The main dimensions and the FEM model
(meshed with 8-node brick elements) of the beam are shown in Figure 2.

Fig. 2 Dimensions (left) and FEM model (right) of the case study.

Table 1 – Range values of the main optimization parameters.


Parameter                              von Mises Stress   Strain Energy
Initial Rejection Ratio - RR0          1% ÷ 6%            1% ÷ 6%
Final Rejection Ratio - RRf            5% ÷ 30%           5% ÷ 30%
Evolutionary Rate - ER                 0.5% ÷ 2%          0.5% ÷ 2%
Maximum Reduction of Volume - MRV      60%                60%
Removed Elements Rate - RER            10 ÷ 20            10 ÷ 20
Number of Finite Elements              1500 ÷ 3920        1500 ÷ 3920

Table 1 shows the value ranges of the main parameters for a given MRV of 60%.
According to Table 1, an in-depth investigation has been carried out, aimed at
finding the influence of the described parameters on the final topology. In the
following, the most interesting results are highlighted and discussed.

5 Results

Figure 3 shows the results obtained using the von Mises efficiency criterion with
different values of ER (1% - 2%) and without any check on the number of
elements removed at each iteration (no RER control imposed).

Fig. 3 – VM criterion: influence of the ER parameter on the optimized solution.

Introducing the RER parameter in the VM efficiency criterion, for a given
constant value of ER (equal to 1%), different results have been obtained. In
particular, Figure 4 shows how the optimal topology is remarkably affected when
the RER parameter changes from 10 to 20.

Fig. 4 – VM criterion: influence of the RER parameter on the optimized solution

Figure 5, instead, shows that, using the VM efficiency criterion, the final topology
changes only slightly when the mesh size is varied.

Fig. 5 – VM criterion: influence of the mesh size on the optimized solution

Finally, the plot in Figure 6 shows that the final rejection ratio (RRf) does not
considerably affect the maximum von Mises stress value while, on the contrary, it
has a significant influence on the minimum value in the optimized structure.

Fig. 6 – VM criterion: influence of the RRf on the von Mises stresses

Results of the optimization process based on the strain energy criterion are
influenced by the parameters in a similar way to those of the von Mises stress
criterion. Figure 7 shows how the RER parameter affects the optimal topology
obtained using the SE efficiency criterion. These results have been obtained
considering constant values of RR0 (1%) and ER (0.5%).

Fig. 7 – SE criterion: influence of the RER parameter on the optimized solution

Moreover, a comparison of the optimized structures obtained with the two criteria
is shown in Figure 8. One can note many details that differentiate the optimal
topologies.

Fig. 8 – Optimized structures using the von Mises (on the left) and the Strain Energy (on the
right) criteria
Finally, as can be noticed from Figure 9, the strain energy criterion yields
higher volume reductions than the von Mises stress criterion for a given
value of the final rejection ratio.

Fig. 9 – (final volume/initial volume) vs final rejection ratio.

6 Conclusions

Topology optimization methods make it possible to obtain high-performance
structures with significant reductions in overall dimensions and masses. In this
scenario, the ESO method represents one of the most effective approaches to
solving large-scale topology optimization problems. The designer, however, is not
always able to choose a priori the most suitable parameter set for the ESO
optimization process to obtain the best result in the shortest time. In this work,
two different efficiency criteria, commonly used in evolutionary optimization
processes, have been investigated. In particular, the von Mises stress and the
strain energy criteria have been implemented and a systematic numerical campaign
has been performed, aimed at better understanding how the optimization parameters
can affect the ESO-based solutions. In this context, a new parameter, called RER
(Removed Elements Rate), has been introduced by the authors for the first time.
The obtained results have shown the remarkable influence of the efficiency criteria
on the optimal topology in terms of material distribution and volume reduction.
Moreover, the new parameter RER allows more accurate control of the element
removal process and a better solution of the optimization problem. The study can
provide useful guidelines for a better understanding and prediction of the results
of an ESO-based optimization process, thus contributing to a wider adoption of
this methodology in the design of high-performance structures.

References

1. Vanderplaats, Garret N., Numerical optimization techniques for


engineering design: with applications. Vol. 1. New York: McGraw-Hill,
1984
2. Ingrassia, T., Nigrelli, V., Design optimization and analysis of a new rear
underrun protective device for truck, (2010) Proceedings of the 8th
International Symposium on Tools and Methods of Competitive
Engineering, TMCE 2010, 2, pp. 713-725
3. Tromme, E., Tortorelli, D., Brüls, O., Duysinx, P., Structural
optimization of multibody system components described using level set
techniques, (2015) Structural and Multidisciplinary Optimization, 52 (5),
pp. 959-971
4. Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D., A multi-technique
simultaneous approach for the design of a sailing yacht, (2015)
International Journal on Interactive Design and Manufacturing, DOI:
10.1007/s12008-015-0267-2
5. Cappello, F., Ingrassia, T., Mancuso, A., Nigrelli, V., Methodical
redesign of a semitrailer, (2005) WIT Transactions on the Built
Environment, 80, pp. 359-369
6. Nalbone, L., et al., Optimal positioning of the humeral component in the
reverse shoulder prosthesis, 2014, Musculoskeletal Surgery, 98 (2), pp.
135-142.
7. Cerniglia, D., Montinaro, N., Nigrelli, V., Detection of disbonds in multi-
layer structures by laser-based ultrasonic technique, 2008, Journal of
Adhesion, 84 (10), pp. 811-829
8. Savas, S., Evolutionary Topological Design of Two Dimensional
Composite Structures, American International Journal of Contemporary
Research, 2012, Vol. 2, No. 3, pp. 76-88
9. Ingrassia, T., Nigrelli, V., Buttitta, R., A comparison of simplex and
simulated annealing for optimization of a new rear underrun protective
device, (2013), Engineering with Computers, 29 (3), pp. 345-358
10. Nha Chu, D., Xie, Y.M., Hira, A., Steven, G.P., On various aspects of
evolutionary structural optimization for problems with stiffness
constraints, Finite Elements in Analysis and Design, vol. 24, 1997,
pp. 197-212
11. Deaton, J. D., Grandhi, R. V., A survey of structural and
multidisciplinary continuum topology optimization: post 2000, 2014,
Struct Multidisc Optim, 49:1-38
12. Yildiz, A. R., Comparison of evolutionary-based optimization algorithms
for structural design optimization, Engineering applications of artificial
intelligence, 2013, 26.1, pp. 327-333
13. Li, Q., Steven, G.P., Xie, Y.M., Evolutionary structural optimization for
connection topology design of multi-components system, Engineering
Computations, 2001, Vol.18 No.3/4, pp. 460-479
14. Garcia, M.J, Ruiz, O.E., Steven, G.P, Engineering design using
evolutionary structural optimisation based on iso-stress-driven smooth
geometry removal. NAFEMS World Congress, NAFEMS, Milan, Italy,
April 2001; 349–360
15. Xie, Y.M., Steven, G.P., Optimal design of multiple load case structures
using an evolutionary procedure, Engineering computations, 1994, Vol
11, pp.295-302
16. Leu, L.J., Lee, C.H., Optimal design system using finite element package
as the analysis engine, Advances in Structural Engineering, 2007, 10.6,
713-725
17. Zhang, D., Liang, S., Yang, Y., A constraint and algorithm for stress-
based evolutionary structural optimization of the tie-beam problem, 2015,
Engineering Computations, 32:6, 1753-1778
Design of structural parts for a racing solar car

Esteban BETANCUR1*, Ricardo MEJÍA-GUTIÉRREZ1, Gilberto OSORIO-GÓMEZ1 and Alejandro ARBELAEZ1

1 Universidad EAFIT, Medellín, Colombia
* Corresponding author. Tel.: +57-3136726555; E-mail address: ebetanc2@eafit.edu.co

Abstract Racing solar cars are characterized by the constant pursuit of energy
efficiency. The tight balance between energy input and consumption is the main
reason to seek optimization in different areas. The vehicle weight is directly related
to the energy consumption via the rolling resistance of the tires. The relation
between weight and energy consumption is quantified. Structural optimization
techniques are studied and a series of rules is obtained to iteratively improve the
shape of structural parts, reducing their weight. The implementation is carried out
in a practical case and satisfactory results are achieved.

Keywords: Solar car, structural optimization, weight reduction, energy efficiency, innovative design.

1 Introduction

A solar car is an electric vehicle that has solar panels as its energy input. For three
decades, solar car competitions have been held in order to compare different
solar car concepts. The most important competition is called the World Solar
Challenge (WSC) and it is carried out every two years in Australia. The challenge is
to travel with the solar car from Darwin to Adelaide (3022 km) using only solar
energy. At every event, strict new rules are imposed to force the optimization of the
cars. The principal regulations of the last event for the challenger class were: 6 m2
of silicon solar panel (1000 W approx.), 20 kg of lithium battery (5 kWh approx.),
one driver and 4 wheels. With these constraints, every team must design and build
the most efficient car to complete the challenge. Among the teams that complete the
challenge, a classification by race time is made; therefore, the overall objective is
to finish the race in the least possible time.

© Springer International Publishing AG 2017


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_3
26 E. Betancur et al.

The development of a solar car involves innovation in multiple engineering areas
such as aerodynamics, control, mechanics, electronics, solar panels, and others.
These multidisciplinary projects have been studied by different research groups and
universities all over the world and are nowadays becoming increasingly popular.
Some of the main studies on this topic are reported in [1] and [2].

The need for weight reduction of components first arose in the aerospace industry;
today it is present in the automotive, mobility and mechanical industries, driven by
the need for cost and energy efficiency. In solar car applications, the weight is
proportional to the energy consumption and is a major topic to consider from the
design stage.

2 Solar car efficiency

Solar car challenges have the objective of testing the energy efficiency
of the vehicles. In general, they consist of covering a distance in the shortest
possible time using only solar energy. To optimize the vehicle energy efficiency,
one has to maximize the energy input and storage and minimize the energy output
at high speeds. The energy input depends on the solar panel properties, the storage
is defined by the battery cell properties and array topology, and the energy output
is defined by the traction system and the loss forces. Assuming that the vehicle is
on a flat road and travelling at constant speed, the loss forces are mainly the
aerodynamic drag and the rolling resistance force, defined by equations (1) and (2)
respectively.

Faero = (1/2) · ρ · Cd · A · v^2    (1)

Froll = Crr · W    (2)

Where:
Faero: Aerodynamic drag force [N]
ρ: Air density [kg/m3]
Cd: Drag coefficient [Adim]
A: Vehicle frontal area [m2]
v: Vehicle velocity [m/s]
Froll: Rolling resistance force [N]
Crr: Rolling resistance coefficient [Adim]
W: Vehicle weight [N]

Then, the total energy consumption can be determined by calculating the work
exerted by the forces over the complete race distance (see Eq. (3)). The WSC has a
distance of 3022 km and the tires used have a rolling resistance coefficient of
approximately Crr = 0.005. Therefore, for this specific case, every kilogram of
mass on the vehicle means an energy consumption of 41.2 Wh over the race (0.8%
of the total battery capacity). There is a direct dependency between the energy
efficiency and the car weight, and it is therefore mandatory to reduce this weight
as much as possible.

Erace = ∫[x0, xf] (Faero + Froll) dx    (3)
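The per-kilogram figure quoted above can be checked directly: for the rolling-resistance term of Eq. (3), each extra kilogram contributes Crr · m · g · d of work over the race distance (the aerodynamic term does not depend on mass).

```python
G = 9.81             # gravitational acceleration [m/s^2]
CRR = 0.005          # rolling resistance coefficient (value quoted above)
DISTANCE_M = 3022e3  # WSC race distance [m]
BATTERY_WH = 5000.0  # battery capacity [Wh] (5 kWh, from the regulations)

# Rolling-resistance work done by one extra kilogram over the whole race,
# converted from joules to watt-hours.
e_per_kg_wh = CRR * 1.0 * G * DISTANCE_M / 3600.0
print(round(e_per_kg_wh, 1))                       # 41.2 Wh per kg
print(round(100.0 * e_per_kg_wh / BATTERY_WH, 1))  # 0.8 % of the battery
```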

3 Mechanical components design

The solar car suspension system has the main function of supporting the car at all
times and under any possible load condition that the race can cause. Bump
damping and low weight are secondary properties of this system. For the design
and manufacture of the suspension components, the objective is to perform the
main function with the least possible weight, including the bump damping needed
for the vehicle stability on the road.
The design constraints involve loading conditions, geometric limits on the vehicle,
manufacturing capabilities, available materials and components, wheel diameter,
among others. Therefore, the suspension design process can be defined as an
optimization problem where the weight should be minimized by changing the shape
of the components, subject to all the listed design constraints. To solve this
problem, structural optimization techniques are studied.

3.1 Structural Optimization

Structural optimization using computational methods can be divided into two
main categories: topology optimization and shape optimization. Although both can
have the same objective functions, their mathematical definitions differ. For
topology optimization, the most common method is to define a design space and a
density function (ρ) on it. Then, one can have regions with density values of almost
zero (ρ = 0, no material) or one (ρ = 1, structural material) (see [3]). This way,
any shape that fits the design space can be obtained. Since the late eighties, these
techniques have been studied and new variants have been proposed; the most
popular methods are SIMP [4], Evolutionary Structural Optimization (ESO) [5]
and Bidirectional ESO (BESO) [6].

Shape optimization consists, in general terms, of modifying only the boundary of
the body in order to optimize an objective function. In contrast to topology
optimization, these techniques do not produce (in a natural way) topological
changes in the domain (i.e. new holes) and are useful for small shape variations.
Shape sensitivity and shape derivatives are the main topics of this discipline (see
[7]). Level Set boundary representation (see [8] and [9]) proposes a
macro-geometrical implicit boundary representation by isocontours of a Level Set
Function to control the shape and topology variations during the optimization
process.
Different combinations of the two optimization families (topology and shape
optimization) have been proposed and studied. In 2000, Sethian and Wiegmann [9]
developed an optimization procedure using level set boundary representation,
defined by the following steps:

1. Initialize; find stresses in initial design.
2. While termination criteria are not satisfied do:
3. Cut new holes according to the stress distribution and the removal rate.
4. Move boundaries via the level set method, according to the stress distribution
   and the removal rate.
5. Find displacements, stresses, etc.
6. If the constraints are violated, reduce the removal rate and revert to the previous
   iteration.
7. Update the removal rate.

With this process, topological changes are made through the creation of new holes,
and shape variations through the boundary movement.
These different optimization methods, and the available commercial programs for
structural optimization, do not take into account the manufacturing methods and
capabilities. The obtained shapes are usually only reachable using 3D printers, and
the standard CNC manufacturing process turns out to be impractical or non-viable.

4 Design process

The vehicle suspension design begins with the definition of basic shapes that
fulfill the geometric constraints, the designation of the maximum loading condition
that can occur during the race, and the safety factor for the design. Then the loads
on each component are defined and the stress (σ) and strain conditions are found
using finite element simulation.
An iterative shape improvement process is carried out for each component. The
process steps are listed below; the redesign conditions are obtained from the
structural optimization literature review.
1. Redesign the element following these rules:
   a. Remove low-stressed material by making holes, if possible
   b. Remove low-stressed material by moving the shape boundary inwards
   c. Add material (move the boundary outwards) where the stress exceeds the
      maximum
   d. Design the new shape according to the manufacturing capabilities
2. Find the weight, stress and strain distribution of the new design
3. Stop if (weight < desired weight) and (σmax < material yield stress) or the
   maximum number of iterations is reached; else go to step 1.
With this design process, the optimal shape is not reached, but a significant weight
reduction is achieved. The resulting structural component has an improved material
distribution with respect to the initial one, and the stress distribution is
homogenized.
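The loop above can be sketched in a few lines. In the actual work the iterations were manual; `evaluate` and `redesign` below are placeholders for the FEM simulation and the designer's CAD changes, and the numbers in the toy model are purely illustrative, not the real component data.

```python
def improve_component(design, evaluate, redesign, target_weight, yield_stress, max_iter=10):
    """Iterate the three steps: evaluate the design (FEM), stop when the
    weight and stress targets are met or the iteration budget is spent,
    otherwise redesign following rules a-d."""
    for i in range(max_iter):
        weight, max_stress = evaluate(design)          # step 2: weight and stress
        if weight < target_weight and max_stress < yield_stress:
            return design, i                           # step 3: stop criterion met
        design = redesign(design, weight, max_stress)  # step 1: remove/add material
    return design, max_iter

# Toy model: the "design" is a single wall thickness; thinner means lighter
# but more stressed (illustrative numbers only).
evaluate = lambda t: (1.0 * t, 300.0 / t)              # (weight [kg], stress [MPa])
redesign = lambda t, w, s: t * (0.9 if s < 450.0 else 1.05)
final_t, iters = improve_component(5.0, evaluate, redesign,
                                   target_weight=4.0, yield_stress=480.0)
```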

6 Results

The proposed method was used for the design of the structural components of the
EAFIT-EPM Solar Car participating in the WSC 2015. A single-sided swing-arm
suspension (see Fig. 1) was defined for the 4 wheels, a preliminary design was
created and the components were improved one by one. The CAD software CREO
Parametric 3.0 was used for the design of the components and the CAE software
ANSYS Workbench 15.0 for the structural FEM simulation. Since designer and
manufacturer judgment was needed for the modifications, the improvement
iterations were not performed automatically.

Fig. 1. Single-sided swing-arm suspension diagram.

The knuckle is a 3-link component that is articulated to the suspension arm,
supports the wheel axis, and is attached to the suspension shock (see Fig. 1). This
component was improved over 10 iterations using the proposed design process; the
manufacturing was to be done on a CNC milling machine and the material is 7075
aluminum. For the design of the components, a maximum state of loads was
defined as the critical instant at which the vehicle is turning, braking and bumping
with accelerations of 3.5G, 1.2G and 3.5G respectively (G: acceleration of gravity).
Figure 2 illustrates the loads for the complete suspension system. Using rigid body
approximations, the loads were calculated on all the components and the boundary
conditions for the knuckle simulation were defined (see Fig. 3).

Fig. 2. System state of loads: Front view (left) and side view (right) based on the car orientation.

Fig. 3. Knuckle state of loads.

An initial CAD design is created with parametric dimensions based on the design
space. The boundary conditions for the structural simulation are defined as:
displacement constraint on the arm link, force and moment reactions on the wheel
axis link, and force reaction on the shock link (see Fig. 3). Based on the stress
distribution (see Fig. 4), material is removed or added as defined in Section 4. The
maximum stress is located at the suspension arm support for all the proposed
designs (see Fig. 4), so material is added in this region. The weight and maximum
stress evolution over the improvement process are presented in Figure 5; a
reduction of 113 g with respect to the initial design was obtained while the
structural safety of the component is ensured. The initial, intermediate and final
shapes are shown in Fig. 6 and the manufactured component in Fig. 7. For the
fatigue resistance of the component, the cyclic loads are found (significantly lower
than the design loads used) and the stress distribution is calculated, finding the
maximum stress below the endurance limit of the material.

Fig. 4. Final knuckle displacement (left) and von Mises stress distribution (right).

Fig. 5. Maximum von Mises stress and weight for the design iterations.

Fig. 6. Knuckle evolution. Initial design (left), intermediate design (center), final design (right).

Fig. 7. Knuckle manufactured in CNC milling machine.



7 Conclusion

Every gram removed from a solar car means a direct reduction in energy
consumption; therefore, a main objective in these projects is the minimization of
the total weight.
The proposed manual shape improvement method lists a series of rules to modify
the shape design in order to reduce the weight of structural components. Although
the resulting shape can be far from the global optimum, the improvement is
noticeable.
The classical numerical shape and topology optimization methods do not take into
account the manufacturability of the components, and in most cases the resulting
shapes can only be achieved with 3D printers. With this method, designers have
rules to vary shape and topology manually while guaranteeing the manufacturing
constraints.
The application of the method to the design of the suspension components has a
direct influence on the energy efficiency of the vehicle. The reduction of 113 g
(20%) on each one of the 4 knuckles means a total weight reduction of 452 g and
an energy saving of 18.6 Wh over the race. This procedure is recommended for the
design of all the suspension components.

References
[1] D. Roche, Speed of Light: The 1996 World Solar Challenge, University of New South
Wales Press, 1997.
[2] G. Tamai, The Leading Edge: Aerodynamic Design of Ultra-streamlined Land Vehicles,
Robert Bentley, 1999.
[3] M. P. Bendsoe and O. Sigmund, Topology Optimization: Theory, Methods, and
Applications, Springer Science & Business Media, 2003.
[4] G. I. N. Rozvany, M. Zhou and T. Birker, "Generalized shape optimization without
homogenization," Structural Optimization, vol. 4, no. 3, pp. 250-252, 1992.
[5] Y. Xie and G. P. Steven, "A simple evolutionary procedure for structural optimization,"
Computers & Structures, vol. 49, no. 5, pp. 885-896, 1993.
[6] X. Yang, Y. Xie, G. Steven and O. Querin, "Bidirectional evolutionary method for
stiffness optimization," AIAA Journal, vol. 37, no. 11, pp. 1483-1488, 1999.
[7] J. Sokolowski and J.-P. Zolesio, Introduction to Shape Optimization, Springer, 1992.
[8] G. Allaire, F. Jouve and A.-M. Toader, "Structural optimization using sensitivity
analysis and a level-set method," Journal of Computational Physics, vol. 194, no. 1,
pp. 363-393, 2004.
[9] J. A. Sethian and A. Wiegmann, "Structural boundary design via level set and immersed
interface methods," Journal of Computational Physics, vol. 163, no. 2, pp. 489-528,
2000.
Section 1.2
Integrated Product and Process Design
Some Hints for the Correct Use of the Taguchi
Method in Product Design

Sergio RIZZUTI* and Luigi DE NAPOLI

Università della Calabria, Dipartimento di Ingegneria Meccanica, Energetica e Gestionale –
DIMEG, Ponte Pietro Bucci 46/C, 87030 Rende (CS), Italy
* Corresponding author. Tel.: +39-0984-494601; fax: +39-0984-494673. E-mail address:
sergio.rizzuti@unical.it

Abstract. The paper discusses the problem of the correct identification of the
Objective Function and the associated SNR function that designers must choose
when employing the Taguchi method in product design, considering this step as
the basic element for quantifying the uncertainty of the device performance
prediction. During product design, when many design aspects must still be
understood by the design team, it is important to identify the most suitable "loss
function" that can be associated with the characteristic function. The second step
considers the variability of the characteristic function. The Taguchi method
considers many Signal to Noise Ratio functions, whereas in this paper the use of a
unique function is suggested for all kinds of loss function. The discussion is set in
the context of so-called parameter design, with the perspective of identifying the
best ranges of variation of the parameters that designers have identified as
influential on the characteristic function, and also of adjusting those ranges in
order to obtain a twofold result: to reduce the bias between the mean value of the
characteristic function response and the target value, and to obtain less variability
of the characteristic function. The discussion of a case study will illustrate the
approach and the use of a unique Noise Reduction function.

Keywords: Taguchi method; Loss Function; Signal to Noise Ratio; Noise Reduction.

1 Introduction

One of the most important phases during product design is the moment when the
design team tries to evaluate the efficiency of the design solution. Considering
that the solution under development was selected among several alternatives and
that customers have also suggested their desiderata, the designer must verify

© Springer International Publishing AG 2017


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_4

whether the solution is sufficiently robust to behave as intended in the initial design phase, when a set of requirements was stated.
The methodology of Robust Design by means of the Taguchi Method is particularly suited to being applied during design. Even if the methodology has a wide range of uses, its employment during product design allows designers to gain better insight into the device behavior in the scenario in which it will be used. Basically, designers have to identify the characteristics that must be checked. For each characteristic an Objective Function must be identified, so that it can be verified by the Taguchi method. This is the first problem the team must solve: it is not simple to translate the behavior of the device into an Objective Function that represents it in mathematical terms. The second step consists in setting up an experimental plan by which to investigate the influence of a set of design parameters on the Objective Function. The discussion of the results obtained at this starting point leads designers to perform the so-called "parameter design". The parameter ranges must be chosen so as to adhere to the essence of the Objective Function, in strict relation to the behavior of the "loss function" introduced by Taguchi. At the same time it is important to reduce the influence of noise on the device behavior: the variability. For this point Taguchi introduced a set of functions, called SNR (Signal to Noise Ratio), which allow the dispersion of the device behavior to be quantified when the device is imagined operating in a set of altered scenarios simulating real operational conditions. This latter step must be handled in a suitable manner, because it can be responsible for misunderstanding. The theme of SNR is one of the most critical points raised in the never-ending debate between the statistician and engineering communities.
The paper discusses the problem of the correct identification of the Objective Function and the associated SNR function, considering this step as the basic element to quantify the uncertainty of the device performance prediction. During product design, when many design aspects must still be understood by the design team, it is important to evaluate the effects the design parameters have on the mean of the Objective Function and on the corresponding trend of the SNR, and to identify which of them can be adjusted in order to: reduce the bias between the behavior of the Objective Function and the target value assigned in the list of requirements; reduce the variance of the Objective Function dispersion.
The uncertainty quantification during product design can be pursued by CAE simulation. The authors believe that the right connection between the Taguchi method and computer simulation, including Multiphysics, can provide sound guidance towards the identification of the best solution in product design. The choice of the Objective Function and of the law by which to measure the dispersion of the results can affect the reasoning on the design solution. For this reason, the paper will point out the nature of the loss function of the "target the better" approach, and will suggest the Noise Reduction function that allows variability to be quantified.
Some Hints for the Correct Use of the Taguchi ... 37

2 Relevant problems

During the 1980s, when the Taguchi Method was successfully employed in western countries [1], a serious debate emerged between statisticians and industrial engineers regarding some relevant problems with the method. The most famous instance was the round table published by Technometrics [2] in 1992. Statisticians revealed a set of flaws in the process, even though they recognized the merit of the Taguchi method in opening the world of the statistical approach to otherwise skeptical people. Industrial engineers were extremely interested in how it might be applied to the investigation of processes or products, by means of the so-called Taguchi philosophy.
In the following, only two questions will be considered that are particularly relevant to users of the method, because in many statistical software programs the Taguchi method is presented in the "classical" terms with which it became popular in the nineties. In fact, the principal flaws, which are discussed in the supplementary material of the textbook by Montgomery [3], have still not been resolved, in the sense that several of the proposed solutions are not, or cannot be, generally applicable.
The essential elements of the methodology derived from Design of Experiments (DOE) will not be discussed here; for these parts the reader is referred to the book of Phadke [4]. Briefly, they are: the identification of the objective function associated with the main characteristic; the construction of the plane of experiment, or the adoption of an orthogonal array; the identification of design parameters and their ranges of variation; the identification of noise effects and their ranges of variation; and the discussion of the results, comparing simultaneously the Main Effect and the Signal-to-Noise Ratio [5-6].

2.1 Problem 1: Definition of the characteristic function

At the basis of the Taguchi philosophy there is the concept of the so-called "loss function" L: a cost that society must sustain because of the inconsistencies of product/process characteristics. The definition of Quality is then shifted from the verification of the standard condition declared for the product to the minimization of the bias between the characteristic function (mean value) and its target value, in conjunction with its low variation. Taguchi introduced the Mean Squared Deviation (MSD), which measures variation around the ideal target μ,

MSD = σ² + (ȳ − μ)²    (1)

where σ represents the standard deviation of the characteristic function and ȳ its mean value.
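For reference, eq. (1) coincides with the mean squared error around the target; a minimal sketch in Python (the sample values are illustrative):

```python
import numpy as np

def mean_squared_deviation(y, target):
    """MSD = sigma^2 + (ybar - target)^2, eq. (1).

    Uses the population variance (ddof=0), as in Taguchi's definition.
    """
    y = np.asarray(y, dtype=float)
    return y.var() + (y.mean() - target) ** 2

# Illustrative sample of a characteristic function with target mu = 2.0
sample = [1.9, 2.1, 2.05, 1.95, 2.2]
msd = mean_squared_deviation(sample, target=2.0)
```

Note that eq. (1) is algebraically identical to the mean of (y − μ)² over the sample; the formula simply separates the variance term from the squared bias.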

In order to reduce the bias it is important to choose the best range of variation for each design parameter that influences the characteristic function. Operating in conjunction with the loss function, the designer has to vary each parameter range or move it in the direction indicated by the results obtained with the plane of experiments.
While the laws "the smaller the better" (where μ = 0) and "the larger the better" (where μ = ∞) are immediately identified with a direction (reduction or increase) of the characteristic function, towards which the ranges of variation of each design parameter must be tuned, the law "target the better", which incidentally appears the easiest to learn, hides the problem of not suggesting to the designer how to orientate the design parameter ranges, with their reduction or increase. Figure 1 shows the qualitative description of these laws, in which the Loss function (in red) and the pdf (in blue) of the characteristic function are superimposed.

a) Smaller the better [y² min]   b) Larger the better [y² max, or (1/y)² min]   c) Target μ the better [(y − μ)² min]

Fig. 1. The loss functions L and the pdf of the characteristic function.

2.2 Problem 2: Variance of noise effects

The concept associated with the SNR (signal to noise ratio) takes into consideration the mean value and the variance of the characteristic function at the same time, over a set of scenarios in which the device under development can operate. It is derived as the reciprocal of the coefficient of variation. Equation (2) represents the most common SNR, generally employed in the "target the better" approach.

SNR = 10 log₁₀ (ȳ² / σ²)    (2)

At first sight, this element is interesting for its strength in taking into consideration both peculiar terms of the problem. However, a certain difficulty emerges and

at least three different formulas were defined by Taguchi. This is the second ele-
ment that should be clarified.
It has been stated that the SNR must be maximized, because what really must be pursued is convergence towards a design solution with minimum variance, in the sense that, whatever the noises affecting the device, it will perform almost uninfluenced by them. Figure 2 illustrates the convergence towards the more valid design solution through the pdf with the smaller variance (the bell curve shown with the blue line). This is in agreement with the DFSS approach (Design for Six Sigma).

Fig. 2. Reduction of noise effect.

3 New set of evaluation criteria

Considering the Taguchi Method as fundamental in product design, because it allows the designer to investigate around his/her design solution with many insights, it is important to point out the flaws that might compromise the adoption of the right decisions.
The suggestions derive from the following two subsections.

3.1 Reducing Bias of characteristic function

In many applications of the Taguchi method in product design the problem of making the characteristic function converge towards a target must be managed. In this kind of application, the third case described by Taguchi cannot be applied in a straightforward way, because the target cannot be considered as the mean value of the characteristic; rather, it assumes the meaning of a lower bound or an upper bound.
The image reported in Figure 1c should be modified as reported in Figure 3. In this case the problem must be clearly identified, verifying that the distribution of the characteristic function does not cross the target value. We can separate this case into two different occurrences, which can be named: target the better with a lower bound and target the better with an upper bound.

Fig. 3. The loss function L and the target μ identified as the extreme value of a lower or upper bound condition.

The management of the condition target the better with a lower bound (see Figure 4a) requires that the characteristic function is rewritten so that it corresponds to a coordinate translation (ȳ − μ). The condition that should be satisfied is then a smaller-the-better problem with the following law: (ȳ − μ)² min. Figure 4b describes this kind of situation.

Fig. 4. a) The loss function L for target the better with a lower bound; b) the coordinate translation.

The management of the condition target the better with an upper bound (see Figure 5a) requires that the characteristic function is rewritten so that it corresponds to a coordinate translation (μ − ȳ). The condition that should be satisfied is then a smaller-the-better problem with the following law: (μ − ȳ)² min. Figure 5b describes this kind of situation.

Fig. 5. a) The loss function L for target the better with an upper bound; b) the coordinate translation.
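Both translations can be folded into one helper that reduces either bounded case to a smaller-the-better problem; a minimal sketch (the function names are illustrative):

```python
def translated_response(y, target, bound="lower"):
    """Translate a bounded target-the-better response into a
    smaller-the-better characteristic (cf. Figures 4 and 5)."""
    if bound == "lower":
        return y - target      # translation (y - mu), Figure 4b
    if bound == "upper":
        return target - y      # translation (mu - y), Figure 5b
    raise ValueError("bound must be 'lower' or 'upper'")

def smaller_the_better_loss(y, target, bound="lower"):
    """Quadratic loss of the translated response: (y - mu)^2 or (mu - y)^2."""
    return translated_response(y, target, bound) ** 2
```

Either way, the translated response is zero at the bound and its quadratic loss grows as the characteristic moves away from it, which is exactly the smaller-the-better shape.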

3.2 Reducing variance of noise effect

The unique rule that can be followed, as reported in Figure 2, is the reduction of variance. Alternatively, to retain the idea that a quantity must be maximized, a Noise Reduction (NR) formula can be written as:

NR = −10 log₁₀ σ²    (3)

similar to those introduced by Taguchi, where σ is the standard deviation of the distribution.
In fact, this term is already contained in the general SNR formula (see eq. 2) if we rewrite it as

SNR = 10 log₁₀ ȳ² − 10 log₁₀ σ²    (4)

in which the effect of the mean value ȳ has simply been eliminated.
The Noise Reduction reported in eq. 3 is the unique formula that can be used in all kinds of problem.
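Equations (2)-(4) can be checked numerically in a few lines; a short sketch (the sample data are illustrative):

```python
import numpy as np

def snr_target_the_better(y):
    """SNR = 10 log10(ybar^2 / sigma^2), eq. (2)."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var())

def noise_reduction(y):
    """NR = -10 log10(sigma^2), eq. (3): independent of the mean value."""
    return -10.0 * np.log10(np.asarray(y, dtype=float).var())

sample = np.array([1.9, 2.1, 2.05, 1.95, 2.2])
snr = snr_target_the_better(sample)
nr = noise_reduction(sample)
mean_term = 10.0 * np.log10(sample.mean() ** 2)  # first term of eq. (4)
```

The decomposition of eq. (4) then reads snr == mean_term + nr, and shifting the whole sample by a constant leaves NR unchanged, which is precisely why eq. (3) isolates the variability.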

4 Case study

One of the most frequent characteristic functions to be analyzed in product design is the Factor of Safety (FoS) of a structure. It is generally required to be 1 or greater, in relation to the model employed and the conditions of employment. Considering the Taguchi philosophy of Robust Design, FoS cannot be identified as the target of an experiment; it is really a lower limit that cannot be violated. Employing the third law of the Loss function in a straightforward manner in this case, designers might end up accepting combinations of design parameters that give an inadmissible condition.
The case study proposed in this section is the first dimensioning of an E-bike
frame (see Figure 6). The E-bike was designed with the electric motor integrated
in the rear wheel, and the battery housed on the main tube in order to balance the
weights.

Fig. 6. a) The first drawing of the E-bike; b) the geometric model of the frame.

The dimensioning of the main components of the frame was performed by the Taguchi method as a "target the better" problem. An L8 (2⁷) orthogonal array was used and a set of seven design parameters was identified. In order to evaluate the response of the structure, made of 5086 Aluminium Alloy, three different load conditions were modeled, which allowed designers to assess the variability of the characteristic function. The constraints were applied at the base of the head tube and at the rear dropouts. In Table 1 the three load conditions (in N) are summarized.

Table 1. Load conditions in the several scenarios.

Scenario  Load condition  Load on left handlebar  Load on right handlebar  Load on left pedal  Load on right pedal  Load on the saddle
1         Be seated       200                     200                      200                 200                  700
2         Out of saddle   250                     250                      500                 500                  0
3         Accidental      0                       0                        0                   0                    1500

In Figure 7 the set of design parameters selected for the investigation is reported. They are:

A – Slope of the seat stays = 32 ÷ 37 degrees
B – Joint of the down tube to the head tube = 40 ÷ 55 mm
C – Joint of the top tube to the head tube = 35 ÷ 48 mm
D – Diameter of the stays = 12 ÷ 17 mm
E – Thickness of the stays = 1.5 ÷ 2 mm
F – Down tube minor semi axis = 25 ÷ 33 mm
G – Down tube major semi axis = 45 ÷ 52 mm
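The standard L8(2⁷) orthogonal array used for such a plan can be written out directly, with each of its seven columns assigned to one factor; a sketch mapping the ranges above to the two levels (the column-to-factor assignment and the use of range endpoints as levels are illustrative assumptions):

```python
# Standard Taguchi L8(2^7) orthogonal array (levels coded 1 and 2)
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

# Factor levels taken as the (low, high) endpoints of the ranges listed above
levels = {
    "A": (32, 37),    # slope of the seat stays [deg]
    "B": (40, 55),    # joint of the down tube to the head tube [mm]
    "C": (35, 48),    # joint of the top tube to the head tube [mm]
    "D": (12, 17),    # diameter of the stays [mm]
    "E": (1.5, 2),    # thickness of the stays [mm]
    "F": (25, 33),    # down tube minor semi axis [mm]
    "G": (45, 52),    # down tube major semi axis [mm]
}

# The eight experimental runs, one dict of factor settings per row
runs = [
    {f: levels[f][code - 1] for f, code in zip("ABCDEFG", row)}
    for row in L8
]
```

The array is orthogonal: each column is balanced (four runs per level) and every pair of columns contains each of the four level combinations exactly twice, which is what lets the main effects be separated with only eight runs.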

In the following the results of the investigation are reported, following two approaches: the first reports the "classical" elaboration of the "target the better" case (see Tables 2 and 3); the second reports the elaboration performed with the present proposal (see Tables 4 and 5).

Fig. 7. a) The set of design parameters employed during the investigation; b) CAE simulation of the loaded frame with the response in terms of FoS in false colors.

Table 2. Mean value of the Objective Function, FoS, to be guided to the target value F̄oS = 2.

Table 3. SNR computed by eq. 4.

Table 4. Mean value of the Objective Function, FoS, to be minimized, revisited by the law "target the better" with lower bound: (FoS − F̄oS) = 0.

Table 5. NR computed by eq. 3.

As can be seen, no differences appear between Tables 2 and 4. The results obviously have the same trends, because they have only been subjected to a translation. The graphs of Table 4 give direct information on the characteristic function, and the designer can judge immediately the state of stress in comparison to the target value.
Table 5 differs in many respects from Table 3. This is an interesting fact. In detail, factor B now has a higher variability, while D, F and G changed their slopes. The comparison of the Mean Value and NR graphs (Tables 4 and 5) allows the designer to identify the design parameter levels that better respond to the problem. In fact, the levels that simultaneously give the smallest value of the characteristic function and the maximum Noise Reduction are A2-B2-D1-E1-F1-G1.
Further investigation must be made on C, because ambiguity remains. Following the procedure discussed in [6], a second step can be pursued employing a plane of experiment at three levels. In any case, the range of factor C must be moved towards the lower level, in accordance with the Mean Value graph. The new parameter range can be (30; 35; 40). The other parameters to be examined in an L9 (3⁴)

orthogonal array are A, D and F, identified by ANOVA as the most influential on the characteristic function Mean Value; their initial ranges can be subdivided.

5 Conclusion

The main task of the present paper is to underline the extreme importance of the Taguchi method in product design. During the first step of the embodiment phase, designers need to understand and evaluate the strengths and weaknesses of the solution under development. Projecting a plan of experiments, together with the identification of the possible design parameters and their ranges and the identification of the possible noise sources, allows designers to become familiar with the device behavior and to decide on possible changes to the product architecture.
The Taguchi method can be applied with the help of statistical software, even though the SNR functions available there are the same ones that have been subjected to criticism. Therefore researchers must be wary of using them in a straightforward manner. At the same time it appears that many researchers do not worry about this, and indeed it seems they ignore the debate on the Taguchi method.
The Taguchi method does not require specialized software, because it can be used with the help of a spreadsheet. Designers should be invited to implement it with the awareness underlined in this paper.

Acknowledgments The authors would like to thank Dr. Alessandro Burgio (PhD), President of CRETA – Regional Consortium for Energy and Environment Protection, and the students Raso A., Tropeano E.F., Sergi A. and Mirabelli F. for the materials produced and presented in the paper.

References

[1] Taguchi G. and Wu Y., Introduction to Off-line Quality Control, Central Japan Quality Control Association, Nagoya, Japan, 1985.
[2] Nair V.N. (editor), Taguchi's Parameter Design: A Panel Discussion, Technometrics, 34 (2), 1992.
[3] Montgomery D.C., Design and Analysis of Experiments, 8th Ed., John Wiley & Sons, 2012.
[4] Phadke M.S., Quality Engineering Using Robust Design, Prentice-Hall International, 1989.
[5] Rizzuti S., The Taguchi method as a means to verify the satisfaction of the information axiom in axiomatic design, Smart Innovation, Systems and Technologies, 2015, 34, pp. 121-131.
[6] Rizzuti S., A procedure based on Robust Design to orient towards reduction of information content, Procedia CIRP, 2015, 34, pp. 37-43.
Neuro-separated meta-model of the scavenging
process in 2-Stroke Diesel engine

Stéphanie Cagin¹*, Xavier Fischer¹

¹ ESTIA Recherche (France)
* Corresponding author. E-mail address: s.cagin@estia.fr

Abstract The complexity of the flow inside the cylinder calls for the development of new, accurate and specific models. The scavenging process, which influences 2-stroke engine efficiency, is particularly dependent on the cylinder design. To improve the engine performance, the enhancement of the chamber geometry is necessary. The development of a new neuro-separated meta-model is required to represent the scavenging process depending on the cylinder configuration. Two general approaches were used to establish the meta-model: neural networks and NTF (Non-negative Tensor Factorization) separation of variables. To fully describe the scavenging process, the meta-model is composed of four static neural models (representing the Heywood parameters), two dynamic neural models (representing the evolution of the gas composition through the ports) and one separated model (the mapping of the flow path during the process). With low reduction errors, these two methods ensure the accuracy and the relevance of the meta-model results. The establishment of this new meta-model is presented step by step in this article.

Keywords: Neuro-separated meta-model; model reduction; neural network; NTF variables separation; scavenging; 2-stroke engine; ports.

1 Introduction

Due to drastic emissions standards and fuel consumption constraints, internal combustion engines used for transportation are widely studied to improve efficiency and obtain better performance. Despite their lower efficiency, the automotive industry has recently shown renewed interest in 2-stroke engines thanks to their advantages. Smaller and lighter than 4-stroke engines, 2-stroke engines fire once every revolution, offering higher power and greater, smoother torque (Mattarelli [1]; Trescher [2]). The interest is also linked to the development of new technological approaches which increase their efficiency.
The complexity of engines leads studies to focus on one particular moment of the engine cycle. Less studied but just as important, the scavenging process has a

© Springer International Publishing AG 2017 45


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_5

considerable influence on engine pollution, especially in 2-stroke engines with ports. Scavenging is the process by which the fresh gases come into the combustion chamber, pushing the burnt gases out through the exhaust ports. This process is highly dependent on the cylinder geometry. To improve the scavenging efficiency, it is necessary to model it and to observe the influence of geometric parameters on it.
Some scavenging models have already been developed, such as the perfect displacement and perfect mixing models proposed by Hopkinson [3], the Maekawa "three-zone" model [4] or the Sher "S-shape" model [5]. These models evaluate the exchange of gas masses during scavenging. Their use requires the evaluation of some empirical parameters, and they do not explicitly integrate cylinder parameters, which makes them unsuitable in a design optimization process. Other models have been established since, but they are specific to the engine studied.
To represent the scavenging process in a 2-stroke engine, we needed to develop new models perfectly adapted to our Diesel engine (2-stroke with ports) and to our use (finding an optimized cylinder design).

2 Scavenging process

2.1 The new model

During the engine cycle, scavenging occurs between the expansion and the compression phases (Blair [6]). Only ports are used for the gaseous exchanges.
Three phenomena are usually associated with scavenging by ports: the backflow (Mattarelli [1]), the short-cutting and the mixing (Lamas [7]). The backflow is observed when the burnt gases flow through the intake ports, stopping fresh gases from entering. The short-cutting characterizes the exit of fresh gases before the end of the scavenging process. Less problematic than the short-cutting, the mixing implies that a part of the fresh gases goes out of the cylinder, entraining some burnt gases with them. To enhance the scavenging process, both backflow and short-cutting should be reduced. The two phenomena can be evaluated knowing the composition of the gases going through the ports.
To fully characterize the scavenging efficiency, it is also necessary to evaluate the masses of gases exchanged: the aim of scavenging is to replace burnt gases by fresh gases as far as possible. The gaseous exchanges are determined thanks to the four Heywood parameters [8]: the delivery ratio Λ, the trapping efficiency ηtr, the scavenging efficiency ηsc and the charging efficiency ηch.
Finally, the model is completed by the evolution of the gas distribution inside the cylinder during the process. The observation of the flow path is also useful in a design optimization process to determine the most suitable cylinder parameters.
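As a reminder of how these four parameters relate, they can be sketched as mass ratios; the definitions below follow the standard textbook formulation of Heywood [8], and the variable names and sample masses are illustrative:

```python
def heywood_parameters(m_delivered, m_fresh_trapped, m_trapped, m_ref):
    """Scavenging metrics as mass ratios (standard definitions after Heywood [8]).

    m_delivered     -- mass of fresh charge delivered to the cylinder
    m_fresh_trapped -- delivered fresh charge retained at port closure
    m_trapped       -- total trapped cylinder mass at port closure
    m_ref           -- reference mass (e.g. displaced volume x ambient density)
    """
    delivery_ratio = m_delivered / m_ref           # Lambda
    trapping_eff = m_fresh_trapped / m_delivered   # eta_tr
    scavenging_eff = m_fresh_trapped / m_trapped   # eta_sc
    charging_eff = m_fresh_trapped / m_ref         # eta_ch
    return delivery_ratio, trapping_eff, scavenging_eff, charging_eff

# Illustrative masses in kg; note the identity eta_ch = Lambda * eta_tr
lam, eta_tr, eta_sc, eta_ch = heywood_parameters(1.2e-3, 0.9e-3, 1.0e-3, 1.0e-3)
```

The identity ηch = Λ · ηtr links the four quantities, so only three of them are independent.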

2.2 The cylinder variables

To optimize a design, the parameters of the cylinder directly influencing the scavenging process have been defined (Cagin [9]): the angles βin and βexh, the height of the exhaust port θend_exh_port, the advances of opening of the intake and exhaust ports, θin_advance and θexh_advance, the boost pressure Pboost and the difference between intake and exhaust pressures ΔP. Their ranges of values have been determined to respect the engine constraints (mechanical constraints, duration of the power phase…).
Then, a database of behavior has been built from 2D CFD results [10]. The different configurations of the cylinder (combinations of parameters) were determined using a design of experiments [11]. The developed models ensue from the CFD results.

3. Neuro-separated meta-model

The newly developed model is called a "neuro-separated meta-model" because of the techniques used to establish it. Indeed, to explicitly express the cylinder variables, neural networks have been used to develop two sub-models: one for the Heywood parameters and one for the evolution of the gas composition going through the ports. The flow path model is defined thanks to a separated-variables method, the β-NTF approach. All the sub-models are brought together to form the neuro-separated meta-model of the scavenging process.

3.1. Neural network

Neural networks (NN) can be seen as a complex mathematical function that accepts any numerical inputs and generates associated numerical outputs. After the training phase, the NN is able to approximate the system's behavior. Its ability to handle all input combinations makes it very useful in an optimization process: successive combinations of variables can be tested to converge towards an optimized design. The NN analytical model of the scavenging is established with multiple interests. First, the analytical model runs far faster than CFD calculations (a few seconds vs 10 h). Second, the NN model size is far smaller than a database of CFD results. Third, any parameter combination can be tested if the scavenging behavior is considered continuous and linear outside the learning points.
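The kind of static network described here (one hidden layer of sigmoid neurons, cf. the structures of Table 1 below) can be sketched in a few lines of NumPy; the training data are an illustrative toy stand-in for the CFD database, and the paper itself used the pybrain library:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-in for the CFD database: 7 normalized cylinder parameters -> 1 response
X = rng.uniform(0.0, 1.0, size=(60, 7))
y = 0.5 + 0.3 * np.sin(X.sum(axis=1))[:, None]  # illustrative smooth response

# One hidden layer of 10 sigmoid neurons, linear output
W1 = rng.normal(0.0, 0.5, (7, 10)); b1 = np.zeros(10)
W2 = rng.normal(0.0, 0.5, (10, 1)); b2 = np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(X)
mse_before = float(np.mean((out0 - y) ** 2))

lr = 0.1
for _ in range(2000):                    # plain batch gradient descent
    h, out = forward(X)
    err = out - y                        # dL/d(out) for the mean squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = err @ W2.T * h * (1.0 - h)      # backpropagation through the sigmoid
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out1 = forward(X)
mse_after = float(np.mean((out1 - y) ** 2))
```

After training, the network can be evaluated on any new parameter combination in microseconds, which is exactly the property exploited in the optimization loop.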

3.2. Heywood neural sub-model

The neural networks were developed using Python code and the pybrain library. The network is used to evaluate the Heywood parameters (Λ, ηtr, ηsc, ηch) at the end of the process, depending on the cylinder design.
After several tests with different structures (changing the number of neurons and the activation function type), the results illustrated the NN's inability to reduce the relative error of all four outputs at the same time. So one NN for each parameter is used. Depending on the number of hidden neurons and the activation functions selected, the relative error for the learning data and the relative error for all the data are calculated. Weights and biases of the network are randomly initialized. The results for each Heywood parameter NN are presented in Table 1.
Table 1. Neural network results

Output  Activation function  Number of hidden neurons  Relative error (training data)  Relative error (all data)
Λ       Sigmoid              10                        2.56%                           7.9%
ηtr     Sigmoid              8                         0.65%                           2.5%
ηch     Tanh                 12                        0.36%                           7.1%
ηsc     Tanh                 8                         3.02%                           5.2%
The first striking result is that the best structure for each output is different from the others, which confirms the need to develop one NN per parameter.
These four outputs indicate the influence of each input on the scavenging process in order to optimize the cylinder design. However, the relative errors of the delivery ratio and the charging efficiency are over 7%, which makes those NNs' relevance questionable. To reduce the errors, the database can be completed with other CFD results. Another way is to change the NN structure to something more appropriate and/or to initialize the weights and biases with better values.
In any case, NNs are very flexible with respect to the input values, which is very useful in optimization problems and justifies their widespread use in various sectors. The four neural networks of Table 1 are integrated into the neuro-separated meta-model.

3.3. Mass fractions neural sub-model

Now, the goal of the model developed in this section is to describe the evolution of the gas composition flowing through the ports. The composition is evaluated every 0.99 crankshaft degrees. The mass fraction of burnt gases is the output of the NN.
The function the NN has to approximate (the blue curve in Fig. 1) models the Behavior of a Design Configurations Family (the BDCF model). All the inputs were numbered from 1 to 2429 (intake data) or to 4077 (exhaust data); they correspond to the abscissa axis. On the ordinate are the outputs (mass fraction of burnt gases).

The number of layers is chosen to be as small as possible. The number of nodes in the hidden layer arbitrarily varies from 6 to 12. To compare the performance of the different NNs, the training duration and the relative error are computed:

(1)

And to obtain the average error, equation (2) was used:

(2)

Matlab® software was used to generate and train the NNs. The linear activation function was selected for the input layer, whereas the tangent sigmoid is used for the hidden and output layers [11]. The results of the NN are provided in Table 2.
Table 2. Network efficiencies depending on the number of nodes in the hidden layer

Number of nodes (hidden layer)  Training duration  Absolute error: Average  Minimum  Maximum
6                               5:48               3.67%                    ~10⁻⁶    26.9%
8                               6:03               3.33%                    ~10⁻⁷    24.6%
10                              6:19               3.15%                    ~10⁻⁶    28.8%
12                              6:41               3.14%                    ~10⁻⁵    27.0%
The influence of the number of neurons is quite low: all the errors are almost the same. With less than 4% average relative error, whatever the number of nodes in the hidden layer, the neural networks appear to be very efficient for modelling the scavenging process. Only the maximum relative error remains unsatisfying. To decrease the maximum relative error, other hidden layers can be added. In this case, a compromise between error reduction and the increased complexity of the analytical model has to be found.

Fig. 1. Expected outputs vs the BDCF model


The NN with 8 nodes seems to be the best compromise between training duration and relative errors. The general shape of the NN outputs is close to the shape of the real outputs, as shown in Fig. 1. The use of the sigmoid function for the output layer is thus validated. This NN was selected to compute the composition through the exhaust ports; the same NN structure is used for the intake ports. These two NNs are also integrated into the neuro-separated meta-model.

3.4. Gases distribution sub-model

The Non-negative Tensor Factorization (NTF) algorithm is very attractive because of its ability to take into account spatial and temporal correlations between variables more accurately than 2D Non-negative Matrix Factorization (NMF). First proposed by Shashua and Hazan [12], NTF is a generalization of NMF (Lee and Seung [13]). Based on a PARAFAC (PARAllel FACtors) analysis, the particularity of the NTF method is to impose nonnegativity constraints on the tensor and factor matrices. NTF also provides greater stability and a unique solution, as well as meaningful latent (hidden) components or features with physical or physiological meaning and interpretation [14]. Finally, the NTF algorithm provides a powerful implementation with multi-array data. The principle of the NTF decomposition is illustrated by Fig. 2:

Fig. 2. NTF Concept


The 3D matrix form of the separated model is particularly suitable for data storage and data mapping: the separated-variables model is used to visualize the flow path during the process. To use the NTF algorithm with the β divergence expressed by Cichocki [14], the data are organized to obtain the Y model. The three dimensions are respectively associated with space, time and the combinations of cylinder parameters (cf. Fig. 2). The dimensions of the Y matrix in this study are 174 x 6561 x 27: 174 crankshaft angles for the scavenging duration, 6561 mesh nodes (after normalization of the data) and 27 configurations of the engine modeled.
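A minimal nonnegative PARAFAC factorization in the spirit of Fig. 2 can be sketched with multiplicative updates; this sketch uses the Frobenius norm (i.e. β = 2) on a small random stand-in tensor, whereas the paper uses the β-divergence formulation of Cichocki [14]:

```python
import numpy as np

rng = np.random.default_rng(1)

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Kronecker product, shape (I*J, R)."""
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

def ntf(Y, rank, n_iter=200, eps=1e-9):
    """Nonnegative 3-way CP (PARAFAC) decomposition by multiplicative updates."""
    I, J, K = Y.shape
    A = rng.uniform(0.1, 1.0, (I, rank))
    B = rng.uniform(0.1, 1.0, (J, rank))
    C = rng.uniform(0.1, 1.0, (K, rank))
    for _ in range(n_iter):
        # Update each factor in turn; the ratio update keeps all entries >= 0
        for mode, M, P, Q in ((0, A, B, C), (1, B, A, C), (2, C, A, B)):
            Z = khatri_rao(P, Q)
            M *= (unfold(Y, mode) @ Z) / (M @ ((P.T @ P) * (Q.T @ Q)) + eps)
    return A, B, C

# Small stand-in for the (space x time x configuration) tensor Y of the paper
Y = rng.uniform(0.0, 1.0, (8, 6, 5))
A, B, C = ntf(Y, rank=4)
Y_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y)
```

The three factor matrices play the role of the mode components in Fig. 2: the rank corresponds to the number of modes, and storing A, B and C replaces storing the full tensor.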
First, the influence of the number of design configurations on the average rela-
tive error was tested (Fig. 3). The number of design configurations directly im-
pacts the model size and the dimension of the Y matrix. The relative error was
calculated with (1) and the average error with (2).

Fig. 3. Evolution of the relative error depending on the number of configurations


Fig. 3 shows that, for the same number of modes, the reduction error rises when the number of configurations increases. However, it also highlights that the influence of the model size on the error decreases as the size of the model increases. Beyond 14 configurations, the error increase is almost negligible: any new configuration results added to the CFD model will not impact the accuracy of the reduced model.
Neuro-separated meta-model of the scavenging … 51

As seen in Table 3, the number of modes should be chosen carefully. This number influences both the reduction error and the time needed to establish the reduced model. The duration of the model development increases proportionally with the number of modes: with 160 modes, more than 5 days are needed to obtain the complete reduced model.
Table 3. Results of NTF reduced models
Modes   Average rel. error   Normalized model size   Reduced model size   % of reduction   Time
 10     7.12%                30,823,578              67,620               99.78%           18.6 h
 80     3.45%                30,823,578              540,960              98.24%           71.2 h
120     2.80%                30,823,578              811,440              97.37%           99.2 h
160     2.41%                30,823,578              1,081,920            96.49%           123.2 h

Fig. 4. Average relative error depending on the number of modes

Conversely, the error decreases when the number of modes rises (Fig. 4). In addition, above 10 modes, the error is always under 8%, which confirms that the NTF method is well adapted to extensive models: it achieves 99.8% compression while introducing less than 8% error. An error of 2% seems to be the minimum that can be achieved (Fig. 4). The differences between the CFD and NTF models are presented in Fig. 5.

Fig. 5. Comparison between CFD and NTF models (for 60 and 160 modes)

Due to its matrix form, the NTF reduced model is directly usable only for the 27 configurations tested. It cannot directly provide results for other design configurations but, associated with a kriging method, the separated-variable model forecasts the evolution of the gas distribution inside the cylinder during the whole scavenging process for any configuration, thanks to data interpolation [11]. The NTF reduced model completes the neuro-separated meta-model, which already integrates six neural models (four neural networks for the Heywood parameters and two for the gas compositions).
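The interpolation step can be illustrated with a minimal Gaussian-kernel (kriging-style) interpolator over the configuration-mode factor matrix C. This is only a sketch: the function name, the design-parameter coordinates and the length scale are assumptions, and the actual kriging model of [11] is not reproduced here.

```python
import numpy as np

def interpolate_config_factor(design_points, C, query, length_scale=1.0):
    # Predict the configuration-mode factor row of an untested design point
    # by Gaussian-kernel interpolation over the tested configurations.
    X = np.atleast_2d(np.asarray(design_points, float))   # (K, d) tested designs
    q = np.atleast_1d(np.asarray(query, float))
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Kmat = np.exp(-d2 / (2.0 * length_scale ** 2))        # kernel between tested designs
    k = np.exp(-np.sum((X - q) ** 2, axis=-1) / (2.0 * length_scale ** 2))
    w = np.linalg.solve(Kmat + 1e-8 * np.eye(len(X)), k)  # interpolation weights
    return w @ C                                          # (rank,) interpolated factor row
```

At a tested configuration the interpolator returns the corresponding row of C; the flow field for that configuration is then reassembled from the space- and time-mode factors.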

5. Conclusion

Thanks to neural networks and the NTF variable-separation method, a neuro-separated meta-model of the scavenging process has been developed. This new meta-model characterizes the whole process by integrating seven reduced models: six neural models and one separated-variable model. The meta-model combines static aspects of scavenging, through the Heywood parameters evaluated at the end of the process, and dynamic aspects, through the evolution of the gas composition at the ports and the mapping of the flow path during the process. In addition to being perfectly adapted to our specific engine, the meta-model has the advantage of explicitly integrating the design variables. It can easily be used in a design optimization process; the low reduction errors ensure the accuracy and the relevance of the meta-model results. This kind of new meta-model, composed of several accurate reduced sub-models, will be used more and more: each sub-model represents one phenomenon or one aspect of a global process, whereas their combination provides a complete description of the process.

References

[1] Mattarelli E. "Virtual design of a novel two-stroke high-speed direct-injection Diesel engine". International Journal of Engine Research. 10, 3, 175–93, 2009;
[2] Trescher D. "Development of an Efficient 3–D CFD Software to Simulate
and Visualize the Scavenging of a Two-Stroke Engine". Archive of Compu-
tational Methods in Engineering. 15, 1, 67–111, 2008;
[3] Hopkinson B. "The Charging of Two-Cycle Internal Combustion Engines".
Journal of the American Society for Naval Engineers. 26, 3, 974–85, 1914;
[4] Maekawa M. "Text of Course". JSME G36. 23, 1957;
[5] Sher E. "A new practical model for the scavenging process in a two-stroke
cycle engine". SAE Technical paper 850085. 1985;
[6] Blair G.P. "Design and Simulation of two-stroke engines". Society of Au-
tomotive Engineers, Inc.; 1996.
[7] Lamas Galdo M.I., Rodríguez Vidal C.G. "Computational Fluid Dynamics
Analysis of the Scavenging Process in the MAN B&W 7S50MC Two-
Stroke Marine Diesel Engine". Journal of Ship Research. 56, 3, 154–61,
2012;
[8] Heywood J.B. "Internal Combustion Engine Fundamentals". McGraw-Hill. Duffy A, Moms JM, editors. 1988.

[9] Cagin S., Fischer X., Bourabaa N., Delacourt E., Morin C., Coutellier D. "A
Methodology for a New Qualified Numerical Model of a 2-Stroke Diesel
Engine Design". The International Conference On Advances in Civil, Struc-
tural and Mechanical Engineering - CSME 2014. Hong-Kong; 2014.
[10] Cagin S., Bourabaa N., Delacourt E., et al. "Scavenging Process Analysis in
a 2-Stroke Engine by CFD Approach for a Parametric 0D Model Develop-
ment". 7th International Exergy, Energy and Environment Symposium. Va-
lenciennes; 2015.
[11] Cagin S., Fischer X., Delacourt E., et al. "A new reduced model of scaveng-
ing to optimize cylinder design". Simulation: Transactions of the Society
for Modeling and Simulation International. 2016;
[12] Shashua A., Hazan T. "Non-negative tensor factorization with applications to
statistics and computer vision". ICML ’05 Proceedings of the 22nd Interna-
tional Conference on Machine learning. New-York; p. 792–9, 2005.
[13] Lee D.D., Seung H.S. "Learning the parts of objects by nonnegative matrix
factorization". Nature. 401, 788–91, 1999;
[14] Cichocki A., Zdunek R., Choi S., Plemmons R., Amari S.-I. "Non-Negative
Tensor Factorization using Alpha and Beta Divergences". IEEE Internation-
al Conference on Acoustics, Speech and Signal Processing. Honolulu, HI;
2007.
SUBASSEMBLY IDENTIFICATION METHOD BASED ON CAD DATA

Imen BELHADJ1, Moez TRIGUI1 and Abdelmajid BENAMARA1


1
Mechanical Engineering Laboratory, National Engineering School of Monastir, University of Monastir, Av. Ibn Eljazzar, 5019, Monastir, Tunisia.

* Corresponding author. Tel.: +216-95 296 671. E-mail address: imenne.belhadj@gmail.com

Abstract Given the significant number of parts constituting a mechanism, assembly or disassembly sequence planning has become a very hard problem. The subassembly identification concept constitutes an original way to solve this problem, particularly for complex products. This concept aims to break down a multipart assembly product into a number of subassemblies, each constituted by a small number of parts. Consequently, the assembly or disassembly sequence planning can be generated relatively easily because it is performed between the subassemblies constituting the product; then, each subassembly is assembled or disassembled using the same approach. In the literature, subassembly identification from the CAD model is not very developed and remains a relevant research subject to be improved. In this paper, a novel subassembly identification approach is presented. The proposed approach starts with the exploration of the CAD assembly data to obtain an adjacency matrix. Then, the extracted matrix is enriched by adding the contacts in all directions in order to determine and classify the base parts, initiators of each subassembly. The next step is to identify the subassemblies using a new matrix, called the sum matrix, obtained from the contact-in-all-directions matrix and the fit matrix. To better discuss and explain the stages of the proposed approach, an example of a CAD assembly product is presented throughout all sections of this paper.

Keywords: CAD data extraction, Subassembly concept, Base parts

1 Introduction
Assembly or Disassembly Sequence Planning (ASP/DSP) for a product is a very complex task that affects not only the design cycle but also the product cost. This complexity is proportional to the number of parts constituting the assembly product. To reduce this difficulty, a concept such as Subassembly Identification (SI) was proposed by many researchers [1]. This concept aims to break down the multipart assembly product into a number of subassemblies, each constituted by a small number of parts. Then, the ASP/DSP of the subassemblies can be realized easily. Despite the
© Springer International Publishing AG 2017 55
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_6
56 I. Belhadj et al.

efficiency of the subassembly method, the automatic identification of subassemblies from a CAD model is still a research subject to be improved.
Due to the variety of assembly constraints, the result of SI is not unique for the same assembly product and may include erroneous solutions. Consequently, many criteria to evaluate the correctness or the feasibility of the SI were introduced according to the various engineering applications.
In the literature, many SI techniques are presented. The most important are reviewed below.
Some researchers classified SI into two principal categories, based on topological and geometrical assembly constraints [2]. Other investigators proposed graph-theory methods which represent contacts or connections between assembly parts: the nodes represent the components constituting the assembly and the lines between nodes represent the contacts [3]. This method has been improved by taking assembly precedence into account with various techniques. The "ask–answer" method is used by Kara et al. [4] to identify subassemblies and generate the DSP of the product; the arrowed lines between parts represent assembly precedence. Wang et al. [5] introduced the concept of base parts, which is very practical in SI. The base parts and precedences are judged by experts, which represents a limitation of this approach. Wang et al. [3] used the assembly tree to break down a complex CAD model into subassemblies, where each subassembly can enclose some simpler subassemblies. Despite the importance of the assembly structure tree, the SI cannot be determined automatically because the assembly structure tree is not unique, depending on how the planner realizes the assembly [6, 7]. Sugato [8] proposed an SI approach based on assembly structure knowledge; the obtained subassemblies are then taken as typical cases to extract other similar subassemblies. In the same way, other researchers used a case-based reasoning method to identify subassemblies, based on existing cases or experiences [9]. Santochi et al. [10] used a set of rules to construct an incidence matrix which represents the assembly mating features. The subassemblies are recognized with respect to the incidence matrix and the inherent information. From the literature overview detailed previously, some relevant points are identified:
- There is not a unique or standard approach to identify subassemblies.
- SI is a combinatorial problem; the search space of SI increases with the number of parts.
- The intervention of the planner is necessary in most of the previous research works; consequently the automation of SI is not satisfactory.
- Few works use CAD data to identify subassemblies automatically.
- The use of base parts can provide an advanced way to identify subassemblies in a complex product.
This paper aims to present an SI approach based on the extracted CAD assembly constraints. The remainder of this paper is organized as follows: first, the data extraction and relationship method from the CAD assembly model is detailed. Then, an adjacency matrix is constructed, to be enriched later by mounting parameters, and the SI procedure can be started. An industrial example is treated throughout all sections of this paper in order to explain the different stages of the proposed approach.
Subassembly Identification Method … 57

2. Proposed approach to identify subassemblies


Figure 1 describes the strategy adopted for the SI approach, which is composed of two principal steps:
- CAD assembly data extraction: it aims to extract all assembly constraints and all part attributes;
- Subassembly identification process: it aims to identify the set of subassemblies based on the assembly constraints and part attributes.

[Assembly CAD model → STEP 1: CAD assembly data extraction → STEP 2: Subassembly identification procedures]
Figure 1. Strategy of the proposed subassembly identification approach


[Flowchart: (a) CAD assembly data extraction — identify all assembly constraints, then the topological and geometrical data of the parts; (b) browse the feature manager design tree, identify and remove all connector parts; (c) for each part i, calculate Fni until all Fni are calculated, identify the base part set and save the base parts list; (d) for each base part Bpi, analyze its connections with all parts until all Bpi are treated, then save the identified subassemblies list]
Figure 2. Flowchart of the proposed approach

2.1. CAD assembly data extraction

The use of an Application Programming Interface (API) facilitates the extraction of the information related to the assembly model created in the CAD system. This information can be exploited to generate the subassemblies from the CAD assembly model (figure 2 (a)). The data extraction step is composed of two stages:

- Extraction of the geometrical and topological information related to the part (vertex: coordinates; edge: associated vertices, length; wire: associated edges, orientation, length; face: associated wires, normal and area);
- Extraction of the data associated to the assembly constraints between parts (mate type, mate entities, mate parameters, concerned parts).
These data will be stored in a database related to the part and the assembly. The proposed approach is presented by the flowchart in figure 2. In the following sections, each step will be described.

2.2 Subassembly identification process

A demonstrative CAD model, a Reduction Gear (figure 3), is introduced to explain the different stages of this approach. This mechanism is composed of 23 parts.
Figure 3. Illustrative example: Reduction Gear.

2.2.1 Adjacency matrix of CAD model

The first result of the data extraction step is the adjacency matrix [Adj]. [Adj] is a symmetric square matrix whose size is equal to (N x N), where N represents the total number of parts. The element Adj(i,j) represents an existing contact between two parts i and j and can have three possible attributes, as follows:
- Adj(i,j) = 1 if there is a relationship between i and j;
- Adj(i,j) = 0 if there is no relationship between i and j;
- Adj(i,j) = 0 if i = j.
The adjacency matrix of the treated example is given by equation (1). As can be noticed from the size of the [Adj] matrix, the number of connector elements (screws, bolts) is significant when compared with the total number of parts. Moreover, these parts are generally used to link together all the subassemblies of a mechanism. Therefore, to minimize the solution space of SI, the size of the [Adj] matrix is reduced by removing all connector elements (figure 2 (b)). These connectors are identified directly from the hierarchical assembly tree. In this example, all parts from item 18 to item 23 are removed from the adjacency matrix to get a reduced one.
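The construction of [Adj] and the connector reduction can be sketched in a few lines. This is a sketch under the rules stated above, not the authors' implementation; the mate list and function names are illustrative.

```python
import numpy as np

def build_adjacency(n_parts, mates):
    # Symmetric adjacency matrix from extracted mate pairs (1-based part ids);
    # Adj(i,i) stays 0 by definition.
    adj = np.zeros((n_parts, n_parts), dtype=int)
    for i, j in mates:
        if i != j:
            adj[i - 1, j - 1] = adj[j - 1, i - 1] = 1
    return adj

def remove_connectors(adj, connector_ids):
    # Drop connector rows/columns (screws, bolts) to shrink the SI search space.
    drop = set(connector_ids)
    keep = [k for k in range(adj.shape[0]) if k + 1 not in drop]
    return adj[np.ix_(keep, keep)], [k + 1 for k in keep]
```

The row sums of the reduced matrix give the per-part total contact used later in the base-part selection.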

[Adj] (rows and columns ordered 1–23; last column: total contact):
 1:  0 0 1 0 0 1 0 1 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 | 3
 2:  0 0 1 1 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 | 4
 3:  1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 | 2
 4:  0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 | 2
 5:  0 1 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 | 3
 6:  1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 | 2
 7:  0 0 0 0 1 1 0 0 0 1 0 0 0 0 1 1 1 0 0 0 0 0 0 | 6
 8:  1 0 0 0 0 0 0 0 1 1 1 0 1 0 0 0 0 1 1 1 1 1 1 | 5
 9:  0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 | 2
10:  0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 | 3
11:  0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 | 2
12:  0 0 0 0 0 0 0 0 0 0 1 0 0 1 1 0 0 0 0 0 0 0 0 | 3
13:  0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 1 1 1 1 1 1 1 | 3
14:  0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 | 2
15:  0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 0 | 4
16:  0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 | 2
17:  0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 | 2
18:  1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 | 3
19:  1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 | 3
20:  1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 | 3
21:  1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 | 3
22:  1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 | 3
23:  1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 | 3
(1)

2.2.2 Determination of the base part set

The identification of the base parts (Bp), which represents a key step for SI, can be started once the reduced adjacency matrix is achieved. A base part is an initiator of a subassembly; it can be defined as a central part on which most parts will be mounted. As can be noticed from the adjacency matrix of the illustrative example (Eq. 1), the Input wheel (15), despite having a relatively important volume, has four contacts and cannot be a base part, as it must be assembled on the intermediate shaft (7). Moreover, the Main cover (1), having a contact number equal to 3, smaller than that of the input wheel, can constitute a base part. To circumvent this problem and make a decision in the determination of the base parts, two steps are proposed: the [Adj] matrix is transformed by adding the contacts according to the three directions, and a fitness function involving other criteria is introduced. An explanation of the two aspects is detailed in the following section.
- Transformed matrix [Cad]
The transformation of the [Adj] matrix consists in considering the contacts between parts in the three directions (x, y, z). The new contact-in-all-directions matrix [Cad] is calculated as follows:
[Cad] = [Cx] & [Cy] & [Cz]   (2)
where [Cx], [Cy] and [Cz] represent the three contact matrices according to the directions (x, y, z).
The new total contact of the Input wheel (15) becomes 7 while for the Main cover (1) it is 9; this result shows the importance of this stage in the detection procedure of base parts.
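A toy example of the transformation is sketched below. The '&' composition of Eq. (2) is interpreted here as accumulating the per-direction contact flags, so a pair touching along several axes counts once per axis — an assumption, consistent with the totals rising from 4 to 7 and from 3 to 9 in the example; the data are illustrative, not the Reduction Gear.

```python
import numpy as np

# Directional contact matrices for three parts: part 1 touches part 2 along
# x, y and z, and part 3 along x only (toy data).
Cx = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
Cy = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
Cz = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])

# Assumed reading of Eq. (2): accumulate the per-direction flags.
Cad = Cx + Cy + Cz
total_contact = Cad.sum(axis=1)  # part 1: 3 directions with part 2 + 1 with part 3 = 4
```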
In the second step, a fitness function considering more than the contact criterion is introduced. The considered criteria are: the largest boundary surface, the highest volume and the maximal number of relationships (identified from the adjacency matrix). Figure 2 (c) details the flowchart of this step. The Bp is

identified according to a fitness function Fn. For a part i, the score of the fitness function is calculated by equation (3):
Fni = α (Si / St) + β Nr + γ (Vi / Vt)   (3)
where:
- Si represents the boundary surface of part Pi;
- St represents the total surface of the parts existing in the assembly;
- Nr represents the total number of relationships between part Pi and the other parts in the assembly, calculated from the adjacency matrix [Adj];
- Vi represents the volume of part Pi;
- Vt represents the total volume of the parts existing in the assembly;
- α, β, γ represent the weighting coefficients introduced by the planner.
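Equation (3) can be evaluated for a set of parts with a few lines; the surface, volume and relationship counts below are illustrative values, not the Reduction Gear data.

```python
def fitness_scores(surface, volume, n_rel, alpha=0.3, beta=0.4, gamma=0.3):
    # Fn_i = alpha * Si/St + beta * Nr_i + gamma * Vi/Vt   (Eq. 3)
    St, Vt = sum(surface), sum(volume)
    return [alpha * s / St + beta * nr + gamma * v / Vt
            for s, nr, v in zip(surface, n_rel, volume)]

# Two toy parts: the second is larger and better connected, so it scores higher.
scores = fitness_scores(surface=[10.0, 30.0], volume=[5.0, 15.0], n_rel=[2, 3])
```

The parts with the highest Fn scores are retained as base parts.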
Figure 4 illustrates the evolution of the fitness function score of each part of the Reduction Gear for different values of α, β, and γ. It has been found that the obtained base part list is {1, 2, 7, 8, 12, 13} for all the tested values.

(α = 0.3, β = 0.4, γ = 0.3)   (α = 0.3, β = 0.2, γ = 0.5)   (α = 0.5, β = 0.2, γ = 0.3)

Figure 4: Evolution of the fitness function score, with different values of the weight
coefficients

2.2.3 Subassembly research

Once the base part list is established, the SI algorithm begins (figure 2 (d)). The SI algorithm starts by browsing each base part (Bpi) and its relationships with the other parts, and removing all connections with other base parts. Figure 5 shows the liaison graph of the treated example before and after this suppression. When analyzing the obtained graph, two particular cases (illustrated in figure 6) appear.

[Liaison graphs linking base parts Bp1, Bp2, Bp7, Bp8, Bp12, Bp13 and parts 3, 4, 5, 6, 9, 10, 11, 14, 15, 16, 17: (a) before and (b) after suppression of the connections between base parts]
Figure 5. Graph liaison of Reduction Gear

[(a) parts Pm, Pn and Pk all connected to a single base part BpM; (b) part Pl connected to two base parts, with weights We(BpM, Pl) and We(Pl, BpK)]
Figure 6. Mechanism Graph liaisons. (a): Situation 1, (b): Situation 2.


Case 1 (figure 6 (a)): all parts Pm, Pn and Pk belong to the set of the considered BpM.
Case 2: if the situation of figure 6 (b) occurs (in the illustrative example, part (9) has two connections, with Bp2 and Bp8), the SI algorithm decides by considering the weight (We) of each connection as follows:
- if We(BpM, Pl) > We(Pl, Bpk), Pl belongs to the set of BpM;
- else, Pl belongs to the set of Bpk.
The weight (We) is calculated by formula (4):
We(Pi, Pj) = S(i, j)   (4)
where [S] = [Cad] & [Fit]   (5)
[S] is a sum matrix calculated using formula (5) and [Fit] is a square and symmetric matrix whose size is equal to (N x N), where N represents the total number of parts. The element Fit(i,j) of [Fit], which represents an existing fitting contact between two parts Pi and Pj, has the following attributes:
- Fit(i,j) = 1 if the contact between i and j is a tight fit;
- Fit(i,j) = 0 if i = j or if the contact between i and j is a clearance fit.
[S] (rows and columns ordered 1–17):
 1: 0000 0000 1110 0000 0000 1110 0000 1110 0000 0000 0000 0000 0000 0000 0000 0000 0000
 2: 0000 0000 1111 1110 1110 0000 0000 0000 1111 0000 0000 0000 0000 0000 0000 0000 0000
 3: 1110 1111 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
 4: 0000 1110 0000 0000 0100 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
 5: 0000 1111 0000 0100 0000 0000 1011 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
 6: 1110 0000 0000 0000 0000 0000 1111 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
 7: 0000 0000 0000 0000 1011 1111 0000 0000 0000 1111 0000 0000 0000 0000 1110 1110 1111
 8: 1110 0000 0000 0000 0000 0000 0000 0000 1110 1010 1110 0000 1110 0000 0000 0000 0000
 9: 0000 1111 0000 0000 0000 0000 0000 1110 0000 0000 0000 0000 0000 0000 0000 0000 0000
10: 0000 0000 0000 0000 0000 0000 1111 1010 0000 0000 0000 0000 0000 0000 0100 0000 0000
11: 0000 0000 0000 0000 0000 0000 0000 1110 0000 0000 0000 1111 0000 0000 0000 0000 0000
12: 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 1111 0000 0000 1111 1010 0000 0000
13: 0000 0000 0000 0000 0000 0000 0000 1110 0000 0000 0000 0000 0000 1110 0000 0000 1110
14: 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 1111 1110 0000 0000 0000 0000
15: 0000 0000 0000 0000 0000 0000 1111 0000 0000 0100 0000 1011 0000 0000 0000 0100 0000
16: 0000 0000 0000 0000 0000 0000 1110 0000 0000 0000 0000 0000 0000 0000 0100 0000 0000
17: 0000 0000 0000 0000 0000 0000 1111 0000 0000 0000 0000 0000 1110 0000 0000 0000 0000
(6)
In the Fit matrix of the reduction gear, there are 11 fitting contacts, for example between bearing 1 (5) and the output shaft (2). The sum matrix of the

mechanism is presented by equation (6). This procedure is repeated for each part Pi in the assembly, without considering the base parts. This stage represents the treatment of the first base part and its output is the first subassembly set. The SI algorithm repeats this procedure for all base parts (figure 2 (d)). The output of the SI algorithm is a set of subassemblies. For the treated example, the list of identified subassemblies is represented by figure 7.
Sub1: {7, 6, 10, 15, 16, 17}
Sub2: {8}
Sub3: {2, 3, 4, 5, 9}
Sub4: {1}
Sub5: {12, 11, 14}
Sub6: {13}
Figure 7. The identified subassemblies of the Reduction gear.
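The weighted assignment rule of Case 2 can be replayed on the Reduction Gear with a short script. Reading each four-digit entry of the sum matrix (6) as a binary weight is an interpretation assumed here, not stated by the authors; under that assumption, attaching every non-base part to its heaviest base-part link reproduces the subassemblies of figure 7.

```python
# We weights between each non-base part and its neighbouring base parts,
# read from the sum matrix of Eq. (6) (four-digit entries taken as binary).
base_parts = [1, 2, 7, 8, 12, 13]
We = {
    3:  {1: 0b1110, 2: 0b1111},
    4:  {2: 0b1110},
    5:  {2: 0b1111, 7: 0b1011},
    6:  {1: 0b1110, 7: 0b1111},
    9:  {2: 0b1111, 8: 0b1110},
    10: {7: 0b1111, 8: 0b1010},
    11: {8: 0b1110, 12: 0b1111},
    14: {12: 0b1111, 13: 0b1110},
    15: {7: 0b1111, 12: 0b1011},
    16: {7: 0b1110},
    17: {7: 0b1111, 13: 0b1110},
}
subassemblies = {bp: {bp} for bp in base_parts}
for part, links in We.items():
    winner = max(links, key=links.get)   # Case 2 rule: the heaviest We wins
    subassemblies[winner].add(part)
```

For instance, part (9) has We = 1111 towards Bp2 and 1110 towards Bp8, so it joins the subassembly of Bp2, as in Sub3 of figure 7.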

3. Conclusion
In this paper, an SI approach composed of two main steps is proposed. It starts with the exploration of the CAD assembly data to generate three matrices (the adjacency matrix, the contact-in-all-directions matrix and the sum matrix). Then, the extracted matrices are enriched by mounting parameters in order to extract the base parts and identify the subassemblies. To highlight the efficiency of the SI approach, SolidWorks© and Matlab© were used for the numerical implementation and an example of a CAD assembly mechanism was tested.
4 References
[1] Hyoung RL., Gemmill DD., Improved methods of assembly sequence determination for
automatic assembly systems. Eur J Oper Res 131(3):611–621. 2001.
[2] Laperrière L., EIMaraghy HA., Assembly sequences planning for simultaneous engineering
applications. Int J Adv Manuf Technol 9(4):231–244. 1994.
[3] Lai HY., Huang CT., A systematic approach for automatic assembly sequence plan
generation. Int J Adv Manuf Technol 24 (9/10):752–763. 2004.
[4] Kara S., Pornprasitpol P., Kaebernick H., Selective disassembly sequencing: a methodology for the disassembly of end-of-life products. Annals of the CIRP 55(1):37–40. 2006.
[5] Wang JF., Liu JH., Zhong YF., Integrated approach to assembly sequence planning of
complex products. Chin J Mech Eng 17 (2):181–184. 2004.
[6] Moez Trigui, Riadh BenHadj, Nizar Aifaoui, An interoperability CAD assembly sequence
plan approach. Int J Adv Manuf Technol (2015) 79:1465–1476. 2015.
[7] Imen Belhadj, Moez Trigui, Abdelmajid Benamara, Subassembly generation algorithm from
a CAD model. Int J Adv Manuf Technol (2016):1–12. 2016.
[8] Sugato C., A hierarchical assembly planning system. Texas A&M University, Austin. 1994.
[9] Swaminathan A., Barber KS., An experience-based assembly sequence planner for mechanical assemblies. IEEE Trans Robot Autom 12(2):252–266. 1996.
[10] Santochi M., Dini G., Computer-aided planning of assembly operations: the selection of
assembly sequences. Robot Comput-Integrated Manuf 9(6):439–446. 1992.
Multi-objective conceptual design: an approach
to make cost-efficient the design for
manufacturing and assembly in the
development of complex products

Claudio FAVI1*, Michele GERMANI1 and Marco MANDOLINI1


1
Università Politecnica delle Marche, via brecce bianche 12, 60131, Ancona (IT)
*Tel.: +39-071-220-4880; fax: +39-071-220-4801. E-mail address: c.favi@univpm.it

Abstract: Conceptual design is a central phase for the generation of the best product configurations. The design freedom suggests optimal solutions in terms of assembly, manufacturing, cost and material selection, but a guided decision-making approach based on multi-objective criteria is missing. The goal of this work is to define a framework and a detailed approach for the definition of feasible design options and for the selection of the best one, considering the combination of several production constraints and attributes. The approach is grounded on the concept of functional basis and the module heuristics used for the definition of product modules, and on the theory of Multi-Criteria Decision Making (MCDM) for a mathematical assessment of the best design option. A complex product (the tool-holder carousel of a machine tool) is used as a case study to validate the approach. Product modules have been re-designed and prototyped to efficiently assess the gain in terms of assembly time, manufacturability and costs.

Keywords: Conceptual Design, Multi-objective Design, Multi Criteria Decision


Making, Design to Cost, Design for Manufacturing and Assembly.

1 Introduction

Design-for-X (DfX) methods have been developed in recent years to aid designers during the design/engineering process in the maximization of specific aspects. Methods for efficient Design-for-Assembly (DfA) are well-known techniques, widely used throughout many large industries. DfA can support the reduction of product manufacturing costs and it provides much greater benefits than a simple reduction in assembly time [1, 2]. However, these methods are rather laborious and, in most cases, they require a detailed product design or an existing product/prototype. Other approaches investigate the product assemblability starting
© Springer International Publishing AG 2017 63


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_7
64 C. Favi et al.

from the product functional structure [3, 4]. In this way, the DfA technique can be applied during the conceptual design phase, when decisions greatly affect production costs. Even so, conceptual DfA does not consider manufacturability aspects such as material selection or the most appropriate process to build up components and parts. Furthermore, product design and optimization is a multi-objective activity, not limited to assembly aspects alone.
In this context, this paper proposes an improvement to overcome the above-mentioned weak points and to optimize the product assemblability as well as the manufacturability of the parts, taking into account the most cost-effective technical solutions. The main goal of this work is to define a multi-objective design approach which aims at a comprehensive analysis of the manufacturing aspects. This is particularly important to avoid design solutions which can be excellent, for example, from the assembly point of view but not cost-efficient in terms of manufacturing costs and investments.
In the following sections, the proposed approach is reported in detail after a brief review of the research background. The general workflow of the proposed approach and its application to a real case study (a tool-holder carousel) are analysed, including a discussion of the results and future improvements.

2 State of the art and research background

The design stage is a long and iterative process for the development of new products. Design stage activities can be divided into four main phases: (i) problem definition and customer needs analysis, (ii) conceptual design, (iii) embodiment design, and (iv) detail design. In the first phase, customer requirements are collected and analysed; then, the requirements are translated into product features; finally, concepts that can satisfy the requirements are generated and modelled [5]. It is well known that, although design costs consume approx. 10% of the total budget for a new project, typically 80% of manufacturing costs are determined by the product design [6, 7]. The manufacturing/assembly cost is decided during the design stage and its definition tends to affect the selection of the materials, machines and human resources that are used in the production process [8].
DfA is an approach which gives the designer a thought process and guidance so that the product may be developed in a way which favors the assembly process [9]. In industrial practice, the Boothroyd and Dewhurst (B&D) method is one of the most widespread DfA approaches [2]. Different design solutions can be compared by evaluating the elimination or combination of parts in the assembly and the time to execute the assembly operations [10]. The main drawback of this approach is that DfA is applied in the detailed design phase, when most of the design solutions have already been identified. Stone et al. [3] define a conceptual DfA method to support designers during the early stages of the design process. The approach uses two concepts: the functional basis and the module heuristics [11]. The
Multi-objective conceptual design … 65

functional basis is used to derive a functional model of a product in a standard formalism, and the module heuristics are applied to the functional model to identify a modular product architecture [12]. The approach has two weak points: (i) the identification of the best manufacturing process for part production and (ii) the selection of the related cost-efficient material.
The selection of the most appropriate manufacturing process depends on a large
number of factors, but the most important considerations are shape complexity
and material properties [13]. According to Das et al. [14], Design-for-
Manufacturing (DfM) is an approach for designing a product such that: (i)
the design is quickly transitioned into production, (ii) the product is manufactured
at a minimum cost, (iii) the product is manufactured with a minimum effort in
terms of processing and handling requirements, and (iv) the manufactured product
attains its designed level of quality. DfA and DfM are difficult to integrate, and
the Design-for-Manufacturing-and-Assembly (DfMA) procedure can typically be
broken down into two stages. First, DfA is conducted, leading to a simplification
of the product structure and an economic selection of materials and processes.
After iterating the process, the best design concept is taken forward to DfM, leading
to the detailed design of the components for minimum manufacturing costs [15].
Cost estimation is concerned with the prediction of costs related to a set of
activities before they have actually been executed. Cost estimating or Design-to-
Cost (DtC) approaches can be broadly classified into intuitive methods, parametric
techniques, variant-based models, and generative cost-estimating models [16].
However, the most accurate cost estimates are obtained using an iterative approach
during the detail design phase [17]. DtC is usually applied in the embodiment
design phase, or even worse in the detail design phase, whereas to be effective it
should be applied together with DfMA in the conceptual design phase [18, 19];
otherwise, DtC is only an optimization of an already selected design solution.
The only way to overcome the aforementioned issues is a multi-objective approach
which takes into account all the production aspects (assemblability, manufacturability,
materials, costs, etc.) at the same time. Different mathematical models can be used
as solvers for the multi-objective problem, and MCDM is one of the most common
approaches [20]. The novelty of the proposed approach lies in the application of
MCDM in the conceptual design phase to account for multiple production aspects
in the development of complex products.

3 Multi-objective conceptual design approach

In order to describe the proposed multi-objective design approach, some concepts
need to be introduced. The first is the definition of the product modules and their
properties, based on the functional basis and the module heuristics. Then, grounded
on the concept of the morphological matrix, it is necessary to define feasible design
solutions. Finally, by applying the multi-objective approach based on the MCDM
theory, suggestions for the product structure simplification and for the selection of
economic materials and manufacturing processes are stated.
Fig. 1 shows the workflow of the proposed multi-objective design approach.
Different target design methodologies (DfX) can be applied early in the product
design concept. In particular, the focus of this research work is related to the pro-
duction (assembly, manufacturing, material selection and cost) aspects.

Fig. 1: Flow diagram of the proposed multi-objective conceptual design approach

3.1 Product modules, properties definition and design solutions

Through functional analysis and the module heuristic approach, it is possible to
determine the number of functions which identify a product and the related flows
(energy, material and signal). The functional analysis breaks the product up into
its constituent functions. As the first step of the conceptual design, it helps
designers and engineers in the definition of the product functions as well as in the
identification of the overall product structure.
The module heuristics identify the in/out flows of each function. By using this
approach, it is possible to translate the product functions into functional modules.
Functional modules define a conceptual framework of the product and the initial
product configuration. A one-to-one mapping between product functions and
modules is expected, but it is possible that several functions are implemented by
a single physical module.
Furthermore, the heuristics allow the specific properties of each functional
module to be determined. Attributes and properties need to be defined for each
module in order to identify the technical and functional aspects which must be
guaranteed, as well as a basis for the definition of the feasible and infeasible
design solutions.
The transition from product modules to potential design solutions (components
or sub-assemblies) is based on the knowledge of the specific properties identified
during the generation of the product modules. A very helpful tool at this step is the
morphological matrix, which can improve the effectiveness of the conceptual
analysis and translates functional modules into physical modules such as
sub-assemblies or components. A morphological matrix is traditionally created by
listing the identified product modules as rows and, for each module, the possible
design options as columns [20]. In a manual engineering design context, the morphological matrix is limited
to the concepts generated by the engineer, although the morphological matrix is
one technique that can be used in conjunction with other design activities (brain-
storming processes, knowledge repository analysis, etc.) [21].
In particular, the alternative design options are developed and analyzed based
on the concepts of DfA, DfM and DtC to retrieve, at conceptual level, the best
configuration in terms of costs and productivity. Designer skills, supplier and
stakeholder surveys as well as well-structured and updated knowledge repositories
can help in the definition of the design options suitable to implement the module
under investigation and for the population of the morphological matrix. The
morphological matrix finally shows the existing design options for each functional
module of a complex system, and it permits a rapid configuration of the product
through the selection of the best option for a specific module. Design options must be reliable
and compliant with the properties defined in the module assessment.
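The enumeration step described above can be sketched in a few lines; the module names and design options below are hypothetical placeholders, not the case-study data:

```python
from itertools import product

# Hypothetical morphological matrix: each functional module (row) lists
# its feasible design options (columns).
matrix = {
    "support":      ["welded structure", "plastic piece", "cast bracket"],
    "transmission": ["belt", "chain"],
    "actuation":    ["stepper motor", "dc motor"],
}

# A full product configuration picks exactly one option per module.
modules = list(matrix)
configurations = [dict(zip(modules, combo))
                  for combo in product(*matrix.values())]

print(len(configurations))  # 3 * 2 * 2 = 12 candidate configurations
```

Each configuration is then checked against the module properties, so that only reliable and compliant combinations are kept for the multi-objective evaluation.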

3.2 Multi-objective approach

The multi-objective approach is the core of the proposed workflow and aims to
balance different aspects of industrial production, such as assembly, materials and
manufacturing processes, taking into account the overall cost as a driver for the
design optimization process. The multi-objective approach follows the product
module definition and the classification of design solutions, but it is still part of
the conceptual design phase. In fact, in this phase only general information is
available, without specific details about geometry, shape, manufacturing parameters,
material designation, etc. The selection of the best design options is made
using an MCDM method called TOPSIS (Technique for Order of Preference by
Similarity to Ideal Solution). TOPSIS was first developed by Hwang & Yoon,
and it is attractive in that it requires limited subjective input (the only subjective
input needed from decision makers is the set of weights) [22]. According to this
technique, the best alternative is the one that is nearest to the positive-ideal
solution and farthest from the negative-ideal solution. The positive-ideal solution
is a solution that maximizes the benefit criteria and minimizes the cost criteria [23].
Using the TOPSIS method, the different design options are ranked. The TOPSIS
method is not time consuming, thanks to its easy implementation in a common
spreadsheet or in a dedicated software tool. The only required inputs are: (i) the
attribute weights (based on company targets and requirements) and (ii) the scores
for each design option in relation to
the selected attributes. Obviously, a sensitivity analysis of the results is
recommended, given their dependency on the scores and weights assigned during
the evaluation. This issue does not limit the applicability of the approach, but it
encourages setting the weights according to the specific targets and performing a
sensitivity analysis to investigate the influence of each attribute.
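As an illustration of the TOPSIS steps just described (vector normalization, weighting, ideal solutions, closeness coefficient), here is a minimal sketch; the option names, scores and weights are invented for the example and are not the case-study values:

```python
import math

# Rows: design options; columns: (assembly, material, manufacturing, cost) scores.
options = {"welded structure": [7.0, 6.0, 5.0, 8.0],
           "plastic piece":    [8.0, 7.0, 6.0, 4.0],
           "cast bracket":     [5.0, 8.0, 7.0, 6.0]}
weights = [0.2, 0.2, 0.2, 0.4]          # cost-driven: cost has the largest weight
benefit = [True, True, True, False]     # cost is a "smaller is better" criterion

names = list(options)
cols = list(zip(*options.values()))

# 1) Vector-normalize each column, then weight it.
norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
V = [[w * v / n for v, w, n in zip(row, weights, norms)]
     for row in options.values()]

# 2) Positive/negative ideal solutions per criterion.
pis = [max(c) if b else min(c) for c, b in zip(zip(*V), benefit)]
nis = [min(c) if b else max(c) for c, b in zip(zip(*V), benefit)]

# 3) Closeness coefficient: distance to NIS over total distance to both ideals.
def dist(row, ref):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))

closeness = {n: dist(r, nis) / (dist(r, nis) + dist(r, pis))
             for n, r in zip(names, V)}
ranking = sorted(closeness, key=closeness.get, reverse=True)
print(ranking[0])  # "plastic piece": best benefit scores at the lowest cost
```

Changing the weight vector (e.g. shifting weight from cost to assembly) and re-running is exactly the sensitivity analysis recommended above.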

4 Case study: A tool-holder carousel of machine tool

A tool-holder carousel of a machine tool for wood processing and machining has
been analysed. This system is responsible for feeding the tool head with different
tools for specific manufacturing operations (cutting, milling, drilling, etc.).
Through the functional analysis and the modular approach, several product
modules have been identified in the conceptual design stage. The overall function
of this complex system is "feed the machine head with a specific tool". Different
design options have been pointed out for each product module by the use of the
morphological matrix.
Alternative design solutions have been analyzed following the multi-objective ap-
proach and the TOPSIS methodology. An overview of the implementation of the
approach for the Bracket module is presented in Fig. 2.

Fig. 2: TOPSIS implementation for the ranking of the Bracket module options

Different design options and a rating for each aspect of production (Assembly,
Material, Manufacturing and Cost) have been assessed through the different target
design methodologies. The weights for each attribute have been assigned based on
the company targets and requirements. The approach is cost-driven, and for this
reason the maximum weight has been assigned to the cost feature.
As an educational example, a complete re-design process has been carried out to
accurately compare design alternatives after the conceptual design, i.e. in the
detail design phase. Complete 3D CAD models have been built up for a
comprehensive and detailed analysis as well as for method validation. Fig. 3
highlights the results obtained for the Bracket module (Welded structure vs.
Plastic piece). The production volume has been roughly estimated at approx. 2500
pieces over 10 years, according to the average production rate of the machine tool.

Fig. 3: CAD models and features of Bracket module options (Welded structure vs. Plastic piece)

5 Results discussion and concluding remarks

The proposed work aims to develop a multi-objective design approach for a com-
prehensive analysis of the manufacturing aspects in the conceptual design phase.
The approach is able to support the engineering team in the selection of the optimal
design solution. An overview of the results obtained for the proposed case study
(tool-holder carousel) is presented in Table 1.

Table 1. Main attributes comparison for the tool holder carousel before and after re-design.

                 Components   Assembly time   Total Cost (material + manuf. + assembly)
Original design  325 pcs.     88 min.         359.73
After re-design  123 pcs.     33 min.         225.74
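The percentage improvements discussed in the text follow directly from Table 1 and can be checked in a few lines (values copied from the table; the cost unit is not stated in the source):

```python
# Values from Table 1 (original design vs. after re-design).
original = {"components": 325, "assembly_min": 88, "cost": 359.73}
redesign = {"components": 123, "assembly_min": 33, "cost": 225.74}

# Percentage saving for each attribute.
saving = {k: 100.0 * (original[k] - redesign[k]) / original[k] for k in original}
print(saving)  # cost ~37%, assembly time ~62.5%, components ~62%
```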

In particular, the application of this approach yields more than 35% of cost
savings and an approx. 60% reduction in assembly time and in the number of
components. Another important outcome has been the easy implementation of the
proposed approach in the traditional design workflow of the company.
Future perspectives on this topic include a deeper validation of the method on
other case studies as well as the definition of a framework for the implementation
of the approach in a design tool. A step forward will be to include other
production aspects such as environmental impacts, energy consumption, etc.

References

1.De Fazio T.L., Rhee S.J., and Whitney D.E. Design specific approach to design for assembly
(DFA) for complex mechanical assemblies. In IEEE Robotics and Automation, 1999,
pp.869-881.
2.Boothroyd G., Dewhurst P., Knight W. Product design for manufacture and assembly, 2nd edi-
tion, 2002 (Marcel Dekker).
3.Stone R.B. and McAdams D.A. A product architecture-based conceptual DFA technique. De-
sign Studies, 2004, 25, pp.301-325.
4.Favi C. and Germani M. A method to optimize assemblability of industrial product in early de-
sign phase: from product architecture to assembly sequence. International Journal on Inter-
active Design and Manufacturing, 2012, 6(3), pp. 155-169.
5.Pahl G. and Beitz W. Engineering design: a systematic approach, 2nd edition, 1996 (Springer).
6.Ulrich K.T. and Eppinger S.D. Product design and development, 3rd Edition, 2003 (McGraw-
Hill Inc.).
7.Huang Q. Design for X: concurrent engineering imperatives, 1996 (Chapman and Hall).
8.Nitesh-Prakash W., Sridhar V.G. and Annamalai K. New product development by DfMA and
rapid prototyping. Journal of Engineering and Applied Sciences, 2014, 9, pp.274-279.
9.Otto K. and Wood K. Product design: techniques in reverse engineering and new product de-
velopment, 2001 (PrenticeHall).
10.Samy S.N. and ElMaraghy H.A. A model for measuring products assembly complexity. In-
ternational Journal of Computer Integrated Manufacturing, 2010, 23(11), pp.1015-1027.
11.Stone R.B., Wood K.L. and Crawford R.H. A heuristic method for identifying modules for
product architectures. Design Studies, 2000, 21, pp.5-31.
12.Dahmus J.B., Gonzalez-Zugasti J.P. and Otto K.N. Modular product architecture. Design
Studies, 2001, 22(5), pp.409-424.
13.Estorilio C. and Simião M.C. Cost reduction of a diesel engine using the DFMA method.
Product Management & Development, 2006, 4, pp.95-103.
14.Das SK., Datla V. and Samir G. DFQM - An approach for improving the quality of assembled
products. International Journal of Production Research, 2000, 38(2), pp. 457-477.
15.Annamalai K., Naiju C.D., Karthik S. and Mohan-Prashanth M. Early cost estimate of prod-
uct during design stage using design for manufacturing and assembly (DFMA) principles.
Advanced Materials Research, 2013, pp.540-544.
16.Nepal B., Monplaisir L., Singh N. and Yaprak, A. Product modularization considerıng cost
and manufacturability of modules. International Journal of Industrial Engineering, 2008,
15(2), pp.132-142.
17.Hoque A.S.M., Halder P.K., Parvez M.S. and Szecsi T. Integrated manufacturing features and
design-for-manufacture guidelines for reducing product cost under CAD/CAM environ-
ment. Computers & Industrial Engineering, 2013, 66, pp.988-1003.
18.Shehab E.M. and Abdalla H.S. Manufacturing cost modelling for concurrent product devel-
opment. Robotics and Computer Integrated Manufacturing, 2001, 17, pp.341-353.
19.Durga Prasad K.G., Subbaiah K.W. and Rao, K.N. Multi-objective optimization approach for
cost management during product design at the conceptual phase, Journal of Industrial Engi-
neering International, 2014, 10(48).
20.Ölvander J., Lundén B. and Gavel H. A computerized optimization framework for the mor-
phological matrix applied to aircraft conceptual design. CAD, 2009, 41, pp.187-196.
21.Bryant Arnold C.R., Stone R.B. and McAdams D.A. MEMIC: An interactive morphological
matrix tool for automated concept generation. In the proceedings of Industrial Engineering
Research Conference, 2008.
22.Hwang C.L. and Yoon K. Multiple attribute decision making: methods and applications. 1981
(Springer-Verlag).
23.Wang Y.J. and Lee H.S. Generalizing TOPSIS for fuzzy multiple-criteria group decision-
making. Computers & Mathematics with Applications, 2007, 53, pp.1762-1772.
Modeling of a three-axes MEMS gyroscope with
feedforward PI quadrature compensation

D. Marano1 , A. Cammarata2∗ , G. Fichera2 , R. Sinatra2 , D. Prati3


1 Department of Engineering "Enzo Ferrari", University of Modena and Reggio Emilia, Italy.
E-mail: acamma@diim.unict.it
2 Dipartimento Ingegneria Civile e Architettura, University of Catania, Italy. E-mail:
acamma@diim.unict.it, gabriele.fichera@dii.unict.it, rsinatra@dii.unict.it
3 ST Microelectronics, Catania, Italy, E-mail: daniele.prati@st.com
∗ Corresponding author. Tel.: +39-095-738-2403 ; fax: +39 0931469642. E-mail address:

acamma@diim.unict.it

Abstract: The present paper is focused on the theoretical and experimental analysis
of a three-axes MEMS gyroscope, developed by ST Microelectronics, implement-
ing an innovative feedforward PI quadrature compensation architecture. The gyro-
scope's structure is explained and the equations of motion are written; modal shapes
and frequencies are obtained by finite element simulations. Electrostatic quadrature
compensation strategy is explained focusing on the design of quadrature cancel-
lation electrodes. A new quadrature compensation strategy based on feedforward
PI architecture is introduced in this device to take into account variations of de-
vice parameters during its lifetime. The obtained results show a significant reduction
of the quadrature error, resulting in an improved performance of the device. Fabrication and
test results conclude the work.

Keywords: Quadrature error, MEMS, Gyroscope, FEM modeling, Electrostatic


quadrature compensation, Feedforward PI.

1 Introduction

Gyroscopes are physical sensors that detect and measure the angular rotations of
an object relative to an inertial reference frame. MEMS gyroscopes are typically
employed for motion detection (e.g. in consumer electronics and automotive con-
trol systems), motion stabilization and control (e.g. antenna stabilization systems,
3-axis gimbals for UAV cameras) [1]. Combining MEMS gyroscopes, accelerome-
ters and magnetometers on all three axes yields an inertial measurement unit (IMU);
the addition of an on-board processing system computing attitude and heading leads
to an AHRS (attitude and heading reference system), a highly reliable device in
common use in commercial and business aircraft.

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_8

Measurement of the angular position in rate gyroscopes can be achieved by
numerical integration of the gyroscope's output; the time integration of the output
signal, together with the associated errors and
noise, leads to orientation angle drifts [2]-[4]. Among the major error sources is the
mechanical quadrature signal, the undesired sense-mode vibration resulting from
the coupling between the drive-mode displacement and the sense mode of the
gyroscope [5]-[11]. Since its magnitude can reach thousands of degrees per second,
measuring the low electric signal generated by the very small Coriolis force in the
presence of a much larger electric signal becomes a difficult problem [12]. Several techniques, based ei-
ther on mechanical or electronic principles, have been proposed for quadrature error
compensation; among all, an efficient approach able to provide a complete quadra-
ture error cancellation is the electrostatic quadrature compensation. This approach is
based on the electromechanical interaction between properly designed mechanical
electrodes and the moving mass of the gyroscope: electrostatic forces, mechanically
balancing quadrature forces, are generated biasing electrodes with differential dc
voltages [13]-[18]. In most devices, the magnitude of the biasing dc voltages is
determined in order to nullify an experimentally measured quadrature error. With
this approach, however, the dc voltages cannot be changed during the lifetime of
the device to accommodate variations of its structural properties. A possible solution to
this problem is addressed in the present paper, where an innovative feed-forward PI
quadrature compensation architecture implemented on a novel three-axes MEMS
gyroscope, manufactured by ST Microelectronics, is discussed.

2 Gyroscope structure and dynamics

2.1 Structure

The three-axes Coriolis Vibrating Gyroscope presented in the following is a compact
device, manufactured by ST Microelectronics, combining a triple tuning-fork struc-
ture with a single vibrating element. The device is fabricated using the ThELMA-ISOX
(Thick Epipoly Layer for Microactuators and Accelerometers) technology platform,
a surface micromachining process proprietary to ST Microelectronics. This platform
allows suspended seismic masses to be obtained that are electrically isolated but
mechanically coupled, with a high and controlled vacuum inside the cavity of the
device. The struc-
ture (Fig.1) is composed of four suspended plates (M1,2,3,4 ) coupled by four folded
springs, elastically connected to a central anchor by coupling springs. The funda-
mental vibration mode (driving mode) consists of a planar oscillatory radial motion
of the plates: globally, the structure periodically expands and contracts, similarly to
a "beating heart". Plates M1,2 are actuated by a set of comb-finger electrodes and
the motion is transmitted to the secondary plates M3,4 by the folded springs at the
corners. The sensing modes of the device consist of two out-of-plane modes (Roll
and Pitch) characterized by counter-phase oscillation of plates M1,2 (M3,4 ) and one
in-plane counter-phase motion of the yaw plates (M3,4 ) (Yaw mode). Rotation of
yaw plates (M3,4 ) is measured by a set of parallel-plate electrodes, P P1,2 , located


on the yaw plates. Pitch and roll angular rotations are measured sensing the ca-
pacitive variations between each plate and an electrode placed below (respectively
R1,2 and P1,2 for roll and pitch masses); the driving mode vibration is measured by
additional comb-finger electrodes SD1,2 . Electrostatic quadrature compensation is
implemented on Roll (Quadrature Compensation Roll, QCR) and Pitch axis (QCP)
by means of electrodes placed under each moving mass. Yaw axis quadrature com-
pensation electrodes (QCY) are slightly different from those of the other axes, since
they are not placed underneath the moving mass and have a height equal to that of
the gyroscope's rotor mass.

Fig. 1: Case-study gyroscope layout

2.2 Dynamics

The gyroscope’s equations of motion are derived in the general case in [4, 19]. The
coordinate-system model shown in Fig. 2 consists of three coordinate frames re-
spectively defined by their unit vectors Σi = [X, Y, Z]; Σp = [x, y, z]; Σ =
[x̂, ŷ, ẑ]. The frame Σi represents the inertial reference system, Σp is the inertial
platform frame, Σ is a body-frame with origin at a point P of a moving body (for
a 3-axes gyroscope the considered body is one of the four moving suspended plates
and the platform frame is usually assigned to the fixed silicon substrate). For a de-
coupled three axes gyroscope simplifying assumptions (constant angular rate inputs,
operating frequency of the gyroscope much higher than angular rate frequencies)
can be done [19, 20], and the equations of motion (EoM) become:

mr̈x + cx ṙx + kx rx = −2mΩy ṙz + 2mΩz ṙy + FDx   (1a)
mr̈y + cy ṙy + ky ry = 2mΩx ṙz − 2mΩz ṙx + FDy    (1b)
mr̈z + cz ṙz + kz rz = −2mΩx ṙy + 2mΩy ṙx + FDz   (1c)
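The effect of the Coriolis coupling terms in Eqs. (1a)-(1b) can be illustrated with a minimal fixed-step integration; all numerical values below are illustrative, not the device's actual parameters (Ωx = Ωy = 0 is assumed and only the x-y pair is simulated):

```python
import math

def simulate(Oz, T=5.0, dt=1e-4):
    """Integrate Eqs. (1a)-(1b) with Omega_x = Omega_y = 0 and a resonant
    drive along x; return the peak |x| and |y| displacements."""
    m, c, k, F0 = 1.0, 0.02, 400.0, 1.0   # illustrative parameters
    wn = math.sqrt(k / m)                  # both modes tuned to wn
    x = y = vx = vy = 0.0
    xmax = ymax = 0.0
    for i in range(int(T / dt)):
        Fd = F0 * math.sin(wn * i * dt)                    # drive force FDx
        ax = (Fd - c * vx - k * x + 2 * m * Oz * vy) / m   # Eq. (1a)
        ay = (-c * vy - k * y - 2 * m * Oz * vx) / m       # Eq. (1b)
        vx += ax * dt; vy += ay * dt                       # semi-implicit Euler
        x += vx * dt;  y += vy * dt
        xmax, ymax = max(xmax, abs(x)), max(ymax, abs(y))
    return xmax, ymax

_, y_rate = simulate(Oz=0.5)   # constant yaw rate applied
_, y_zero = simulate(Oz=0.0)   # no rotation: Coriolis terms vanish
print(y_zero, y_rate > 0.0)
```

With Ωz = 0 the sense axis stays exactly at rest, while a nonzero rate transfers energy from the drive mode into the sense mode; that induced motion is what the sense electrodes pick up.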

Fig. 2: Coordinate system model for the derivation of kinematic equations

2.2.1 Modal analysis

The device eigenfrequencies are determined by FEM simulation (Fig. 3). As im-
posed by mechanical design the fundamental mode of vibration consists of an in-
plane inward/outward radial motion of the plates in which the structure cyclically
expands and contracts. Several spurious modes at higher frequencies, not reported
here for brevity, have been also identified.

3 Electrostatic quadrature cancellation

3.1 Quadrature force

The dynamics equations of a linear yaw vibrating gyroscope can be expressed, con-
sidering the off-diagonal entries of the mechanical stiffness matrix, as

Fig. 3: Fundamental vibration modes (drive, pitch, yaw, roll)

[m 0; 0 m] p̈(t) + [dx 0; 0 dy] ṗ(t) + [kx kxy; kyx ky] p(t) = [Fd; FC]   (2)
where p(t) = [x(t), y(t)]T is the position vector of the mass in drive and sense
direction, m represents the Coriolis mass, dx (dy ) and kx (ky ) represent the damping
and stiffness along the X-axis (Y-axis); kxy (kyx ) are the cross coupling stiffness
terms bringing the quadrature vibration response; Fd is the driving force and FC is
the Coriolis force. The dynamic equation in sense direction can be expressed as

mÿ + dy ẏ + ky y = FC + Fq (3)
where FC = −2mΩz ẋ is the Coriolis force and Fq = −kyx x is the quadra-
ture force. The Coriolis mass is usually actuated into resonant vibration with con-
stant amplitude in drive direction, thus the drive-mode position can be expressed
by x(t) = Ax sin(ωx t). Introducing the sinusoidal drive movement, the Coriolis and
quadrature forces can be expressed as

FC = 2mΩz ωx Ax cos(ωx t), Fq = −kyx Ax sin(ωx t) (4)
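Eq. (4) shows that the Coriolis force is in phase with cos(ωx t) while the quadrature force is in phase with sin(ωx t); the two contributions can therefore be separated by synchronous demodulation. A minimal sketch with arbitrary illustrative amplitudes:

```python
import math

w = 2 * math.pi * 20e3         # drive frequency [rad/s], illustrative
A_cor, A_quad = 0.2, 5.0       # Coriolis / quadrature amplitudes, arbitrary units

samples_per_period, periods = 1000, 100
N = samples_per_period * periods
dt = 2 * math.pi / w / samples_per_period

# Sense-axis signal: Coriolis term on cos, quadrature term on sin (cf. Eq. (4)).
def sense(t):
    return A_cor * math.cos(w * t) + A_quad * math.sin(w * t)

# Multiply by each reference and average over an integer number of periods;
# the mean of cos^2 (or sin^2) is 1/2, hence the factor 2.
ts = [i * dt for i in range(N)]
I = 2 * sum(sense(t) * math.cos(w * t) for t in ts) / N   # -> Coriolis channel
Q = 2 * sum(sense(t) * math.sin(w * t) for t in ts) / N   # -> quadrature channel

print(round(I, 6), round(Q, 6))  # recovers 0.2 and 5.0
```

In the real device the quadrature channel can be three to four orders of magnitude larger than the Coriolis channel, which is why it is cancelled electrostatically rather than merely filtered out.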

3.2 Quadrature cancellation electrodes design

Quadrature compensation electrodes for out-of-plane Roll (Pitch) motion are shown
in Fig. 4; the electrostatic force generated by the i-th electrode is given by

FR,P i = ± (1/2) ε0 [(H0 ± Ax sin(ωx t)) L0 / g²] (V ± ΔV)²   (5)

where Ax sin(ωx t) = x(t) is the drive movement, H0 and L0 are respectively width
and length of quadrature compensation electrodes and g is the air gap. The voltage
sign is chosen either positive (V + ΔV ) or negative (V − ΔV ) according to the
electrode biasing, whereas the x sign is chosen according to the overlap variation
among the proof mass and quadrature compensation electrodes (QCE) as shown in
Fig. 4. The total force is obtained as the product of the force generated by a single
electrode and the number n of electrodes: Ftot = Σi Fi = n Fi.

Fig. 4: Roll (Pitch) quadrature compensation electrode; detail of Fig. 1 (QCR and QCP electrodes)

The quadrature force FQ (Eq. (4)) is balanced by the drive dependent component
of the electrostatic force, properly tuning the ΔV potential applied to the pitch (roll)
quadrature compensation electrodes:

kyx Ax sin(ωx t) = (1/2) ε0 [Ax sin(ωx t) L0 / g²] (V ± ΔV)²   (6)
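Taking the single-electrode balance of Eq. (6) at face value, the compensation voltage offset ΔV has a closed-form solution. In the sketch below, g and L0 are the roll-axis values from Tab. 1, while kyx and the common-mode bias V are invented placeholders (the sign bookkeeping of the electrode pair is ignored):

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity [F/m]

def delta_v(k_yx, V, L0, g):
    """Solve kyx = (1/2) * eps0 * L0 / g^2 * (V + dV)^2 for dV,
    i.e. Eq. (6) with the common factor Ax*sin(wx*t) cancelled."""
    return math.sqrt(2.0 * k_yx * g * g / (EPS0 * L0)) - V

g, L0 = 1.2e-6, 1200e-6    # roll-axis gap and electrode length from Tab. 1 [m]
k_yx = 1e-2                # assumed cross-coupling stiffness [N/m]
V = 1.0                    # assumed common-mode bias [V]

dV = delta_v(k_yx, V, L0, g)
print(dV)                  # ~0.65 V for these assumed numbers
residual = 0.5 * EPS0 * L0 / g**2 * (V + dV) ** 2 - k_yx
```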

Quadrature compensation electrodes for the in-plane yaw motion are shown in Fig.
5. The electrostatic force generated by the i-th electrode is given by

FY i = ± (1/2) ε0 h [(LOV ± x) / (g ± y)²] (V ± ΔV)²   (7)

where h denotes the electrodes height and g the air gap between the moving mass
and the quadrature compensation electrode.
Design parameters of quadrature cancellation electrodes for the three-axes gyro
are reported in Tab. 1 respectively for roll (pitch) and yaw electrodes. Quadrature
compensation forces are regulated tuning the differential voltage ΔV such that the
Fig. 5: Yaw quadrature compensation electrode; detail of QCY2,3 electrodes in Fig. 1

Table 1: Quadrature compensation electrodes parameters


Axis          g [μm]   H0 [μm]   L0 [μm]   LOV [μm]   h [μm]
Roll (Pitch)  1.2      20        1200      -          -
Yaw           1.1      -         -         25         24

residual quadrature is canceled out; the ΔV value corresponding to the minimum
residual quadrature is denoted by ΔVOpt. The residual quadrature signals are reported
in Tab. 2.

3.3 Feedforward PI architecture

Quadrature is measured for each device during the electric wafer sorting test, and
the voltage variation ΔVOpt is set for each device during the calibration phase. A
serious limit of this approach is that the structural parameters of a device can
change unpredictably during its lifetime, causing variations of the quadrature error.
The value of ΔVOpt is therefore no longer optimal for the new operating conditions.
A proposed solution to this problem is to adopt a closed-loop architecture based on
feedforward PI, in which the optimal ΔVOpt is the feedforward action and the PI
controller compensates for lifetime quadrature variations. This procedure results in
a further optimization of the residual quadrature values, as shown in Tab. 2.
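The feedforward-plus-PI idea can be sketched as a discrete control loop on a toy plant: the calibrated ΔVOpt is applied as feedforward, and the PI term trims the residual when the coupling drifts after calibration. The plant model and gains below are invented for illustration only:

```python
# Toy plant: residual quadrature = k_now - alpha * dV, where dV is the
# applied compensation voltage. Calibration found dV_opt for k = 2.0;
# the coupling has since drifted to 2.3. All numbers are illustrative.
alpha = 1.0
dV_opt = 2.0               # feedforward value from wafer-level calibration
kp, ki = 0.5, 0.2          # assumed PI gains
k_now = 2.3                # coupling after lifetime drift

integ = 0.0
dV = dV_opt                # start from the feedforward action alone
for _ in range(200):
    residual = k_now - alpha * dV          # measured quadrature residual
    integ += residual                      # integral action
    dV = dV_opt + kp * residual + ki * integ

ff_only = k_now - alpha * dV_opt           # what open-loop calibration leaves
print(round(ff_only, 3), abs(residual) < 1e-6)
```

Feedforward alone leaves the drift (0.3 in this toy model) uncompensated, while the PI term drives the residual toward zero, mirroring the open-loop versus closed-loop improvement reported in Tab. 2.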

Table 2: Residual quadrature results


Axis Residual quadrature OL [Nm] Residual quadrature CL [Nm]
Pitch 6.46 · 10−12 2.04 · 10−16
Roll 9.09 · 10−12 2.87 · 10−16
Yaw 3.66 · 10−13 1.15 · 10−17
4 Fabrication and test results

All individual devices present on the wafer are tested for functional defects by elec-
tric wafer sorting (EWS). The quadrature amplitude is evaluated for each gyroscope
of the wafer, as shown in Fig. 6.

Fig. 6: EWS Testing: quadrature distribution (Yaw axis) on wafer

4.1 Experimental quadrature cancellation

The quadrature compensation strategy has been electrically simulated for an isolated
device inside the wafer. By applying a differential dc voltage to the quadrature
compensation electrodes, the quadrature error variation is observed, and the ΔVOpt
value is obtained by interpolation; the results for the roll axis are shown in Fig. 7.

Fig. 7: Residual quadrature amplitude (Roll axis) for different voltages applied to Roll quadrature
cancellation electrodes
5 Conclusion

In this paper a theoretical and experimental analysis of a three-axes MEMS gyro-
scope, developed by ST Microelectronics, has been presented. Exploiting the equa-
tions of motion for a 3-DoF gyroscope structure provided an estimation of the
drive and sense motion amplitudes. Natural mode shapes and frequencies have
been obtained by finite element simulations to characterize the device.
Equations for the design of the quadrature compensation electrodes have been
derived, and the residual quadrature calculated with the open-loop architecture. A
new quadrature compensation strategy, based on an innovative feedforward PI
architecture and accounting for changes of device parameters during the lifetime
of the device, has been introduced and the results discussed. Finally, fabrication
details and measurement results of the test devices have been reported.

References

1. V. Kaajakari, Practical MEMS, Small gear publishing, Las Vegas, Nevada, 2009
2. M. Saukoski, L. Aaltonen, K.A.I. Halonen, Zero-Rate Output and Quadrature Compensation
in Vibratory MEMS Gyroscopes, IEEE Sensors Journal, Vol. 7, No. 12, December 2007
3. B.R. Johnson, E. Cabuz, H.B. French, and R. Supino, Development of a MEMS gyroscope for
northfinding applications, in Proc. PLANS, Indian Wells, CA, May 2010, pp. 168-170.
4. Volker Kempe, Inertial MEMS, Principles and Practice, Cambridge University Press, 2011
5. A. S. Phani, A. A. Seshia, M. Palaniapan, R. T. Howe, and J. A. Yasaitis, Modal coupling in
micromechanical vibratory rate gyroscopes, IEEE Sensors J., vol. 6, no. 5, pp. 1144-1152, Oct.
2006.
6. H. Xie and G. K. Fedder, Integrated microelectromechanical gyroscopes, J. Aerosp. Eng., vol.
16, no. 2, pp. 65-75, Apr. 2003.
7. W. A. Clark, R. T. Howe, and R. Horowitz, Surface micromachined Z-axis vibratory rate
gyroscope, in Tech. Dig. Solid-State Sensor and Actuator Workshop, Hilton Head Island, SC,
USA, Jun. 1996, pp. 283-287.
8. A. Cammarata, and G. Petrone, Coupled fluid-dynamical and structural analysis of a mono-
axial mems accelerometer, The International Journal of Multiphysics 7.2 (2013): 115-124.
9. S. Pirrotta, R. Sinatra, and A. Meschini, A novel simulation model for ring type ultrasonic
motor, Meccanica 42.2 (2007): 127-139.
10. M. S. Weinberg and A. Kourepenis, Error sources in in-plane silicon tuning fork MEMS gy-
roscopes, J. Microelectromech. Syst., vol. 15, no. 3, pp. 479-491, Jun. 2006.
11. Mikko Saukoski, System and circuit design for a capacitive MEMS gyroscope, Doctoral Dis-
sertation, Helsinki University of Technology
12. R. Antonello, R. Oboe, L. Prandi, C. Caminada, and F. Biganzoli, Open loop compensation
of the quadrature error in MEMS vibrating gyroscopes, IEEE Sens. J., vol. 7, no. 12, pp.
1639-1652, Dec. 2007
13. Ni, Yunfang, Hongsheng Li, and Libin Huang. ”Design and application of quadrature com-
pensation patterns in bulk silicon micro-gyroscopes.” Sensors 14.11 (2014): 20419-20438.
14. W. A. Clark and R. T. Howe, Surface micromachined z-axis vibratory rate gyroscope, in Proc.
Solid-State Sens., Actuators, Microsyst. Work-shop, Hilton Head Island, SC, Jun. 1996, pp.
283-287
15. E. Tatar, S. E. Alper and T. Akin, Quadrature error compensation and corresponding effects on
the performance of Fully decoupled MEMS gyroscopes, IEEE J. of Microelectromechanical
systems, vol. 21, no. 3, June 2012
80 D. Marano et al.

16. A. Sharma, M.F. Zaman, and F. Ayazi, A sub 0.2◦ /hr bias drift micromechanical gyroscope
with automatic CMOS mode-matching, IEEE J. of Solid-State Circuits, vol. 44, no. 5, pp.
1593-1608, May 2009
17. B. Chaumet, B. Leverrier, C. Rougeot, and S. Bouyat, A new silicon tuning fork gyroscope
for aerospace applications, in Proc. Symp. Gyro Technol., Karlsruhe, Germany, Sep. 2009, pp.
1.1-1.13
18. Weinberg, M.S., Kourepenis A., Error sources in in-plane silicon tuning-fork MEMS gyro-
scopes, Journal of Microelectromechanical Systems. Volume 15, Issue 3, June 2006, pp. 479-
491
19. C. Acar, A. Shkel, MEMS Vibratory Gyroscopes, Structural Approaches to Improve Robust-
ness, Springer, 2008.
20. Acar, C., Shkel, A. M. (2003). Nonresonant micromachined gyroscopes with structural mode-
decoupling. Sensors Journal, IEEE, 3(4), 497-506.
A Disassembly Sequence Planning Approach for Maintenance

Maroua Kheder*, Moez Trigui and Nizar Aifaoui

Mechanical Engineering Laboratory, National Engineering School of Monastir, University of Monastir, Av. Ibn El Jazzar, Monastir, Tunisia
* Corresponding author. Tel.: +216 58 398 409; Fax: +216 73 500 514. E-mail address: maroua.kheder@gmail.com
Abstract: In recent years, increasing research has been conducted in close collaboration with manufacturers to design robust and profitable dismantling systems. Engineers and designers of new products therefore have to consider disassembly constraints and specifications during the design phase, not only at the end of life but throughout the product life cycle. Consequently, optimizing the disassembly process of complex products is essential for preventive maintenance. Disassembly Sequence Planning (DSP), a combinatorial problem with hard constraints in practical engineering, is NP-hard. In this research work, an automated DSP process based on the metaheuristic method named "Ant Colony Optimization" is developed. Starting from a Computer Aided Design (CAD) model, a collision analysis is performed to identify all possible interferences during the components' motion; an interference matrix is then generated to identify dynamically the parts that can be disassembled and to ensure the feasibility of the disassembly operations. The novelty of the developed approach lies in the introduction of a new criterion, the maintainability of worn components, alongside several other criteria such as part volume, tool changes and disassembly direction changes. Finally, to highlight the performance of the developed approach, a software tool is implemented and an industrial case is studied. The obtained results show that these criteria identify a feasible DSP in a very short time.

Keywords: Disassembly Sequence Plan, Computer Aided Design, Interference Analysis, Optimization, Ant Colony algorithm, Maintenance.

1 Introduction

Maintenance requires the replacement of failed components; the removal and reassembly of these components take up a large proportion of the time and cost of a maintenance task. Preventive Maintenance (PM) refers to the work carried out to restore the degraded performance of a system and to lessen the likelihood of its failing. The removal or dismantling of parts requires maintenance engineers to identify a feasible and near-optimal disassembly sequence before carrying out the disassembly operations.

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_9

For this reason, in a manufacturing system, DSP takes an important place in the life phase of a product, and it has gained a great deal of attention from designers and researchers [1]. Chung et al. treated the problem of selective DSP with the Wave Propagation (WP) method, which focuses on the topological disassemblability of parts [2]. Moreover, the capability to generate an efficient and feasible DSP automatically remains open to improvement. In the classification of optimization problems, DSP is considered among the NP-hard combinatorial optimization problems with hard constraints in practical engineering. The growing number of components enlarges the space of disassembly solutions, which becomes more and more complex [3]. To overcome this difficulty, metaheuristic methods seem the most suitable for the DSP problem, especially the family of swarm intelligence methods, such as Ant Colony Optimization (ACO).

ACO is inspired by the foraging behavior of natural ants; these algorithms are characterized by a form of indirect coordination, called stigmergy, in which the environment stimulates the performance of subsequent actions by the same or other agents [4]. Wang et al. applied ACO to selective disassembly planning by appointing the target list of components to be repaired [5]. Indeed, ACO has been used in recent years as a powerful technique to solve complex NP-hard combinatorial optimization problems [6-7]. In this work, we are particularly concerned with the ACO method for optimizing the DSP of a CAD assembly in the context of preventive maintenance. The remaining part of this paper is organized as follows. First, the optimal DSP by ACO is formulated. Then, starting from a CAD model, the approach for checking geometric precedence feasibility in disassembly sequences is presented. A treated example illustrates the novel criteria used to generate an optimal DSP for preventive maintenance. The developed approach considers several criteria such as part volume, tool changes, disassembly direction changes and the maintainability of worn components. Finally, an academic case study is presented to illustrate the efficiency of the proposed method.

2 Ant Colony Research for DSP

2.1 Flow chart

The main goal of this approach is to exploit historical and heuristic information to construct candidate solutions, and to fold the information learned from constructed solutions back into that history. The stages of ACO are structured in the flow chart presented in Figure 1.

2.2 Free part search

The assembly model created by CAD systems encloses much data useful for the DSP problem, such as the data related to the parts and the data associated with the assembly constraints between parts. Based on the work of Ben Hadj et al. [8], who proposed a Mate and Mate In Place Data Extraction (MMIDE) procedure, an exploration of the CAD data leads to the elaboration of the interference matrix [I]k along the +k-axis direction, with k ∈ {+X, +Y, +Z}. To explain all the algorithm stages, an illustrative example of a belt tightener is treated. Figure 2 and Table 1 present the treated mechanism, which is composed of 7 parts and needs three tools (G1, G2 and G3) to be disassembled.

[Figure 1 depicts the algorithm as a flow chart: Begin; initialize the ACO parameters; locate ants randomly at primary parts; determine probabilistically which part to select next; move to the next part and delete it from the interference matrix; loop until all parts have been selected; evaluate all solutions; update the pheromone; loop until the termination condition is satisfied; optimal solution found; End.]

Fig. 1. Flow chart of the Ant Colony Algorithm for DSP.


[Figure 2 shows the CAD assembly with the part numbers of Table 1 labelled on the model.]

Fig. 2. The CAD model of the belt tightener.

Table 1. Component list of the belt tightener and its characteristics.

Component  Name             Maintainability  Tool  Volume (x10^5 mm^3)
1          Tree             2                G1    1.05
2          Built            1                G1    4.11
3          Pad              3                G2    0.327
4          Bearing spacing  1                G1    0.253
5          Pulley           2                G1    2.97
6          Nut HEX          1                G3    0.177
7          Screw HEX        2                G3    0.167

For the illustrative example, the interference matrices in the three directions (+X, +Y, +Z) are given by:

               P1 P2 P3 P4 P5 P6 P7
  [I]+Z =  P1 [ 0  0  0  0  0  0  0 ]
           P2 [ 1  0  1  1  1  0  1 ]
           P3 [ 1  0  0  0  1  0  0 ]
           P4 [ 1  0  1  0  1  0  0 ]
           P5 [ 1  0  0  0  0  0  0 ]
           P6 [ 1  1  1  1  1  0  1 ]
           P7 [ 1  1  1  1  1  0  0 ]

               P1 P2 P3 P4 P5 P6 P7
  [I]+X =  P1 [ 0  1  1  1  1  1  1 ]
           P2 [ 1  0  0  1  0  1  1 ]
           P3 [ 1  0  0  0  1  0  0 ]
           P4 [ 1  0  0  0  0  0  0 ]
           P5 [ 1  0  1  0  0  0  0 ]
           P6 [ 1  0  0  0  0  0  0 ]
           P7 [ 1  1  0  0  0  0  0 ]

               P1 P2 P3 P4 P5 P6 P7
  [I]+Y =  P1 [ 0  1  1  1  1  1  1 ]
           P2 [ 1  0  0  0  0  0  1 ]
           P3 [ 1  0  0  0  1  0  0 ]
           P4 [ 1  0  0  0  0  0  0 ]
           P5 [ 1  0  1  0  0  0  0 ]
           P6 [ 1  0  0  0  0  0  0 ]
           P7 [ 0  0  0  0  0  0  0 ]        (1)

Where:
• P1...PN represent the N parts of the assembly;
• Iml is equal to 1 if there is interference between part m and part l when disassembling along the +k-axis direction, and 0 otherwise.

The hard task of DSP is to detect a possible sequence without any collision among the disassembly operations. In this work, the generation of a feasible DSP is essentially based on the free-part concept, which consists of checking the elements of the [I]k matrices and identifying a Free Part (FPm) along the +k-axis or the −k-axis direction. If a component Pm of the assembly does not interfere with any other component Pl along the +k-axis, the component Pm can be disassembled freely in that direction. If [I]k is the interference matrix along +k, its transpose [I]kT represents the interferences along the opposite direction −k. This property limits the component translations to 3 main directions during the CAD stage, while still providing the information for all 6 disassembly directions. Applying this approach to the illustrative example, part 1, part 6 and part 7 present no interference with any other component along the directions +Z, −Z and +Y respectively. Consequently, the free parts detected for the illustrative example are (1, +Z), (6, −Z) and (7, +Y).

2.3 Feature selection with ACO and solution construction

As mentioned in the introduction of ACO, the main objective of each ant is to find the shortest path, which in our case is equivalent to the optimal disassembly sequence plan with minimal cost. To construct its solution, ant k selects from part m the next part l to visit according to the probabilistic state-transition rule Pk(m, l):

              ⎧ [τ(m,l)]^α · [η(m,l)]^β / Σ_{u ∈ Jk(m)} [τ(m,u)]^α · [η(m,u)]^β   if l ∈ Jk(m)
  Pk(m, l) =  ⎨                                                                              (2)
              ⎩ 0                                                                  otherwise

The probability depends, firstly, on the pheromone concentration on the path, τml, which corresponds to the positive feedback of the trail. In this study, the pheromone matrix has size (6n × 6n). Secondly, it depends on the heuristic information ηml, which combines the criteria of disassembly direction change and tool change from part m to part l. The heuristic information matrix [η] also has size (6n × 6n), and its entries are computed as follows:

  η(m, l) = w1 · d(m, l) + w2 · t(m, l)    (3)

Where:
d(m, l) is an integer representing the direction change between part m and part l, which can take the following values:
• 2: if there is no direction change between the two consecutive parts;
• 1: if there is a change of 90° between the two consecutive parts;
• 0: if there is a change of 180° between the two consecutive parts.
t(m, l) is an integer corresponding to the tool change between part m and part l, which can take the following values:
• 0: if no tool change is needed between the two successive parts;
• 1: if a tool change is needed between the two successive parts.
w1 and w2 are two weight coefficients, while α and β are two parameters which determine respectively the relative influence of the pheromone trail and of the heuristic information. Jk(m) is the candidate list, generated dynamically from the interference matrix after part m has been removed. The transition from part m to part l is based on roulette-wheel selection, to avoid premature convergence.
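The state-transition rule and the roulette-wheel draw can be sketched as follows; this is a minimal illustration of Eq. (2), not the authors' implementation, and the toy τ/η values are ours:

```python
import random

def transition_probabilities(m, candidates, tau, eta, alpha=1.0, beta=2.0):
    """Eq. (2): probability of moving from part m to each candidate l,
    proportional to tau[m][l]^alpha * eta[m][l]^beta over the candidate list."""
    weights = {l: (tau[m][l] ** alpha) * (eta[m][l] ** beta) for l in candidates}
    total = sum(weights.values())
    return {l: w / total for l, w in weights.items()}

def roulette_select(probs, rng=random):
    """Roulette-wheel selection: draw l with probability probs[l]."""
    r, acc = rng.random(), 0.0
    for l, p in probs.items():
        acc += p
        if r <= acc:
            return l
    return l  # numerical guard against rounding

# Toy example: uniform pheromone, heuristic favouring candidate 2.
tau = [[1.0] * 3 for _ in range(3)]
eta = [[0.0, 1.0, 2.0] for _ in range(3)]
probs = transition_probabilities(0, [1, 2], tau, eta)
# With beta = 2: P(1) = 1/5 = 0.2 and P(2) = 4/5 = 0.8
next_part = roulette_select(probs)
```

Because selection is stochastic rather than greedy, even low-probability moves are occasionally explored, which is what keeps the colony from converging prematurely.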

3 Optimization of DSP

The optimization of DSP is a multi-objective problem, so it is necessary to introduce and integrate several objectives that can be quantified automatically. The optimal disassembly sequence thus considers four objectives: the maintenance of worn parts, the disassembly direction changes, the disassembly tool changes, and the part volume. The purpose is to obtain an optimal DSP by disassembling the smaller parts first, disassembling the maximum number of parts in the same direction without changing the tool, and giving easier access to remove the defective components [9]. OF is the objective function representing the quality of the DSP, given as follows:

  OF = max [ N + ( γ/(D + 1) + δ/(T + 1) + μ·M + ψ·V ) ]    (4)

Where, denoting by s_k the part disassembled at position k of the sequence:

  M = Σ_{k=1}^{N} ( m_{s_k} / Σ_{i=1}^{N} m_i ) · (N − k + 1)

  V = Σ_{k=1}^{N} ( v_{s_k} / Σ_{i=1}^{N} v_i ) · (N − k + 1)
  D = Σ_{i=1}^{N−1} d(p_i, p_{i+1})        T = Σ_{i=1}^{N−1} t(p_i, p_{i+1})

• N is the number of parts in the mechanism;
• M is the relative maintenance factor, where mi can take the following values:
  – 1: if no maintenance of component i is needed;
  – 2: if a corrective maintenance of component i is needed;
  – 3: if a preventive maintenance of component i is needed.
• V is the relative volume factor, built from the volume of each component in the mechanism;
• γ, δ, μ and ψ are weight coefficients that can be chosen according to the objectives of the designer;
• D and T represent respectively the total value of the direction changes and the total value of the tool changes of the disassembly sequence, where pi and pi+1 are two successively disassembled parts.

In the treated example, the weight coefficients are γ = δ = ψ = 0.2, while particular attention is paid to maintenance with μ = 0.4.
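The sequence-quality terms can be computed directly from Table 1 for a given sequence. The sketch below uses the d/t encodings stated in the text; the positional weight (N − k + 1) in M is our reading of the garbled definition in the source, so treat it as an assumption:

```python
# Sequence-quality terms for the belt tightener (data from Table 1).
# d = 2/1/0 for a 0/90/180-degree direction change; t = 1 on a tool change.
maint = {1: 2, 2: 1, 3: 3, 4: 1, 5: 2, 6: 1, 7: 2}          # maintainability
tool  = {1: "G1", 2: "G1", 3: "G2", 4: "G1", 5: "G1", 6: "G3", 7: "G3"}

seq  = [7, 6, 2, 4, 1, 5, 3]                      # optimal DSP of the example
dirs = ["+Y", "-Z", "-Z", "-Z", "+Z", "+Z", "+Z"]

def d(a, b):
    if a == b:        return 2   # no direction change
    if a[1] != b[1]:  return 1   # 90-degree change (different axis)
    return 0                     # 180-degree reversal (same axis, opposite sign)

D = sum(d(dirs[i], dirs[i + 1]) for i in range(len(seq) - 1))
T = sum(tool[seq[i]] != tool[seq[i + 1]] for i in range(len(seq) - 1))

# M weights early positions more: weight (N - k + 1) for 1-based position k.
N = len(seq)
M = sum(maint[p] * (N - k) for k, p in enumerate(seq)) / sum(maint.values())

print(D, T, M)   # 9 2 3.5
```

Note that with this encoding a larger D means fewer direction changes, and only two tool changes occur along the whole sequence, which is what the objective function rewards.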

3.1 Pheromone trail

Once all ants have finished their tasks and built their sequences completely, the pheromone update occurs:

  τij(t + 1) = (1 − ρ) · τij(t) + Q    (5)

  Q = Σ_{k=1}^{ma} Δτij^k   if (i, j) ∈ sequence S of ant k;   Q = 0 otherwise    (6)

Where t represents the iteration counter of the ant colony algorithm, Q represents the sum of the contributions of all the ants that moved from part i to part j while constructing their solutions, and ma is the number of ants that found the iteration-best sequence. The extra amount of pheromone is quantified by:

  Δτml^k = δ · OFs^k    (7)

δ (δ > 0) is a parameter that defines the weight given to the best solution, and ρ ∈ [0, 1] is the evaporation rate. The evaporation mechanism helps the ants to progressively forget what happened before and to extend their search towards new directions, without being overly constrained by their past decisions.

4 Implementation and Case study

The implementation of the proposed approach has been performed using Matlab R2013b (Matrix Laboratory), the SolidWorks CAD system and its API (Application Programming Interface). The output of the ACO tool shown in Figure 3 presents the evolution of the algorithm performance, i.e. the objective function OF versus the generation number for the example of Figure 2, together with the optimal sequence and the associated directions:

  Optimal DSP   7    6    2    4    1    5    3
  Direction     +Y   −Z   −Z   −Z   +Z   +Z   +Z

Fig. 3. The output of the implemented ACO Disassembly Sequence Tool.

The output of the implemented tool exposed in Fig. 3 highlights the five steps:
• the import of an assembly in CAD format;
• the extraction of the assembly data;
• the generation of the interference matrix;
• the entry of both the objective function and the ACO parameters;
• the generation of the DSP.

The optimal DSP and the associated disassembly directions of the treated example are presented in Table 2. The computation time is 5.03 s, which proves the efficiency of the proposed approach.

Table 2. Best disassembly sequence and its associated direction.

  Optimal DSP   7    6    2    4    1    5    3
  Direction     +Y   −Z   −Z   −Z   +Z   +Z   +Z

5 Conclusion

In this paper, an optimization of DSP based on an ant colony approach for preventive maintenance is proposed. The precedence relationships between parts are considered using a free-part process which permits the generation of feasible DSPs. A computer-based tool was implemented, permitting the generation of an optimal feasible DSP from a CAD model. The obtained results on the treated example show the credibility of the proposed approach.

References
1. Moore K.E., Gungor A. and Gupta M.S. Petri net approach to disassembly process planning for products with complex AND/OR precedence relationships. Computers and Industrial Engineering, Vol. 35, 1998, pp. 165-168.
2. Chung C.H. and Peng Q.J. An integrated approach to selective-disassembly sequence planning. Robotics & Computer-Integrated Manufacturing, Vol. 21, No. 4, 2005, pp. 475-85.
3. Lambert A.J.D. Optimizing disassembly processes subjected to sequence-dependent cost. Computers and Operations Research, Vol. 34, No. 2, 2007, pp. 536-55.
4. Grassé P.P. La reconstruction du nid et les coordinations interindividuelles chez Bellicositermes natalensis et Cubitermes sp. La théorie de la stigmergie: essai d'interprétation du comportement des termites constructeurs. Insectes Sociaux, Vol. 6, 1959, pp. 41-81.
5. Wang J.F., Liu J.H. and Zhong Y.F. Intelligent selective disassembly using the ant colony algorithm. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, Vol. 17, 2003, pp. 325-333.
6. Mullen R.J., Monekosso D., Barman S. and Remagnino P. A review of ant algorithms. Expert Systems with Applications, Vol. 36, 2009, pp. 9608-9617.
7. Aghaie A. and Mokhtari H. Ant colony optimization algorithm for stochastic project crashing problem in PERT networks using MC simulation. International Journal of Advanced Manufacturing Technology, Vol. 45, 2009, pp. 1051-1067.
8. Ben Hadj R., Trigui M. and Aifaoui N. Toward an integrated CAD assembly sequence planning solution. Journal of Mechanical Engineering Science, Vol. 229, 2014, pp. 2987-3001.
9. Kheder M., Trigui M. and Aifaoui N. Disassembly sequence planning based on a genetic algorithm. Journal of Mechanical Engineering Science, Vol. 229, 2015, pp. 2281-2290.
A Comparative Life Cycle Assessment of Utility Poles Manufactured with Different Materials and Dimensions

Sandro Barone1, Filippo Cucinotta2,*, Felice Sfravara2

1 University of Pisa.
2 University of Messina.
* Corresponding author. Tel.: +39-090-3977292. E-mail address: filippo.cucinotta@unime.it

Abstract In the production of utility poles, used as supports for transmission, telephony, telecommunications or lighting, steel has for many years almost entirely replaced wood. In recent years, however, new composite materials have become a strong alternative to steel. The questions are: is the production of the composite pole better in terms of environmental impact? Is the life cycle of the composite pole more eco-sustainable than that of the steel pole? Where is the peak of pollution within the life cycle of each technology? In recent years, in order to address the new European policies in the environmental field, a new approach for impact assessment has been developed: Life Cycle Assessment (LCA). It involves a cradle-to-grave consideration of all the stages of a product system. These stages include the extraction of raw material, the provision of energy for transportation and processing, material processing and fabrication, product manufacture and distribution, use, recycling, and disposal of the wastes and of the product itself. A great strength of the LCA approach is that it can compare two different technologies designed for the same purpose, with the same functional unit, to understand which of the two is better in terms of environmental impact. In this study, the goal is to evaluate the environmental difference between two technologies used for the production of poles for lighting support.

Keywords: Life Cycle Assessment, Green Design, manufacturing optimization, utility poles

1 Introduction

In recent years, the need to reduce the environmental impact of products has led to the definition of a new regulatory framework for the assessment of the life cycle in terms of eco-sustainability.

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_10

The Society of Environmental Toxicology and Chemistry, in the years between 1990 and 1993, drove the development of a tool called Life Cycle Assessment, described by Fava et al. [1]. This approach allows the environmental impact of each single stage of production to be investigated, from raw material extraction to the disposal phase. Over the last 30 years many developments have followed, and a summary of these is reported by Klöpffer [2] and Finnveden et al. [3]. The principal parts of an LCA are the goal and scope definition, the Life Cycle Inventory (LCI), the impact assessment, and the interpretation of the results. Vigon and Jensen [4] studied how the results are influenced by the quality of the data collected in the life cycle inventory phase. Baumann and Rydberg [5] evaluated the results obtained by using different databases in the inventory stage of a life cycle assessment. Another important aspect is the uncertainty of the inventory data, studied by Maurice et al. [6]. In addition to the theoretical aspects of LCA, researchers have also focused on practical aspects, identifying the parts of LCA that need more detailed study to reach more solid conclusions [7]. Thanks to LCA it is possible to understand whether, in addition to differing in mechanical behavior, materials also differ in environmental terms; an example of this kind of study is reported in [8].

The aim of this study is the assessment of the cradle-to-grave life cycle environmental impacts of two different types of utility pole manufacture: the galvanized steel pole and the fiberglass pole. The assessment concerns the interpretation and comparison of the impact indicator values of greenhouse gas (GHG) emissions, abiotic depletion (AD) and abiotic depletion fossil (ADF), eutrophication potential (EP), acidification potential (AP), global warming potential (GWP), freshwater aquatic ecotoxicity potential (FAETP), human toxicity potential (HTP), marine aquatic ecotoxicity potential (MAETP), ozone layer depletion potential (ODP), photochemical ozone creation potential (POCP) and terrestrial ecotoxicity potential (TETP).

2 Material and methods

2.1 Goal definition and functional unit

The aim of this study is the application of life cycle assessment methodologies, following the guidance provided by the International Organization for Standardization (ISO) in standards ISO 14040 [9] and ISO 14044 [10], for the comparison of the environmental impact of two different manufacturing processes for utility poles. The following aspects are included: goal and scope definition, inventory analysis, impact assessment and interpretation. The environmental inputs and outputs concerning the galvanized steel utility pole and the fiberglass utility pole are reported, and an assessment of the impact indicators of each product and a comparison between them are conducted. The purpose of these poles is to ensure a lighting support at a height of 6 or 8 meters from the ground, for a period of 60 years, with the poles installed at equivalent spacing. The service life of a steel pole is about 60 years [11], so one pole is sufficient for the period under investigation. In the case of the fiberglass pole, the service life is estimated at about 20 years, so three poles are needed to cover the entire period. The system boundary is defined from the extraction of raw material to the disposal phase, and the geographic areas of production of these poles are two different plants in Southern Italy. The assessment of the impact factors is carried out with the GaBi Educational Software, in accordance with standard ISO 14044. For all the processes not directly performed inside the plants (production of glass fibers, production of steel sheet, galvanization process), a gate-to-gate evaluation is conducted using the databases inside the software (GaBi Databases), which are fully refreshed every year and developed in accordance with the LCA standards.

2.2 Fiberglass composite utility pole inventory

The fiberglass composite pole under study is manufactured in two principal distinct stages: a first phase concerning the weaving of the fabric (a particular condition, because the fabric is usually bought from third parties), and a second phase concerning the centrifugal casting. The final fabric is unidirectional fiberglass with the addition of random chopped fibers. The fiberglass material is bought from another company, so its provenance and the type of transportation were evaluated. Several machines are involved in the first phase: the cutting machine for chopped-fiber production, in which the continuous filaments are cut into random fibers 5 cm long; the weaving (loom) machine for the manufacturing of the final fabric; the length-control machine; and the wrapping and cutting machine. The second phase is the centrifugal casting: the fabrics are transferred onto a worktop, a first layer of polyester (with the purpose of protecting the fiberglass from external agents) is laid out, and on it the different layers of fabric are disposed so that the final shape of the pole is a truncated cone. All the layers are wrapped and put inside the centrifugal casting machine, where the resin with catalyst is injected. The angular velocity of a permanent mold inside the centrifugal casting machine (about 800 rpm) pushes the stack of layers against the walls in a cylindrical shape and also makes the resin flow along the whole height of the pole. The velocity is reduced to about 300 rpm once every part of the pole is wetted by the resin. The total process of centrifugal casting is completed in about 20-30 minutes. The geometrical characteristics of the fiberglass pole are reported in Table 1; the reported diameter is at the base of the pole. The materials bought are reported in Table 2, together with their provenance, type of transport and type of material.

Table 1. Characteristics of the fiberglass pole (6 meter and 8 meter height)

Characteristics   6 meter pole      8 meter pole
Shape             Truncated cone    Truncated cone
Height            6000 [mm]         8000 [mm]
Diameter          174.5 [mm]        213 [mm]
Thickness         5 [mm]            5 [mm]
Material          Fiberglass        Fiberglass
Final weight      20.24 [kg]        35 [kg]

The total energy absorption of every single phase involved in the process is reported in Table 3; the energy used in this factory is self-produced with photovoltaic panels. The process with the highest energy consumption is the centrifugal casting phase. The entire process is modelled inside the GaBi Software; the cradle-to-grave life cycle stages considered in the LCA of the fiberglass pole are illustrated in Figure 1b, where the distribution of mass can be immediately grasped from the thickness of the arrows.

Table 2. Type of material, producer, distance of provenance and type of transport

Material      Type        Producer    Transport   km
Roving 1200   E-Glass     OCV         Truck       1400
Roving 2400   E-Glass     OCV         Truck       1400
Chopped       E-Glass     OCV         Truck       1400
Subbi         Polyester   Alphatex    Truck       2654
Film          Polyester   Nontex      Truck       1400
Resin         Polyester   COIM        Truck       1400
Accelerant    Cobalt      Bromochim   Truck       1400
Dye           Grey        Comast      Truck       1400
Catalyst      Retic C     Oxido       Truck       800

Table 3. Energy absorbed by every process

Weaving department
Machine                   Power    Working time   Type of energy
Cutting chopped machine   2 kW     1 h 20 min     Photovoltaic
Loom machine              3 kW     1 h 20 min     Photovoltaic
Control machine           0.5 kW   1 h 20 min     Photovoltaic
Winder machine            1.5 kW   1 h 20 min     Photovoltaic

Resin handling
Machine                   Power    Working time   Type of energy
Pump                      1 kW     10 min         Photovoltaic

Centrifugal casting department
Machine                   Power    Working time   Type of energy
Mixer                     1 kW     5 min          Photovoltaic
Centrifugal casting       12 kW    30 min         Photovoltaic
Packing machine           2 kW     1 min          Photovoltaic
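From the powers and working times listed in Table 3, the electrical energy per fiberglass pole can be totted up. This back-of-the-envelope check is ours, not a figure from the paper:

```python
# Energy per fiberglass pole from Table 3: power [kW] x working time [h].
processes_kw_h = {
    "cutting chopped machine": (2.0, 80 / 60),
    "loom machine":            (3.0, 80 / 60),
    "control machine":         (0.5, 80 / 60),
    "winder machine":          (1.5, 80 / 60),
    "resin pump":              (1.0, 10 / 60),
    "mixer":                   (1.0, 5 / 60),
    "centrifugal casting":     (12.0, 30 / 60),
    "packing machine":         (2.0, 1 / 60),
}
energy_kwh = {name: power * hours for name, (power, hours) in processes_kw_h.items()}
total = sum(energy_kwh.values())
print(f"total = {total:.1f} kWh")   # total = 15.6 kWh
```

Consistent with the text, the centrifugal casting phase (6 kWh) is the single largest consumer.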

2.3 Steel galvanized pole inventory

Cradle-to-grave life cycle inventories are not available for steel utility poles, so an ad-hoc life cycle inventory of this process has been conducted. For the production of the galvanized steel pole, the factory buys non-galvanized steel sheet. In these sheets, different holes are punched by a punching machine, and subsequently the sheet is bent by a press machine. When the bending process is concluded, the welding process begins with a submerged arc-welding machine. When the pole is completed, all the accessories are applied to it for the final implementation with an arc welder. After the initial manufacture, the pole is hot-dip galvanized with zinc for protection from external corrosive agents. Subsequently, a grinder is used for polishing the pole, an overhead crane for its movement, and a drill for mounting the accessories. Every process inside the plant is quantified in terms of energy and mass, but the production of the metal sheet and the final galvanization process are carried out by other plants; in these two cases, the standard processes are evaluated through the GaBi Software database. In the disposal phase, the steel pole is modeled as 100 % recycled as scrap steel. A summary of the principal characteristics of the steel pole is shown in Table 4.

Table 4. Characteristics of the steel pole (6 meter and 8 meter height)

Characteristics          6 meter pole       8 meter pole
Shape                    Conical            Conical
Height                   6000 [mm]          8000 [mm]
Diameter                 200 [mm]           270 [mm]
Thickness                5 [mm]             5.5 [mm]
Material                 Galvanized steel   Galvanized steel
Final weight             20.2 [kg]          35.0 [kg]
Sheet surface            3.77 [m2]          6.78 [m2]
Weight of sheet          148 [kg]           293 [kg]
Weight galvanized pole   155.0 [kg]         307.0 [kg]
A summary of selected inventory inputs and outputs, i.e. the total energy absorption of every single phase involved in the process, is reported in Table 5; the energy used in this plant comes from the national grid. The only material bought is the non-galvanized sheet metal, which is shipped 735 km by truck; the galvanization process is carried out in another plant about 60 km away, and the transportation of the pole is done by truck.

Table 5. Energy absorption in every process

Machine                 Power [kW]   Working time       Working time
                                     6 meter pole [s]   8 meter pole [s]
Punching machine        4            60                 60
Press machine           44           600                600
Submerged arc welding   14           424                424
Arc welding machine     4            300                300
Grinding machine        2.2          300                300
Drill                   100          120                120
Overhead crane          18           300                300
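The in-plant energy per steel pole follows directly from Table 5; this is an illustrative check of ours (identical for the 6 m and 8 m poles, since the working times in Table 5 coincide):

```python
# Energy per steel pole from Table 5: power [kW] x working time [s] -> kWh.
machines_kw_s = {
    "punching machine": (4.0, 60), "press machine": (44.0, 600),
    "submerged arc welding": (14.0, 424), "arc welding machine": (4.0, 300),
    "grinding machine": (2.2, 300), "drill": (100.0, 120),
    "overhead crane": (18.0, 300),
}
total_kwh = sum(p * s for p, s in machines_kw_s.values()) / 3600.0
print(f"{total_kwh:.1f} kWh")   # 14.4 kWh
```

Note that this covers only the in-plant machining: the sheet production and galvanization energies, evaluated through the GaBi database, are not included here.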

The entire process is modelled inside the GaBi Software; the cradle-to-grave life cycle stages considered in the LCA of the steel pole are illustrated in Figure 1a.

Figure 1. Flows modelled inside the GaBi Software for the steel pole (a, upper figure) and the fiberglass pole (b, lower figure)

3 Results and Conclusions

According to the ISO standard, the results are normalized by dividing them by a reference value. There are clearly different possible normalization sets, depending on region and year. The normalization set chosen in this study is CML 2001; the factors of this normalization are described in [12]. The normalized results are shown in Figure 2, in which the steel pole is set as the unit.

Figure 2. Comparison of impact indicators for the 6 m (a, upper figure) and 8 m (b, lower figure) poles between the fiberglass pole and the steel pole. The steel pole is set as the unit.
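The ratio-to-steel normalization used in Figure 2 is straightforward to reproduce. The indicator values below are purely illustrative placeholders, not data from this study; only the normalization step itself reflects the text:

```python
# Normalize impact indicators with the steel pole set as the unit (ratio to steel).
# The numbers are ILLUSTRATIVE placeholders, not results from this study.
steel      = {"GWP": 410.0, "AP": 1.8, "EP": 0.12}   # hypothetical indicator values
fiberglass = {"GWP": 150.0, "AP": 0.9, "EP": 0.15}

normalized = {ind: fiberglass[ind] / steel[ind] for ind in steel}
# A ratio below 1 favours the fiberglass pole; above 1 favours the steel pole.
```

Plotting these ratios with the steel bar fixed at 1 reproduces the kind of comparison shown in Figure 2.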

The impact indicators of the fiberglass pole are almost always better than those of the steel pole. For the 6 meter pole, the only impact indicator that is higher for the fiberglass pole than for the steel pole is the Eutrophication Potential (EP), because of the disposal treatment of the composite material (a lack of regulations) and the production of the glass fiber material. For the 8 meter pole, the Freshwater Aquatic Ecotoxicity is higher for the fiberglass pole. This indicates the non-linearity of the behavior of the impact indicators with respect to the length of the pole. The principal difference between the two manufacturing processes is the weight of material used: the LCA of the steel pole is strongly influenced by the mass and energy introduced in the process, and an important share of its environmental impact is related to the extraction of raw material with respect to the fiberglass pole. The results show that, in Southern Italy, the composite pole is the better solution in environmental terms with respect to the steel pole. To perform this quantification, LCA is the methodology that has become well established over the years as an effective tool for assessing environmental performance. The paper also shows the mass and energy inputs and outputs of every single process inside a production plant of composite poles and of steel poles. The division into sub-processes makes it possible to intervene in those with the highest environmental impact, in an optimization loop focused on improving the environmental impact of the entire product. The purpose of this article is thus to quantify the difference between the two products in order to have, in the design phase, an additional selection criterion beyond the classic ones (structural and cost), in accordance with the new European policies. Thanks to the LCA method, the environmental impact becomes a quantified and measured variable that can be used like any other technical variable in the project phase.

Acknowledgments The research work reported here was made possible thanks to Eng. G.
Cirrone of NTET Company SpA, Belpasso (CT), Italy, who provided the data for the inventory analysis.

References

1. Fava, J., Denison, R., Curran, M., Vigon, B.W., Selke, S., Barnum, J.: A technical framework
for life-cycle assessment. Pensacola, Florida (1991).
2. Klöpffer, W.: Life cycle assessment. Environ. Sci. Pollut. Res. 4, 223–228 (1997).
3. Finnveden, G., Hauschild, M.Z., Ekvall, T., Guinée, J., Heijungs, R., Hellweg, S., Koehler,
A., Pennington, D., Suh, S.: Recent developments in Life Cycle Assessment. J. Environ.
Manage. 91, 1–21 (2009).
4. Vigon, B.W., Jensen, A.A.: Life cycle assessment: data quality and databases practitioner
survey. J. Clean. Prod. 3, 135–141 (1995).
5. Baumann, H., Rydberg, T.: Life cycle assessment. J. Clean. Prod. 2, 13–20 (1994).
6. Maurice, B., Frischknecht, R., Coelho-Schwirtz, V., Hungerbühler, K.: Uncertainty analysis
in life cycle inventory. Application to the production of electricity with French coal power
plants. J. Clean. Prod. 8, 95–108 (2000).
7. Heijungs, R.: Identification of key issues for further investigation in improving the reliability
of life-cycle assessments. J. Clean. Prod. 4, 159–166 (1996).
8. Puri, P., Compston, P., Pantano, V.: Life cycle assessment of Australian automotive door
skins. Int. J. Life Cycle Assess. 14, 420–428 (2009).
9. European Standard ISO 14040: Environmental management - Life cycle assessment - Principles
and framework. (2006).
10. European Standard ISO 14044: Environmental management - Life cycle assessment -
Requirements and guidelines. (2006).
11. Bolin, C.A., Smith, S.T.: LCA of pentachlorophenol-treated wooden utility poles with
comparisons to steel and concrete utility poles. Renew. Sustain. Energy Rev. 15, 2475–2486
(2011).
12. CML - Department of Industrial Ecology: CML-IA Characterisation Factors,
universiteitleiden.nl/en/research/research-output/science/cml-ia-characterisation-factors
Prevision of Complex System’s Compliance
during System Lifecycle

J-P. Gitto1,2,*, M. Bosch-Mauchand2, A. Ponchet Durupt2, Z. Cherfi2,


I. Guivarch1
1 MBDA, 1 avenue Réaumur, 92358 Le Plessis-Robinson Cedex, France
2 Sorbonne Universités, Université de Technologie de Compiègne, CNRS, Laboratoire
Roberval, Centre Pierre Guillaumat, CS60319, 60203 Compiègne Cedex, France
* Corresponding author. Tel.: +33 1 71 54 36 09. E-mail address: jean-philippe.gitto@utc.fr

Abstract: In this paper, we propose a methodology to define a predictive model of
complex systems' quality. The methodology is based on a definition of the system's
quality through factors and takes into account the specificities of the company.
The resulting model helps quality practitioners to form an objective view of the
system's quality and to predict its future quality all along the lifecycle. The
approach is illustrated through its application to the design of a compliance
prediction model in an aeronautic and defense group, MBDA.

Keywords: Product Compliance; Compliance Forecasting; Product Design;


Product Quality; Decision Making.

1 Introduction

In a changing world, with global competition, it is essential for companies to
satisfy their customers' requirements. For all companies, and in particular those
that produce complex systems, a quality management system is essential to organize
the work and to ensure customer satisfaction. A complex system consists of
electronic components, mechanical parts and software. Its lifecycle extends over
several years and involves many engineering fields, which increases the complexity
of project organization and monitoring [1, 2].
In the scope of our study, the complex systems are produced in small series and
have a lifespan of several decades. Their development is based on a contract with
the customers, who define their requirements at the beginning of the project. These
requirements are translated into technical definitions early in the development
process and can evolve all along the system lifecycle.
© Springer International Publishing AG 2017 101
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_11

Thereby, it is difficult for quality practitioners to have a comprehensive and
objective view of the quality of the future system in use (i.e. at completion).
Several methods and tools have been developed to manage the quality of products and
processes [3], but they do not answer all needs arising from company specificities
[4, 5] (such as industrial sector, product complexity, internal organization) and
are not based on direct measurements on the product. So there is a real need to
connect quality measurements on the system during its lifecycle with its ability to
satisfy customers' requirements when the system is in use. The system lifecycle is
often divided into several phases. The transition from one phase to another
corresponds to a gate (G0 to G4 in Figure 1) and is certainly the best moment to
assess the work done and to compare it with the planned advancement. To help
quality practitioners perform this assessment and compare development maturity with
customers' requirements, a quality predictive model is necessary.
[Figure 1 depicts the lifecycle as five phases, Develop Concept, Design, Qualify,
Produce and Sustain, separated by gates G0 to G4, with delivery to the customer at
gate G4.]

Fig. 1. System lifecycle phases and gates

The proposed model allows predicting system quality at completion. It helps quality
practitioners to identify the cause of a deviation and to decide whether corrective
actions are required (Figure 2). The model has to be used at each gate of the
system lifecycle to update the prediction of the system's quality according to the
latest design improvements.
[Figure 2 shows how, at each gate Gi,i+1, quality practitioners feed indicators
from the processes into the predictive model, assess whether the quality goals of
phase i are achieved, trigger corrective actions if not, and obtain a prediction of
quality at completion before entering phase i+1.]

Fig. 2. Use of Predictive Model at each gate

Section 2 is dedicated to a review of existing models, with their strengths and
weaknesses with respect to the problem exposed above. Then, in Section 3, the
proposed methodology to build the predictive model is defined and illustrated with
an application on an industrial case study.

2 Review of Existing Methods and Models

To address the need for a quality management system, several methods and tools
have been developed, classified here in two categories.
On the one hand, there are Quality Assurance (QA) methods, spread in companies
since the 80s [6], which implement appropriate techniques and practices in order to
provide high-quality products [3]. To take into account the voice of the customer,
Total Quality Management (TQM) [7] has been defined. TQM principles have been
translated into the ISO 9000 standards, which offer a guide to ensure the quality
of the future system through the control of development and production methods
[8]. QA methodologies are based on the control of the process, and not of the
product itself [9], to ensure quality achievement.
On the other hand, Systems Engineering methods based on indicators [10] are used to
monitor the future performances of a system [11]. Systems Engineering indicators
are defined for a specific product feature or a specific technology [12]. All the
product subsystems and components have various paces of development and validation,
which may overlap. This makes it problematic to forecast system compliance before
the final validation of series production.
Besides, these two categories of methods do not link the performance of the
development process to the future satisfaction of customers' requirements [13]. To
make this link, engineers and quality practitioners must have a clear overview of
the project and long experience in development. Otherwise, the risk is to miss part
of the customers' needs or to detect a compliance problem too late in the
development process, which is generally expensive to correct in the final stages of
product development [14]. Some models to predict product quality already exist for
software; they are based on the Factor, Criteria, Metrics (FCM) model [15]. The
proposed methodology is an adaptation of FCM to build a predictive model of complex
systems' quality at completion and to identify early any deviation from the target.

3 Methodology and associated Model

In the context of this study, complex systems are produced in too small quantities
to apply statistical analysis. Hence, the identification of factors, criteria and
metrics, and the model structure are based on expert knowledge elicitation [16].
The proposed methodology allows formalizing experts' experience and heuristics.
It is a way to deal with uncertainty [17] that is adapted to this context.
In the FCM model, quality factors are characteristics, reflecting customers’
point of view, which actively contribute to the quality of the system. Those factors
are broken down into criteria which characterize company’s processes quality
from the internal point of view. Model’s metrics provide mechanisms to quantify
quality criteria level. Thus, the model’s inputs, “indicators” in Figure 2,
correspond to the metrics values during the project. Based on these metrics values,
the model gives at each gate indications about the quality of criteria and, outputs a
forecast of system’s quality at completion. Several product quality factors have
been identified to characterize our systems’ quality at completion, for instance
compliance, reliability, safety, usability, maintainability…

In this paper, only the compliance factor is considered when applying the proposed
methodology.

[Figure 3 shows the four-step construction flow: Step 1, Factor's Goal Definition
(fed by the company's context and strategy, yielding the factors and their goals at
completion); Step 2, Criteria Definition and Step 3, Metric Definition (both fed by
the company's processes and experts' elicitation); Step 4, Model Setting (fed by
experts' elicitation), leading to predictive model validation.]

Fig. 3. Synoptic of the proposed methodology to build predictive quality model

Each step of the methodology (Figure 3) is illustrated by its application, at
MBDA, a European aeronautic and defense group, to the product quality factor
Compliance. To ease the reading, a paragraph at the end of each step explains how
it was applied to the MBDA case.

3.1 Step 1: Factor's Goal Definition

A goal must be defined for the system quality factor in order to forecast its
quality level at completion. An associated means of measurement must also be chosen
to assess the factor's quality level when the system is in use. This goal is based
on existing industrial practices and takes into account the company's organization
and the context of use of the product.

Application to Compliance Factor:


During production and use of the system, each non-compliance (NC) is recorded in a
database to be treated. The objective is to deliver the system to the customers
without NC and not to discover new NCs during the use phase. So the compliance
quality level at completion is characterized by the quantity of NCs recorded,
knowing that the goal is to have none.

3.2 Step 2: Criteria Definition

To evaluate a factor during the system's lifecycle, the company must identify the
processes which influence the future level of the factor at completion. Criteria
are defined to characterize the quality or performance of those influencing
processes in the predictive model. Whereas factors derive from the customers' point
of view, criteria derive from the company's interest. To identify relevant criteria
in the company, a literature review can help define the most common ones, but
criteria are highly dependent on the company's organization, so an audit inside the
company is essential. It is then necessary to determine how the company's processes
involved in the development of the system impact the future quality. People
involved in the development related to the factor must be asked to identify
suitable criteria. To avoid self-censorship, it is preferable to interview
employees individually; furthermore, individual interviews are easier to plan than
a single meeting with all participants.

Application to Compliance Factor:


In the MBDA case, and regarding the compliance quality factor, five criteria
have been identified from the processes of the company (Figure 4):

[Figure 4 shows the five criteria feeding the Compliance factor: Requirements
Quality, Design Quality, Design's Justification Quality, Production System Quality
and Supply Chain Quality.]

Fig. 4. Compliance predictive model’s Criteria

The first three criteria concern the compliance of the system's definition with the
customer's requirements. The last two concern the compliance of the system with its
definition.

3.3 Step 3: Metric Definition

For each criterion, metrics must be defined to assess the criterion's quality level
all along the system lifecycle. Quality practitioners and experts who work on the
processes concerned by the criteria are asked which metrics are suited to
characterize the criteria level. When the model is used to prepare the passage of a
product lifecycle gate, the value of each metric is calculated in order to be
processed by the model. The metrics can be based on company databases, prototype
characteristics, documentation, subjective evaluation, etc. To be treated in the
predictive model, the metric values are expressed on a numerical scale.
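The text leaves open how a raw measurement is mapped onto the model's numerical scale. One simple possibility, our assumption rather than the paper's prescription, is linear min-max scaling with clamping:

```python
# Hypothetical mapping of a raw measurement onto the model's numerical scale
# (illustrative; the paper does not prescribe a particular scaling rule).
def to_scale(raw, worst, best, lo=0.0, hi=100.0):
    """Map raw in [worst, best] linearly onto [lo, hi], clamping outliers."""
    frac = (raw - worst) / (best - worst)
    return max(lo, min(hi, lo + frac * (hi - lo)))

# e.g. a coverage-style metric: 120 of 150 requirements covered
print(to_scale(120, worst=0, best=150))
```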

Application to Compliance Factor:


In the case of the system compliance model for MBDA, 17 metrics have been defined
to characterize all the criteria previously identified; the scale chosen to express
metric values goes from 0 to 100. The metrics for the compliance of the system's
definition with the customer's requirements are given in Figure 5.

[Figure 5 depicts the compliance-of-definition part of the network: the metrics
Need Coverage, System TRS Maturity, Sub-system TRS Maturity and % Published TRS
feed Requirements Quality; Design Evolution Rate, TRL and Class of Difficulty feed
Design Quality; Justification Coverage and Justification Relevancy feed Design's
Justification Quality; these three criteria feed the Compliance of Definition.]
Fig. 5. Compliance of definition predictive model

After the metrics definition, the whole model can be structured. The selected
formalization consists in building a network where the factor, the criteria and the
metrics are placed on nodes. All the criteria identified for a factor have an
influence on it; thus, all the criteria are connected to the factor in the network,
and each metric is related to at least one criterion. The structured network has
the layout shown in Figure 5. Once the results of the interviews are analyzed and
the model is structured, it is necessary to plan a global meeting with all
participants to validate the proposed selection of criteria and metrics. This
review allows participants to judge whether their answers were correctly translated
into the model and to discuss some points if they have different opinions.

3.4 Step 4: Model Setting

The network architecture having been defined, the model must be parameterized to
establish the relations which make possible the evaluation of each criterion from
its metrics.
For each arc of the network, a weight p is defined to characterize the influence of
a parent node on its child. The chosen convention is that the sum of the weights of
all arcs between a child node and its parents equals 1. The value of a child node
(a criterion node or the factor node) is calculated by adding the values of all its
parent nodes, weighted by their arcs. For a criterion Cj and its metrics Mi, this
can be expressed by equation (1), where cj is the value of criterion Cj, mi the
value of metric Mi, and pi,j the weight of the arc between Mi and Cj:

c_j = \sum_{i=1}^{n} p_{i,j} m_i     (1)
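The bottom-up evaluation expressed by equation (1) can be sketched directly in code; the node names, metric values and weights below are illustrative, not MBDA's:

```python
# Weighted aggregation of parent-node values into a child node (criterion or
# factor); by the paper's convention, the arc weights of a child sum to 1.
def aggregate(parent_values, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "arc weights must sum to 1"
    return sum(w * v for w, v in zip(weights, parent_values))

# Illustrative metric values on the 0-100 scale (not MBDA data).
requirements_quality = aggregate([90, 100, 100, 100], [0.3, 0.2, 0.25, 0.25])
design_quality = aggregate([80, 95], [0.5, 0.5])
# The factor is aggregated from its criteria in the same way.
compliance = aggregate([requirements_quality, design_quality], [0.6, 0.4])
print(round(compliance, 1))
```

Applying the same function at every level of the network propagates the metric values up to the factor node.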

In general, arc weights can be defined by statistical analysis of recorded data.
But in the context of complex system development, the recorded data sets do not
include enough development cases to be analysed. Consequently, an elicitation-based
method has been chosen to define those weights. Experts with experience on a
criterion are questioned for this purpose. In simple cases, the questioned expert
can directly give the weight of each arc, either by its value or by positioning
each parent node on a scale of importance; the weights are then distributed among
all arcs in proportion to their importance.
If the combination of metrics or criteria is too complicated to be directly
assessed by experts, arc weights can be defined by an indirect method. The
principle is to ask the participant to give a value for each parent node in two
cases: for a standard development and for a minimum admissible development. The
procedure is illustrated through its application to the compliance factor.

Application to Compliance Factor:


For example, the criterion C1 “Requirements Quality” has 4 metrics: M1 “need
coverage”, M2 “system technical requirements (TRS) maturity”, M3 “sub-system
TRS maturity” and M4 “percentage of published TRS”. According to (1):

c_1 = p_1 m_1 + p_2 m_2 + p_3 m_3 + p_4 m_4     (2)

Each participant is questioned to give the value of the child node in the expected
case and in all the cases where a single parent node is at its minimum admissible
value. The experts give the minimum admissible value for each metric: 90% for m1
and m4, 60% for m2 and m3; and they give the expected level for those metrics:
100% for each. Then the participants evaluate the value of c1 in each case shown
in Table 1:

Table 1. Metrics and criterion values estimated by experts


m1 m2 m3 m4 c1
90% 100% 100% 100% 97%
100% 60% 100% 100% 92%
100% 100% 60% 100% 90%
100% 100% 100% 90% 98%

We deduce that p1 = 0.3, p2 = 0.2, p3 = 0.25, p4 = 0.25. Each node of the model can
be determined by this method if necessary. The weights taken into account in the
model are the mean of the participants' answers.
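The deduction from Table 1 can be reproduced in a few lines, under the reading that each weight is the drop of c1 divided by the drop of the perturbed metric; the raw value obtained for p4 is 0.2 rather than the reported 0.25, so some expert rounding or rescaling toward a weight sum of 1 is assumed:

```python
# Indirect weight elicitation (Table 1): with every metric at its expected
# value (100 %) the criterion is 100 %; lowering metric i alone to its
# minimum admissible value yields c_i, hence p_i = (100 - c_i) / (100 - m_i).
m_min = [90, 60, 60, 90]       # minimum admissible metric values (%)
c_est = [97, 92, 90, 98]       # criterion values estimated by the experts (%)

raw = [(100 - c) / (100 - m) for c, m in zip(c_est, m_min)]
print(raw)  # [0.3, 0.2, 0.25, 0.2]

# Rescaling so the weights sum to 1 is one possible convention (our
# assumption; the paper's reported p4 = 0.25 suggests expert adjustment).
weights = [r / sum(raw) for r in raw]
```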
The model has been tested on two systems in development, called A and B. For system
A, the model has been used at the first three gates (from the start of development
to the start of qualification); for system B, it has been used at two gates. To
assess the factor's quality level, the values of the model's metrics are evaluated
and entered into the model.

The calculated compliance levels are shown in Figure 6:

Fig. 6. Compliance level given by the model for Systems A and B.

For system A, the compliance level calculated with the model is under the expected
level for the three gates. The progression between the first two gates is too low
and the gap between the expected and actual levels grows; at gate G2 part of the
gap is closed. For system B, the gap is smaller at G1 and the goal is reached at
G2. These results are consistent with an a posteriori analysis of the quality of
the development. Evaluating the compliance at G0 would have helped to correct the
gap before G1. The model helps to identify the roots of a problem through the
metrics values: for system A, the gap is explained by poor coverage of the
customer's needs and low maturity of the requirements at the beginning of the
development; for system B, quality is lower because of some delays.

3.5 Discussion

The model's consistency has been tested on several development scenarios and
reviewed with quality practitioners. Its assessment can be used to alert the
project manager as early as possible of a risk of non-conformities at the end of
the development phase, and the model gives indications to identify the root causes
of the problem. To understand why and take corrective actions, experts working on
the affected criteria must be involved in the treatment of the anomaly. Thereby,
the project manager can take decisions knowing their impact on the future system's
compliance. The model has been tested on two systems in development, and the
reliability factor was also treated; first results confirm the consistency of the
model. However, complex system development lasts several years, and the compliance
of the system in use will be known only after years of use. The limited number of
development cases available to test the model limits the validation possibilities;
work on validation tests is in progress. The model should evolve gradually as new
experience is gained.

4 Conclusions

Existing tools to monitor complex system development are oriented towards process
performance and do not make the link with quality at completion from the customers'
point of view. The proposed methodology allows an industrial company to build its
own predictive model of the system's quality, usable all along the system
lifecycle. The methodology is based on the FCM method, and its main steps,
described here, are the definition of a goal for a complex system quality factor,
the definition of criteria and metrics for that factor, and the setting of the
model based on expert elicitation. The method requires many participants and is
time-consuming to implement in a company. Further research can be done to improve
the elicitation strategy and expert selection.

References

1. Fellows, R. and Liu, A.M.M. Managing organizational interfaces in engineering construction
projects: addressing fragmentation and boundary issues across multiple interfaces. Construction
Management and Economics, 2012, 30(8), pp. 653–671.
2. Hoegl, M. and Weinkauf, K. Managing Task Interdependencies in Multi-Team Projects: A
Longitudinal Study. Journal of Management Studies, 2005, 42(6), pp. 1287–1308.
3. Oakland, J. S. TQM and operational excellence: text with cases, 2014 (Routledge)
4. Powell, T.C. Total quality management as competitive advantage: A review and empirical
study. Strategic Management Journal, 1995, 16, pp. 15–37.
5. Söderlund, J. Pluralism in Project Management Navigating the Crossroads of Specialization
and Fragmentation. International Journal of Management Reviews, 2011, 13, pp. 153–176.
6. Prajogo, D. and Sohal, A.S. TQM and innovation: a literature review and research frame-
work. Technovation, 2001, 21(9), pp.539–558.
7. Cua, K.O., et al. Relationships between implementation of TQM, JIT, and TPM and manu-
facturing performance. Journal of Operations Management, 2001, 19, pp. 675–694.
8. Tari, J.J. and Vicente, S. Quality tools and techniques: Are they necessary for quality
management? International Journal of Production Economics, 2004, 92(3), pp. 267–280.
9. Kitchenham, B. Towards a constructive quality model Part I: Software quality modeling,
measurement and prediction. Software Engineering Journal, 1987, 2(4).
10. Lead, C.M. et al. Systems Engineering Measurement Primer: A Basic Introduction to
Measurement Concepts and Use for Systems Engineering, 2010 (INCOSE).
11. Orlowski, C. et al. A Framework for Implementing Systems Engineering Leading Indicators
for Technical Reviews and Audits. Computer Science, 2015, 61, pp.293–300.
12. Sauser, B.J. et al. A system maturity index for the systems engineering life cycle. Int. J. In-
dustrial and Systems, 2008, 3(6), pp.673–691.
13. Zairi, M. Measuring performance for business results, 2012 (Springer).
14. Azizian, N. et al. A framework for evaluating technology readiness, system quality, and pro-
gram performance of US DoD acquisitions. Systems Engineering, 2011,14(4), pp.410–426.
15. McCall, J.A., Richards, P.K. and Walters, G.F. Factors in Software Quality Concept and Def-
initions of Software Quality, 1977 (Rome air development center).
16. Ayyub, B.M. Elicitation of expert opinions for uncertainty and risk, 2001 (CRC Press).
17. Herrmann, J.W. Engineering Decision Making and Risk Management, 2015 (John Wiley & Sons).
Framework definition for the design of a mobile
manufacturing system
Youssef BENAMA1, Thecle ALIX2, Nicolas PERRY3,*
1 Université de Bordeaux, I2M UMR5295, 33400 Talence, France
2 Université de Bordeaux, IMS UMR5218, 33400 Talence, France
3 Arts et Métiers ParisTech, I2M UMR5295, 33400 Talence, France
* Corresponding author. Tel.: +33-556845327; E-mail address: nicolas.perry@ensam.eu

Abstract The concept of mobile manufacturing systems is presented in the literature
as an enabler for improving company competitiveness through cost reduction, respect
of deadlines and quality control. In comparison with classical sedentary systems,
additional characteristics should be taken into consideration, such as the system's
life phases, the dependency on the production location, human qualification, as
well as supply constraints. Such considerations should be addressed as early as
possible in the design process. This paper presents a contribution to the design of
mobile manufacturing systems based on three analyses: (1) an analysis of the
features of mobile manufacturing systems, (2) an identification of the attributes
enabling the assessment of system mobility, and (3) the proposal of a framework for
mobile production system design considering new context-specific decision criteria.

Keywords: production system, mobile manufacturing system, design of manufacturing
plant.

1 Introduction

Ensuring the shipment of bulky and fragile products can be economically and
technically challenging. A solution is to conduct production activities close to
the end client. In case of a one-time demand, implanting a permanent production
plant may seem unrealistic; the concept of a Mobile Manufacturing System (MMS),
which consists in using the same production system to satisfy successively several
geographically dispersed customer orders, directly at the end client's location,
can then be a good alternative.
The mobility of production systems has been encountered in many industries: the
construction industry [1], the shipyard industry, etc. As interesting as it seems,
the concept has rarely been discussed in the literature, and the few existing
definitions of mobility depend on authors and contexts [2]. Mobility is also
defined at different levels for a manufacturing system: an internal mobility,
concerning manufacturing system modules (machinery, material handling modules,
etc.), and a global or external mobility, concerning the movement of the whole
manufacturing system. This last level is analyzed across geographic areas and
underpins strategic considerations with medium- to long-term implications.

© Springer International Publishing AG 2017 111
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_12
In order to facilitate the movement of the manufacturing system to a new
geographical location, Rösiö [3] evokes three required characteristics: the
mobility of the modules, their modularity [4] and their integrability.
In this paper, a holistic view of the manufacturing system is adopted. The mobility
of the manufacturing system is defined as the ability of a manufacturing system,
defined by its technical, human and information components, to move and produce in
a number of successive geographical locations. The definition includes two aspects:
- Transportability: the manufacturing system must be transportable and must be able
to adapt to the requirements of the different transportation modes (road, sea,
etc.).
- Operationality: the system must be able to become quickly operational on the
different locations for which it is designed.
The following section discusses how a Mobile Manufacturing System (MMS) differs
from a sedentary manufacturing system.

2 Requirements for manufacturing system mobility

The concept of mobility implies considering some additional system life-phases
compared with traditional manufacturing systems: mobility of modules, on-site
maintenance management, organizational aspects and training needs, and energy
supply.

2.1 Manufacturing system design


The production system design process is based on four macro phases [7]: (1)
initialization, (2) preliminary design, (3) embodiment design and (4) detailed
design. Each of these phases consists of selection, evaluation and decision
activities. Taking the characteristics of mobility into account is built through
each phase of the MMS design process. Obviously, in a context of mobility, the
production system environment changes from one implementation location to another,
and the analysis of the system's environment is of huge importance.
A production system is currently seen as a system composed of several subsystems,
generally analyzed through external and internal views coupled with physical,
decisional and informational views. The production system design then depends on
the design (or the selection of items, when solutions exist on the market) of each
system component, but also on the connections between these subsystems for their
integration into the overall system.

A production system can be defined as a system of systems to the extent that, on
the one hand, it is composed of a set of subsystems, each with its own life cycle
and each possibly defined independently from the others; on the other hand, the
interactions between these subsystems define constraints for the system of systems,
affecting the performance of the overall system [8].
Systems engineering adopts two complementary points of view for systems analysis
[9]:
- An external view, or "black box" approach, defining the system boundaries, used
to identify the elements of the external environment that constrain the system and
to which the system must respond by providing the expected services. The
environment is defined by all factors that might influence or be influenced by the
system [9].
- An internal view, or "white box" approach, that considers the internal
interacting elements of the system, which define its organization (architecture)
and its operation.

2.2 Additional system life-phases


During its operation, the MMS is first put into service on its implantation site be-
fore being used for production. Throughout this phase, maintenance and configu-
ration operations are carried out in order to adapt its behavior to best meet the ex-
pected performance. However, unlike sedentary manufacturing systems, mobil-
ity requires additional operational phases:
• Transportation phase (a): the MMS is packaged and transported to its implan-
tation location.
• On-site installation phase (b): the MMS arriving on site is composed of inde-
pendent modules and components that are integrated, leading to the plant installa-
tion. Upstream, operations to prepare the site are performed. Downstream, the
factory is installed and verification and commissioning operations are carried
out.
• On-site production phase (c): the plant is used to produce locally. In parallel,
maintenance operations are necessary to maintain high system performance.
• Diagnosis and control phase (d): at the end of the production phase, a diagno-
sis of all modules is carried out to ensure that the mobile plant will be operational
for the next production run. The modules requiring heavy maintenance or re-
placement are identified. Replacement and procurement orders are launched dur-
ing this phase.
• Dismantling phase (e): the plant is dismantled. The various modules and compo-
nents are conditioned and prepared for the transportation phase.
• Transportation phase (f): the modules are placed in the transportation configu-
ration; two scenarios are then possible, depending on the business strategy of the
company:
  o A new order arrives and a new site is identified. The MMS is routed to
  the new location and the operational cycle resumes at phase (b);
  o No new order is identified and thus no new implantation location is iden-
  tified. The MMS is then routed to its storage location, which corresponds to
  phase (g). Depending on negotiations with the manager (client, institution,
  etc.) of the site where the system has been used, the MMS storage phase could
  take place in the former location in the expectation of a new order.

114 Y. Benama et al.
• Storage phase (g): during the MMS inoperability period, the modules have to
be stored until a new order arrives. The storage can take place at a stationary base, or
at the latest operating location in order to stay closer to a potential market. During
this phase, heavy maintenance operations can be conducted, such as maintenance
or replacement of machines, module reconfiguration, etc.
The identification of these life-phases is important, as the evaluation of the overall per-
formance (cost, delay, etc.) of the system depends on it.

2.3 Organizational aspects and training needs


Geographic mobility of the manufacturing system requires adapting the automa-
tion level to the qualification level of the personnel available on-site. To
ensure the independence of the production system from on-site operator qualifi-
cation, the level of manufacturing system automation must be adapted. A
production system independent of operator qualification can be imagined as a
highly automated system. However, too much automation leads to a complexity
requiring some expertise to ensure MMS maintenance operations. A trade-off
must be achieved between the required automation level and the qualification
available on-site. An on-site operator training offer facilitates this trade-off.
System mobility means that a new team is involved in the system at each
new implantation location [5]. Hence, the need to provide operator training for
running the manufacturing system is crucial. Moreover, Fox recalls the need for
qualified local middle management to make the link between foreign person-
nel and the local population, which could also be responsible for applying best
practices [6].

2.4 Mobility of modules


Mobility of the manufacturing system modules implies that each module is
transportable and operational on site. Modularity is an enabler of component mo-
bility. The weight and volume of each module must be compliant with the transportation
modes. In addition, the modules must withstand the various transportation con-
straints (mechanical shock, tightness constraints, etc.). Finally, on-site
operability requires the equipment to adapt to the energy sources available on site. The
equipment must be easy to integrate and commission.

2.5 On-site maintenance management


On the one hand, maintaining system performance during the operation phase
implies adopting a comprehensive strategy that takes into account the duration of
the manufacturing system's presence at a specific implantation location in order to
minimize the need for shutdowns. On the other hand, in order to carry out on-site
interventions, spare parts supply chain management must be adapted according to
the manufacturing system's mobility.

2.6 Energy supply


The energy supply issue arises anew for each implantation location, depending on its
characteristics. The MMS autonomy depends on its ability to independently
supply the energy required for the operation of its resources [1].
The energy supply system can be based on diesel generators or on solar
panels providing the necessary power [6]. The issue of energy consumption (na-
ture and quantity) can be a determining factor in choosing the MMS constituent
resources.
After reviewing the requirements to be taken into account in a mobile manu-
facturing system analysis, we discuss the system design issue in the next section.

3 A design framework adapted for a single implantation location

The sequence of the key steps in the MMS design process [10] (figure 1) starts
with (1) a refinement of the requirements specification, (2) the determination of what
is to be carried out in-house or outsourced, and (3) the proposal of technical
solutions (MMS configuration design). These three steps are discussed hereafter.

3.1 Requirements specification refinement


The design activity starts from the requirements specification, which contains a de-
scription of the product to be manufactured (BOM) and details the client's request
(production volume, delays, requirements, etc.). The initial requirements specifi-
cation is supplemented with information and details obtained after analysis of the
MMS and implantation location environment. This first enhanced specification
version (noted Requirement_1 in figure 1) allows a first MMS configuration to be
imagined. This MMS configuration, while not economically efficient, represents a gener-
ic definition able to satisfy the demand at the proposed location.

3.2 Manufacturing strategy analysis


The MMS generic configuration is then refined through an analysis of what
is relevant to produce on site and what needs to be outsourced. This analysis in-
volves several criteria and requires the establishment of an evaluation process and
decision support [11]. The analysis of the make-or-buy strategy enables deciding
the MMS functionalities, i.e. the operations that the MMS should be able to carry out
on the implantation location. The description of the necessary MMS functionalities
supplements the previous requirements specification (noted Final Requirement in
figure 1). The MMS design activity can now be conducted.
[Figure 1: the Initial Requirement (product specification, client request) is enriched
through analysis of the MMS and implantation location environment into Requirement_1,
then refined by the make-or-buy manufacturing strategy analysis into the Final Requirement,
which drives the design of the MMS configuration for a single implantation site. Candidate
configurations are assessed by multi-criteria analysis (cost, delay, quality, surface,
mobility, integrability, on-site resources availability), supported by evaluation
formalization and expert knowledge.]
Fig. 1 Mobile Manufacturing system design framework adapted for single implantation location


3.4 Design of MMS configuration


This activity considers as input the latest version of the specification and the tech-
nical data about all resources that will be integrated into the MMS configuration,
as well as production management information and assumptions. The choice of the
MMS configuration is based on several decision criteria.
In addition to the typical cost, quality and delay requirements, the proposed
approach incorporates new criteria that are specific to the context of mobility [10]:
the mobility index, the integrability index and the criterion of on-site resources
availability.

3.4.1 Mobility index


Analyzing mobility during the embodiment design phase concerns the whole pro-
duction system defined by all its components. These components can be classified
into two categories: technical equipment and human modules. The assessment of
technical equipment and human module mobility is based on different approaches
involving several criteria. It is therefore necessary to evaluate each category and
then aggregate the results to give a unique appreciation of the whole manufactur-
ing system's mobility [10]. This appreciation can be expressed as a quantitative
value between 0 and 1 that indicates a satisfaction index. The index construction
approach is based on multi-criteria analysis. Two important concepts are used:
the expression of preference and the aggregation of criteria.
On the one hand, the mobility of an MMS technical module has to be satisfied
through all its life phases. To be mobile, a technical module must be transporta-
ble, mountable on site, operable on site and dismantlable. On the other hand, the
human system operates by providing flexible working ability to carry out simple
or complex operations contributing to the functioning of the MMS. This requires
skills acquired or developed on-site during the on-site production phase. Hu-
man system mobility can be understood as the mobility of one or more skills nec-
essary for the manufacturing system's operation.
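As an illustration of the preference-and-aggregation idea, the sketch below maps raw criterion values onto [0, 1] satisfaction scales and aggregates them with a weighted average; all names, scores and weights are hypothetical, not taken from [10]:

```python
# Illustrative sketch (not the authors' exact model): aggregate per-criterion
# satisfaction values in [0, 1] into a single mobility index by weighted average.

def satisfaction(value, worst, best):
    """Map a raw criterion value onto a [0, 1] satisfaction scale (linear preference)."""
    if best == worst:
        return 1.0
    s = (value - worst) / (best - worst)
    return max(0.0, min(1.0, s))

def mobility_index(criteria, weights):
    """Weighted aggregation of per-criterion satisfactions into one index in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[name] * s for name, s in criteria.items()) / total

# Hypothetical technical-module assessment: each life-phase ability scored separately.
scores = {
    "transportable": satisfaction(8.0, worst=0.0, best=10.0),      # -> 0.8
    "mountable_on_site": satisfaction(6.0, worst=0.0, best=10.0),  # -> 0.6
    "operable_on_site": satisfaction(9.0, worst=0.0, best=10.0),   # -> 0.9
    "dismantlable": satisfaction(7.0, worst=0.0, best=10.0),       # -> 0.7
}
weights = {"transportable": 2.0, "mountable_on_site": 1.0,
           "operable_on_site": 2.0, "dismantlable": 1.0}

index = mobility_index(scores, weights)
print(round(index, 3))  # 0.783
```

A weighted average is only one possible aggregation operator; the cited approach may use other preference models, but the structure (per-criterion satisfaction, then aggregation) is the same.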

3.4.2 Integrability index


Generating an MMS configuration consists of integrating various independ-
ent modules (machines, operators, conveyors, etc.). In order to obtain feasible con-
figurations, it is necessary to ensure that the selected modules can be integrated
with each other. Each module has one or more interfaces to bind to other modules.
The integrability evaluation process of an MMS configuration combines two ap-
proaches [10]:
• A decomposition analysis approach (top-down): the MMS configuration is
broken down into individual modules. Each module integrates common interfaces
with one or more other MMS modules. The integrability analysis is carried out
at the level of each elementary module of the MMS configuration.
• An assessment approach based on integration (bottom-up): it is based on the
definition and evaluation of all nodes in the system configuration. Individual
measurements are aggregated to give a single measure of the integrability of the
MMS configuration.

3.4.3 Criterion of on-site resource availability


For a given MMS configuration, the evaluation of the availability of
competences starts with the assessment of the skills required by this configuration.
Thus, for each configuration entity, the required skills are identified based on the
attribute "needed skills" contained in the description of each resource. This attribute
is then compared with the competences available at the implantation location. An
evaluation method is proposed to ensure that the resources required by the suggested
MMS configuration are available at the implantation location [10]. The
assessment of skills availability is split into three stages: identification of the re-
quired skills, identification of relevant actor profiles and assessment of the profiles'
availability at the implantation location.

4 Conclusions

In this communication, the concept of a mobile manufacturing system is discussed.
The mobility requirements are addressed and a mobile manufacturing system
design framework is presented. The design process is based on a set of decision
criteria. In addition to the typical cost, quality and delay criteria, three other deci-
sion criteria are proposed: the mobility index, the integrability index and a criteri-
on of on-site resources availability. The proposed design approach is limited to the
consideration of a single implantation location. However, the concept of succes-
sive mobility requires that the same production system be operated successively at
several implantation locations. The design approach must be adapted to the multi-
site context by integrating the concept of reconfigurability. A first analysis of this
issue is presented in [10]. The issue of successive multi-site mobility will be
addressed in future communications.

References
1. Erwin R. and Dallasega P. Mobile on-site factories – scalable and distributed manufacturing
systems for the construction industry. 2015.
2. Stillström C. and Mats J. The concept of mobile manufacturing. Journal of Manufacturing
Systems 26 (3-4). 2007, pp. 188–193.
3. Rösiö C. Supporting the design of reconfigurable production systems. 2012. Jönköping
University.
4. Flores A. J. Contribution aux méthodes de conception modulaire de produits et processus
industriels. 2005. Institut National Polytechnique de Grenoble.
5. Olsson E., Mikael H. and Mobeyen U. A. Experience reuse between mobile production mod-
ules – an enabler for the factory-in-a-box concept. Gothenburg, Sweden, 2007.
6. Fox S. Moveable factories: How to enable sustainable widespread manufacturing by local
people in regions without manufacturing skills and infrastructure. Technology in Society 42.
2015, pp. 49–60.
7. Pahl G., Beitz W., Feldhusen J. and Karl-Heinrich G. Engineering Design: A Systematic
Approach. Springer Science & Business Media. 2007.
8. Alfieri A., Cantamessa M., Montagna F. and Raguseo E. Usage of SoS Methodologies in
Production System Design. Computers & Industrial Engineering 64 (2). 2013, pp. 562–572.
9. Fiorèse S. and Meinadier J.P. Découvrir et comprendre l'ingénierie système. AFIS. Cépaduès
Éditions. 2012.
10. Benama Y. Formalisation de la démarche de conception de système de production mobile :
intégration des concepts de mobilité et de reconfigurabilité. Thèse de doctorat. 2016.
Université de Bordeaux.
11. Benama Y., Alix T. and Perry N. Supporting make or buy decision for reconfigurable manu-
facturing system, in multi-site context. APMS, Ajaccio, September 2014.
An automated manufacturing analysis of plastic
parts using faceted surfaces
Jorge Manuel Mercado-Colmenero (a), José Angel Moya Muriana (b), Miguel Angel
Rubio-Paramio (a), Cristina Martín-Doñate (a)*

(a) Department of Engineering Graphics Design and Projects, University of Jaen, Campus
Las Lagunillas s/n, 23071 Jaen, Spain
(b) ANDALTEC Plastic Technological Center, C/Vilches s/n, 23600 Martos, Jaen, Spain

* Corresponding author. Tel.: +34 953212821; fax: +34 953212334. E-mail address:
cdonate@ujaen.es

Abstract

In this paper a new methodology of automated demoldability analysis for parts
manufactured via plastic injection molding is presented. The proposed algorithm
uses as geometric input the faceted surface mesh of the plastic part and the parting
direction. The demoldability analysis is based on a sequential model to catalog the
nodes and facets of the given mesh. First, the demoldability of the nodes is analyzed;
subsequently, from the results of the node analysis, the facets of the mesh are cataloged
as: demoldable (facets belonging to the cavity and core plates), semi-demoldable (plastic
part manufacturable by mobile mechanisms, side cores) and non-demoldable (plastic part
not manufacturable). This methodology uses a discrete model of the plastic part,
which provides an additional advantage since the algorithm works independently of
the modelling software and creates a new virtual geometry providing information
on its manufacture, exactly like CAE software. All elements of the mesh (nodes
and facets) are stored in arrays, according to their demoldability category, with
information about their manufacture for possible use in other CAD/CAE applica-
tions related to the design, machining and cost analysis of injection molds.

Keywords: Manufacturing analysis; mesh analysis; injection molding; CAD.

1 Introduction and Background

The manufacturing process of injection molding is the industrial method most
commonly used for producing plastic parts that require finishing details with tight
tolerances and dimensional control. Currently the plastics industry demands
graphic and computational tools to reduce the design time of
plastic parts and the manufacturing time of the plastic injection molds that produce
them. Currently, CAD and CAE systems enable design engineers to reduce the time of
design tasks, simulation, analysis of product manufacturability, and cost estimation.

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_13
The demoldability analysis of the plastic part, and the detection of slides and inter-
nal undercuts, have established an important area of research within the field of in-
jection mold design, because they directly affect the design and its final cost. Dif-
ferent methodologies have addressed the demoldability analysis of the plastic part
by means of visibility techniques along the parting direction. Authors such as
Chen et al. [1], who proposed to address the concept of visibility and the estimation
of the optimal parting direction through the concept of pockets, or Manoochehri [2],
have been pioneers of this technique.
Other authors have focused their research on the recognition of the features of
the plastic part in CAD format. A feature is defined as a discrete region of the part
that carries information about its modeling and manufacturing. The feature extraction
methodology makes the part information available and enters it as an input of a
structured algorithm. Fu et al. developed a set of algorithms for solving the
demoldability analysis by means of feature recognition, including the recog-
nition of undercut features [3], the definition of the parting direction [4], the part-
ing line and parting surface [5], the recognition of the upper cavity and lower
cavity [6], and the design of side cores [7]. Yin et al. [8] proposed a methodology to
recognize undercut features for near net shapes. Ye et al. [9] provided a hybrid
undercut feature recognition method and [10] extended their work to side core de-
sign. Other methods combine feature recognition with visibility algorithms given a
parting direction, discretizing the plastic part. Singh et al. [11] describe an
automated identification, classification, division and determination of complex
undercut features of die-cast parts.
Nee et al. [12, 13] proposed to solve the demoldability analysis by classi-
fying the plastic part surfaces according to their relative orientation to the parting di-
rection and the connections between them. This method uses the dot product be-
tween the parting direction and the surface normal vectors in order to define the
demoldability of the surfaces.
Huang et al. [14] and Priyadarshi et al. [15] have focused their research on the
application of demoldability and visibility analysis to multi-piece molds. There, a
facet of the discretized geometry of the plastic part is demoldable if it is accessible
along the parting direction and not obstructed by any other facet of the rest of the
part. The applicability of this type of mold is largely limited to the scope of prototyp-
ing.
Rubio et al. [16] and Martin et al. [17, 18] based their demoldability analysis
on algorithms relying on model discretization by means of sections by cut-
ting planes, which are crossed by straight lines. A set of intersection points on the
workpiece is generated and analyzed according to their demoldability. Neverthe-
less, the precision obtained is far from that obtained by other methods (i.e. feature
recognition). Finally, other authors used GPUs as a tool for detecting the un-
dercuts in the plastic part. Khardekar et al. [19] limited the use of the GPUs to
recognize the possible parting directions that do not generate any undercut. This
paper proposes a new method of automated demoldability analysis based on the
geometry of the discretized plastic part (a set of mesh nodes and facets). It allows
independence from the CAD modeler and is valid for any type of surface mesh of
any plastic part. After the analysis, a new virtual geometry which incorporates manu-
facturing information of the plastic part is generated.

2 Methodology

2.1 XOY Planes Beam Generation, Preprocessing

Starting from a 3D plastic part to be manufactured, a three-dimensional mesh
formed by a set of nodes (N) and facets (F) is generated. The facets that make up
the mesh are triangular; hence a facet Fi ∈ ℝ³ has 3 unique nodes Nij ∈ ℝ³ associ-
ated to it.
The presented methodology is based, first, on an arrangement of the nodes Nij ∈ ℝ³
according to their Z coordinate (the parting direction, Fig. 1). Then, a sheet set of
XOY analysis planes πp is built, such that each node Nij ∈ ℝ³ of the mesh belongs to a
plane πpk ∈ ℝ³ (equation 1). Each XOY plane is associated with a node of the mesh and
therefore also with the facet to which the node belongs.

(1)

This arrangement of the mesh elements is performed downwardly along the
parting direction. Note that a facet Fi ∈ ℝ³ belongs to only one plane πpk ∈ ℝ³, defined
by the node of the facet with the greatest z coordinate along the parting direction.
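The plane assignment described above can be sketched as follows; this is a minimal illustration with hypothetical mesh data, not the paper's implementation:

```python
# Minimal sketch of the preprocessing step, assuming a triangle mesh given as a
# node array and a facet connectivity list (index triples into the node array).

nodes = [  # (x, y, z) -- the parting direction is +Z
    (0.0, 0.0, 2.0), (1.0, 0.0, 2.0), (0.0, 1.0, 1.0),
    (1.0, 1.0, 0.0), (0.5, 1.5, 0.8),
]
facets = [(0, 1, 2), (2, 3, 4)]

# Each facet is assigned to the XOY analysis plane of its node with the
# greatest z coordinate along the parting direction.
def analysis_plane(facet):
    return max(nodes[i][2] for i in facet)

# Order facets downwards along the parting direction (highest plane first).
ordered = sorted(facets, key=analysis_plane, reverse=True)
planes = [analysis_plane(f) for f in ordered]
print(planes)  # [2.0, 1.0]
```

Sweeping the ordered list then guarantees that, when a facet is analyzed, every facet that could shadow it along +Z has already been classified.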

2.2 Recognition algorithm of demoldable facets along the parting direction, Processing

Before describing the set of logical operations that make up the algorithm, a set of
initial premises that complement it should be established:
• Demoldability analysis is performed along the parting direction, Dz (Fig. 1),
which is established as an input of the present algorithm.
• For the reclassification of facets into cavity and core plate, a double sweep is per-
formed along the parting direction, with positive and negative sense.
• Vertical and non-vertical facets are analyzed independently.

2.2.1 Non-vertical facets

The algorithm begins with the facets belonging to the first plane πp1 ∈ ℝ³, which
are classified as demoldable; therefore both these facets and the nodes that compose
them are classified as demoldable by means of the cavity plate and as belonging to βf ∈ ℝ³.

(2)

Where βf ∈ ℝ³ (Fig. 1) represents the array of facets that are demoldable by
means of the cavity plate. For the following levels of analysis [2, m], this algorithm
proposes to assess the facet demoldability by projecting its nodes and control
points PGauss along the parting direction. Given the analysis plane πpk ∈ ℝ³, the
demoldability of a facet Fi ∈ ℝ³ associated with it is analyzed using as a reference
the information of all facets analyzed in previous planes. Based on the
above premise, the facets belonging to the first level are classified as demoldable.
Thus, a given facet Fi is considered demoldable if the projection of its asso-
ciated nodes Nij and PGauss,i does not intersect the facets assigned as demoldable
(belonging to βf ∈ ℝ³) or not-demoldable (belonging to ηfcav ∈ ℝ³) in the immediately
preceding planes.

(3)

(4)

Where ηfcav ∈ ℝ³ (Fig. 1) represents the array of facets of the mesh that are not
demoldable by means of the cavity plate or, as described in subsequent sections, semi-
demoldable facets.
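The projection test above can be illustrated with a simplified sketch (Gauss control points omitted, helper names hypothetical): a node is shadowed if its XY projection falls inside the XY projection of a facet already classified in a higher plane.

```python
# Sketch of the visibility test used for non-vertical facets (not the authors'
# code): a point is blocked along +Z if its XY projection falls inside the XY
# projection of a facet already classified in a higher analysis plane.

def point_in_triangle_xy(p, tri):
    """Barycentric point-in-triangle test on the XY projection."""
    (x1, y1), (x2, y2), (x3, y3) = [(v[0], v[1]) for v in tri]
    px, py = p[0], p[1]
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    if det == 0:
        return False  # degenerate projection (vertical facet projects to a segment)
    a = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    b = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    c = 1.0 - a - b
    return 0.0 <= a <= 1.0 and 0.0 <= b <= 1.0 and 0.0 <= c <= 1.0

def facet_demoldable(facet, classified_facets):
    """A facet is demoldable along +Z if none of its nodes is shadowed by a
    facet already analyzed in a higher XOY plane."""
    return not any(point_in_triangle_xy(p, tri)
                   for p in facet for tri in classified_facets)

roof = [(0.0, 0.0, 2.0), (2.0, 0.0, 2.0), (0.0, 2.0, 2.0)]
below = [(0.2, 0.2, 1.0), (0.8, 0.2, 1.0), (0.2, 0.8, 1.0)]  # shadowed by roof
aside = [(5.0, 5.0, 1.0), (6.0, 5.0, 1.0), (5.0, 6.0, 1.0)]  # clear of roof
print(facet_demoldable(below, [roof]), facet_demoldable(aside, [roof]))
# False True
```

In the paper the test also projects the interior control points PGauss,i, which catches facets whose nodes are visible but whose interior is partially shadowed.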

Fig. 1. Location of the facets belonging to βf (green, demoldable facets) and to ηfcav (red, not-
demoldable).

2.2.2 Vertical facets.

Vertical facets (facets whose normal meets the geometrical condition of perpendicularity
to the parting direction Dz) are cataloged starting from the set of facets previously
analyzed and assigned as non-vertical and demoldable by means of the cavity plate. To do
so, a border contour (equation 5) is established from the facets belonging to βf ∈ ℝ³.

(5)

A vertical facet Fi ∈ ℝ³ is then demoldable if the projection of its nodes Nij
and Gauss points PGauss,i belongs to the border contour Fr(βf). If so, these facets are
stored in the array of facets demoldable by means of the cavity plate, βf ∈ ℝ³; in the
opposite case, they are stored in the array of facets not-demoldable by means of the
cavity plate or semi-demoldable, ηfcav ∈ ℝ³ (both arrays previously established).

(6)

(7)
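One concrete way to build the border contour Fr(βf) is to collect the edges that belong to exactly one facet of βf; the index-based facet representation below is an assumption for illustration, not the paper's data structure:

```python
# Sketch: with facets stored as node-index triples, an edge that appears in
# exactly one facet of the set lies on the boundary of the demoldable region.

from collections import Counter

def border_contour(facets):
    """Return the set of boundary edges (sorted node-index pairs) of a facet set."""
    edges = Counter()
    for a, b, c in facets:
        for e in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(e))] += 1  # interior edges are counted twice
    return {e for e, count in edges.items() if count == 1}

# Two triangles sharing edge (1, 2): that edge is interior, the rest are boundary.
beta_f = [(0, 1, 2), (1, 3, 2)]
print(sorted(border_contour(beta_f)))
# [(0, 1), (0, 2), (1, 3), (2, 3)]
```

A vertical facet whose projected nodes lie on these boundary edges sits on the silhouette of the demoldable region and can therefore slide out with the cavity plate.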

Fig. 2. Location of vertical facets belonging to βf and to ηfcav.

2.2.3 Reallocating demoldable facets to the core plate

Once the sweep along the parting direction in the positive sense (+Dz) is per-
formed, the algorithms defined in sections 2.2.1 and 2.2.2 are applied again, reorienting
the part in the negative sense of the parting direction (-Dz).
This yields the set of facets demoldable by means of the core plate,
which is stored in the array γf ∈ ℝ³, and the set of facets not-demoldable by
means of the core plate, which is stored in the array ηfcor ∈ ℝ³.
To unify both sweeps, a set of unification requirements for those facets with
duplicated results must be established, namely:

• Facets demoldable by means of both the cavity and core plates (duplicated re-
sults) are stored in the array γf ∈ ℝ³ (core plate) and removed from βf ∈ ℝ³
(cavity plate).
(8)

• Facets classified as demoldable by means of the core plate (second analysis, -Dz),
but which had been classified as not-demoldable by means of the cavity plate
(first analysis, +Dz), are stored in the array γf ∈ ℝ³ (core plate) and re-
moved from ηfcav ∈ ℝ³ (facets not-demoldable by means of the cavity plate or
semi-demoldable).

(9)

• Similarly, facets classified as not-demoldable by means of the core plate (second
analysis, -Dz), but which had been classified as demoldable by means of the cav-
ity plate (first analysis, +Dz), are stored in the array βf ∈ ℝ³ (cavity plate)
and removed from ηfcor ∈ ℝ³ (facets not-demoldable by means of the core plate or
semi-demoldable).

(10)
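Read schematically, the unification rules (equations 8–10) amount to set operations on the facet arrays; the sketch below uses Python sets of hypothetical facet ids, not the paper's data structures:

```python
# Schematic reading of the unification rules as set operations on facet ids.

def unify(beta_f, gamma_f, eta_cav, eta_cor):
    """Resolve facets with duplicated results from the +Dz and -Dz sweeps."""
    beta_f, gamma_f = set(beta_f), set(gamma_f)
    eta_cav, eta_cor = set(eta_cav), set(eta_cor)
    # Rule 1: demoldable by both plates -> keep in the core-plate array only.
    beta_f -= gamma_f
    # Rule 2: demoldable by core but not by cavity -> no longer counted as
    # 'not-demoldable by cavity'.
    eta_cav -= gamma_f
    # Rule 3: demoldable by cavity but not by core -> no longer counted as
    # 'not-demoldable by core'.
    eta_cor -= beta_f
    return beta_f, gamma_f, eta_cav, eta_cor

b, g, ecav, ecor = unify(beta_f={1, 2, 3}, gamma_f={3, 4},
                         eta_cav={4, 5}, eta_cor={1, 5})
print(sorted(b), sorted(g), sorted(ecav), sorted(ecor))
# [1, 2] [3, 4] [5] [5]
```

After unification, facet 5 remains in both η arrays, i.e. it is a candidate for the semi-demoldable or side-core analysis of section 2.3.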

Fig. 3. Demoldability analysis along +Dz and -Dz. Unification of results, boundary conditions.

2.3 Reallocation algorithm for facets not-demoldable by means of lateral slides or not-demoldable undercuts

This section describes the algorithm for the reclassification of the facets Fi ∈ [ηfcav
∪ ηfcor]. As shown in Fig. 4, the domain of this set of facets can be divided, creating new
virtual polygonal facets. The automatic division of these facets allows evaluating in-
ner regions thereof, which can be demoldable or not depending on the presence of
overlap between these facets and the facets defined above as demoldable. By
means of a comparative facet-to-facet process, it can be determined whether not-
demoldable or semi-demoldable facets ([ηfcav ∪ ηfcor]) are entirely or partially over-
lapped by demoldable facets ([βf ∪ γf]). To check for overlap between a pair of
facets, both facets are projected onto a plane perpendicular to the parting direction,
and a Boolean logic operation checks whether there is contact between both fac-
ets. A facet Fi ∈ [ηfcav ∪ ηfcor] is semi-demoldable if it meets the condition that its
nodes have a z coordinate along the parting direction below the z coordinates of
the nodes of the reference facet (belonging to [βf ∪ γf]) and the intersection be-
tween these facets is not empty. Otherwise, it is reclassified as demoldable (be-
longing to [βf ∪ γf]).

(11)

Where δf ∈ ℝ³ represents the set of all semi-demoldable facets and Fref ∈ ℝ³ repre-
sents a reference facet used to check for overlap. Once the semi-demoldable facets are
defined, they are fragmented, finding for each one the region demoldable by means of
the upper or lower cavity and the not-demoldable region. The division of semi-
demoldable facets is performed by applying a methodology of subtraction and in-
tersection between each of the semi-demoldable facets and the closed set of refer-
ence facets.

(12)
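The overlap check between projected facets can be sketched with a separating-axis test, which decides triangle-triangle intersection exactly in 2D; this is an illustrative stand-in for the Boolean operation mentioned above, not the authors' implementation:

```python
# Sketch of the overlap check between a candidate facet and a reference facet,
# both projected onto the XY plane (perpendicular to the parting direction).
# Triangles are convex, so a separating-axis test decides intersection exactly.

def _axes(tri):
    """Yield an edge normal for every edge of the polygon."""
    n = len(tri)
    for i in range(n):
        x1, y1 = tri[i]
        x2, y2 = tri[(i + 1) % n]
        yield (-(y2 - y1), x2 - x1)

def _interval(tri, axis):
    """Project all vertices onto the axis and return the covered interval."""
    dots = [p[0] * axis[0] + p[1] * axis[1] for p in tri]
    return min(dots), max(dots)

def triangles_overlap_xy(tri_a, tri_b):
    """True if the XY projections of two triangles intersect (touching counts)."""
    for axis in list(_axes(tri_a)) + list(_axes(tri_b)):
        amin, amax = _interval(tri_a, axis)
        bmin, bmax = _interval(tri_b, axis)
        if amax < bmin or bmax < amin:
            return False  # found a separating axis: no contact
    return True

t1 = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
t2 = [(1.0, 1.0), (3.0, 1.0), (1.0, 3.0)]   # touches t1 at (1, 1)
t3 = [(5.0, 5.0), (6.0, 5.0), (5.0, 6.0)]   # disjoint from t1
print(triangles_overlap_xy(t1, t2), triangles_overlap_xy(t1, t3))
# True False
```

Running this test for a candidate facet against every reference facet in [βf ∪ γf], combined with the z-coordinate comparison, decides whether the facet is semi-demoldable.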

Fig. 4. Example of resolution of semi-demoldable facets. Boolean operation.

Finally, the set of facets classified as not-demoldable is analyzed again to check
their demoldability by means of applying side cores. Thus, the part is reoriented by
turning 90° around the X axis and then around the Y axis (checking the
demoldability along new parting directions, D'z, perpendicular to the main parting direc-
tion), Fig. 5. For each turn, the algorithms presented in the previous sections are run,
excluding those facets already classified as demoldable in a previous phase.

Fig. 5. Side Core.


126 J.M. Mercado-Colmenero et al.

3 Implementation and Results

In order to validate this new methodology of automated demoldability analysis,
three cases of plastic injection parts were analyzed. All analyses were performed
with the same mesh precision (angle and deviation). The algorithm has been
implemented in the numerical computing software Matlab R2013a®.
In contrast to other methods, this algorithm has the advantage of
being adaptable to other programming languages, and its application extends to any
type of surface mesh. The results of the algorithm are presented below; as shown,
the proposed cases are grouped according to their degree of demoldability into:
demoldable, demoldable via side core and non-demoldable.
First, Case A (Fig. 6) is completely demoldable. Thus, all facets which are part
of the mesh of the plastic part are demoldable in the parting direction Dz:=Z. As
shown in Fig. 6, Case A is composed (Table 1) of 406 facets demoldable
through the cavity plate and 554 facets demoldable through the core plate. Therefore, its
manufacture is trivial and does not require any slide mechanism.
Then, Case B (Fig. 6) is demoldable by using side cores. In contrast to the
previous case, this plastic part requires a side core for manufacturing, which, as
shown in Fig. 6, is defined in the direction DSide:=Y, perpendicular to the parting
direction. Case B is composed (Table 1) of 120 facets demoldable through the cavity
plate, 358 facets demoldable through the core plate and 50 facets demoldable
through a slide mechanism or side core.
Finally, Case C is non-demoldable. Case C has (Table 1) 2428 fac-
ets demoldable through the main parting direction Dz:=Z. Like Case B, it possesses 80
facets which require a sliding mechanism in order to be demoldable in the side
direction DSide:=X (perpendicular to the parting direction). Nevertheless, as shown
in Fig. 6, the core of the plastic part is not demoldable in any direction, so 518
facets are categorized as non-demoldable. This implies either the need to modify
the geometry of the plastic part in order to make it demoldable, or the impossibility of
manufacturing it by the technique of plastic injection molding.

Table 1. Demoldability result for the plastic parts A, B and C.

Case Studies | Parting Direction | Cavity Facets | Core Facets | Side Core Facets | Side Core Direction | Non-Demold. Facets | Manufacturable
A | Z | 406 | 554 | - | - | - | Yes
B | Z | 120 | 358 | 50 | Y | - | Yes, through side core
C | Z | 1214 | 1214 | 80 | X | 518 | No
An automated manufacturing analysis of plastic … 127

Fig. 6. Demoldability result for the plastic parts A, B and C.

4 Conclusions
In this paper a new methodology for analyzing the demoldability of a plastic
part for a given parting direction is proposed. The method performs a discrete
analysis of the geometry of the plastic part, examining the demoldability of the
facets and nodes belonging to the mesh. The developed algorithm uses as input
the discretized surface of the plastic part and generates, after analyzing it, a
new virtual geometry that incorporates information about the manufacture of the
plastic part.
The algorithm detects facets that are demoldable through the cavity and core
plates and facets that are non-demoldable. In the latter case, the demoldability
of those facets is evaluated in a direction perpendicular to the parting
direction, making it possible to define the geometry and direction of side
cores. Finally, the designer of the plastic part can adapt and modify the
geometry of the regions that are catalogued as non-demoldable. This reduces the
time and costs associated with the initial phases of design and manufacturing of
the injection mold.
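The facet-level classification described above can be sketched as follows. This is only an illustration, not the authors' implementation: the helper `classify_facet`, its tolerance, and the `occluded` flag (standing in for the paper's global facet/node visibility analysis, which is not shown) are all assumptions.

```python
import numpy as np

def classify_facet(normal, d_main, d_side, occluded=False, tol=1e-6):
    """Classify one mesh facet by demoldability (illustrative sketch).

    A facet is locally moldable by the mold half whose demolding direction
    has a non-negative component along the facet normal; `occluded` stands
    in for the global visibility check (not shown here) that detects
    undercuts along the main parting direction.
    """
    n = np.asarray(normal, dtype=float)
    if not occluded:
        # Demoldable along the main parting direction: assign a mold half.
        return "cavity" if np.dot(n, d_main) >= -tol else "core"
    # Blocked along the main direction: try the perpendicular side core.
    if abs(np.dot(n, d_side)) > tol:
        return "side_core"
    return "non_demoldable"

# Parting direction Dz := Z and side direction Y, as in Case B.
Dz, Dy = np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])
print(classify_facet([0.0, 0.0, 1.0], Dz, Dy))                 # cavity
print(classify_facet([0.0, 1.0, 0.0], Dz, Dy, occluded=True))  # side_core
```

Counting the facets falling in each category over the whole mesh would yield per-plate totals of the kind reported in Table 1.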
The proposed method improves on other methods developed so far since it performs
the demoldability analysis independently of the CAD modeler, is valid for any
plastic part geometry, and does not need access to internal information of the
part. The geometry of the solid remains stored in arrays for later use in other
CAD/CAE applications related to injection mold design, machining of cavity and
core plates, etc.
Additionally, future work includes the implementation of the proposed algorithm
in an automated mold design system.

Acknowledgments This work has been supported by the Consejería de Economía, Ciencia y
Empleo (Junta de Andalucía, Spain) through the project entitled "A vertical design software for
integrating operations of automated demoldability, tooling design and cost estimation in injection
molded plastic parts (CELERMOLD)" (Project Code TI-12 TIC-1623).
128 J.M. Mercado-Colmenero et al.

References
1. Chen L.L., Woo T.C. Computational geometry on the sphere with application to automated
machining. ASME Transactions, Journal of Mechanical Design 114, 288-295.
2. Weinstein M., Manoochehri S. Optimal parting direction of molded and cast parts for
manufacturability. Journal of Manufacturing Systems 1997; 16(1): 1-12.
3. Fu M.W., Fuh J.Y.H., Nee A.Y.C. Undercut feature recognition in an injection mould design
system. Computer Aided Design 1999; 31(12): 777-790.
4. Fu M.W., Fuh J.Y.H., Nee A.Y.C. Generation of optimal parting direction based on undercut
features in injection molded parts. IIE Transactions 1999; 31: 947-955.
5. Fu M.W., Nee A.Y.C., Fuh J.Y.H. The application of surface visibility and moldability to
parting line generation. Computer Aided Design 2002; 34(6): 469-480.
6. Fu M.W., Nee A.Y.C., Fuh J.Y.H. A core and cavity generation method in injection mold
design. International Journal of Production Research 2001; 39: 121-138.
7. Fu M.W. The application of surface demoldability and moldability to side core design in die
and mold CAD. Computer Aided Design 2008; 40(5): 567-575.
8. Yin Z.P., Ding H., Xiong Y.-L. Virtual prototyping of mold design: geometric mouldability
analysis for near-net-shape manufactured parts by feature recognition and geometric reasoning.
Computer Aided Design 2001; 33(2): 137-154.
9. Ye X.G., Fuh J.Y.H., Lee K.S. A hybrid method for recognition of undercut features from
moulded parts. Computer Aided Design 2001; 33(14): 1023-1034.
10. Ye X.G., Fuh J.Y.H., Lee K.S. Automatic undercut feature recognition for side core design
of injection molds. Journal of Mechanical Design 2004; 126: 519-526.
11. Singh R., Madan J., Kumar R. Automated identification of complex undercut features for
side core design for die casting parts. Journal of Engineering Manufacture 2014; 228(9):
1138-1152.
12. Nee A.Y.C., Fu M.W., Fuh J.Y.H., Lee K.S., Zhang Y.F. Determination of optimal parting
direction in plastic injection mould design. Annals of the CIRP 1997; 46(1): 429-432.
13. Nee A.Y.C., Fu M.W., Fuh J.Y.H., Lee K.S., Zhang Y.F. Automatic determination of 3D
parting lines and surfaces in plastic injection mould design. Annals of the CIRP 1998; 47(1):
95-99.
14. Huang J., Gupta S.K., Stoppel K. Generating sacrificial multi-piece molds using
accessibility driven spatial partitioning. Computer Aided Design 2003; 35(3): 1147-1160.
15. Priyadarshi A.K., Gupta S.K. Geometric algorithms for automated design of multi-piece
permanent molds. Computer Aided Design 2004; 36(3): 241-260.
16. Rubio M.A., Pérez J.M., Rios J. A procedure for plastic parts demoldability analysis.
Robotics and Computer Integrated Manufacturing 2006; 22(1): 81-92.
17. Martin Doñate C., Rubio Paramio M.A. New methodology for demoldability analysis based
on volume discretization algorithms. Computer Aided Design 2013; 45(2): 229-240.
18. Martin Doñate C., Rubio Paramio M.A., Mesa Villar A. Método de validación automatizada
de la fabricabilidad de diseños de objetos tridimensionales en base a su geometría [Automated
method for validating the manufacturability of three-dimensional object designs based on their
geometry]. Patent number ES 2512940.
19. Khardekar R., McMains S. Finding mold removal directions using graphics hardware. In:
ACM Workshop on General Purpose Computing on Graphics Processors; 2004, pp. C-19
(abstract).
Applying sustainability in product development

Rosana Sanz*, José Luis Santolaya, Enrique Lacasa
Department of Design and Manufacturing Engineering, EINA, University of Zaragoza,
C/ Maria de Luna 3, Zaragoza 50018, Spain

* Corresponding author. Tel.: +34-976-761-900; fax: +34-976-762-235. E-mail address:
rsanz@unizar.es

Abstract

Sustainable product development initiatives have been evolving for some time to
support companies in improving the efficiency of current production and the
design of new products and services through supply chain management. This work
aims at integrating environmental criteria in product development projects while
traditional product criteria are still fulfilled. The manufacturing process of
an airbrush was studied. Different strategies were applied: optimizing raw
materials and energy consumption along the manufacturing operations, identifying
the product components that could be modified according to a DFA analysis,
evaluating the recyclability rate of the materials making up the product, and
identifying those materials with the highest environmental impact. An approach
based on two main strategies, optimization of materials and optimization of
processes, is proposed for use by engineering designers as a progressive
education in eco-design practice.

Keywords: Sustainability; product development; design guidelines;

1 Introduction

The progress toward sustainability implies maintaining, and preferably
improving, both human and ecosystem well-being [1]. Achieving sustainable
development in industry will require changes in organizational models and
production processes in order to balance the efficiency of its operations with
its responsibilities for environmental and social actions [2].
Driven by stimuli such as the opportunities for innovation, the expected
increase of product quality and the potential market opportunities [3],
sustainable product development initiatives have been evolving for some time to
support companies

© Springer International Publishing AG 2017 129


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_14

in improving the design of new products and services through supply chain
management.
Several authors have contributed to the development of methods and tools
considering environmental criteria in the same way as conventional design
criteria through an Eco-design approach. Using Eco-design, or Design for the
Environment (DfE), all environmental impacts of a product are addressed
throughout its complete life cycle, without unduly compromising other criteria
and specifications such as function, quality, cost and appearance. As shown in
Fig. 1, a whole product system life cycle includes five different stages:
obtaining of materials, production process, distribution, use and final
disposal.

Fig. 1. Stages of the product life cycle.

Eco-design integrates the Design for X (DfX) strategies of all life span phases
into one [4]. It can benefit from techniques such as design for disassembly,
design for end-of-life and design for recycling. This methodology is inspired by
concurrent engineering and integrated design, which imply the incorporation of
downstream factors, such as manufacturing, assembly, maintenance and
end-of-life, at the very beginning of the design project [5].
Specific tools for eco-design can be classified into environmental assessment
tools and environmental improvement tools. Environmental assessment tools are
generally based on a life cycle assessment (LCA) method. The well-known
structure of goal definition and scoping, inventory analysis, impact assessment
and interpretation was developed during the harmonization-standardization work
by SETAC and ISO 14040 [6, 7]. Environmental impact is usually expressed by
means of indicators based on LCA evaluation methods. On the other hand,
environmental improvement tools provide guidelines and rules for helping
designers to identify potential actions to improve the environmental performance
of products. Brezet and van Hemel [8] developed the Life Cycle Design Strategies
(LiDS) Wheel, which identifies different strategies to achieve sustainability
around the product life cycle. The LiDS Wheel can be used to estimate the
environmental profile of an existing product or to evaluate the action plan for
a new product.
This work focuses on the production stage of the product life cycle. The
methodology applied and the results obtained for a case study are shown in the
following sections.

2 Methodology

In order to achieve a more sustainable product, the following operative method


is proposed (Fig. 2):

Fig. 2. Methodology for a more sustainable product development.

The identification, classification and proper characterization of the different
product components are a required preliminary task. The study of the production
process covers all operations needed for the manufacture, assembly and finishing
of each product component.
A set of indicators is used to assess the sustainability of the production
process. These are the global warming (GW) indicator, which represents the mass
of CO2 emitted to the atmosphere, the energy consumption and the percentage of
material removed. The EuP Eco-profiler tool is proposed to evaluate the global
warming indicator. The database and calculation methodology of this tool are
defined in the MEEuP methodology [9]. Input data take into account the mass of
each material making up the product and the energy consumption along the
manufacturing process. Output data are different eco-indicators. Energy
consumption and waste percentage are calculated by means of the elementary flows
exchanged by the industrial installation.
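In essence, this mass-and-energy based evaluation reduces to a weighted sum. The sketch below illustrates the idea; the emission factors and input masses are invented placeholders, not values from the MEEuP/EuP Eco-profiler database.

```python
# Illustrative emission factors (kg CO2 per kg of material, and per kWh of
# electricity). Real values come from the MEEuP database; treat these
# numbers as placeholders only.
MATERIAL_EF = {"steel": 2.7, "brass": 2.4, "ptfe": 9.0, "chromium": 40.0}
GRID_EF = 0.4  # kg CO2 per kWh of process electricity

def global_warming(masses_kg, energy_kwh):
    """GW indicator: CO2 embodied in the raw materials plus process energy."""
    material_co2 = sum(MATERIAL_EF[m] * kg for m, kg in masses_kg.items())
    return material_co2 + GRID_EF * energy_kwh

# Raw material masses for one part (kg) and its machining energy (kWh).
gw = global_warming({"steel": 0.360, "chromium": 0.022}, energy_kwh=1.5)
```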

Furthermore, a design for assembly (DFA) indicator is obtained. The most common
methods are based on measuring the ease or difficulty with which parts can be
handled and assembled together into a given product. An analytical
design-for-assembly procedure is followed in which the problems associated with
the component design are detected and quantitatively assessed [10]. The process
of manual assembly is divided into two separate areas: handling (acquiring,
orienting and moving a part) and insertion-fastening (mating a part to another
part or group of parts). The result of this analysis is a DFA indicator, which
is obtained by dividing the theoretical minimum assembly time by the actual
assembly time.
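This ratio can be sketched as follows; the 3 s ideal handling-plus-insertion time per part is the conventional Boothroyd value from [10], and the numbers in the example are invented for illustration, not the paper's data.

```python
def dfa_index(n_min, t_actual_s, t_ideal_per_part_s=3.0):
    """DFA indicator: theoretical minimum assembly time over actual time.

    `n_min` is the theoretical minimum number of parts; the ideal per-part
    time (3 s by default) is the conventional Boothroyd value.
    """
    return n_min * t_ideal_per_part_s / t_actual_s

# e.g. 12 essential parts and an estimated 120 s of actual assembly time
efficiency = dfa_index(12, 120.0)  # 0.3
```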
On the other hand, recyclability was analyzed by means of the indicator that
represents the percentage of material that can be recovered by manual separation
or trituration. Recyclability can be calculated once the following aspects are
known: the material type and mass of each component of the product and the rate
of recyclability (RCR) of each material [11].
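The mass-weighted calculation can be sketched as follows; the component names, masses and RCR values below are invented for illustration, not the airbrush's actual data.

```python
def recyclability(components):
    """Mass-weighted recyclability rate: sum(m_i * RCR_i) / sum(m_i).

    `components` maps a component name to (mass_g, rcr), where rcr is the
    rate of recyclability of the component's material, between 0 and 1.
    """
    total_mass = sum(mass for mass, _ in components.values())
    recovered = sum(mass * rcr for mass, rcr in components.values())
    return recovered / total_mass

# Invented example: a steel body (highly recyclable) and a PTFE washer.
parts = {"body_steel": (48.0, 0.95), "washer_ptfe": (0.023, 0.0)}
rate = recyclability(parts)
```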
The last stage of the operative method is the product redesign. Strategies such
as the reduction of materials, the selection of low-impact and recyclable
materials, and the easy insertion, manipulation and assembly of components are
proposed, while the design specifications are preserved.

3 Case study

The product studied in this work is a professional dual-action airbrush
(depressing the trigger delivers air and drawing back on the trigger releases
paint). The paint is drawn in from a reservoir mounted on top of the airbrush
(gravity feed) and it is atomized outside the airbrush tip. The components of
this mechanism are shown in Fig. 3.

Fig. 3. Airbrush components.

The materials used to manufacture the airbrush essentially include stainless
steel, brass, Teflon and chromium. The mass of each one and the resulting GW
indicator are indicated in Table 1.
It should be noted that each material contributes differently to the global
warming indicator. Chromium, which is used in the surface finishing process,
represents only 4.3% of the product mass but accounts for 48.4% of the GW.

Table 1. Distribution of mass and environmental impact.

Materials | Mass (g) | GW (kg CO2) | Raw materials (g)
Steel (AISI 304) | 160 | 0.97 | 360
Brass (CW 614N) | 4.17 | 0.01 | 10
Teflon (PTFE) | 0.023 | 0.001 | 0.07
Chromium | 7.4 | 0.92 | 22
Total | 171.5 | 1.9 | 392.1

The study of the manufacturing process reveals that the material removed in
drilling and machining operations is a high percentage of the raw materials
acquired (Table 1). According to the previous methodology, manufacturing,
assembly and finishing operations were reviewed in order to propose a more
sustainable product development. The following sustainability strategies were
applied: reducing the amount of material removed, reducing the number of parts,
changing materials and changing the surface finishing process.
Some changes in the raw materials selected for each component of the airbrush
were carried out. The use of calibrated bars and tubes was proposed. Thus, the
waste percentage was reduced and several operations, such as drilling and
turning processes, were also avoided. Results are shown in Fig. 4, where the
following information for both the initial design, Di, and the redesign
alternative, A, is given for some components: size of raw materials,
manufacturing operations that were simplified for each alternative, energy
consumption and amount of material removed along the manufacturing process. In
the case of the first component (needle cup), the operations of drilling and
contour turning were eliminated by the proper selection of the raw material
size. Consequently, significant reductions in material removed (24.8%) and
energy consumption (18.2%) were achieved.
The sequence of operations required to assemble each part of the airbrush in
terms of alignment, insertion and manipulation was studied in detail. Each act
of retrieving, handling, and mating a component is called an assembly operation.
This analysis is shown in Fig. 5. Column 1 shows the part identification (sorted
by assembly steps) and column 2 identifies the number of times the operation is
carried out consecutively. The rest of the columns correspond to the two
separate areas, handling and insertion-fastening, which provide two manual codes
and their corresponding time per part in order to obtain operation time and
costs. This coding can be found in the time estimation tables [10]. The last
column identifies, with two possible values (0 = avoidable, 1 = essential), the
theoretical minimum number of parts: in the ideal situation, separate parts
could be combined into one unless, as each part is added to the assembly, the
part must be of a different material, must be isolated from all other assembled
parts, or must be separated from all other assembled parts to perform the
assembly of parts that meet one of the above criteria. The DFA indicator can
then be analyzed when changes are carried out on the product design, both as a
comparative method and as a tool to identify which components could be modified
or redesigned to optimize the product life cycle.

Airbrush component (material), machining operations removed (mm) | Di/A: raw material size (mm), energy (W·h), material removed (g)
1. Needle cup (AISI 304): Di Ø8x6.2, Drilling (4), 0.39, 2 | A Ø7x1.5x6.2, Contour turning (0.75), 0.26, 0.6
2. Nozzle body (AISI 304), Contour turning (0.5): Di Ø10x9.8, 0.76, 5 | A Ø9x9.8, 0.64, 4
5. Needle (AISI 304), Facing (0.8): Di Ø2x131.7, 0.26, 2 | A1 Ø2x130.9, 0.26, 2
6. Packing washer (PTFE), Facing (0.5): Di Ø4x2.5, 0.03, 0.05 | A Ø3x2.5, 0.004, 0.02
8. Reservoir cup (AISI 304), Contour turning (0.5): Di Ø28x8, 3.22, 30 | A Ø27x8, 2.94, 27
9. Trigger (AISI 304), Contour turning (0.5): Di Ø12x17.7, 5.29, 15 | A Ø11x17.7, 4.95, 13
11. Sleeve limit (CW614N): Di Ø10x5.7, Facing (0.3), 0.18, 3 | A Ø10x2.5x5.4, Drilling (5), 0.14, 1
12. Spring shaft (AISI 304), Contour turning (0.5): Di Ø5.5x42.7, 0.66, 5 | A Ø5x42.7, 0.49, 4
14. Needle sleeve (AISI 304): Di Ø10x19.7, Facing (0.5), 0.5, 4 | A Ø10x4x19.2, Drilling (2), 0.42, 4
15. Needle fitting (AISI 304), Facing (0.3): Di Ø7x11.2, 0.36, 2 | A Ø7x10.9, 0.35, 2
16. Handle (AISI 304): Di Ø13x59.7, Facing (1); Drilling (4), 6.03, 44 | A Ø12x4x58.7, Contour turning (0.5), 4.48, 29
17. Fitting screw (AISI 304): Di Ø9x36.7, Facing (0.8), 1.67, 13 | A Ø8x35.9, Contour turning (0.5), 1.25, 9
21. Valve body (AISI 304): Di Ø11x21.7, Facing (0.8), 1.45, 11 | A Ø10x20.9, Contour turning (0.5), 1.1, 8
23. Plunger valve (CW614N), Facing (0.3): Di Ø4x21.7, 0.12, 1 | A Ø4x21.4, 0.12, 1
26. Nut (AISI 304): Di Ø11x10.7, Facing (0.5), 0.8, 6 | A Ø11x2x10.2, Drilling (7), 0.4, 3
29. Body (AISI 304): Di Ø13x82.7, Facing (0.4), 7.92, 48 | A Ø12x82.3, Contour turning (0.5), 6.44, 36

Fig. 4. Airbrush components. Reduction of the amount of material removed.

The percentages of recyclability of the product according to the end-of-life
treatment were estimated, as shown in Fig. 6. This analysis reveals that 95%
can be recovered by manual separation, which is always more thorough than
trituration. The majority of components are made of steel, which presents a high
RCR (rate of recyclability), so no material changes are proposed. To assess
whether or not the manual separation process is worthwhile, the economic value
of the recovered materials can be compared with the operator cost incurred to
perform the separation.
Finally, it was proposed to substitute the chromed layer with a polishing
process of the stainless steel components. The product specifications were
practically unmodified because a high corrosion resistance was preserved. A
substantial reduction of 48% could be obtained in the GW indicator.

Fig. 5. Airbrush components. DFA analysis.



Fig. 6. Airbrush components. Percentage of recyclability.

4 Conclusions

This work aims at integrating sustainability in product development projects.
The manufacturing, assembly and finishing processes of a case study, an
airbrush, were analyzed in detail and different strategies were applied.
First, the optimization of raw materials allowed the energy consumption to be
reduced by 18.2% and the amount of material removed along the manufacturing
process by 24.8%. Next, the DFA method was used to identify those components
that are likely to be modified and to detect which of them might cause problems
at some point of product life-cycle management. It allowed a comparative
analysis and provided an estimation of how much easier it was to mount a design
with certain characteristics than another design with different features. The
recyclability analysis of the product identified the percentage of material that
could be recovered and estimated its future value, for a more effective final
phase of product life-cycle management. In this case, the materials were
preserved because they presented a high RCR. Finally, the chromed layer applied
in the finishing process of the airbrush showed a relatively high environmental
impact. Thus, it was proposed to substitute it with a polishing process.

Acknowledgments The research work reported here was made possible by the work developed
on the Advanced Product Design programme (Master in Product Design Engineering) at the
University of Zaragoza.

References

1. UNCED, Agenda 21. United Nations Conference on Environment and Development, Rio de
Janeiro, June 1992.
2. Garner A. and Keoleian G.A. Industrial ecology: an introduction. University of Michigan's
National Pollution Prevention Center for Higher Education, Ann Arbor, MI, 1995.
3. Van Hemel C. and Cramer J. Barriers and stimuli for ecodesign in SMEs. Journal of Cleaner
Production, 2002, 10, 439-453.
4. Holt R. and Barnes C. Towards an integral approach to 'Design for X': an agenda for
decision-based DFX research. Research in Engineering Design, 2010, 21(2), 123-126.
5. Boothroyd G., Dewhurst P., Knight W.A. Product design for manufacture and assembly (3rd
ed.), 2011. Florida, USA: CRC Press, Taylor and Francis Group.
6. ISO, 2006a, 2006b. ISO 14040 International Standard. In: Environmental management - Life
cycle assessment - Principles and framework. Requirements and guidelines. International
Organization for Standardization, Geneva, Switzerland.
8. Brezet J.C. and Van Hemel C.G. Ecodesign: a promising approach to sustainable production
and consumption, 1997. UNEP, United Nations Publications, Paris.
9. Kemna R., van Elburg M., Li W., van Holsteijn R. MEEuP Methodology Report, 2005.
10. Boothroyd G. Product design for manufacture and assembly, 1994. Marcel Dekker, New York.
11. IEC/TR 62635, Guidelines for end-of-life information provided by manufacturers and
recyclers and for recyclability rate calculation of electrical and electronic equipment, 2012.
Towards a new collaborative framework
supporting the design process of industrial
Product Service Systems
Elaheh Maleki*, Farouk Belkadi, Yicha Zhang, Alain Bernard
IRCCYN-Ecole Centrale de Nantes, 1 rue de la Noë, BP 92101, 44321 Nantes Cedex 03, France
* Corresponding author. Tel.: +33-240-376-925; fax: +33-240-376-930. E-mail address:
Elaheh.Maleki@irccyn.ec-nantes.fr

Abstract The main idea of this paper is to present a collaborative framework for
the PSS development process. Focused on the engineering phase, this research
uses modular ontologies to support the management of the interfaces between the
various types of engineering knowledge involved in the PSS development process.
The supporting platform is developed as part of a collaborative framework that
aims to manage the whole PSS lifecycle.

Keywords: Product-Service System, PSS Design, Collaborative Platform, Knowledge
repository

1. Introduction

The Product-Service System (PSS) was presented in 1999 as a promising solution
for "sustainable economic growth" in the face of the hard competitiveness of
challenging markets [1]. Afterward, numerous economic, social, technological and
environmental incentives for PSS adoption have been discussed by different
researchers [2, 3, 4]. Being the most "feasible dematerialization strategy" [5],
PSS has been the subject of several works and innovations to support the
above-mentioned incentives.
To move towards the adoption of the PSS business model, industries need to
create a new system of solution provision [6] by rethinking their current design
and production processes as well as their business relationships with both
customers and the supply chain. The interdisciplinary nature of this new
phenomenon increases the number of disciplines involved in the development
process [7] and implies the need for robust coordination and collaboration
efforts [8]. These efforts should be able to provide proper communication
interfaces and facilitate knowledge sharing among product, sensor and service
experts [9].

© Springer International Publishing AG 2017 139


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_15

The success of the collaborative process is strongly linked to the need to share
knowledge between actors to ensure a common representation of the problem to be
solved. This representation is an integration of a set of knowledge fragments
created separately according to each expert's skills and point of view on the
problem. The role of collaborative tools is to ensure the consistency of
interconnected data and knowledge created by various activities and managed by
several information systems, including legacy CAx tools. During the last decades
several Computer Supported Collaborative Work (CSCW) frameworks have been
developed with the aim of assisting actors in their design activity [10].
Although the PSS design process exploits classical CAD tools (mechanical,
software, etc.), the current collaborative tools fail to consider the specific
integration constraints and activities of the PSS development process [11]. In
this context, developing a collaborative framework to support the whole
lifecycle of industrial PSS is crucial.
With the above target, the purpose of this paper is to propose the main
foundations of a collaborative framework to support the PSS engineering design
process in product-oriented PSS. To this end, a literature review is presented
in the next section to clarify the PSS concept. The third section of the paper
discusses the development process of PSS as well as the main functions to be
ensured by an ideal collaborative framework supporting the PSS design process.
The last section describes our framework for PSS semantic modeling and the
global structure of the knowledge repository supporting the proposed platform.

2. PSS definition and characteristics

Regardless of the different vocabularies used to describe PSS [8], there are
some common elements of PSS in the literature. The first general definition of
PSS was given by Goedkoop et al. [1] in 1999. Vasantha et al. [12] reviewed the
different definitions of PSS used in different methodologies and concluded that
"PSS development should focus on integrating business models, products and
services together throughout the lifecycle stages, creating innovative value
addition for the system."
Meier et al. [3] characterized Industrial PSS by "the integrated and mutually
determined planning, development, provision and use of product and service
shares including its immanent software components in Business-to-Business
applications and represents a knowledge-intensive socio-technical system".
Baines et al. [13] describe PSS as the convergence of the "servitization" of
product and the "productization" of service, while Tukker's typology [2], the
"most accepted classification" in the literature [14], made a distinction
between three main categories of PSS: "Product-Oriented Services, Use-Oriented
Services, and Result-Oriented Services". Moderating the previous models,
Adrodegari et al. [14] proposed a new form of PSS typology that relies on
ownership concepts and the "building blocks of the business model framework":
Ownership-Oriented (Product-Focused, Product and Processes-Focused) and
Service-Oriented (Access-Focused, Use-Focused, Outcome-Focused). There are
numerous PSS development methods which focus on the integrated lifecycle
management of product and service in PSS [16, 17, 18].
Inspired by the various definitions of PSS in the literature [19, 2, 11, 12, 4],
the PSS concept is considered in this work as "a system of value co-creation
based on technical interfaces between product and service components as well as
collaborative interactions between the involved actors". Based on this
definition, the PSS development process is highly interactive and dependent on
supportive collaborative infrastructures.

3. Towards a collaborative framework for PSS design support

Design is a complex iterative process that aims to progressively define a
complete, robust, optimal and efficient solution to answer a set of
heterogeneous requirements provided by various stakeholders. The classical
product design process starts with the identification of product functions for
each requirement, followed by the identification of principles of solution and
types of components for each function, and ends with the detailed definition of
the features and interfaces of product components.
An industrial PSS design process could follow the same main steps as the above
process for the identification of the physical components necessary for the
achievement of product functions and service shares. But this is not enough;
indeed, the outcomes of the PSS design process are more complex and concern the
detailed definition of additional components and features as well as the
technical solutions implementing the links between product and service
components (Fig. 1).
Given the positive impact of ICT tools on PSS performance [21], collaborative
frameworks supporting a specific part of, or the whole of, the PSS lifecycle
activities are considered a big challenge for the factory of the future.
Building an integrated collaborative framework to manage the whole PSS lifecycle
requires the integration of different knowledge and expertise points of view,
such as those of customers, engineers and production.
Service definition requires the identification of all information that has to be
managed for the realization and exploitation of the PSS. The service features
concern the identification of all material and human resources requested in the
PSS usage stage regarding resource availability and working environment
constraints. These resources are necessary to maintain permanent relationships
between the customer and the PSS provider during the whole contractual
transaction after delivering the PSS. This is one of the main differences
between product-based and PSS-based business models. The product features
concern the identification of physical components, considered as a specific
category of material resource connected to some product components. The physical
components can be sensors needed to support the collection of real-time service
data or additional equipment for communication between service resources and
smart components of the product.

[Figure: customer needs, product functions, product constraints, production
constraints, environment constraints, service type, resource availability and
usage constraints feed the PSS design process, whose outputs are the product
features, the product-service links, the service features and the service
resources.]
Fig. 1. The PSS Design process

3.1. From PSS design process to PSS design support framework

There is a breadth of related research on modular product-service development
methods which focus on the modular engineering of product, service, actors and
ICT infrastructure in PSS [19]. The knowledge and data required for the
integrated solution formed by the components of a supportive platform should be
managed in a common repository, structured according to a set of modular
ontologies covering all PSS aspects.
According to Tukker et al. [15], "Companies use formal or informal approach to
the PSS development and they also use their own tools and procedures. The
companies which are active more in product prefer to develop service in
accordance with the product development". Proposing computer-supported work
facilities is therefore a crucial task to improve and harmonize the current
development practices.
This paper focuses on the design support system that manages the multi-
disciplinary engineering process of product-oriented PSS and the related models.
Given this multi-disciplinary nature of the PSS engineering process, the design
support system should assist collaboration between four main actors:
1) the project leader, whose role is to fix the PSS project objectives and
validate the final result against a set of pre-defined requirements;
2) sensor engineers, in charge of the creation and management of sensor data;
3) mechanical engineers, in charge of the creation and management of product
data with legacy CAD tools and, respectively, the collaborative framework;
4) PSS engineers, in charge of defining the new PSS solution as a combination
of pre-defined product components and sensors. They will interact with the
mechanical and sensor engineers through the collaborative platform to fix
the final integration solution of the PSS.
Towards a new collaborative framework … 143

The minimum, though not comprehensive, set of functions to
be considered in the collaborative framework is:
1) Service definition facility, handling the creation of service features (information, resources, sensors, etc.).
2) Sensor management, supporting the declaration of sensor data and the search
for the optimal sensor for the defined service.
3) Integration solution configurator, supporting the creation and evaluation of
physical links between pre-defined product and service components.
4) PSS lifecycle modeler, for the classification and analysis of the different PSS
working situations. This helps the PSS engineer make decisions
about the best sensor and the optimal integration solution.
5) CAx tools connection, to support the management of CAD files and the
generation of a light 3D representation of the PSS structure.
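The sensor-management function (point 2 above) could, for instance, be prototyped as a simple filter-and-rank routine. The sensor records, indicator names and scoring weights below are illustrative assumptions, not part of the actual framework specification:

```python
# Illustrative sketch of the "sensor management" function: filter a sensor
# catalogue by the quantities a service needs, then rank candidates by cost
# and accuracy. All records and weights are hypothetical examples.

SENSORS = [
    {"id": "S1", "measures": {"temperature"}, "accuracy": 0.5, "cost": 12.0},
    {"id": "S2", "measures": {"temperature", "humidity"}, "accuracy": 0.8, "cost": 20.0},
    {"id": "S3", "measures": {"humidity", "dust"}, "accuracy": 1.0, "cost": 15.0},
]

def candidate_sensors(required, catalogue=SENSORS):
    """Sensors that (alone) cover every quantity the service requires."""
    return [s for s in catalogue if required <= s["measures"]]

def rank(sensors, w_cost=1.0, w_acc=5.0):
    """Lower score is better: cheap and accurate sensors come first."""
    return sorted(sensors, key=lambda s: w_cost * s["cost"] + w_acc * s["accuracy"])

# A machine-health monitoring service needing temperature and humidity data:
best = rank(candidate_sensors({"temperature", "humidity"}))
print([s["id"] for s in best])  # only S2 covers both required quantities
```

In a real implementation the catalogue and indicators would come from the sensor ontology described in Section 3.2.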

3.2. Knowledge repository structure

Several product models have been proposed and used over recent years
[22]. These models should be extended to integrate the new concepts necessary for the
definition of the associated service.
The architecture of the proposed PSS design support framework is based on a
central knowledge repository as a kernel component, through which the different
business applications are interconnected to provide technical assistance and
collaboration facilities to users (Fig. 2).
To define and implement the structure of this knowledge repository, domain
ontologies in PSS will be defined and connected to form the whole semantic model.
This is based on a concurrent process combining a top-down approach, based on
recent findings from the literature survey, and a bottom-up approach, implementing
the pragmatic point of view gathered from industrial practice (Fig. 3).
Based on the analysis of the main functionalities of PSS design in the engineering
phase, we have identified the main concepts of the semantic model.
Fig. 2. Global architecture of the proposed design support framework

Fig. 3. Methodological approach for Semantic model building

Considering the industrial context of PSS, the domain ontologies are as follows:

1) Product ontology: supports the classification of the main categories and features
of products (domestic appliances, machines, transport facilities, etc.).
This helps the identification of some standard technical constraints to be
respected in the definition of the technical solution.
2) Service ontology: supports the classification of the main service categories, with a
list of standard information and KPIs necessary to describe each service
type. For example, monitoring the machine health requires environmental


data like humidity, temperature and dust.
3) Sensor ontology: includes a classification of technical sensors according to
a set of standard indicators useful for the search and selection of the optimal
sensor to implement a specific service.
4) Connector ontology: proposes a classification of the main connection possibilities
and constraints according to sensor and product types. This will help
the definition of the integration solution between PSS items.
5) PSS lifecycle taxonomy: used to classify all possible standard working
conditions for each PSS life stage, connected to product and service features.
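A minimal way to prototype such a set of connected domain ontologies is a shared repository of namespaced triples, one namespace per ontology. The classes and facts below are invented placeholders, not the actual project ontologies:

```python
# Sketch of a common knowledge repository structured as namespaced triples.
# Each domain ontology (product, service, sensor, connector, lifecycle)
# contributes (subject, predicate, object) statements; all facts here are
# illustrative placeholders.

triples = set()

def tell(ns, s, p, o):
    """Add a statement under the namespace of a given domain ontology."""
    triples.add((f"{ns}:{s}", p, str(o)))

def ask(p=None, o=None):
    """All subjects with a matching predicate/object (None = wildcard)."""
    return {s for (s, pp, oo) in triples
            if (p is None or pp == p) and (o is None or oo == o)}

tell("product", "WashingMachine", "isA", "DomesticAppliance")
tell("service", "HealthMonitoring", "requiresData", "temperature")
tell("service", "HealthMonitoring", "requiresData", "humidity")
tell("sensor", "DHT22", "provides", "temperature")
tell("sensor", "DHT22", "provides", "humidity")
tell("connector", "SnapFit", "compatibleWith", "DomesticAppliance")

# Cross-ontology query: which sensors provide the humidity data that the
# monitoring service requires?
print(ask(p="provides", o="humidity"))  # {'sensor:DHT22'}
```

A production system would of course use an actual ontology language and store (e.g. OWL on a triple store) rather than in-memory tuples; the point of the sketch is only the modular, namespace-per-ontology structure.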

4. Conclusion

Considering the complexity and multi-disciplinary nature of PSS development,
using a collaborative IT tool is critical for both provider and customer in
industrial projects. In this context, providing a common language to manage the
interfaces between the various actors is the most complicated preliminary step. As a result
of this research, it is proposed to break the system down into modules, not only
in the engineering process but also in software design projects. The modular
ontology concept seems to be a feasible solution for handling the massive amount of
knowledge involved in PSS development.
This paper presents a summary of the first specification results for the future
design architecture, a component of the collaborative framework. Future work
concerns the specification of the proposed functions and the construction of the
different ontology models. These developments will be connected to the whole
collaborative framework and the related semantic model.

Acknowledgments The presented work was conducted within the project “ICP4Life”, entitled
“An Integrated Collaborative Platform for Managing the Product-Service Engineering
Lifecycle”. This project has received funding from the European Union’s Horizon 2020 research
and innovation program. The authors would like to thank the academic and industrial partners
involved in this research.

References

1. Goedkoop, M.J., van Halen, C.J.G., et al. (1999) Product Service Systems, Ecological and Economic Basics. Economic Affairs, 1999.
2. Tukker, A. (2004) Eight types of product-service system: eight ways to sustainability? Experience from SusProNet. Business Strategy and the Environment 13, 246–260.
3. Meier, H., Roy, R., et al. (2010) Industrial Product-Service Systems—IPS2. CIRP Annals – Manufacturing Technology 59, 607–627.
4. Vezzoli, C., et al. (2014) Product-Service System Design for Sustainability. Learning Network on Sustainability, Greenleaf Publishing.
5. Mont, O.K. (2002) Clarifying the concept of product–service system. Journal of Cleaner Production 10, 237–245.
6. Schnürmacher, C., Hayka, H., et al. (2015) Providing Product-Service-Systems – The Long Way from a Product OEM towards an Original Solution Provider (OSP). Procedia CIRP 30, 233–238.
7. Schenkl, S.A. (2014) A Technology-centered Framework for Product-Service Systems. Procedia CIRP 16, 295–300.
8. Reim, W., et al. (2015) Product-Service Systems (PSS) business models and tactics – a systematic literature review. Journal of Cleaner Production 97, 61–75.
9. Trevisan, L., Brissaud, D. (2016) Engineering models to support product–service system integrated design. CIRP Journal of Manufacturing Science and Technology, available online 8 April 2016.
10. Linfu, S., Weizhi, L. (2005) Engineering Knowledge Application in Collaborative Design. 9th International Conference on Computer Supported Cooperative Work in Design, Coventry, 722–727.
11. Cavalieri, S. (2012) Product–Service Systems Engineering: state of the art and research challenges. Computers in Industry 63, 278–288.
12. Vasantha, G., et al. (2012) A review of product–service systems design methodology. Journal of Engineering Design 23(9), 635.
13. Baines, T.S., et al. (2007) State-of-the-art in product-service systems. Proc. IMechE, Vol. 221, Part B: Journal of Engineering Manufacture.
14. Adrodegari, F., et al. (2015) From ownership to service-oriented business models: a survey in capital goods companies and a PSS typology. Procedia CIRP 30, 245–250.
15. Tukker, A., et al. (2006) New Business for Old Europe: Product-Service Development, Competitiveness and Sustainability. Greenleaf Publishing.
16. Aurich, J.C., et al. (2006) Life cycle oriented design of technical Product-Service Systems. Journal of Cleaner Production 14, 1480–1494.
17. Tran, T.A., et al. (2014) Development of integrated design methodology for various types of product–service systems. Journal of Computational Design and Engineering 1(1), 37–47.
18. Wiesner, S. (2015) Interactions between Service and Product Lifecycle Management. Procedia CIRP 30, 36–41.
19. Wang, P.P., Ming, X.G., et al. (2011) Status review and research strategies on product-service systems. International Journal of Production Research 49(22), 6863–6883.
20. Manzini, E. (2003) A strategic design approach to develop sustainable product service systems: examples taken from the ‘environmentally friendly innovation’ Italian prize. Journal of Cleaner Production 11, 851–857.
21. Belvedere, V., et al. (2013) A quantitative investigation of the role of information and communication technologies in the implementation of a product-service system. International Journal of Production Research 51(2), 410–426.
22. Sudarsan, R., Fenves, S.J., et al. (2005) A product information modeling framework for product lifecycle management. Computer-Aided Design 37, 1399–1411.
Information model for tracelinks building in
early design stages

David RÍOS-ZAPATA1,2,∗, Jérôme PAILHÈS2 and Ricardo MEJÍA-GUTIÉRREZ1
1 Universidad EAFIT, Design Engineering Research Group (GRID), Carrera 49 #
7 Sur - 50, Medellín, Colombia
2 Arts et Metiers ParisTech, I2M-IMC, UMR 5295. F-33400 Talence, France
∗ Corresponding author. Tel.: (+57) 4 261-9500, Ext. 9059 e-mail:

drioszap@eafit.edu.co

Abstract. Over the last decades, many efforts have been made to create better
products and to improve processes; these efforts generate ever more information,
yet how to manage the information that already exists, and how to use it to improve
the decision-making process, is usually neglected. This article is centred on the
development of an information model that provides multilevel traceability in
early design stages, by defining tracelinks between pieces of information at the
design stages, where information evolves from linguistic requirements into design
variables. Regarding the information to be analysed, the research focuses on
setting up a graphic environment that allows determining the relationships between
the different variables that exist in conceptual design, giving design
teams the opportunity to use that information in decision-making situations, in terms
of knowing how changing one variable affects each requirement. Finally, this article
presents a case study of the design of a portable cooler in order to clarify the usage
and the opportunities offered by the traceability model.

Key words: Early design processes, traceability in design, information management model, decision-making in design

1 Introduction

The success of a design solution is normally based on the quality of the justification
in decision-making processes, where different elements contribute to this
justification: the time to make a decision, the level of satisfaction with the solution, and,
inevitably, the experience of the designer who makes the decision. These factors have a
significant influence on the result of the developed product.
To support these decision-making processes, there are different tools and methodologies
that help design teams transform the design need into a concrete and
successful final product. Over the last 30 years, with the increase
© Springer International Publishing AG 2017 147
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_16
of computer usage, many new digital tools have appeared; at the same time, many
tasks have been automated thanks to the aid of these tools [1].
The use of these computer aids creates a positive impact at both the technical and
organisational level. Through the use of these digital tools, coupled with the use of
simultaneous working methodologies, new product development time decreased by
60% [2]; in addition, the success rate of new products increased from 31% in the late
1980s [3] up to 60% in the last decade, which ultimately saves time
and money [?]. Even so, it is important to recall the lack of computer tools at early
design stages [4] and the importance of developing tools that allow changes at
those stages to be tracked [5].
This research is centred on studying how design activities in early design stages
can be supported by an information model that helps to monitor the information
through the creation of a traceability model between requirements and variables.
The article is structured as follows: Section 2 presents the state of the art; Section 3
explains the proposed information model; and Section 4 illustrates its usage in the
early design of a portable cooler.

2 State of the art

Product design processes can be divided into four principal phases: clarification of
the task, conceptual design, embodiment design and detail design [6], where the first
three can be considered early design stages. In terms of knowledge management
in early design, many design approaches have failed to fully support conceptual design.
The reason is the lack of connection between the external requirements and the
design variables [7].
Early design is normally conducted through a needs analysis, followed by a functional
analysis that allows specifications to be written in terms of the relationship of the
product with its environment [8]. Afterwards, design can be conducted by following the
FBS (Function-Behaviour-Structure) framework, which allows specifications
(in terms of functions) to be transformed into equations [9].
It is also important to recall that about 80% of design decisions are made at early
design stages, even though the support of computer tools at those stages is quite low
[4]. These decisions include the evolution of the information. For instance, in
the design process of a glass for hot beverages, the first step is the investigation of the
user's needs, where requirements such as the glass must be big
and light might be found. The design team starts to analyse this information and makes decisions,
whether based on experience or on further information (such as benchmarking).
Eventually, the designers define the specifications that fulfil those requirements, i.e.,
they write specifications in terms of the volume and the weight of the product.
Afterwards, the designers define several technical aspects of the product in terms
of its behaviour, which determines the aspects that can be characterised as design
variables (equations that determine the volume and weight of the product). This
leads to the appearance of secondary variables (diameter, height and thickness).
These variables will allow the product to fulfil the design specifications. Finally,
by evaluating different possibilities, the designers determine their final values and
assign them to different geometric features. This marks the end of the early design
stages and the beginning of detailed design. As this information
evolves from linguistic to numeric data, the imprecision of the
design decreases, which finally allows designers to arrive at consistent solutions
[10]. The whole information evolution process is detailed in Figure 1.

REQUIREMENTS → SPECIFICATIONS → BEHAVIOUR → VARIABLES
“To be big” → The volume of the product should be 0.5 l → V = Fn(D, h) → D, h
“To be light” → The product weight should be less than 0.05 kg → W = Fn(V, t, material) → t

Fig. 1 Information evolution in early design processes
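The behaviour equations of Figure 1 can be made concrete with a short numerical sketch. The cylinder and shell formulas, the candidate variable values and the glass density below are illustrative assumptions, not part of the original example:

```python
# Sketch of the glass example of Figure 1: behaviour equations V = Fn(D, h)
# and W = Fn(V, t, material), assuming an open cylindrical glass.
# The geometry model and the material density are assumptions for illustration.
import math

DENSITY_GLASS = 2500.0  # kg/m^3, typical soda-lime glass (assumed)

def volume_l(D, h):
    """Inner volume V = Fn(D, h) in litres, for diameter D and height h in metres."""
    return math.pi * (D / 2) ** 2 * h * 1000.0

def weight_kg(D, h, t, rho=DENSITY_GLASS):
    """Shell weight W = Fn(V, t, material) in kg: lateral wall plus bottom disc."""
    glass_volume = math.pi * D * h * t + math.pi * (D / 2) ** 2 * t  # m^3 of glass
    return glass_volume * rho

# Candidate values for the secondary variables (diameter, height, thickness):
D, h, t = 0.08, 0.10, 0.003
print(f"V = {volume_l(D, h):.2f} l, W = {weight_kg(D, h, t):.3f} kg")
# With these values V meets the 0.5 l specification, but W exceeds the
# 0.05 kg target for this material and thickness, showing how the
# specification check feeds back into the choice of design-variable values.
```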

This information evolution process in early design raises several questions:
how is all the information stored? Are any links between
these kinds of information stored? And how did the designers make their decisions?
It can be inferred that some relationship must exist between these deliverables,
a relation that helps to recognise the evolution of the product between the task and
the final solution. According to the IEEE, the degree to which relationships between
two or more items can be established, especially where one item is the predecessor
of the others, is called traceability [11].
The level of detail of the information determines the level
of integration of the traceability model, which in turn determines the granularity of the
relationships between the different kinds of information [5]. In this connection, a
traceability tool must identify the items that are potentially affected by a change as a
function of their tracelinks. Finally, it is important to underline how dependence
is determined in design, which is measured along three dimensions. Variability: how are
the requirements set? Sensitivity: what is the risk to the design when a change
occurs? Integrity: what knowledge is required to achieve the task? [12]
Regarding traceability models in product design, CATIA V6's RFLP1 module
is able to store all the information on the same platform. Nevertheless, the
way information is stored and processed is not interactive [13], so requirements
and logical inputs are not necessarily connected to the CAD model, but merely stored in the same
file [14]. It is also important to recall that many product management models suffer
from poor data traceability, especially in the exploration of the requirements definition
[15].
Finally, traceability models also support knowledge reuse in early design
stages. For instance, Baxter et al. defined a traceability framework centred on
the performance analysis of specific requirements and the use of that information
to optimise design solutions [16].

1 RFLP: acronym for Requirements, Functional, Logical, Physical



3 Traceability model proposal

The model is centred on answering the question of how to store and exploit
traceability information in early design stages; it is thus important to consider
storing information linked to requirements, specifications, equations and variables.
During the needs analysis, the most important goal is to determine a list of
requirements (see ”I want the product to be big” in Fig. 1). This list is usually an input
to any design engineering process; nevertheless, the process is not limited to being
performed only by user specialists. This model is limited to working from the input
list of requirements, not to developing techniques to retrieve those requirements.
For the functional analysis, the designers are called to analyse the interaction of the
product with the environment in order to address the functions that allow the
Product Design Specifications to be written. Then, a link between Requirements and
Specifications can be established by using a correlation matrix (e.g. the correlation matrix of the
QFD, Quality Function Deployment). Figure 2 presents an example where the
relations extracted from such a matrix are used to build a graphic relation between
requirements (Rq) and specifications (Sp).

[Figure: a correlation matrix with requirements Rq1-Rq3 as rows and specifications Sp1-Sp3 as columns, cells marked with an X, and the bipartite graph extracted from it linking each Rq node to its correlated Sp nodes.]
Fig. 2 Requirements to specifications
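The extraction of Rq-Sp tracelinks from such a QFD-style correlation matrix can be sketched in a few lines. The particular pattern of marked cells below is illustrative, since the exact matrix of Figure 2 is not fully recoverable here:

```python
# Turn a QFD-style correlation matrix into explicit Rq -> Sp tracelinks.
# An 'X' marks a correlation; the particular pattern here is illustrative.
matrix = {
    "Rq1": {"Sp1": "X", "Sp2": "",  "Sp3": ""},
    "Rq2": {"Sp1": "X", "Sp2": "X", "Sp3": ""},
    "Rq3": {"Sp1": "",  "Sp2": "",  "Sp3": "X"},
}

# Each marked cell becomes one edge of the Rq-Sp bipartite graph.
tracelinks = [(rq, sp) for rq, row in matrix.items()
              for sp, mark in row.items() if mark == "X"]
print(tracelinks)  # [('Rq1', 'Sp1'), ('Rq2', 'Sp1'), ('Rq2', 'Sp2'), ('Rq3', 'Sp3')]
```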

At this point, the FBS framework is used to manage the evolution of the information.
The formulation stage is addressed by the definition of the main
function and its division into a functional block diagram (FBD). Since the functional approach
defined the functions alongside the relationship of the product with the environment,
those functions represent the fluxes that enter the system. The analysis of
those fluxes (matter, energy and information), whether internal or external, allows
determining the physical behaviour that rules each product part. This defines the
equations of the product, and the connection between Equations and Specifications.
In order to develop these links, CPM/PDD (Characteristics-Properties Modelling;
Property-Driven Development) models are used. These models allow designers
to establish connections between pieces of information, while also focusing on controlling the
design parameters Ci [17, 18].
Finally, at the synthesis stage, the designers select a suitable solution for each
function box of the FBD. Here, the designers complete the set of equations in
terms of the final solutions. At this point the early design process ends, and the team
can proceed to detailed design, where values are assigned to each variable. Figure 3
presents the tracelinks model at the different levels of early design; it also shows
how the model is connected with the FBS framework and how the model extends
its boundaries to the requirements (semantic variables).

[Figure: integration at level 0, aligned with the FBS framework columns Function, Behaviour and Structure: requirements (Rq1-Rq3) link to specifications (Sp1-Sp3), which link to equations (e.g. V ∝ D²h and q = (Text − Tint)/(t/KA)), which link to the design variables.]
Fig. 3 Tracelinks representation

4 Case study

In order to validate the model, and to identify potential pitfalls, a portable cooler
design process was conducted. From the needs analysis, the input was defined as 9
requirements. Regarding the functional analysis, 5 functions were written for the
product to accomplish: the product must be easily carried by the user; the product
must resist solar radiation; the product must isolate food from the external air;
the product must incorporate ice; the product must isolate food from solar heat.
These functions were interpreted as 11 specifications. Then, the construction of the
QFD correlation matrix allowed determining the connections between Requirements
and Specifications. For instance, requirement 1, Keep things cool, is associated with 8
specifications, including the wall thickness, but it is also related to the cooler volume.
After the definition of the Specifications, the design process continues with the
formulation stage and the construction of the FBD, which can be seen in Figure 4.
This figure also represents the analysis of a selected function: hold. This function
represents the wall of the container, whose role is to hold back the flux of heat
heading into the cooler; the behaviour of this wall can be described as a thermal
conduction process.
At the synthesis stage, an isolation principle is assigned to the wall in order to
be implemented in the design. The system is described as a sandwich wall:
External Wall A - Thermal Insulation B - Air C - Internal Wall D. The
equation that represents this insulating system is Equation 1, and it is the
design parameter Ci to be implemented.

[Figure: functional block diagram of the cooler. Fluxes of food, ice, air, water, heat, solar radiation, human force and information cross the system boundary through the functions Integrate, Stock, Allow and Hold. The Hold function (the container wall) is mapped from function to behaviour as thermal conduction, q = (Text − Tint)/(t/KA), and to the sandwich-wall structure A-B-C-D with insulation thickness LB.]
Fig. 4 Function block diagram analysis and structure definition

$$Q_{conv} = \frac{T_{ext} - T_{int}}{\dfrac{L_A}{K_A A_A} + \dfrac{L_B}{K_B A_B} + \dfrac{L_C}{K_C A_C} + \dfrac{L_D}{K_D A_D}} \qquad (1)$$
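Equation 1 can be evaluated numerically by treating the four layers as thermal resistances in series. The temperature difference, areas, thicknesses and conductivities below are assumed sample values, not data from the case study:

```python
# Numerical evaluation of Equation 1: heat flux through the four-layer
# sandwich wall A-B-C-D, modelled as thermal resistances in series.
def q_conv(dT, layers):
    """layers: iterable of (L, K, A) = thickness [m], conductivity [W/mK], area [m^2]."""
    R_total = sum(L / (K * A) for (L, K, A) in layers)
    return dT / R_total  # heat flux in W

# Assumed sample values: 30 K temperature difference across a 0.1 m^2 wall.
wall = [
    (0.003, 0.20, 0.1),   # A: external plastic wall
    (0.030, 0.035, 0.1),  # B: thermal insulation (e.g. PU foam)
    (0.005, 0.026, 0.1),  # C: trapped air layer
    (0.003, 0.20, 0.1),   # D: internal plastic wall
]
print(f"{q_conv(30.0, wall):.2f} W")
# Increasing the insulation thickness L_B substantially lowers the flux,
# since layer B dominates the total thermal resistance.
```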

Finally, an entire traceability map can be built. Figure 5a shows the traceability tree.
Here the requirements are represented at the bottom. At the second level are
the specifications, which are connected to the equations. Finally, the equations are
connected to the variables at the top level of the tree. For the sake of a practical display,
only the links that connect Equation 1 (Eq1: heat flux through the wall) and the
requirement ”keep things cold” are shown.
An example can illustrate how this traceability tree can support design decision
making. Suppose the development team realises that the cooling capability of the
cooler is too limited, so the designers propose to
increase the thickness of the thermal insulation LB (see Structure in Figure 4). The
solution domain for this variable is defined as D(LB) = [0.01, 0.1], and the designers
decide to set it to its maximal value in order to increase the thermal insulation of
the cooler.
From the analysis of the equations alone, it seems that no further specifications are
affected by modifying LB to reduce the heat flux, but the analysis of the
traceability tree tells a different story. There, it is found that the thickness LB,
related to the requirement of keeping things cold, is also related to the volume of
the cooler. Changing the thickness therefore affects the volume, so the two
variables are correlated. The graph that connects both variables can be seen in Figure
5b.
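The impact query described above (which items and requirements are reachable from LB through the tracelinks) can be sketched as a simple graph traversal. The node names below are simplified, partly invented labels inspired by the case study:

```python
# Sketch of an impact query on the traceability tree: starting from a
# variable, follow the tracelinks down to the requirements it can affect.
# The edge set is a simplified, illustrative version of the cooler example.
from collections import deque

# variable -> equations -> specifications -> requirements
links = {
    "L_B": ["Eq1_heat_flux", "Eq2_volume"],
    "Eq1_heat_flux": ["Sp_wall_thickness"],
    "Eq2_volume": ["Sp_volume"],
    "Sp_wall_thickness": ["Rq_keep_things_cool"],
    "Sp_volume": ["Rq_keep_things_cool", "Rq_capacity"],
}

def affected(node, graph=links):
    """All items reachable from `node` through tracelinks (breadth-first)."""
    seen, queue = set(), deque([node])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Changing L_B touches not only the heat-flux branch but also the volume branch:
print(sorted(affected("L_B")))
```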
This kind of traceability information model offers designers the list of variables
that are correlated with each other. This allows designers to take better decisions
when they make changes in the design, but it also leads to new challenges. In this
situation, the design team found a correlation affecting two variables and,
considering the limits established for the volume, the solution domain was redefined
as D(LB) = [0.01, 0.05]. The new constraint, revealed thanks to the traceability

[Figure: (a) the complete traceability tree, with the requirements at the bottom, the specifications and equations (Eq1: heat flux; Eq2: volume) at intermediate levels, and the variables (L, K, A, V, w, h, Text, Tint, ...) at the top; (b) the subgraph connecting the wall thickness and the volume to the requirement "keep things cool".]

Fig. 5 Keep things cold specification relationship

tree, led the team to optimise the cooling capability without affecting the volume of
the cooler.
Certainly, a tool of this nature can empower the decision-making process so that it is
performed using all the information in the product life-cycle, rather than being based
only on the experience of the design team, especially when correlations are not obvious.
The tool is also able to alert designers with early warnings when the modification of one
variable might affect other variables.

5 Conclusion and further research

One of the strongest contributions of this research is a model that allows the
interconnection of information at early design stages, precisely linking information
that is in linguistic form to design variables, and further to the information of detailed
design. In the presented example, it was found that the correlation between the two
variables lay at the requirements level (the linguistic level). This provides a wider
view in early design, because the model makes it possible to trace correlations between
variables all the way back to the requirements list.
Further, as a distinction from other traceability models, such as RFLP, the
presented model offers an interactive way to analyse the information (with graphs);
rather than competing with RFLP models, this kind of solution can work as
a complement to them, and will allow requirements to be connected with CAD models,
extending the analysis to the whole product life-cycle.
Finally, the exploitation of the information collected by the presented model
reduces the uncertainty in how decisions are taken. Nevertheless, for future
models two important directions can be defined: i) developing a mechanism that allows
defining the level of correlation between each pair of variables, including degrees of
correlation at different stages; ii) developing a graph-theory model that allows the
correlation between the design variables to be analysed automatically.
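As a minimal illustration of direction ii), two design variables can be flagged as correlated whenever their tracelink paths share a requirement ancestor. The mapping below is a toy example, not the case-study data:

```python
# Illustrative starting point for further-research item ii: two design
# variables are flagged as correlated when their tracelink paths meet at a
# common requirement. The variable-to-requirement mapping is a toy example.
from itertools import combinations

# variable -> set of requirements reachable through its tracelinks
reaches = {
    "L_B":    {"Rq_keep_things_cool"},
    "Volume": {"Rq_keep_things_cool", "Rq_capacity"},
    "Handle": {"Rq_easy_to_carry"},
}

def correlated_pairs(reaches):
    """Variable pairs whose requirement ancestors overlap."""
    return [(a, b) for a, b in combinations(sorted(reaches), 2)
            if reaches[a] & reaches[b]]

print(correlated_pairs(reaches))  # [('L_B', 'Volume')]
```

A fuller version would compute the `reaches` sets by traversing the traceability tree itself and grade the strength of each correlation, as proposed in direction i).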

References

1. BF Robertson and DF Radcliffe. Impact of CAD tools on creative problem solving in engi-
neering design. Computer-Aided Design, 41(3):136–146, 2009.
2. B. Prasad. Concurrent engineering fundamentals- Integrated product and process organiza-
tion. Upper Saddle River, NJ: Prentice Hall PT, 1996.
3. Elko J Kleinschmidt and Robert G Cooper. The impact of product innovativeness on perfor-
mance. Journal of product innovation management, 8(4):240–251, 1991.
4. L. Wang, W. Shen, H. Xie, J. Neelamkavil, and A. Pardasani. Collaborative conceptual design–
state of the art and future trends. Computer-Aided Design, 34(43):981–996, 2002.
5. Simon Frederick Königs, Grischa Beier, Asmus Figge, and Rainer Stark. Traceability in sys-
tems engineering–review of industrial practices, state-of-the-art technologies and new research
solutions. Advanced Engineering Informatics, 26(4):924–940, 2012.
6. G. Pahl, W. Beitz, J. Feldhusen, and K.-H. Grote. Engineering design: A systematic approach.
Springer Verlag, 2007.
7. John S Gero and Udo Kannengiesser. The situated function–behaviour–structure framework.
Design studies, 25(4):373–391, 2004.
8. Dominique Scaravetti, Jean-Pierre Nadeau, Jérôme Pailhès, and Patrick Sebastian. Structur-
ing of embodiment design problem based on the product lifecycle. International Journal of
Product Development, 2(1):47–70, 2005.
9. John S Gero. Design prototypes: a knowledge representation schema for design. AI magazine,
11(4):26, 1990.
10. Ronald E Giachetti and Robert E Young. A parametric representation of fuzzy numbers and
their arithmetic operators. Fuzzy sets and systems, 91(2):185–202, 1997.
11. IEEE. IEEE standard glossary of software engineering terminology. IEEE Std 610.12-1990,
1990.
12. Mohamed-Zied Ouertani, Salah Baı̈na, Lilia Gzara, and Gérard Morel. Traceability and man-
agement of dispersed product knowledge during design and manufacturing. Computer-Aided
Design, 43(5):546–562, 2011.
13. Ricardo Carvajal-Arango, Daniel Zuluaga-Holguı́n, and Ricardo Mejı́a-Gutiérrez. A systems-
engineering approach for virtual/real analysis and validation of an automated greenhouse ir-
rigation system. International Journal on Interactive Design and Manufacturing (IJIDeM),
pages 1–13, 2014.
14. Chen Zheng, Matthieu Bricogne, Julien Le Duigou, and Benoı̂t Eynard. Survey on mecha-
tronic engineering: A focus on design methods and product models. Advanced Engineering
Informatics, 28(3):241–257, 2014.
15. Joel Igba, Kazem Alemzadeh, Paul Martin Gibbons, and Keld Henningsen. A framework
for optimising product performance through feedback and reuse of in-service experience.
Robotics and Computer-Integrated Manufacturing, 36:2–12, 2015.
16. David Baxter, James Gao, Keith Case, Jenny Harding, Bob Young, Sean Cochrane, and Shilpa
Dani. A framework to integrate design knowledge reuse and requirements management in
engineering design. Robotics and Computer-Integrated Manufacturing, 24(4):585–593, 2008.
17. Christian Weber. Cpm/pdd–an extended theoretical approach to modelling products and prod-
uct development processes. In Proceedings of the 2nd German-Israeli Symposium on Ad-
vances in Methods and Systems for Development of Products and Processes, pages 159–179,
2005.
18. Christian Weber. Looking at DfX and product maturity from the perspective of a new approach to modelling product and product development processes. In The Future of Product Development, pages 85–104. Springer, 2007.
Section 1.3
Interactive Design
User-centered design of a Virtual Museum system: a case study

Loris BARBIERI1*, Fabio BRUNO1, Fabrizio MOLLO2 and Maurizio MUZZUPAPPA1

1 Università della Calabria - Dipartimento di Meccanica, Energetica e Gestionale (DIMEG)
2 Università di Messina
* Corresponding author. Tel.: +39-0984-494976; fax: +39-0984-0494673. E-mail address: loris.barbieri@unical.it

Abstract
The paper describes a user-centered design (UCD) approach adopted to develop and build a virtual museum (VM) system for the “Museum of the Bruttians and the Sea” of Cetraro (Italy). The main goal of the system was to enrich the museum with a virtual exhibition able to make visitors enjoy an immersive and attractive experience, allowing them to observe 3D archaeological finds in their original context. The paper deals with several technical and technological issues commonly related to the design of virtual museum exhibits. The proposed solutions, based on a UCD approach, can be efficiently adopted as guidelines for the development of similar VM systems, especially when a very low budget and little free space are unavoidable design requirements.

Keywords: User-centered design, user interface design, human-computer interaction, virtual museum systems.

1 Introduction

Nowadays, museums need to combine their educational purpose [1] with the capability to involve visitors through emotions [2]. To achieve these goals and move beyond the old principles of traditional museology, emerging technologies such as Virtual Reality, Augmented Reality and Web applications are increasingly popular in museums. This union has led to the development of a large number of instruments and systems that allow users to enjoy a culturally vivid and attractive experience. There are many examples of such systems efficiently applied in the museum field: projection systems that can turn any surface into an interactive visual experience; multi-touch displays; devices for gesture-based experiences; Head Mounted Displays (HMDs) or 3D displays that turn the visit into an immersive and attractive experience [3-7]. Even if all these systems are appealing and really appreciated by their users, many devices present limitations due to their expensive installation or maintenance, the large working volume they require, or a poor user-system interaction caused by the incomplete maturity of that specific technology in museum applications. Starting from these considerations, and taking into account that 90% of museums are small and low-budget, there is an unmet need for the design and development of more affordable systems able to offer a fascinating and memorable experience to museum visitors. Since Virtual Museum (VM) systems aim to be immediate and easy to use, enjoyable and educative, these applications represent a typical case that needs to be addressed through a user-centered design (UCD) approach. This approach can be efficiently used in museums [8, 9], but there are no works specifically concerning the UCD development of VM systems.

© Springer International Publishing AG 2017 157
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_17
Therefore, this paper represents a first attempt to describe a UCD approach carried out for the development of low-cost VM systems that rely on off-the-shelf technologies to create 3D immersive user experiences. The paper furthermore gives some guidelines for choosing the key technical devices and presents a case study on the development of the Virtual Museum system installed in the “Museum of the Bruttians and of the Sea” of Cetraro (Italy).

2 Virtual Museum system design

Prior to the design phase, it is fundamental to take into account the requirements that are often specified by museum directors and are generally related to budget. In fact, the great majority of museums are small, with fewer than 10,000 visitors per annum, and can rely only on a very low budget [10]. These economic concerns severely constrain the development and modernisation plans through which, in the era of the “experience economy” [1], all museums have to compete to attract more visitors. Starting from these considerations, there are two fundamental requirements that must be met: low cost and usability. A VM system should therefore be designed to be cheap and, at the same time, to inspire the visitor.
For these reasons, on the one hand, it is almost impossible to adopt very expensive technologies such as HMDs and CAVEs (Cave Automatic Virtual Environment) for the visualization, or wearable haptics and gesture recognition devices for the interaction. On the other hand, usability, understood as both affordance and user satisfaction, should be the key quality of the system. In addition, museum curators usually dictate other requirements that affect the overall dimensions of the systems and their aesthetics. Once all these data have been acquired, the design process can start in accordance with the recommendations of ISO 13407 for a UCD project, summarized in the following flow chart (Fig. 1):
Fig. 1. Main steps of the VM system development process based on a UCD approach.

3 Guidelines for selecting the visualization and interaction device

In this section some guidelines are defined for selecting the hardware to be adopted for the VM system, considering economic constraints and the types of information we want to offer to the visitors.
Among the different commercial devices, projectors and high-definition (HD) monitors have been considered as alternatives for the visualization of the VM exhibit. HD monitors can deliver 4K resolution with high brightness and contrast, whereas projectors achieve full HD resolution with higher maintenance costs. Among the most common device controllers that can be included in a cheap VM system, trackballs, touch-screen consoles and gesture recognition devices (e.g. MS Kinect or Leap Motion) have been analyzed. Table 1 summarizes this analysis.

Table 1. Device controllers.

                         Trackball/mouse   Touch screen     Gesture recognition devices
Costs                    low               high             medium
Quality of interaction   unattractive      very intuitive   intuitive
Devices’ integration     low               medium           high
Training required        no                no               yes

For the touch-screen consoles there are two design solutions: the adoption of a touch-screen console for controlling the objects and data visualized on an HD monitor (fig.2a), or the adoption of a single touch-screen monitor used both for the visualization of and interaction with the virtual exhibit (fig.2b).
The pros and cons of the two solutions depicted in figure 2 have been analysed, taking into account also some ergonomic requirements that are fundamental in a UCD approach, in order to define some guidelines. To get the optimum immersive visual HD experience, viewers should be located at the theoretical spot known as the optimum HD monitor viewing distance [11]. This requirement can be satisfied only in the first case (fig.2a): since the controls are placed separately, viewers can stand at whatever distance gives them the optimal viewing experience. On the contrary, the adoption of a touch-screen monitor (fig.2b) implies a viewer distance that depends on anthropometric measurements [12] and is lower than the recommended viewing distance. Based on the experience of 3D industry professionals, the optimum seating distance for 3D monitors does not appear to be much different from the optimum range for regular HD monitors. But the viewing distance is affected by the type of stereoscopic projection adopted. In fact, passive 3D projection uses glasses that cut the 1080p resolution of the HD monitor in half, delivering 540 lines to each eye. This means that the optimum viewing distance increases, so that touch-screen monitors (fig.2b) prove inappropriate for the visualization of and interaction with 3D scenarios.
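The resolution argument can be made concrete with a visual-acuity estimate: the distance at which a single pixel subtends about one arcminute, roughly the resolving limit of normal vision. The sketch below is our own illustration; the 46" 16:9 panel size and the one-arcminute threshold are assumptions, not figures from this section.

```python
import math

def min_viewing_distance_m(diagonal_in: float, vertical_px: int) -> float:
    """Distance at which one pixel of a 16:9 panel subtends ~1 arcminute."""
    height_m = diagonal_in * 0.0254 * 9 / math.hypot(16, 9)   # panel height
    pixel_m = height_m / vertical_px                          # pixel pitch
    return pixel_m / math.tan(math.radians(1 / 60))           # 1 arcminute

full_hd = min_viewing_distance_m(46, 1080)    # full 1080p image per eye
passive_3d = min_viewing_distance_m(46, 540)  # passive 3D: 540 lines per eye
```

Halving the lines per eye doubles the pixel pitch, and with it the minimum comfortable distance; hence a touch-screen monitor used at arm's length is a poor match for passive stereoscopic content.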

Fig. 2. System composed of an HD monitor and a touch-screen controller (a); touch-screen-monitor-based system (b).

To sum up, the adoption of a touch-screen for the visualization of and interaction with the 3D virtual exhibit (fig.2b) should be excluded. A further consideration is that the touch-screen remote control for interacting with the VM system could be a handheld device, e.g. a tablet, or could be fixed in a specific position. The first solution can be adopted when an operator supervises the system and hands the controls to the visitors who want to enjoy the virtual exhibit. The second solution can instead be employed when the system is intended for unattended operation; since the console cannot be moved, it is possible to increase the screen size of the touch-screen in order to enhance its legibility.
4 The Case Study

The VM system described in this paper was intended to be installed in a small archaeological museum, the “Museum of the Bruttians and the Sea”, hosted in the beautiful setting of the Palazzo del Trono of Cetraro (Italy). The VM system will be surrounded by archaeological pieces found in a small group of necropolises and housing facilities built by the Bruttian people. Among the archaeological finds there are bronze and iron weapons, ceremonial vases, drinking cups, eating dishes, pins and jewellery.

4.1 First prototype

As clearly expressed by ISO 9241-210:2010 (the standard for human-centred design of interactive systems), in a UCD approach the design and evaluation stages should be preceded by the gathering of requirements and specifications, to better define the context of use and the user requirements.
The VM exhibition should allow users to engage in an educational and fun experience. In particular, as requested by the museum director, the VM system should let its visitors experience two different 3D scenarios that realistically reproduce: a tomb belonging to the necropolis of Treselle, discovered in the territory of Cetraro, and an underwater archaeological deposit located 20 km from Cetraro, a few meters from the shore at a depth of 2-4 m. In the first scenario, visitors should be able to visit the virtual tomb, with its Bruttian burials, and visualize and manipulate its contents, such as bronze and iron weapons (bronze belts, spearheads, a javelin), pottery, drinking cups (skyphoi, kylikes, bowls, cups) and eating dishes (plates, paterae). In the second scenario, visitors can interact with some remains and fragments of amphorae dating back to the middle of the III century BC.

4.2 Selection of the visualization and interaction device

The configuration with an HD monitor and a touch-screen remote control was chosen in accordance with the volume that the VM system can occupy in the museum and with the specifications described in the previous section.
The volume requirements led to the selection of a 46” HD monitor which, in accordance with THX [13] standards, has an optimum viewing distance range of 1.5-2.5 meters. The minimum viewing distance is set to approximate a 40° view angle (for average human vision, the upper limit of the field of view, inclusive of peripheral vision, is around 70°) and the maximum viewing distance is set to approximate a 28° view angle. This range satisfies both the constraints on the volume and the minimum distance necessary to perceive the stereoscopic effect, commonly considered to be 1.5 meters. It is worth noticing that, due to many objective and subjective factors, the user experience provided by the virtual exhibit changes from person to person [14, 15]. For example, age affects 3D perception: children have a smaller interocular distance than adults. This means that, if placed at the same distance from the monitor, children have a more immersive 3D viewing experience than adults.
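The quoted range can be cross-checked with basic trigonometry: the distance at which the screen width subtends a given horizontal view angle. The following sketch is our own illustration and assumes a 16:9 aspect ratio; the exact THX tables may differ slightly.

```python
import math

def viewing_distance_m(diagonal_in: float, fov_deg: float) -> float:
    """Distance at which a 16:9 screen's width subtends fov_deg horizontally."""
    width_m = diagonal_in * 0.0254 * 16 / math.hypot(16, 9)
    return (width_m / 2) / math.tan(math.radians(fov_deg) / 2)

near = viewing_distance_m(46, 40)  # 40 deg view angle: about 1.4 m
far = viewing_distance_m(46, 28)   # 28 deg view angle: about 2.0 m
```

The computed 1.4-2.0 m bracket roughly agrees with the 1.5-2.5 m range cited above; the residual gap comes from the simplified flat-screen geometry and rounding in the published recommendation.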
In this case, since the presence of a supervisor is not always assured, we preferred to fix a 23” touch-screen console in a specific position.

4.3 System architecture development

Once the devices for user interaction and for the visualization of the virtual museum exhibition had been defined, the following step was the definition of the position of these devices in space. In particular, the relative positions and distances of the HD monitor and the touch-screen console had to be identified, taking into account the ergonomic standards for a better experience of the VM system.
Since the virtual exhibit is intended to be used by many different audiences, such as middle and high school students, college students, tourists, etc., ergonomic studies have been performed in order to find the optimal positioning of the visualization device and of its control system. The inclination of the touch-screen console has also been studied. For a comfortable experience of the VM system, we tried to keep users’ movements as natural as possible, with particular attention to the most repetitive ones, i.e. neck and shoulder extension movements. As detailed in the previous section, a 46” HD monitor allows for an optimum viewing distance range of 1.5-2.5 meters. Therefore, the touch-screen console has been placed at a distance of 1.5 meters from the HD monitor, in order to take advantage of the full range and enjoy an optimal immersion and visualization of the 3D contents.
Once the relative positioning of the monitor and controller had been defined, we focused on the design of the structure. As depicted in figure 3a, various design alternatives have been evaluated. As recommended by UCD standards, several virtual prototypes of the VM system architecture have been designed, differing in materials, dimensions and aesthetics. These prototypes were subjected to an iterative design process that allowed us to improve each version, but also to exclude those that performed worse in terms of ergonomic and technical requirements. Figure 3b shows the final virtual prototype, realized with white and orange folded panels made of PMMA (polymethyl methacrylate). Built-in aluminium elements were adopted to support and fasten the monitors.

Fig. 3. Alternative design solutions and rendering in the context of use (a). Final virtual proto-
type of the VM system architecture (b).

4.4 User interface design

Since the VM system will be used by a large variety of visitors, the user interfaces (UIs) should clearly communicate their purpose, so that users with no experience of technological devices can immediately understand what to do. For this reason, the UI design process focused first on a minimalistic design that keeps the layout and graphic features of the VM system as simple as possible. In the composition of the graphical elements as a whole, the UIs should provide users with all the essential features to manipulate the virtual objects, but also with access to a database of media contents, such as images, texts and sounds, so that the interaction can also have an educational value. This approach allowed us to define a first low-fidelity prototype (paper prototype) of the UIs. Before proceeding with the development of fully operational software for the management of the VM system, the first UI prototypes were submitted to a user-centered evaluation in order to drive and refine their design. The evaluation was performed by means of the Cognitive Walkthrough (CW) [16] usability inspection method. According to the CW standards and recommendations [17], a group of experts performed a UI inspection, going through a set of tasks and evaluating UI understandability and ease of learning.
The result of the UI design and CW analyses was a three-level user interface. The first level is the “home screen” (fig.4a), where visitors can choose their preferred language and, most importantly, select the experience. Once the user has selected the desired option, he/she accesses the second level.
Depending on the selected scenario, the second interface that appears to users shows either the tomb of Treselle (fig.4b) or the underwater environment (fig.4c). Most of the screen area is reserved for the visualization of the 3D scenario, while the rest of the screen is organized as follows: on the left side, some basic information explains to visitors how to navigate through the 3D environment and manipulate its 3D contents; in the lower section of the screen, a text field gives historical and cultural information about what the user is going to experience. In particular, the tomb of Treselle (fig.4b) features a Bruttian burial dating back to the IV century BC and contains: weapons (bronze belts, iron spearheads and a javelin); pottery, such as ceremonial vases, drinking cups (skyphoi, kylikes, bowls, cups) and eating dishes (plates, paterae); and a metal set used in meat banquets, consisting of skewers, a grill and a pair of andirons made of iron or lead. The underwater site (fig.4c) contains a residual archaeological deposit, concreted to the seabed and to large rocky blocks, consisting of a merchant vessel carrying a load of transport amphorae of the MGS V and VI types, dating back to the middle of the III century BC. When the user selects one of the virtual objects present in the two environments, he/she enters the third level (fig.4d), in which it is possible to manipulate, zoom in on and get specific information about the artwork.

Fig. 4. First-level interface of the VM system (a). Second-level UIs that allow users to experience a 3D immersive reconstruction of the tomb of Treselle (b) or of an underwater environment (c). 3D models accessible through the third UI level (d).

4.5 VM system evaluation

The final stage of the UI development consists in assessing their usability. The user studies carried out were very important for the design of the final VM system, because they allowed us to gain much information about the user experience and the interaction with different alternatives of the virtual exhibition. In particular, we noticed that when the monitor is controlled through a touch-screen remote control, users may get confused, inattentive and annoyed because of the way information is arranged between the two screens. We therefore tested two different solutions. In the first solution, both the HD monitor and the
touch-screen console display the same kind of information and contents. In the
second solution, the HD monitor visualizes only the 3D contents, while all the text
data and information are accessible only by the touch-screen console.
Traditional metrics, such as completion time and number of errors, together with questionnaires that capture cognitive aspects of the user experience, have been used to interpret the outcomes of the user study. The results of the comparative testing show that, from an objective point of view, there is no statistically significant difference between the two configurations but, from a subjective point of view, the satisfaction questionnaires demonstrate a preference for the second solution. In particular, when the touch-screen duplicates the information present on the main monitor, it reduces misunderstanding, since it saves the user from having to inspect both screens to find the desired information, but it also reduces the perceived quality of the virtual exhibition. On the contrary, a full-screen visualization of 3D contents on the main monitor, with all the menus and texts on the touch-screen device, increases the user’s immersion, and the contents appear more pleasant and attractive from an aesthetic point of view.
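The objective side of such a comparison is typically a two-sample significance test on task metrics. The sketch below is a generic illustration with invented task-completion times, not the authors' data or their exact procedure, using a Welch t statistic; a |t| well below ~2 indicates no significant difference at the 5% level, consistent with the objective outcome reported above.

```python
import math

def welch_t(a, b):
    """Welch two-sample t statistic (unequal variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Fictitious task-completion times (seconds) for the two configurations.
config_a = [41.2, 38.5, 44.0, 39.8, 42.1, 40.3]
config_b = [40.1, 39.9, 43.2, 41.0, 38.7, 42.5]

t = welch_t(config_a, config_b)
```

With such data the decision on which configuration to adopt then rests, as in the study, on subjective satisfaction measures rather than on the objective metrics.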

Fig. 5. Visitors while experiencing the VM system.

On the basis of these results, we decided to adopt the second solution for the VM system interaction, as shown in fig.5. While the main monitor is dedicated to the 3D visualization of the archaeological finds, the touch-screen console is used to control the 3D objects but also to display information and educational contents.

5 Conclusions

In this paper a user-centered design approach has been adopted for the develop-
ment of a VM system that has been realized for the “Museum of the Bruttians and
the Sea” of Cetraro.
The paper gives much technical and technological advice and many suggestions, which can be adopted to overcome several typical and recurrent problems related
to the development of VM systems, especially when low budgets and space con-
straints are among the design requirements.
The results of user testing and the opinions gathered from visitors demonstrated that the adoption of a UCD approach can efficiently improve VM system development, and yields a product that offers a more efficient, satisfying, and user-friendly experience for the users.

References

1. Pine II B.J., Gilmore J.H. The Experience Economy: Work is Theatre & Every Business a
Stage. Harvard. 2000.
2. Vergo P. New Museology. Reaktion books. London. 1989.
3. Blanchard E.G., Zanciu A.N., Mahmoud H., and Molloy J.S. Enhancing In-Museum Informal
Learning by Augmenting Artworks with Gesture Interactions and AIED Paradigms. In Artifi-
cial Intelligence in Education (pp. 649-652). Springer Berlin Heidelberg. 2013.
4. Pescarin S., Pietroni E., Rescic L., Wallergård M., Omar K., and Rufa C. NICH: a preliminary
theoretical study on Natural Interaction applied to Cultural Heritage contexts. Digital Herit-
age Inter. Congress, Marseille, V.2, p.355, 2013.
5. Wang C.S., Chiang D.J., Wei Y.C. Intuitional 3D Museum Navigation System Using Kinect.
In Information Technology Convergence, pp. 587-596. Springer Netherlands, 2013.
6. Bruno F., Bruno S., De Sensi G., Librandi C., Luchi M.L., Mancuso S., Muzzupappa M., Pina
M. MNEME: A transportable virtual exhibition system for Cultural Heritage. 36th Annual
Conf. on CAA 2008, Budapest, 2008.
7. Bruno F., Angilica A., Cosco F., Barbieri L., Muzzupappa M. Comparing Different Visuo-
Haptic Environments for Virtual Prototyping Applications. In ASME 2011 World Confer-
ence on Innovative Virtual Reality, pp. 183-191.
8. Barbieri L., Angilica A., Bruno F., Muzzupappa M. An Interactive Tool for the Participatory
Design of Product Interface. In IDETC/CIE 2012 Chicago (pp. 1437-1447). 2012.
9. Petrelli D., Not E. UCD of flexible hypermedia for a mobile guide: Reflections on the
HyperAudio experience. User Modeling and User-Adapted Interaction, 15(3-4), 303-338.
2005.
10. IFEL-Fondazione ANCI e Federculture. Le forme di PPP e il fondo per la progettualità in
campo culturale. 2013.
11. Craig J.C., Johnson K.O. The Two-Point Threshold Not a Measure of Tactile Spatial Resolu-
tion. Current Directions in Psychological Science, 9(1), 29-32. 2000.
12. Woodson W.E., Tillman B., Tillman P. Human factors design handbook, 2nd Ed. Woodson,
1992.
13. http://www.thx.com/
14. Barbieri L., Bruno F., Cosco F., Muzzupappa M. Effects of device obtrusion and tool-hand
misalignment on user performance and stiffness perception in visuo-haptic mixed reality. In-
ternational Journal of Human-Computer Studies, 72(12), 846-859, 2014.
15. Barbieri L., Angilica A., Bruno F., Muzzupappa, M. Mixed prototyping with configurable
physical archetype for usability evaluation of product interfaces. Computers in Industry,
64(3), 310-323, 2013.
16. Lewis C., Polson P., Wharton C., Rieman J. Testing a walkthrough methodology for theory-
based design of walk-up-and-use interfaces. ACM CHI’90, Seattle, WA, 235-242, 1990.
17. Wharton C., Rieman J., Lewis C., Polson P. The cognitive walkthrough method: A practi-
tioner’s guide. Usability Inspection Methods, John Wiley & Sons, New York, 79-104, 1994.
An integrated approach to customize the packaging of heritage artefacts

G. Fatuzzo1, G. Sequenzia1, S. M. Oliveri1, R. Barbagallo1* and M. Calì1

1 University of Catania, Catania, Italy
* Corresponding author – email: rbarbaga@dii.unict.it

Abstract The shipment of heritage artefacts for restoration or temporary/travelling exhibitions has been virtually lacking in customised packaging. Hitherto, packaging has been empirical and intuitive, which has unnecessarily put artefacts at risk. This research therefore arises from the need to identify a way of designing and creating packaging for artefacts which takes into account structural criticalities arising from deteriorating weather, special morphology, constituent materials and manufacturing techniques. The proposed methodology for semi-automatically designing packaging for heritage artefacts includes the integrated and interactive use of Reverse Engineering (RE), Finite Element Analysis (FEA) and Rapid Prototyping (RP). The methodology has been applied to create a customised packaging for a small 3rd-century BC bronze statue of Heracles (Museo Civico “F.L. Belgiorno” di Modica, Italy). This methodology has highlighted how the risk of damage to heritage artefacts can be reduced during shipping. Furthermore, this approach can identify each safety factor and the corresponding risk parameter to stipulate in the insurance policy.

Keywords: Packaging; cultural heritage; laser scanning; FEM; rapid prototyping

1 Introduction

Planning the transportation of heritage artefacts (HA) and designing appropriate packaging for them are issues often faced by museums. Traditionally [1], the approach was manual, neither systematic nor scientific, and it wasted time and money. Given the complexity and irregular shapes of the artefacts, universal packaging solutions are inappropriate. Ideal packaging should satisfy certain prerequisites: correct artefact position, zone interface choice, materials choice, ease of assembly/disassembly and, finally, recyclability. These requisites require a preliminary study regarding morphology, materials, conservation state and an analysis of the criticality of each artefact.

© Springer International Publishing AG 2017 167
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_18
The ever more widespread use of laser scanning in the field of cultural heritage [2-3] invites the use of more integrated methodologies, already widely used in industrial engineering, to cut time and costs as well as to improve the security of HA. A recent study [4] proposed Generative Modelling Technology (GMT) to design packaging appropriate to the size and shape of a specific artefact; it also provided for the use of rapid prototyping, which is increasingly used in archaeology [5]. In another very recent study [6], an approach based on 3D acquisition was proposed, together with an interactive algorithm, to produce customised packaging for fragile archaeological artefacts using a low-cost milling machine. To date, there are no studies in the literature which integrate laser scanning with finite element analysis for verifying packaging/artefact interaction. However, various studies [7-9] have aimed at structurally verifying large statues using finite elements.
An integrated methodology is proposed in this work, based on laser scanning, Finite Element Analysis (FEA) and Rapid Prototyping (RP), to design and create customised packaging for a small bronze sculpture. As opposed to studies in the literature, this approach includes a preliminary morphological and structural analysis of the statue, as well as a study of the interaction between statue and packaging, to verify analytically how secure they are during handling and transit. The flow chart below (Fig.1) summarises the approach of this research. Future developments might apply the methodology to medium-large sculptures.

Fig. 1. Flow diagram of the proposed approach.

1.1 Case study

For this packaging project, the bronze statue of the ‘Heracles of Cafeo’, kept at the 'F.L. Belgiorno' public museum in Modica, was chosen. Dating back to 300 BC, the bronze-cast statue is 220 mm high with a volume of 257 × 10³ mm³. It is a rare small Hellenistic bronze sculpture discovered in Sicily. It had recently been restored to inhibit the effects of carbonation and copper chloride. Likewise, there was evidence of a much earlier restoration that reconstructed its right arm, which is larger than the left. As shown in Figure 2, Heracles is wearing an imposing lion-skin cloak from his head down along his left side. His upright body leans on the extended left leg, with the right, in repose, just ahead. The left hand holds a bow and arrows, the bow-strings between his fingers. The right hand rests on a club [10], as per the most common iconography.

Fig. 2. The ‘Heracles of Cafeo’ statue

2 Methods

Digitalising the surfaces of archaeological objects requires non-invasive methodologies to ensure their integrity. Computerised tomography (CT) is one of the most versatile techniques for dealing with lathe-produced work because it even provides the dimensions of non-visible parts [11]. Since the Heracles statue was made by bronze casting, laser scanning was chosen, using a NextEngine Desktop 3D scanner, which is particularly versatile for acquiring the geometry of small objects without contact and, with suitable adjustments, would also be precise for large objects. The scanning took place in the museum where the statue is on display (Fig. 3a). The sensor was set up to deal with the complex morphology and surface finish of the statue, as well as with the unalterable environmental factors of the display space: the limited work area and the rather dim artificial lighting [12]. Fifty-five acquisitions were made in two sessions so as to sample the greatest possible portion of the surface, resulting in nearly 90% coverage. The files (2.54 GB in total) were saved with the .WRL extension.
In post-processing, Inus Technology's RapidForm software was used to align and reconstruct an overall representation of the point clouds obtained from 15 shells (Fig. 3b). Through data merging and data reduction, the scans were stored in one 3D model filtered to 185,741 polygons from the initial 1,406,250; the outliers, few of which were generated by the high redundancy of the overlapping zones, were eliminated (Fig. 3c). Finally, the small unsampled areas were reconstructed automatically.
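Outlier elimination of this kind is commonly done with a statistical filter: a point is discarded when its mean distance to its k nearest neighbours is anomalously large for the cloud. The sketch below is a generic NumPy illustration of that idea on a synthetic cloud (brute-force neighbour search), not the RapidForm routine actually used.

```python
import numpy as np

def remove_outliers(points, k=8, n_std=2.0):
    """Drop points whose mean k-NN distance exceeds mean + n_std * std."""
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    dists.sort(axis=1)                       # column 0 is the self-distance (0)
    mean_knn = dists[:, 1:k + 1].mean(axis=1)
    keep = mean_knn < mean_knn.mean() + n_std * mean_knn.std()
    return points[keep]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3))                        # dense synthetic "scan"
noisy = np.vstack([cloud, [[15.0, 15.0, 15.0],
                           [-20.0, 0.0, 0.0]]])          # two stray points
clean = remove_outliers(noisy)
```

On real scan data a spatial index (e.g. a k-d tree) replaces the O(n²) distance matrix, but the statistical criterion stays the same.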

Fig. 3. Digitalising the statue:(a) surface scans of the sculpture; (b) storing the shells; (c) final 3D
model.

The larger gaps, due to laser inaccessibility caused by the internal plastic shape of the cloak and by the residual parts of the arrows held in the left hand (Fig.4), were reconstructed by converting the RapidForm® .MDL format (14,435 kB) to .STL until all the ASCII data corresponded to the original geometry. These data were then converted to .3DM (284 kB) to fit the NURBS modelling (Rhinoceros® software) of the specific reconstructions (Fig.4).

Fig. 4. Reconstructing the unsampled surfaces.

The packaging procedure was preceded by a morphological analysis to define the optimum orientation during handling and transit. This first evaluation was followed by an FE analysis to highlight any highly critical zones to protect, as opposed to the stronger zones which the packaging can touch directly. Given Heracles' upright position and certain fragilities both longitudinal and horizontal, it was decided to package him lengthwise. A static structural study was carried out using the well-established FEA, which has already been used to identify zone criticalities in monuments.
Bibliographic searches turned up no scientifically certain data with which to
characterise the statue's material composition. The few available chemical analyses of the
An integrated approach to customize the packaging of heritage artefacts 171

alloy revealed that in the late Greek period bronze was made up of copper, tin
and lead, with lead content growing to 30-40% to facilitate fusion [13]. Given the
difficulty of establishing the statue's mechanical properties, this study refers to a
work on the bronze statue of 'Giraldillo' [8].
To unequivocally characterise the statue structurally, FEM analyses were carried
out in the MARC® environment, subjecting it to a hydrostatic pressure of 0.1 MPa
to qualitatively evaluate the zones of greatest and least criticality. This type of
simulation is well suited to cases where the load conditions cannot be established
a priori and, acting uniformly across the statue's surface, it provides an overall
view of the stress state and therefore of the critical zones. Given the statue's
surface complexity, a mesh was created in Hypermesh® with 63,518 tetrahedral
elements. From the FEM study of the model, the zones at greatest risk of breakage
were the protruding parts with the smallest cross-sections. Figure 5 shows the
statue's morphology and in particular the parts to exclude from contact with the
packaging: hand L, elbow R, foot R and the cloak. Analogously, the analysis
identified the strongest areas, those with lower von Mises stress values, from
which the sections that may contact the packaging could be extrapolated.
Moreover, this analysis highlighted that distributing the support across the critical
zones would not prevent their contact with the packaging.

Fig. 5. Results of FEA on sculpture.
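The criticality ranking above rests on the von Mises equivalent stress; a minimal NumPy sketch, assuming the element stress components have already been exported from the MARC® run (the threshold logic is an illustration, not the authors' post-processor):

```python
import numpy as np

def von_mises(s):
    """Equivalent stress from rows of [sx, sy, sz, txy, tyz, tzx] (MPa)."""
    sx, sy, sz, txy, tyz, tzx = s.T
    return np.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                   + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

def critical_zones(stresses, allowable, safety=10.0):
    """Boolean mask of elements whose stress eats into the safety margin."""
    return von_mises(stresses) > allowable / safety
```

Elements flagged `True` correspond to the protruding, small-cross-section zones to exclude from packaging contact.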

Once the statue/packaging interface zones had been defined, and having divided
the statue into 48 sections perpendicular to the main axis, a morphological study
was carried out on the sections most suited to packaging contact, as shown in
Fig. 6. In particular, four sections transverse to the statue's axis were identified at
different intervals. Section A-A, 25.5 mm from the tip of the sculpture's head,
coincides with Heracles' forehead and has a surface area of 868 mm². The profile
of the frontal section is 50.57 mm long and is more regular than the rear profile,
which includes the added complexity of the cloak and is 49.6 mm long. Section
B-B, 33 mm from section A-A, coincides with the shoulder to which the cloak is
attached and has a surface area of 2193 mm². The profile of the frontal section is
83.97 mm long. Even though the cloak's knot is sharp, it is similarly irregular to
the cloak's fold at the back, whose profile of 78.66 mm is slightly shorter than the
front one. Section C-C, 63 mm from section B-B, coincides with the statue's
pelvis, hand L and the cloak. Excluding the hand and drape from touching the
packaging because they are protruding morphologies (weaker parts), the pelvis
area has a surface area of 1155 mm². Morphologically, front and back are on the
whole quite similar: the front profile is 58.61 mm long, whereas the back is
50.68 mm. Section D-D, 90 mm from section C-C, coincides with the statue's
ankles and has a surface area of 246 mm². Both ankle contours have the same
shape; the front profile is 42.08 mm long and the back 35.37 mm. These section
analyses show that the front profiles are smoother than the back ones and, at
235.23 mm overall, longer than the back's 214.31 mm. Given that the packaging
should touch either the front or the back of the four sections, the more regular and
longer front profiles were identified as supports. With the best transit position
being horizontal, the prone position was thus chosen.

Fig. 6. Morphologically/geometrically identifying and analysing the statue-packaging interface.
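The per-section comparison of front and back profiles reduces to comparing polyline arc lengths; a minimal sketch (the section polylines would come from the CAD model, and the function names are illustrative):

```python
import numpy as np

def profile_length(pts):
    """Arc length of a 2-D section profile given as an ordered polyline."""
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def choose_support(front, back):
    """Pick the longer (more regular) profile as the packaging support,
    mirroring the comparison made for sections A-A to D-D."""
    return "front" if profile_length(front) >= profile_length(back) else "back"
```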

From this data the Heracles statue's packaging was designed as a
170 × 150 × 300 mm parallelepiped, its longest side parallel to the statue's axis.
Internally and perpendicular to the axis, eight sliding ribs were created at the four
levels identified in the morphological analysis. The ribs slide on guides so the
statue can be inserted or removed easily.
To simulate statue-packaging interaction, FEM analyses were applied under
acceleration in the contact zones, considering the statue's weight and
hypothesising that the supports are infinitely rigid (Fig. 7). To evaluate real transit
accelerations for works of art, reference was made to the literature [14] on the
monitoring and experimental measurement of the shock/vibration levels to which
the packaging of paintings is subjected during actual overland, air and sea
shipping. An acceleration of 9 g could be extrapolated, recorded while flying the
painting 'The Consecration of Saint Nicholas' by Paolo Veronese from the
Chrysler Museum (Norfolk, Virginia) to the National Gallery.
The simulation results highlighted that the hypothesised packaging protects the
more fragile critical zones (hand L, elbow R, foot R, and cloak); where the
packaging touches the statue, the von Mises stress values are such that the safety
factor is never less than 10. This packaging would therefore provide ample safety
margins in transit.

Fig. 7. FEA of Statue-packaging interaction.
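The 9 g criterion and the factor-of-10 margin amount to a simple check, sketched here with illustrative helper names (mass and allowable stress must be supplied from the model and material data, which the paper does not tabulate):

```python
G = 9.81  # standard gravity, m/s^2

def inertial_load_n(mass_kg, accel_g=9.0):
    """Equivalent static force (N) the supports must react during a
    transit shock of accel_g times gravity."""
    return mass_kg * accel_g * G

def passes_transit(peak_von_mises_mpa, allowable_mpa, min_safety=10.0):
    """True when the FEA peak stress leaves at least the required margin."""
    return allowable_mpa / peak_von_mises_mpa >= min_safety
```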

Having carried out the virtual tests described above, Rapid Prototyping (RP)
techniques were used to create prototypes of the sculpture and packaging. A
Stratasys 3D printer (Dimension 1200es model) was used to produce ABS
prototypes (Fig. 8) by Fused Deposition Modelling (FDM). The sculpture and
packaging prototypes enabled assembly/disassembly tests which could not have
been done on the original statue. Building the prototypes on a 1:1 scale used
206.7 cm³ of ABS for the statue and 76.2 cm³ for the packaging, and took about
18 h. The ABS packaging prototype can be considered functional and thus usable
in the future for transporting the original Heracles of Cafeo.

Fig. 8. ABS prototypes by additive manufacturing.

3 Conclusions

This work has presented an integrated methodology based on laser scanning, finite
element analysis and rapid prototyping to design and build a customised packag-
ing for a small bronze sculpture. This methodology may be applied to different
goods of various sizes, materials and shapes. As opposed to studies in the litera-
ture, this approach carries out a preliminary study of the item from the point of
view of shape and structure, and a study of the item-packaging interaction to vir-
tually verify the degree of safety during handling and transit. The FEM results
confirm that the chosen variables provide ample safety margins for transit, and
furthermore provide a risk parameter for insurance policies. The sculpture and
packaging prototypes produced by additive manufacturing provided aesthetic,
functional and assembly evaluations. Future developments will address
procedures based on automatic algorithms for choosing the orientation and the
sections which interface between artefact and packaging.

References

[1] Stolow, N. (1981). Procedures and conservation standards for museum collections in transit
and on exhibition. Unesco.
[2] Fatuzzo, G., Mussumeci, G., Oliveri, S. M., & Sequenzia, G. (2011). The “Guerriero di
Castiglione”: reconstructing missing elements with integrated non-destructive 3D modelling
techniques. Journal of Archaeological Science, 38 (12), 3533-3540.
[3] Fatuzzo, G., Mangiameli, M., Mussumeci, G., Zito, S., (2014). Laser scanner data processing
and 3D modeling using a free and open source software. In Proceedings of the International
Conference On Numerical Analysis And Applied Mathematics 2014 (ICNAAM-2014), Vol.
1648. AIP Publishing.
[4] Sá, A. M., Rodriguez-Echavarria, K., Griffin, M., Covill, D., Kaminski, J., & Arnold, D. B.
(2012, November). Parametric 3D-fitted Frames for Packaging Heritage Artefacts. In VAST
(pp. 105-112).

[5] Scopigno, R., Cignoni, P., Pietroni, N., Callieri, M., & Dellepiane, M. (2015, November).
Digital Fabrication Techniques for Cultural Heritage: A Survey. In Computer Graphics Fo-
rum.
[6] Sánchez-Belenguer, C., Vendrell-Vidal, E., Sánchez-López, M., Díaz-Marín, C., & Aura-
Castro, E. (2015). Automatic production of tailored packaging for fragile archaeological arti-
facts. Journal on Computing and Cultural Heritage (JOCCH), 8(3), 17.
[7] Borri, A., & Grazini, A. (2006). Diagnostic analysis of the lesions and stability of Michelan-
gelo's David. Journal of Cultural Heritage, 7(4), 273-285.
[8] Solís, M., Domínguez, J., & Pérez, L. (2012). Structural Analysis of La Giralda's 16th-
Century Sculpture/Weather Vane. International Journal of Architectural Heritage, 6(2), 147-
171.
[9] Berto, L., Favaretto, T., Saetta, A., Antonelli, F., & Lazzarini, L. (2012). Assessment of
seismic vulnerability of art objects: The “Galleria dei Prigioni” sculptures at the Accademia
Gallery in Florence. Journal of Cultural Heritage, 13(1), 7-21.
[10] Rizzone, V. G., Sammito, A. M., & Sirugo, S. (2009). Il museo civico di Modica "F. L.
Belgiorno": guida delle collezioni archeologiche (Vol. 2). Polimetrica sas.
[11] Bouzakis, K. D., Pantermalis, D., Efstathiou, K., Varitis, E., Paradisiadis, G., & Mavroudis,
I. (2011). An investigation of ceramic forming method using reverse engineering techniques:
the case of Oinochoai from Dion, Macedonia, Greece. Journal of Archaeological Method and
Theory, 18(2), 111-124.
[12] Gerbino, S., Del Giudice, D. M., Staiano, G., Lanzotti, A., & Martorelli, M. (2015). On the
influence of scanning factors on the laser scanner-based 3D inspection process. The Interna-
tional Journal of Advanced Manufacturing Technology, 1-13.
[13] Giardino, C. (1998). I metalli nel mondo antico: introduzione all'archeometallurgia. Laterza.
[14] Saunders, D. (1998). Monitoring shock and vibration during the transportation of paintings.
National Gallery Technical Bulletin, 19, 64-73.
Part II
Product Manufacturing and Additive
Manufacturing

This track focuses on the methods of Additive Manufacturing, a technology that
has enabled the building of parts with new shapes and geometrical features. As
this technology modifies established practices, new knowledge is required for
designing and manufacturing properly. Papers in this topic deal with the
optimization of lattice structures or the use of topological optimization as a
concept design tool.
In this track some interesting experimental methods in product development are
also introduced. Various user-centred design approaches are presented in detail.
The authors try to overcome the lack of detailed user requirements and the lack
of norms and guidelines for the ergonomic assessment of different kinds of tools
and interactive digital mock-ups.
Finally, the Advanced Manufacturing topic covers very specific manufacturing
techniques, like the use of a collaborative robot for fast, low-price, automated and
reproducible repair of high-performance fibre composite structures.

Antonio Bello - Univ. Oviedo

Emmanuel Duc – IFMA

Massimo Martorelli - Univ. Napoli ‘Federico II’


Section 2.1
Additive Manufacturing
Extraction of features for combined additive
manufacturing and machining processes in a
remanufacturing context

Van Thao LE1*, Henri PARIS1 and Guillaume MANDIL1


1 G-SCOP Laboratory, Grenoble-Alpes University, 46 avenue Félix Viallet, 38031 Grenoble
Cedex 1, France
* Corresponding author. Tel.: +33-476-575-055; E-mail address: Van-Thao.Le@g-scop.eu

Abstract The emergence of additive manufacturing (AM) techniques over the last
30 years has made it possible to build complex parts by adding material in a
layer-based fashion or by spraying material directly onto a part or substrate.
Taking the performance of these techniques into account in a 'new
remanufacturing strategy' can open new ways to transform an end-of-life (EoL)
part into a new part intended for another product. The strategy might allow a
considerable proportion of the material of existing parts to be reused directly for
producing new parts, without passing through the recycling stage. In this work,
the strategy enabling the transformation of existing parts into desired parts is first
presented. It uses an adequate sequence of additive and subtractive operations, as
well as inspection operations, to achieve the geometry and quality of the final
part. This sequence is designed from a set of AM features and machining
features, which are extracted from the available technical information and the
CAD models of the existing and final parts. The core of the paper focuses on the
feature extraction approach, whose development is based on knowledge of AM
and machining processes, as well as the specifications of the final part.

Keywords: Feature extraction; Additive manufacturing feature; Machining feature; Additive manufacturing; Remanufacturing.

1 Introduction

To address the issues of end-of-life (EoL) products, industrial manufacturers are
looking for efficient strategies to recover them. Generally, used products are
separated and recycled into raw material, which is then used to produce
workpieces. However, the energy consumption of recycling systems remains
significant. Moreover, the added value and a considerable amount of
© Springer International Publishing AG 2017 181


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_19
182 V.T. Le et al.

energy used to produce the original products are generally lost during the
recycling process [1]. Nowadays, remanufacturing is considered a pertinent
solution for EoL products [1, 2]. Indeed, remanufacturing is an industrial process
allowing the conversion of worn-out or EoL products into products in like-new
condition (including warranty) [3, 4]. This process can potentially reduce the cost
of product manufacturing while minimizing environmental impacts by reducing
resource consumption and waste [1, 5, 6].
In the last two decades, the emergence of additive manufacturing (AM)
techniques has allowed a complex part to be built directly from a CAD model
without special fixtures and cutting tools [7]. In comparison to conventional
manufacturing processes, such as machining, casting and forging, AM
technologies are interesting in that they have great potential for improving
material use efficiency, saving energy, and reducing scrap generation and
greenhouse gas emissions [8]. Today, these techniques are efficiently used in the
automobile and aerospace industries, as well as in biomedical engineering [7].
The literature shows that the use of AM technologies (e.g., direct metal
deposition (DMD), construction laser additive deposition (CLAD) and fused
deposition modelling (FDM)) has significant efficiency in the remanufacturing
field. Wilson et al. stated that laser direct deposition was efficient for
remanufacturing turbine blades [9]. Nan et al. presented a remanufacturing
system based on the integration of reverse engineering and laser cladding, able to
extend the lifetime of aging dies, aircraft, and vehicle components [10]. However,
these works only focused on remanufacturing components, namely returning EoL
parts/components to a like-new condition and extending their lifetime. Zhu et al.
proposed different feasible strategies to produce new plastic parts from existing
plastic parts; the strategy uses CNC machining, an additive manufacturing
process (i.e., the FDM process) and an inspection process interchangeably [11].
Nevertheless, the strategy was only efficient for producing prismatic plastic parts;
in some cases, it is not time-effective and reduces the tensile strength of the
obtained parts. Recently, Navrotsky et al. showed that the SLM technique has
significant potential for creating new features on existing components [12].
Terrazas et al. presented a method which allows the fabrication of multi-material
components using discrete runs of an EBM (electron beam melting) system [13];
the authors successfully built a copper entity on top of an existing titanium part.
Their results open the perspective of using EBM for remanufacturing. The
investigation of building a new feature on an existing part using the EBM
process, presented in our recent work [14], also confirms that EBM allows a new
part to be achieved from an existing part.
In this work, the performance of AM techniques is integrated into a 'new
remanufacturing strategy' which can give new life to EoL parts by transforming
them into new parts intended for another product. The strategy consists of
combining machining, AM and inspection processes. Namely, the desired part is
achieved from existing parts by a manufacturing sequence comprising subtractive
and additive operations, as well as inspection operations. The scope of this
Extraction of features for combined additive … 183

work focuses on a feature extraction approach which allows AM features and
machining features to be obtained from the technical information and the CAD
models of the existing and final parts. These features will be considered as input
data for designing the manufacturing sequence compatible with the proposed
strategy. This paper is organized as follows: Section 2 presents the new
remanufacturing strategy; the novel feature extraction approach is described in
Section 3; conclusions and future work are presented in Section 4.

2 New remanufacturing strategy

The objective of the new remanufacturing strategy is to give new life to an EoL
or existing part by transforming it into a new part intended for another product.
The strategy consists of combining a machining process (i.e., CNC machining),
metallic AM processes and an inspection process, and possibly heat treatment
[15]. This combination exploits the advantages and performance of AM and
machining processes (e.g., obtaining a complex part by AM techniques and
achieving high precision by CNC machining), while minimizing their
disadvantages (for example, the poor dimensional and surface quality generated
by AM processes and the limited tool accessibility in machining).

Fig. 1. General process consistent with the proposed strategy.

The generation of a process consistent with the proposed strategy contains three
major steps (Figure 1): the pre-processing of the existing EoL part, the
processing, and the post-processing. First, the existing part is cleaned and
evaluated; then its actual shape and dimensions are captured by a measurement
and scanning system to generate the CAD model. The processing step consists of
defining a manufacturing sequence containing subtractive, additive and
inspection operations, and possibly heat treatment. The post-processing step
consists of final inspection operations and additional operations such as labeling.
The major issue to solve is: how is such a manufacturing sequence defined? In
the next section, we present an approach to extract both machining features and

AM features from the available technological information and the CAD models
of the existing and final parts. These extracted features will be used as input data
to design the manufacturing sequence.

3 Novel approach of feature extraction

3.1 Definition of manufacturing features

In the following, manufacturing features refer to machining features and AM
features. The machining feature has been defined by the GAMA group [16]: "A
machining feature is defined by a geometrical form and a set of specifications for
which a machining process is known. This machining process is
quasi-independent from the processes of other machining features" [16]. A
machining process is an ordered sequence of machining operations. Following
this definition, the major attributes of a machining feature consist of the
geometrical characteristics; the intrinsic tolerance on form and dimensions; the
machining directions; and the estimated material to remove from the rough state
[17]. Recently, Zhang et al. also proposed a definition of AM features based on
shape features and consistent with the characteristics of AM processing [18].
Their definition plays an important role in optimizing the build direction in AM,
since the build direction affects the roughness of the obtained surfaces, the
mechanical properties, and the support volume. However, the choice of build
direction in the current study depends on the starting surface on the existing part
(the build direction is the normal vector of the starting surface). Hence, the major
attributes of AM features of particular interest here are the geometrical
characteristics of the expected shape, the build direction and the starting surface,
the estimated material volume to be added by AM processes, and the roughness
quality. In this paper, all entities to be added to the part are considered AM
features. Thus, an AM feature is defined as follows: an AM feature is a
geometrical form and associated technological attributes for which at least one
AM process exists. The AM feature is then built by adding material from a
starting surface on the existing part.
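The attribute sets listed above can be captured as plain data structures; a minimal sketch (field names and types are assumptions for illustration, not an implementation from the paper):

```python
from dataclasses import dataclass

@dataclass
class MachiningFeature:
    """Attributes following the GAMA definition cited in the text."""
    geometry: str                 # geometrical form (e.g., "pocket")
    tolerances: dict              # intrinsic form/dimension tolerances
    machining_directions: list    # feasible tool-approach directions
    material_to_remove: float     # estimated volume from rough state, mm^3

@dataclass
class AMFeature:
    """A shape plus the technological attributes an AM process needs."""
    geometry: str                 # expected shape
    build_direction: tuple        # normal vector of the starting surface
    starting_surface: str         # starting surface on the existing part
    material_to_add: float        # estimated deposited volume, mm^3
    roughness: float              # expected as-built Ra, micrometres
```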

3.2 Approach proposition

Many works published in the literature focus on automatic manufacturing feature
extraction methods, in particular for extracting machining features, as shown in
[19, 20]. These methods are based on the information of designed parts and the
knowledge of the machining process. The extracted features are then used for
manufacturing process planning [17, 21]. However, these methods are only
efficient in the machining field. In our work, an existing part is transformed into
a desired part using a sequence of additive, subtractive and inspection operations.
This process is totally different from machining, which generally removes
material from a cylindrical or rectangular workpiece to achieve the geometry and
quality of the final part. Consequently, the previous methods are not effective in
this case. Hence, we propose an extended feature extraction approach based on
the knowledge of AM and machining processes, as well as the specifications of
the final part. The available technological information and the CAD models of
the existing and final parts are considered as input data of the approach.

3.3 Knowledge of manufacturing processes

In this section, the knowledge of AM and machining processes is exploited to
identify and extract manufacturing features. This study focuses on two types of
'metal-based' AM techniques: Powder Bed Fusion (e.g., EBM and SLM) and
Directed Energy Deposition (e.g., CLAD and DMD) [22]. The machining process
is performed on a CNC machine; today's CNC machines have sufficient
performance to achieve the expected quality. The knowledge of AM processes is
outlined as follows:
Capabilities and limitations of AM processes: For EBM and SLM processes, the
part is built in the vertical direction by depositing metallic powder layer by layer
on a flat surface. Hence, a machining stage should be carried out on the existing
part to obtain a flat surface for the material deposition stages. In some cases,
existing parts should also be clamped on the build table by a fixture system to
achieve such a configuration. Moreover, these processes are limited by the build
envelope and to a single material per build. In comparison, CLAD and DMD
processes offer a larger build envelope and flexible build directions thanks to a
5-axis CNC machine configuration. These techniques can also deposit multiple
materials in a single build. However, their ability to build internal and
overhanging structures is limited.
Part accuracy and surface roughness of AM-built parts: Generally, the quality
and roughness of AM-built surfaces are not always adequate for the quality of
the final part [15, 23]. Hence, machining stages are subsequently performed to
ensure the expected quality (of course, only surfaces generated by AM whose
roughness is incompatible with the expected precision are further machined). The
surface roughness of AM-built surfaces, as well as the geometric errors due to
thermal distortion and residual stresses in AM processes, should be taken into
account when generating AM features.

Collision constraints: collision constraints are very important to take into
account in feature identification and extraction. For CLAD and DMD processes,
collision between the nozzle and the part during material deposition must be
avoided. In EBM and SLM processes, to avoid collision between the rake and the
existing part, it is essential to start the build from a flat build surface.
In machining, the accessibility of cutting tools is one of the major constraints to
be taken into account during the identification and extraction of manufacturing
features. If the build of an AM feature causes inaccessibility of the cutting tool in
a subsequent machining operation, it has to be built after that machining
operation. The constraints of part clamping in machining stages should also be
considered.

3.4 Development of feature extraction process

The proposed feature extraction process contains five major steps, as shown in
Figure 2, and is demonstrated using the case study presented in Figure 3. For this
purpose, all steps were performed manually using CAD software. The pocket (P),
the hole (H) and the surfaces (fS1 to fS7) of the final part require high surface
precision. The roughness of the surfaces (eS1, eS2 and eS3) of the existing part
satisfies the quality of the final surfaces (fS1, fS2 and fS3). The steps of the
process are outlined as follows:

Fig. 2. Major steps of feature extraction process.

Local coordinate system definition and positioning: The first step consists of
defining a local coordinate system for each CAD model (existing part and final
part). The two local coordinate systems, and thus the two parts, are then
positioned so that the common volume between the existing part and the final
part is as big as possible (Figure 4a). Moreover, for the functional surfaces of the
final part (e.g., surfaces fS4, fS5 and fS7), it is necessary to leave sufficient
over-thickness for the finishing operations. This over-thickness should be
integrated in the generation of the common volume.

Fig. 3. Test parts: existing part and final part.
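On voxelised models, the positioning step can be approximated by a brute-force search for the relative placement that maximises the common volume; a minimal sketch assuming integer voxel shifts only (note that `np.roll` wraps at the grid border, so parts must sit away from it; a real implementation would also search rotations):

```python
import numpy as np
from itertools import product

def best_shift(existing, final, max_shift=2):
    """Search integer voxel shifts of the final part for the one that
    maximises the common volume with the existing part (True = material)."""
    best, best_vol = (0, 0, 0), -1
    for s in product(range(-max_shift, max_shift + 1), repeat=3):
        shifted = np.roll(final, s, axis=(0, 1, 2))   # wraps at borders
        vol = int(np.count_nonzero(existing & shifted))
        if vol > best_vol:
            best, best_vol = s, vol
    return best, best_vol
```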

Extraction of the common volume, the removed volumes and the added volumes:
After step A01, the two parts are positioned respecting the constraint that the
common volume is as big as possible. From there, three volumes are extracted
using Boolean operations (Figure 4b). The common volume of the two parts is
obtained as (Existing part) AND (Final part). The added volumes are obtained by
subtracting the common volume from the final part. Finally, the removed
volumes are obtained by subtracting the common volume from the existing part.
In the following, the common volume is called the common part.

Fig. 4. Illustrating the step A01 (a), the step A02 (b), and the step A03 (c).
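On the same voxel representation, the three Boolean extractions of step A02 can be sketched directly (a CAD kernel would operate on B-rep solids instead; this is a simplified stand-in):

```python
import numpy as np

def extract_volumes(existing, final):
    """Boolean decomposition of voxelised parts (True = material):
    common  = existing AND final    (the common part)
    added   = final minus common    (volumes to build by AM)
    removed = existing minus common (volumes to remove by machining)"""
    common = existing & final
    return common, final & ~common, existing & ~common
```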

Modification of the common part geometry by taking into account the
manufacturing process constraints: the common part geometry is generally not
adequate for AM processes. Indeed, in EBM and SLM processes the build
surface must be flat to avoid collision between the rake and the part, and in
CLAD and DMD processes it is also very important to avoid collisions between
the nozzle and the part. Hence, it is necessary to modify the common part
geometry. For example, in Figure 4c, taking into account the EBM or SLM
process constraints, the volume located on the plane (S1) must be removed, and
the hole (H), which does not exist on the existing part, will be machined after the
AM stage. Moreover, the surfaces of the common part requiring machining (e.g.,
the contour surface S2, and the surfaces S3 and S4 of Figure 4c) should also be
left with sufficient over-thickness for the finishing operations. The over-thickness
is estimated based on the expected quality. The new geometry of the common
part after modification, denoted CF, is further used to extract the AMFs and
MFs. The CF is considered as an intermediate part in the processing.
Extraction of machining features from the existing part: from the CF and the
existing part, the volumes to be removed from the existing part to achieve the CF
are extracted using Boolean operations. These extracted volumes and the
associated attributes of the geometrical form of the CF formulate machining
features, denoted MFe. Figure 5a illustrates this step. In this case, two machining
features are extracted from the existing part, MFe_1 and MFe_2. The machining
of these features achieves the top flat surface feature, on which the AM features
will be built, and the 'irregular step' feature.

Fig. 5. Illustrating the step A04 (a), and the step A05 (b).

Extraction of the AMFs and the MFs after AM stages: As in the previous step
(A04), the volumes to be added to the common part to achieve the geometry of
the final part are extracted from the final part and the CF (Figure 5b). The AM
features are then defined from these extracted volumes, the specifications of the
final part, and the associated technological attributes of AM processes. An AM
feature can be either a final feature of the final part (e.g., AMF_1 and AMF_3),
or the rough state of a machining feature after AM processing (e.g., AMF_2).
The relations between AM features can be classified into three categories:
independent, dependent, and grouped. For example, AMF_1 is independent of
AMF_2 and AMF_3; AMF_3 is dependent on AMF_2; and AMF_2 and AMF_3
are considered grouped.
Obviously, independent AM features are built in different build directions, and
dependent AM features are generally built in the same build direction. Grouped
AM features can be built either in the same direction (for EBM and SLM
processes) or in different build directions (for CLAD and DMD processes).
To identify AM features, it is also essential to take into account the machining
constraints, such as collision constraints. In certain cases, dependent AM features
should be decomposed into different independent features and built in different
AM stages, to avoid collision between the cutting tools and the part in the
subsequent machining stage. For example, if AMF_2 and AMF_3 were built in
the same AM stage, the drilling of the hole (H) or the finishing of the pocket (P)
on AMF_2 might cause a collision between the cutting tools and AMF_3. Thus,
AMF_3 must be built after the machining of the hole (H) and the pocket (P) on
the AMF_2 feature.
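Such accessibility constraints can be resolved with a topological sort over the operations; a sketch using Python's standard `graphlib`, with hypothetical operation names for the case study (the paper's sequencing method itself is left to future work):

```python
from graphlib import TopologicalSorter

def sequence_operations(precedence):
    """Order operations so that every predecessor constraint is respected."""
    return list(TopologicalSorter(precedence).static_order())

# Hypothetical operations; edges encode the constraint discussed above:
# AMF_3 may only be built after H and P are machined on AMF_2.
case_study = {
    "build AMF_2 (EBM)": [],
    "drill hole H": ["build AMF_2 (EBM)"],
    "finish pocket P": ["build AMF_2 (EBM)"],
    "build AMF_3 (EBM)": ["drill hole H", "finish pocket P"],
}
order = sequence_operations(case_study)
```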
Moreover, to ensure the quality of the final part, functional surfaces have to be
machined after AM processing. Thus, a sufficient over-thickness for the finishing
stages is left on these surfaces and taken into account in generating the CAD model
of the AM features (e.g., AMF_2). It is estimated as a function of the roughness of
the surfaces generated by the AM process, the required quality of the final surfaces,
and the surface quality achievable by machining. The over-thickness becomes the
rough-state attribute of the machining features after the AM process.
The machining features after the AM process, denoted as MFa, are determined
from the functional features of the final part (for example, the functional surfaces fS4
to fS7, the holes (H), and the pocket (P) in figure 3). The rough-state attributes of the
MFa features are defined by the over-thickness integrated in the AM features, or by a
plain material state (particularly in the case of drilled holes). In figure 5b, the machin-
ing features MFa_{1, 2, 3, 4, 6} correspond to functional surfaces of the final part,
and MFa_5 corresponds to the hole feature (H).

5 Conclusion and future work

The research focused on the feature extraction process in the context of remanu-
facturing. The proposed approach allows an effective extraction of both MFs and
AMFs from the CAD models of the existing part and the final part, and from
knowledge about the constraints of the AM and machining processes. This has been
illustrated using a case study.
Future work will consist in designing a manufacturing operation sequence compat-
ible with the new remanufacturing strategy, using the extracted features.

Acknowledgments The authors would like to thank Rhône-Alpes Region of France for its sup-
port in this project.

References

1. King A. M., Burgess S. C., Ijomah W., et al., Reducing Waste: Repair, Recondition,
Remanufacture or Recycle?, Sustainable Development, 2006, 14, 257–267.
2. Bashkite V., Karaulova T., and Starodubtseva O., Framework for innovation-oriented
product end-of-life strategies development, Procedia Engineering, 2014, 69, 526–535.
3. Gehin A., Zwolinski P., and Brissaud D., A tool to implement sustainable end-of-life
strategies in the product development phase, Journal of Cleaner Production, 2008, 16, 566–
576.
4. Aksoy H. K. and Gupta S. M., Buffer allocation plan for a remanufacturing cell, Computers
and Industrial Engineering, 2005, 48(3), 657–677.
5. Goodall P., Rosamond E., and Harding J., A review of the state of the art in tools and
techniques used to evaluate remanufacturing feasibility, Journal of Cleaner Production, Oct.
2014, 81, 1–15.
6. Östlin J., Sundin E., and Björkman M., Product life-cycle implications for remanufacturing
strategies, Journal of Cleaner Production, Jul. 2009, 17(11), 999–1009.
7. Guo N. and Leu M., Additive manufacturing: technology, applications and research needs,
Frontiers of Mechanical Engineering, 2013, 8(3), 215–243.
8. Huang R., Riddle M., Graziano D., et al., Energy and emissions saving potential of additive
manufacturing: the case of lightweight aircraft components, Journal of Cleaner Production,
May 2015.
9. Wilson J. M., Piya C., Shin Y. C., et al., Remanufacturing of turbine blades by laser direct
deposition with its energy and environmental impact analysis, Journal of Cleaner Production,
2014, 80, 170–178.
10. Nan L., Liu W., and Zhang K., Laser remanufacturing based on the integration of reverse
engineering and laser cladding, International Journal of Computer Applications in
Technology, 2010, 40(4), 254–262.
11. Zhu Z., Dhokia V., and Newman S. T., A novel decision-making logic for hybrid
manufacture of prismatic components based on existing parts, Journal of Intelligent
Manufacturing, Sep. 2014, 1–18.
12. Navrotsky V., Graichen A., and Brodin H., Industrialisation of 3D printing (additive
manufacturing) for gas turbine components repair and manufacturing, VGB PowerTech 12,
2015, 48–52.
13. Terrazas C. A., Gaytan S. M., Rodriguez E., et al., Multi-material metallic structure
fabrication using electron beam melting, The International Journal of Advanced
Manufacturing Technology, Mar. 2014, 71, 33–45.
14. Mandil G., Le V. T., Paris H. and Suard M., Building new entities from existing titanium
part by electron beam melting: microstructures and mechanical properties, The International
Journal of Advanced Manufacturing Technology, 2015.
15. Le V. T., Paris H., and Mandil G., Using additive and subtractive manufacturing
technologies in a new remanufacturing strategy to produce new parts from End-of-Life parts,
22ème Congrès Français de Mécanique, 24 au 28 Août 2015, Lyon, France.
16. Groupe GAMA, La gamme automatique en usinage. Editions Hermès, Paris, 1990.
17. Paris H. and Brissaud D., Modelling for process planning: The links between process
planning entities, Robotics and Computer-Integrated Manufacturing, 2000, 16(4), 259–266.
18. Zhang Y., Bernard A., Gupta R. K., et al., Feature Based Building Orientation Optimization
for Additive Manufacturing, Rapid Prototyping Journal, 2016, 22(2).
19. Harik R. F., Derigent W. J. E., and Ris G., Computer aided process planning in aircraft
manufacturing, Computer-Aided Design and Applications, 2008, 5(6), 953–962.
20. Harik R., Capponi V., and Derigent W., Enhanced B-Rep Graph-based Feature Sequences
Recognition using Manufacturing Constraints, in The Future of Product Development:
Proceedings of the 17th CIRP Design Conference, F.-L. Krause, Ed. Berlin, Heidelberg:
Springer Berlin Heidelberg, 2007, 617–628.
21. Liu Z. and Wang L., Sequencing of interacting prismatic machining features for process
planning, Computers in Industry, 2007, 58(4), 295–303.
22. Vayre B., Vignat F., and Villeneuve F., Metallic additive manufacturing: state-of-the-art
review and prospects, Mechanics & Industry, 2012, 13, 89–96.
23. Vayre B., Vignat F., and Villeneuve F., Designing for additive manufacturing, Procedia
CIRP, 2012, 3(1), 632–637.
Comparative Study for the Metrological
Characterization of Additive Manufacturing
Artefacts

Charyar MEHDI-SOUZANIa*, Antonio PIRATELLI-FILHOb, Nabil ANWERa

a Université Paris 13, Sorbonne Paris Cité, LURPA, ENS Cachan, Univ. Paris-Sud, Université
Paris-Saclay, 94235 Cachan, France.
b Universidade de Brasilia, UnB, Faculdade de Tecnologia, Depto. Engenharia Mecânica,
70910-900, Brasilia, DF, Brazil

* Corresponding author. Tel.: +33 1 47 40 22 12; E-mail address: souzani@lurpa.ens-cachan.fr

Abstract Additive Manufacturing (AM), also known as 3D printing, was intro-
duced in the mid-1990s, but it has come into broader use over the last ten years.
The first uses of the AM process were for rapid prototyping or for 3D sample il-
lustration, owing to the weak mechanical characteristics of the available materials.
However, even if this technology can now meet mechanical requirements, it will
be widely used only if the geometrical and dimensional characteristics of the gen-
erated parts also reach the required level. In this context, it is necessary to investi-
gate and identify any common dimensional and/or geometrical specifications of
the parts generated by the AM process. Highlighting the singularities of AM sys-
tems should be based on the fabrication and measurement of standardized arte-
facts. Even if such test parts allow assessing some important characteristics of AM
systems, characterizing the capacity to generate freeform surfaces and features re-
mains a challenge. In the literature, none of the existing test parts propose this
kind of feature, even though the generation of free-form surfaces is a significant
benefit of AM systems. In this context, the aim of this paper is to provide a metro-
logical comparative study, based on an artefact, of the capacity of an AM system
to generate freeform parts.

Keywords: Additive manufacturing; measurement artefact; free-form characterization; dimensional metrology

1 Introduction

Additive Manufacturing (AM) is the process used to build a physical part layer
by layer directly from a 3D model [1]. The first uses of AM process were for rapid

© Springer International Publishing AG 2017 191


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_20
prototyping and 3D sample illustration [2], owing to the weak mechanical charac-
teristics of the available materials. Recent developments, and more particularly the
use of metal and ceramic powders, have considerably broadened the field of use of
AM. It is now reasonable to consider the use of parts fabricated by this process in
industries such as aerospace or automotive. This technology will be widely used
only if the geometrical and dimensional characteristics of the generated parts are
also at the required level [3]. In this context, we believe that an investigation is
necessary to identify the common dimensional and geometrical specifications of
the parts generated by the AM process. Knowing the capacity of the AM process
to generate parts with dimensional and/or geometrical requirements would allow a
correction factor to be taken into account at the design step, thereby improving the
conformance of printed parts to their specifications. This study can be based on
the design of an artefact. The artefact should be representative of the complex
forms and geometries that can be built by an AM system, but it must also lend it-
self to metrological characterization.
In the literature, only few studies focus on these topics. Moylan et al. from
NIST start their work by noting that, even if different test parts have been intro-
duced in the past, there is currently no standard part for AM systems [4]. They
summarized the existing parts by studying the important features and characteris-
tics found on those parts, and proposed a new artefact intended for standardiza-
tion. The part is composed of various canonical geometries: staircases, holes,
pins, fine pins and holes, negative and positive cubes, vertical surfaces, ramps and
cylinders. Yang et al. proposed an assessment of the design efficiency of the test
artefact introduced by the NIST team and, based on their analysis, provided a re-
designed artefact [5]. They analysed in more detail seven characteristics:
straightness, parallelism, perpendicularity, roundness, concentricity, true position
for the z plane and true position for the pin. They concluded that some geometri-
cal characteristics are redundant and that some dimensions have relevant effects
on the built parts. Based on this conclusion, they introduced a new part using the
same kinds of geometrical forms but with different orientations and feature di-
mensions, in order to analyse the capacity of the AM system to generate the same
features in different sizes and directions. Islam et al. [6] provide an experimental
investigation to quantify the dimensional error of a powder-binder 3D printer.
They use a test part defined by the superposition of concentric cylinders with radii
decreasing from bottom to top, and a central cylindrical hole.
In this context, we provide an experimental comparative study on the capacity
of an AM system to generate freeform parts. A complex-geometry artefact was de-
signed and produced and, in order to provide an independent study, three different
measuring instruments were used to characterize the dimensions and geometry of
the test part. Conclusions of this study and future works are also highlighted.

2 Artefact design and experimental context

In the literature, many artefacts have been used to study AM systems, but they are
designed with regular surfaces only [4,5,6]. In this context we introduce a new ar-
tefact designed with both freeform and regular surfaces. The NPL (National Phys-
ical Laboratory, UK) provides a freeform artefact called the "FreeForm Reference
Standard". However, it has been designed to aid the assessment of contactless co-
ordinate measurement systems such as laser scanners [7,8], not to assess the di-
mensional and geometrical characteristics of parts manufactured by AM systems.
The NPL artefact is a single part built by blending several geometrical forms. The
analysis of this part led us to conclude that it is not well suited to characterizing an
AM system; however, some of its forms can be reused. Based on this conclusion,
a new artefact is designed with the following regular geometries: plane, cylinder,
sphere, extruded ellipse, cone and torus; and with an axisymmetric aspherical
shape (lens) and a Bézier surface as freeform geometries. A Computer-Aided De-
sign (CAD) model was generated using CATIA V5 software, with base dimen-
sions of 240 x 240 mm. Figure 1 presents the designed artefact with its respective
geometries.

Fig. 1. Free-form artefact designed to evaluate the AM system.

The part has been manufactured with a ZPrinter 450 from Zcorporation, a powder-
binder process machine [9] with part tolerances of ±1% or ±130 μm according to
the manufacturer [10]. The CAD model was loaded into the machine and the
artefact was produced in zp150 (gypsum) material.
The artefact was measured with three different instruments, a Cantilever type Co-
ordinate Measuring Machine (CMM), an Articulated Arm CMM (AACMM) and a
laser scanner. The Cantilever CMM is a Mitutoyo and has a work volume of 300 x
400 x 500 mm, with a standard combined uncertainty of 0.003 mm. The AACMM
is a Romer arm and has a spherical work volume of 2.5 m in diameter, with stan-
dard combined uncertainty of 0.03 mm. The laser scanner is a NextEngine system
and has an accuracy of 0.26 mm. Figure 2 presents the measuring instruments. As
part of the study process, the measurement system can introduce variations and
influence the study's conclusions. This is why we used three different systems, to
account for this potential source of variation, which is not related to the AM system.

Fig. 2. Measuring instruments: a) laser scanner; b) articulated arm CMM; c) cantilever type
CMM.
Each characteristic has been measured five times in order to compute the average,
standard deviation, and other statistical characteristics. Dimensional characteris-
tics have been measured for the regular surfaces: diameters and heights (distances
between two nominal surfaces), as well as flatness, parallelism and perpendicular-
ity between situation features. For the freeform surfaces, the deviation of the ge-
ometries with respect to the theoretical CAD model has been measured. A graph-
ical analysis with the means and the error bars, determined with the Student's t
distribution and 95% probability, completes the study.

3 Results and discussion

3.1 Dimension characteristics of regular surfaces

For the measurement of regular surfaces, the CMM and the AACMM with two
different contact probes have been used: a point-contact stylus probe with a 0 mm
ball diameter (AACMM0) and a stylus with a 6 mm ball diameter (AACMM6).
Table 1 presents the data analysis resulting from the measurements: deviation (d),
standard deviation (s) and the standard deviation of the mean (sm95).

sm95 = (t·s)/√n (1)

with t = 2.776, the Student's t parameter for 95% probability, and n = 5, the sample size.
sm95 is used to present the standard deviation of the mean associating 95% proba-
bility to the result.
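Equation (1) can be checked with a short computation. The sketch below uses only the standard library; the five readings are illustrative values, not measurements from this study.

```python
import math
import statistics

def sm95(measurements, t=2.776):
    # Half-width of the 95% confidence interval of the mean:
    # sm95 = (t * s) / sqrt(n), with t the two-sided Student's t value
    # for n - 1 = 4 degrees of freedom and n = 5 repeated measurements.
    n = len(measurements)
    s = statistics.stdev(measurements)  # sample standard deviation
    return t * s / math.sqrt(n)

# Five repeated measurements of one characteristic (illustrative, in mm)
readings = [20.11, 20.13, 20.10, 20.12, 20.14]
print(round(statistics.mean(readings), 3))  # 20.12
print(round(sm95(readings), 4))             # 0.0196
```

The same computation, applied to each row of five repeated measurements, produces the d, s and sm95 columns of Tables 1 and 2.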
In table 1, "D" means diameter; "H" means the height of the given feature and "L"
means the distance between two given plane surfaces.

Table 1. Data analysis of regular-surface measurements, in mm.

                 AACMM d=6mm            AACMM d=0              CMM

                 d      s      sm95     d      s      sm95     d      s      sm95
1 D cylinder -0.186 0.048 0.060 -0.490 0.029 0.036 -0.198 0.030 0.038
2 H cylinder 0.112 0.015 0.018 0.124 0.053 0.065 0.114 0.006 0.007
3 D sphere -0.382 0.398 0.494 -0.656 0.081 0.101 -0.044 0.097 0.121
4 L plane 5-9 -0.284 0.015 0.019 -0.422 0.019 0.024 0.188 0.356 0.441
5 L plane 3-7 -0.028 0.013 0.016 -0.198 0.020 0.025 0.254 0.465 0.578
6 L plane 6-10 0.058 0.011 0.014 -0.246 0.019 0.023 0.828 0.285 0.354
7 L plane 4-8 0.352 0.086 0.107 -0.118 0.151 0.188 0.407 0.104 0.129
8 L plane 1-2 0.040 0.012 0.015 0.010 0.010 0.012 0.048 0.012 0.015
9 H Bézier 0.098 0.048 0.059 0.244 0.048 0.060 0.203 0.019 0.024
10 H ellipse 0.146 0.050 0.062 0.136 0.059 0.073 0.100 0.096 0.119

Figure 3 presents a graphical analysis of the deviation values summarized in table 1.
For instance, the fourth column on the x-axis of figure 3 represents the fourth line of
table 1, namely the deviation "d" computed on the data for each measurement sys-
tem. This graphical analysis shows that for half of the features the deviation values
are similar regardless of the measurement system (1, 2, 8, 9 and 10). For the
second half the values depend on the measurement system used, but we can notice
a consistent pattern across the systems: the CMM gives a positive deviation, the
AACMM0 a negative deviation, and the AACMM6 an approximately constant
gap. The values summarized in table 1 do not allow concluding on a general trend
of oversizing or undersizing. A complementary study should be carried out to explain
this variation.

Fig. 3. Graphical analysis of the deviations presented in Table 1.

3.2 Free-form surfaces and features

For the measurement of freeform surfaces, the CMM, the scanner and the
AACMM0 have been used (the AACMM6 does not allow free-form measure-
ment). All the features in this paragraph have been measured as clouds of points,
without any geometry-association process or criterion. In a second step, the sets of
points have been processed in Rhinoceros software [11], as illustrated in figure 4.

Fig. 4. Analysis of deviations between data points and the CAD model in Rhinoceros.

Table 2 presents the deviations of the points from the CAD model in the same
terms as table 1: d, s and sm95.

Table 2. Data analysis of freeform-geometry measurements, in mm.

             AACMM d=0              CMM                    Scanner

             d      s      sm95     d      s      sm95     d      s      sm95
1 Bézier 0.331 0.224 0.278 0.541 0.427 0.530 0.714 0.619 0.768
2 Torus 0.617 0.434 0.539 0.731 0.500 0.621 0.155 0.102 0.127
3 Lens 0.258 0.135 0.168 0.416 0.363 0.451 0.158 0.115 0.143
4 Ellipse 1.107 0.492 0.611 0.563 0.380 0.472 0.885 0.638 0.792
5 Cone 0.649 0.459 0.570 0.487 0.360 0.447 0.944 0.519 0.644

Figure 5 presents a graphical analysis of the deviation for each line of table 2. As
shown in figure 5, for freeform features the values are more scattered, but the
analysis shows that all the deviations are positive. In other words, for those free-
form features a volumetric expansion has been identified. This expansion is con-
sistent with the literature, especially taking into account the material used [12].
This conclusion may be related to some previous work [6], although in that case it
concerned dimensional errors on regular forms. Even if this seems to contradict
the previous section, the computation methods used in the two sections are differ-
ent, so it is not possible to conclude.

Fig. 5. Graphical analysis of the deviations summarized in Table 2.

Using the same computation method to study the influence of size variation on
the deviation of a given feature could bring an answer. However, it seems reason-
able to conclude that in this case a correction parameter could be applied to the
CAD model to generate a manufactured part in accordance with the nominal di-
mensional requirements.
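The correction-parameter idea can be sketched as follows. The deviation value is taken from Table 2 for illustration; the nominal dimension of 40 mm and the additive, size-independent form of the compensation are our assumptions, not a method proposed by the study.

```python
def corrected_nominal(nominal_mm: float, mean_deviation_mm: float) -> float:
    # If parts systematically come out larger than nominal by
    # mean_deviation_mm, shrink the CAD dimension by the same amount
    # so that the printed part lands on the nominal value.
    return nominal_mm - mean_deviation_mm

# Ellipse feature, scanner measurement: mean deviation d = +0.885 mm
# (Table 2); 40 mm is a hypothetical nominal size.
print(corrected_nominal(40.0, 0.885))  # 39.115
```

A multiplicative scale factor would be the natural alternative if the expansion turned out to grow with feature size, which is precisely what the proposed size-variation study would establish.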

3.3 Geometric deviations

Figure 6 shows the parallelism deviation, in mm, between planes 1 and 2, planes 4
and 8, and planes 6 and 10 (please refer to figure 1 for the surface numbering).

Fig. 6. Parallelism deviation.

Figure 7 shows the perpendicularity deviation, in mm, between planes 3 and 5,
planes 3 and 9, planes 5 and 7, and planes 5 and 9 (please refer to figure 1 for the
surface numbering).

Fig. 7. Perpendicularity deviation.

Figure 8 shows the flatness of the plane surfaces of the artefact. Note that "Bézier",
"Ellipse" and "Cylinder" refer to the planes at the top of the corresponding features:
the top plane of the Bézier feature, the top plane of the ellipse feature, and the top
plane of the cylinder.

Fig. 8. Features Flatness.

According to figure 6, the parallelism deviations in all major directions are simi-
lar, even if the maximum deviation (between planes 1 and 2: 0.21 mm) is twice the
minimum deviation (between planes 5 and 9: 0.11 mm). At this stage no explana-
tion can be given.
For perpendicularity, we can also observe (figure 7) a similar deviation in all ma-
jor directions except between planes 5 and 9, where the deviation is almost 3 times
higher than in the other cases.
For flatness, according to figure 8, we can conclude that in most cases, when the
planes have the same orientation, the flatness is similar: planes 1 and 2; planes 3
and 7; planes 4 and 8. When the planes have different orientations, the flatness
also differs, for instance between planes 6 and 9. We can assume that the orienta-
tion of the generated surface in the AM manufacturing space has an influence on
the flatness of the generated parts.
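A common way to evaluate such a flatness value is to fit a least-squares plane to the probed points and take the peak-to-valley range of the residuals. The sketch below is a generic illustration, not the evaluation method used by the CMM software in this study; it solves the 3x3 normal equations by Cramer's rule and measures residuals along z, which is adequate only for near-horizontal faces.

```python
def flatness(points):
    # points: list of (x, y, z) probed on one face.
    # Fit z = a*x + b*y + c by least squares, then return the
    # peak-to-valley range of the z residuals.
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)

    def det3(m):  # determinant of a 3x3 matrix
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    d = det3(A)
    coeffs = []
    for j in range(3):  # Cramer's rule: replace column j with rhs
        m = [row[:] for row in A]
        for i in range(3):
            m[i][j] = rhs[i]
        coeffs.append(det3(m) / d)
    a, b, c = coeffs
    residuals = [z - (a * x + b * y + c) for x, y, z in points]
    return max(residuals) - min(residuals)

# A perfectly planar (tilted) face has zero flatness; a 0.1 mm bump raises it.
grid = [(x, y, 5.0 + 0.01 * x - 0.02 * y) for x in range(5) for y in range(5)]
print(round(flatness(grid), 6))  # 0.0
bumped = grid + [(2.5, 2.5, 5.0 + 0.01 * 2.5 - 0.02 * 2.5 + 0.1)]
print(flatness(bumped) > 0.05)   # True
```

Standard-conforming flatness uses the minimum-zone criterion rather than least squares, but the least-squares value is a widely used and easily computed approximation.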

4 Conclusions

There are only a few works on the dimensional accuracy assessment of AM sys-
tems for manufacturing freeform shapes, while the generation of those surfaces is
one of the major advantages of the AM process. To address this weakness, we de-
veloped a new geometric artefact designed to characterize the dimensional and
geometrical capabilities of an AM system to generate freeform parts. The artefact
has been built using a powder-binder AM system and a comparative measurement
study has been performed. Based on the measurements, we can conclude that the
volumetric expansion of free-form features has a considerable impact on the geo-
metrical characteristics. As a perspective of this work, it will be interesting to
study the possibility of introducing a correction factor here. A second conclusion
can be drawn regarding the variation of the orientation and its influence on the
flatness, while parallelism and perpendicularity seem independent of orientation.
Future research efforts will concentrate on establishing more knowledge about
correction parameters when considering features of size and the relative position-
ing of the surfaces with respect to the build direction. Another issue is the meas-
urement of internal features using a CT scanner.

5 References

1. M.N. Islam, B. Boswell, A. Pramanik, "An Investigation of Dimensional Accuracy of Parts
Produced by Three-Dimensional Printing", Proceedings of the World Congress on Engineer-
ing 2013, Vol. I, WCE 2013, July 3-5, 2013, London, U.K.
2. P.F. Jacobs, "Rapid Prototyping and Manufacturing: Fundamentals of Stereolithography",
Society of Manufacturing Engineers, Dearborn, MI (1992).
3. NIST "Measurement Science Roadmap for Metal-Based Additive Manufacturing", Additive
Manufacturing Final Report, 2013.
4. S. Moylan, J. Slotwinski, A. Cooke, K. Jurrens, M. A. Donmez, "Proposal for a Standardized
Test Artefact for Additive Manufacturing Machines and Processes," Proceeding of the Solid
Free Form Fabrication Symposium, August 6-8 2012, Austin, Texas, USA.
5. Li Yang, Md Ashabul Anam, "An investigation of standard test part design for additive manu-
facturing", Proceedings of the Solid Freeform Fabrication Symposium, August 2014, Austin,
Texas, USA.
6. M.N Islam, S. Sacks, "An experimental investigation into the dimensional error of powder-
binder three-dimensional printing", The International Journal of Advanced Manufacturing
Technology, February 2016, Volume 82, Issue 5, pp 1371-1380
7. M.B. McCarthy, S.B. Brown, A. Evenden, A.D. Robinson, "NPL freeform artefact for verifi-
cation of non-contact measuring systems", Proc. SPIE 7864, Three-Dimensional Imaging, In-
teraction, and Measurement, 78640K (27 January 2011); doi: 10.1117/12.876705
8. http://www.npl.co.uk/news/new-freeform-standards-to-support-scanning-cmms
9. Gibson I, Rosen D, Stucker B (2015) Additive manufacturing technologies, chapter 8: Binder
jetting, 2nd ed. ISBN 978-1-4939-2112-6, New York: Springer Science and Business Media
10. 3D systems, Z printer 450, Technical specifications: http://www.zcorp.com/fr/Products/3D-
Printers/ZPrinter-450/spage.aspx
11. https://www.rhino3d.com/fr/
12. Michalakis KX, Stratos A, Hirayama H, Pissiotis AL, Touloumi F (2009) Delayed setting
and hygroscopic linear expansion of three gypsum products used for cast articulation. J Pros-
thet Dent 102(5): 313–318
Flatness, circularity and cylindricity errors in
3D printed models associated to size and
position on the working plane

Massimo MARTORELLI1*, Salvatore GERBINO2, Antonio LANZOTTI1,
Stanislao PATALANO1 and Ferdinando VITOLO1

1 Fraunhofer JL IDEAS - Dept. of Industrial Engineering, University of Naples Federico II,
P.le Tecchio, 80 - 80125 Naples - Italy
2 DiBT Dept. - Engineering Division, Univ. of Molise, Via De Sanctis snc - 86100 Campobasso
(CB) - Italy

* Corresponding author. Tel.: +390817682470; fax: +390817682470. E-mail address:
massimo.martorelli@unina.it

Abstract The purpose of this paper is to assess the main effects on the geometric
errors, in terms of flatness, circularity and cylindricity, of the size of the printed
benchmarks and of their position on the working plane of the 3D printer. Three
benchmark models of different sizes, each made of a parallelepiped and a cylin-
der, placed in five different positions on the working plane, are considered. The
sizes of the models are chosen from the Renard series R40. The benchmark mod-
els are fabricated in ABS (Acrylonitrile Butadiene Styrene) using a Zortrax M200
3D printer. A sample of five parts for each geometric category, as defined from
the R40 geometric series of numbers, is printed close to each corner of the plate
and in the plate center position. A Mitutoyo Absolute Digimatic Height Gauge
(0-450 mm), with an accuracy of ±0.03 mm, is used to perform all measurements:
flatness on the box faces, and circularity/cylindricity on the cylinders. Results
show that the best performances, in terms of form accuracy, are reached in the
center of the printable area, while they decrease as the sample size increases.
Since quality is a critical factor for a successful industrial application of AM pro-
cesses, the results discussed in this paper can provide the AM community with
additional scientific data useful to understand how to improve the quality of parts
obtained with new generations of 3D printers.

Keywords: Additive Manufacturing, Fused Deposition Modelling, Geometric Errors.

© Springer International Publishing AG 2017 201


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_21

1 Introduction

According to ISO/ASTM 52915 [1], Additive Manufacturing (AM) is defined as
the process of joining materials to make objects from 3D model data, usually layer
upon layer, as opposed to subtractive manufacturing methodologies.
Until a few years ago, manufacturing physical parts required very expensive AM
processes and investments in tooling and sophisticated specific software. This
posed a barrier to the widespread deployment of such techniques.
Today a new generation of AM techniques has rapidly become available to the
public, due to the expiration of some AM patents and to open-source movements,
which allowed significant cost reductions. Nowadays, there are many low-cost 3D
printers available on the market (< €2000).
AM processes offer several technical and economic benefits compared to tradi-
tional manufacturing processes. They have the capability to produce complex and
intricate shapes that are not feasible with traditional manufacturing processes.
The geometric freedom associated with AM provides new possibilities for part
design. Associated with topology optimization techniques and other methods able
to generate complex shapes, AM processes potentially allow savings in time, ma-
terial and costs. In economic terms, AM permits decoupling manufacturing costs
from component complexity (Fig. 1).

Fig. 1. Comparison between AM (dashed line) and traditional (continuous line) manufacturing
techniques

In order to profit from the benefits offered by AM, it is necessary to consider the
manufacturing limits and restrictions. This applies in particular to the geometrical
accuracy, since quality is a critical factor for a successful industrial application of
AM techniques [2]. Therefore, the implications of AM processes for current geo-
metric dimensioning and tolerancing (GD&T) practices need to be investigated, in
particular for the new generations of low-cost 3D printers, for which there is a sig-
nificant lack of scientific data related to their performances.

In this paper, considering a low-cost 3D printer, the main effects on the geometric
errors of flatness, circularity (or roundness) and cylindricity of the size of the
printed benchmarks and of their position on the working plane are described. Flat-
ness and cylindricity errors, in fact, induce substantial effects on system function-
alities in relevant applications [3, 4]. The study was carried out at the Fraunhofer
Joint Lab IDEAS-CREAMI (Interactive DEsign and Simulation - Center of Re-
verse Engineering and Additive Manufacturing Innovation) of the University of
Naples Federico II.

2 GD&T and Additive Manufacturing

GD&T standards, although rigorous, have been developed based on the capabili-
ties of traditional manufacturing processes, and contain no specific references to
AM processes.
Although the current increasing interest of industry in AM processes has led to the
development, through ASTM International and ISO, of new standards [1, 5-8],
standard methods for the assessment of the geometric accuracy of AM systems
have not actually been defined yet.
Dimensional and micro- and macro-geometric errors in the manufacturing of an
AM part depend on several factors:
- Machine resolution – dependent upon machine design and control.
Every AM system has inherent capabilities due to its design and control (e.g. the
resolution of the stepper motors used to move the print-head and platform in
Fused Deposition Modeling systems or the diameter of the laser spot in laser-
based systems).
- Material resolution – dependent upon the material format that is used.
The material is delivered in several different formats in AM: sheet, powder, ex-
truded bead, liquid vat. Extruded bead width will determine the minimum X and Y
direction resolution, sheet thickness will determine the minimum Z direction reso-
lution, powder particle size will affect X, Y and Z direction dimensional accuracy.
- Distortion – usually caused by thermal gradients.
The distortion is usually a result of internal stresses caused by different rates of
cooling in 3D printed parts (thermal gradients). This can happen during the build
process or when the part is cooled after removal from the machine. It can happen
with both metals and polymers. The impact upon accuracy can be very severe with
several millimeters of distortion sometimes seen.
- Process parameters.
The process parameters play an important role in defining the final part quality
and part accuracy of a product [8, 9].
Layer thickness, build orientation, hatching pattern and support structures are the
main AM parameters which directly cause dimensional and micro- and macro-
geometric errors in the manufacturing of an AM part [10-13]. The layered nature
of AM introduces a staircase effect in the part [14-16]. An increased layer thick-
ness results in a more pronounced staircase error, as shown in Fig. 2.

Fig. 2. Effect of layer thickness on staircase error in a spherical part: a) layer thickness of
0.1 mm; b) layer thickness of 0.05 mm
The build orientation of the part being manufactured has to be decided in advance
according to the quality to achieve (specifically related to the functional surfaces)
and also taking into account the placement of support structures [17]. A support
structure is additional material attached to a part during the build process in order
to support features such as overhangs and cavities that have insufficient strength in
a partially manufactured state. After the manufacturing of the part is completed,
support structures can be manually removed or dissolved away. It is essential to
minimize the use of these supports, as a reduced contact area between the part and
these structures results in better part quality and also reduces the post-processing
effort [18].
The effect of build orientation on flatness error was investigated in [19]; the
authors concluded that the staircase error due to layer thickness and build
orientation is the cause of the flatness error on the part, and established a
mathematical relation between them.
Fig. 3 shows the effect of build orientation (the angle between the surface and the
horizontal direction [20]) on staircase error for a flat face manufactured using an
AM process.

Fig. 3. Effect of build orientation on staircase error


The effect of build orientation on cylindricity error has been investigated in [21]
and an optimization model to obtain the part orientation while minimizing support
structures and form errors has been developed.
Flatness, circularity and cylindricity errors … 205

3 Materials and Methods

For this study, three benchmark models of three different sizes (small, medium
and large), each made of one parallelepiped and one cylinder (in vertical position)
and placed in five different positions on the working plane, are considered. The
nominal diameters of the cylinders are 20, 30 and 40 mm; the nominal base sizes
of the parallelepipeds take the same values. All workpieces have the same height
of 20 mm. We chose a simple geometry in order to make measurements easier at a
later stage.
A Zortrax M200 3D printer is used to fabricate the benchmark models in ABS
(Acrylonitrile Butadiene Styrene) with a layer thickness of 0.14 mm.
A sample of five parts for each of the three size categories, whose nominal
dimensions are taken from the R40 geometric series of preferred numbers, is
printed: one at each corner of the plate and one at its center. Each model is
identified with a number from 1 to 5, which matches it to its printing position, as
shown in Fig. 4. The X and Y printing directions are also reported on each
benchmark.

Fig. 4. Sample of five parts printed and identified with a number from 1 to 5

3.1 Errors measurement

Flatness, circularity and cylindricity are measured using a Mitutoyo Absolute
Digimatic Height Gauge 0-450 mm with an accuracy of ±0.03 mm (Fig. 5).

3.1.1 Flatness error

Flatness error is measured on the top and on two lateral surfaces of the
workpieces, in the XZ and YZ directions, as depicted in Fig. 6.
Fig. 5. Measurement equipment

Fig. 6. Layout of the workpieces for flatness measurement. Bold edges highlight the measured
surfaces along XZ and YZ, together with the measurement grid, referred to the large size
(40 mm) workpiece

First, the height gauge is set to zero by making the pointer touch the support
table (on which the workpiece is placed); the pointer is then raised and put onto
the opposite side, so obtaining the digital measurement. For each face the
measurement is repeated in several positions. In order to obtain a representative
set of points on the workpieces, a rectangular grid is drawn on the surfaces
(according to ISO 12781-2). A 5x5 mm grid is set, so that, for example, a data set
of 8x8 measurements is collected for the top face of the “large” (size 40 mm)
workpiece (Fig. 6). The same procedure is applied to the other faces and to the
“medium” (size 30 mm) and “small” (size 20 mm) workpieces. According to ISO
12781-1, the least squares reference plane (LSPL) method is adopted to generate
the flatness tolerance range. Starting from
the LSPL plane, the maximum positive local flatness deviation (FLTp) and the
maximum negative local flatness deviation (FLTv) are measured to calculate the
peak-to-valley flatness deviation (FLTt).
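The LSPL evaluation can be sketched in a few lines: fit the reference plane z = a + b·x + c·y to the grid readings by ordinary least squares, then take the extreme residuals. This is a minimal pure-Python illustration; the normal-equation/Cramer route and all names are our own choices, not the authors' implementation:

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def flatness_lsq(points):
    """FLTp, FLTv and FLTt from a least squares reference plane.

    points: list of (x, y, z) height-gauge readings on the grid.  Fits
    z = a + b*x + c*y via the 3x3 normal equations (Cramer's rule), then
    returns the max positive deviation, max negative deviation and their
    span, in the spirit of the ISO 12781-1 LS reference plane.
    """
    n = len(points)
    Sx = sum(p[0] for p in points); Sy = sum(p[1] for p in points)
    Sz = sum(p[2] for p in points)
    Sxx = sum(p[0]**2 for p in points); Syy = sum(p[1]**2 for p in points)
    Sxy = sum(p[0]*p[1] for p in points)
    Sxz = sum(p[0]*p[2] for p in points); Syz = sum(p[1]*p[2] for p in points)
    A = [[n, Sx, Sy], [Sx, Sxx, Sxy], [Sy, Sxy, Syy]]
    rhs = [Sz, Sxz, Syz]
    D = det3(A)
    # Cramer's rule: replace each column of A by rhs in turn.
    coef = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = rhs[i]
        coef.append(det3(M) / D)
    a, b, c = coef
    # Deviations measured normal to the fitted plane.
    norm = math.sqrt(1.0 + b*b + c*c)
    dev = [(z - (a + b*x + c*y)) / norm for x, y, z in points]
    fltp, fltv = max(dev), min(dev)
    return fltp, fltv, fltp - fltv
```

On a perfectly flat face all residuals vanish and FLTt is zero; any bump or dish widens the peak-to-valley band.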

Fig. 7. XY Flatness – tolerance range

3.1.2 Circularity error

Circularity (or roundness) measurements are performed using the same height
gauge used for flatness, together with a V-block on a magnetic base, in which the
cylinder is clamped. The magnetic base ensures that the block does not move
during the measurements. The height gauge is then set to zero when the pointer
touches the workpiece surface. After that, three 90° clockwise rotations are
applied to the cylinder, measuring the variation each time.
The circularity error value is calculated using the least squares circle (LSCi)
method, which evaluates the best-fit circle by minimizing the squared error. The
LSCi is the reference for evaluating circularity, which is calculated as the
difference between the maximum and the minimum distance between the LSCi
and the real profile. The maximum positive local circularity deviation (RONp) and
the maximum negative local circularity deviation (RONv) are measured to
calculate the peak-to-valley circularity deviation (RONt). Then, mean and
standard deviation are computed based on eight different sections per cylinder.
For a perfectly round part the pointer of the height gauge will not move. This
V-block (3-point) method is the simplest way to measure circularity. For more
accurate measurement, also able to capture the spacing and phase of profile
irregularities, a spindle providing a circular datum should be adopted.
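The LSCi evaluation can be sketched with the Kåsa algebraic circle fit, which turns the least squares circle into a linear 3x3 problem. The chapter does not state which LSCi algorithm was used, so this is one common choice, not necessarily the authors'; all names are ours:

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix (nested lists)."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def solve3(A, rhs):
    """Solve a 3x3 linear system by Cramer's rule."""
    D = det3(A)
    out = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = rhs[i]
        out.append(det3(M) / D)
    return out

def circularity_lsq(points):
    """RONp, RONv and RONt from the least squares circle (Kasa fit).

    points: (x, y) samples of one cross-section profile.  Minimising
    sum((x^2 + y^2) + D*x + E*y + F)^2 is linear in D, E, F; the centre
    is (-D/2, -E/2) and r = sqrt(cx^2 + cy^2 - F).  Radial deviations
    from that circle give the peak-to-valley circularity RONt.
    """
    Sxx = sum(x*x for x, y in points); Syy = sum(y*y for x, y in points)
    Sxy = sum(x*y for x, y in points)
    Sx = sum(x for x, y in points); Sy = sum(y for x, y in points)
    w = [x*x + y*y for x, y in points]
    A = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, len(points)]]
    rhs = [-sum(x * wi for (x, y), wi in zip(points, w)),
           -sum(y * wi for (x, y), wi in zip(points, w)),
           -sum(w)]
    D, E, F = solve3(A, rhs)
    cx, cy = -D / 2.0, -E / 2.0
    r = math.sqrt(cx*cx + cy*cy - F)
    dev = [math.hypot(x - cx, y - cy) - r for x, y in points]
    return max(dev), min(dev), max(dev) - min(dev)
```

For the small deviations involved here, the algebraic Kåsa fit is practically indistinguishable from a geometric least squares circle.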
Fig. 8. XZ Flatness – tolerance range

Fig. 9. YZ Flatness – tolerance range

3.1.3 Cylindricity error

Cylindricity error is measured by extending the circularity measurement to the
whole surface of the cylinder. Once the pointer of the height gauge is set to zero
as in the previous measurement, it is moved along the cylinder axis, measuring
variations of the radius at eight different points, just like in the circularity
measurements for multiple sections.
Analogously to the method adopted for calculating the circularity error, the
least squares cylinder (LSCy) is evaluated by best-fitting a cylinder to the
measured data, after providing an initial guess for the axis direction, the axis
center and the cylinder radius. Then, deviations of the points from that cylinder
are calculated and the maximum positive and maximum negative deviations are
recorded; they correspond to the peak deviation and the valley deviation,
respectively. The peak-to-valley cylindricity deviation is the measure of the
cylindricity error.
The same considerations about the limits of the V-block measurement method
made for circularity apply to the cylindricity error evaluation.
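Since the cylinders in this study are printed with a vertical axis, a simplified check can fix the LS axis parallel to Z through the XY centroid of the sampled points. That centroid shortcut is exact only for uniform angular sampling, an assumption of this sketch; a full LSCy fit would also optimise the axis direction and center, as described above. Names are ours:

```python
import math

def cylindricity_vertical(points):
    """Peak-to-valley cylindricity for a vertical-axis cylinder.

    points: (x, y, z) samples over the whole lateral surface.  The axis is
    assumed parallel to Z through the XY centroid (valid for uniform
    angular sampling); the radial spread about that axis is CYLt.
    """
    n = len(points)
    cx = sum(x for x, y, z in points) / n
    cy = sum(y for x, y, z in points) / n
    d = [math.hypot(x - cx, y - cy) for x, y, z in points]
    return max(d) - min(d)
```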

Fig. 10. Circularity – tolerance range

Fig. 11. Cylindricity – tolerance range



4 Results and discussion

Figures 7, 8 and 9 show the FLTt flatness errors expressed in terms of the LSPL
(least squares reference plane) mean value (black dot) and of the FLTp and FLTv
peak and valley values (red dots), related to the five positions on the working
plane of the printer and to the different sizes (small, medium, large) of the
parallelepipeds. The measures refer to the XY, XZ and YZ planes, respectively.
Figures 10 and 11 show the results for the circularity and cylindricity errors,
respectively.

Flatness
The results show no significant differences among the positions on the working
plane, as the flatness error is very similar, except for a locally larger variability
measured in particular in the YZ plane at positions 2 and 5. Generally speaking,
the XY top face presents very similar variability for small and medium
workpieces in positions 1, 3 and 4, whereas workpieces of medium and large size
present a larger flatness error at positions 2 and 5. The latter consideration applies
to all measured faces. Workpieces of small and medium size are the ones with the
lowest flatness variability.

Circularity and cylindricity


The tolerance ranges are comparable for each sample size and for each position.
The analysis does not show a clear pattern for the standard deviation, even if the
average error seems to increase with the sample size.
We can generally claim that the best printer performance, in terms of form
accuracy, is reached in the central printable area (position 3).

5 Conclusions

Today low-cost 3D printers are considered systems with great potential for the
future of manufacturing. Currently, however, there is a significant lack of
scientific data on these systems.
In this paper a preliminary study of the main geometric errors, in terms of
flatness, circularity and cylindricity, was carried out as a function of the size of
the printed benchmarks and of their position on the working plane of the 3D
printer.
Taking into account the limits of the present investigation, the results show that
workpiece size and position on the working plane make no difference for the
flatness error; in terms of circularity and cylindricity errors, instead, the best
performance is reached in the central area of the plate, and performance decreases
as the sample size increases. Some locally larger variabilities can be ascribed to
the manufacturing process and to the measurement procedure.
The results discussed in this paper provide useful additional scientific data for
understanding how to improve the quality of AM parts obtained using new
generations of 3D printers. Further tests and measurements, carried out on
multiple samples through several benchmark prototypes, could allow a better
evaluation of the statistical variations from both ideal forms and positions, in
order to provide a series of charts to be used when designing for rapid
manufacturing systems.

Acknowledgments The authors gratefully acknowledge the “Costruzioni Meccaniche s.n.c.”
factory in Sant'Anastasia (NA).

References

1. ISO/ASTM 52921, 2013, Standard Terminology for Additive Manufacturing—Coordinate
Systems and Test Methodologies.
2. ISO 17296-1, 2014, Additive Manufacturing—General—Part 1: Terminology.
3. Calì M., et al., Meshing angles evaluation of silent chain drive by numerical analysis and
experimental test, Meccanica, 51(3), 2016, pp. 475-489.
4. Sequenzia G., Oliveri S.M., Calì M., Experimental methodology for the tappet
characterization of timing system in ICE, Meccanica, 48(3), 2013, pp. 753-764.
5. ISO 17296-4, 2014, Additive Manufacturing—General Principles—Part 4: Overview of Data
Processing Technologies, ASTM Fact Sheet.
6. ISO 17296-3, 2014, Additive Manufacturing—General Principles—Part 3: Main
Characteristics and Corresponding Test Methods.
7. ISO 17296-2, 2015, Additive Manufacturing—General Principles—Part 2: Overview of
Process Categories and Feedstock.
8. Lanzotti A., Martorelli M., Staiano G., Understanding Process Parameter Effects of RepRap
Open-Source Three-Dimensional Printers through a Design of Experiments Approach,
Journal of Manufacturing Science and Engineering, Transactions of the ASME, 2015,
137(1), pp. 1-7, ISSN: 1087-1357.
9. Lanzotti A., Del Giudice D.M., Lepore A., Staiano G., Martorelli M., On the geometric
accuracy of RepRap open-source three-dimensional printer, Journal of Mechanical Design,
Transactions of the ASME, 2015, 137(10).
10. Paul R., Anand S., Optimal part orientation in Rapid Manufacturing process for achieving
geometric tolerances, Journal of Manufacturing Systems, 2011, 30(4), pp. 214-222.
11. Paul R., Anand S., Optimal part orientation in Rapid Manufacturing process for achieving
geometric tolerances, Journal of Manufacturing Systems, 2011, 30, pp. 214-222.
12. Taufik M., Jain P.K., Role of build orientation in layered manufacturing: a review, Int. J.
Manufacturing Technology and Management, 2013, 27.
13. Lieneke T., Adam G.A.O., Leuders S., Knoop F., Josupeit S., Delfs P., Funke N., Zimmer D.,
Systematical Determination of Tolerances for Additive Manufacturing by Measuring Linear
Dimensions, 26th Annual International Solid Freeform Fabrication Symposium, Austin,
August 10-12, 2015.
14. Masood S.H., Rattanawong W., A generic part orientation system based on volumetric error
in rapid prototyping, The International Journal of Advanced Manufacturing Technology,
2002, 19(3), pp. 209-216.
15. Pandey P.M., Reddy N.V., Dhande S.G., Slicing procedures in layered manufacturing: a
review, Rapid Prototyping Journal, 2003, 9(5), pp. 274-288.
16. Paul R., Anand S., Optimal part orientation in Rapid Manufacturing process for achieving
geometric tolerances, Journal of Manufacturing Systems, 2011, 30(4), pp. 214-222.
17. Kulkarni P., Marsan A., Dutta D., A review of process planning techniques in layered
manufacturing, Rapid Prototyping Journal, 2000, 6(1), pp. 18-35.
18. Das P., Chandran R., Samant R., Anand S., Optimum Part Build Orientation in Additive
Manufacturing for Minimizing Part Errors and Support Structures, 43rd Proceedings of the
North American Manufacturing Research Institution of SME, Procedia Manufacturing, 2015.
19. Arni R., Gupta S.K., Manufacturability analysis of flatness tolerances in solid freeform
fabrication, Journal of Mechanical Design, 2001, 123(1), pp. 148-156.
20. Campbell R.I., Martorelli M., Lee H.S., Surface Roughness Visualisation for Rapid
Prototyping Models, Computer-Aided Design, 2002, 34(10), pp. 717-725, ISSN 0010-4485.
21. Paul R., Anand S., Optimization of layered manufacturing process for reducing form errors
with minimal support structures, Journal of Manufacturing Systems, 2014,
doi:10.1016/j.jmsy.2014.06.014.
Optimization of lattice structures
for Additive Manufacturing Technologies

Gianpaolo SAVIO1*, Roberto MENEGHELLO2 and Gianmaria CONCHERI1
1 University of Padova - Department of Civil, Environmental and Architectural Engineering -
Laboratory of Design Tools and Methods in Industrial Engineering
2 University of Padova - Department of Management and Engineering -
Laboratory of Design Tools and Methods in Industrial Engineering
* Corresponding author. Tel.: +39-049-827-6735; fax: +39-049-827-6738. E-mail address:
gianpaolo.savio@unipd.it

Abstract Additive manufacturing technologies enable the fabrication of parts
characterized by shape complexity and therefore allow the design of optimized
components based on minimal material usage and weight. In the literature two
approaches are available to reach this goal: the adoption of lattice structures and
topology optimization. In a recent work a computer-aided method for the
generative design and optimization of regular lattice structures was proposed. The
method was investigated in a few configurations of a cantilever beam, considering
six different cell types and two load conditions. In order to strengthen the method,
in this paper a number of test cases have been carried out. The results explain the
behavior of the method during the iterations, and the effects of the load and of the
cell dimension. Moreover, a visual comparison between the proposed method and
the results achieved by topology optimization is shown.

Keywords: Cellular Structure, Lattice Structures, Additive Manufacturing,
Design Methods, Computer-Aided Design (CAD).

1 Introduction

Additive manufacturing (AM) technologies enable the fabrication of innovative
parts not achievable by other technologies, characterized by shape complexity,
multiscale structures and material complexity. Moreover, fully functional
assemblies and mechanisms can be directly fabricated [1]. These technologies
need specific design tools and methods to take full advantage of their unique
capabilities, which currently have only limited support in commercial CAD
software.
Reduction in material usage and weight could be a fundamental step in the
diffusion of AM, as demonstrated in industrial applications (e.g. in the design of
brackets

© Springer International Publishing AG 2017 213


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_22
214 G. Savio et al.

for the aerospace industry). To reach this goal, commercial CAD software
applications exist that are able to create a skin model and an internal lattice
structure. Unfortunately, it is very difficult to perform structural analysis on such
cellular geometric models. Alternatively, other commercial tools support topology
optimization, which modifies the material layout within a given design space, for
a given set of loads and boundary conditions, such that the resulting layout meets
a prescribed set of performance targets, obtaining an optimized concept design.
Today, interest in cellular materials is being driven by the transport industry,
aimed at new vehicles, which need to be lighter than ever (to reduce fuel usage
and inertia) but also stiff, strong and capable of absorbing mechanical energy (e.g.
in vehicle collisions or in helmet design) [2-3]. This explains the number of
papers dealing with homogeneous lattice structures and their mechanical
properties. Conformal and random cellular structures have also been studied in
the literature, and optimization criteria have been proposed. For instance, recent
research proposed methods for optimizing cellular structures, where the goal is to
reach an established deflection and a target volume while ensuring structural
strength [4]. The approach was extended to conformal lattice structures, in which
the cellular structures are not regular but follow the shape of curved surfaces in
order to increase their stiffness or strength [5]. Another optimization method for
conformal lattice structures uses the relative density available from topology
optimization to assign a thickness to the beams [6]. A Bidirectional Evolutionary
Structural Optimization approach based on topology optimization was recently
proposed. This method takes into account the orientation of cells in the design
stage, and considers solid volumes and skins in addition to beam elements [7].
In a recent work the authors [8] proposed a computer-aided method for the
generative design and optimization of regular cellular structures, obtained by
repeating a unit cell inside a volume, where the elements are cylinders having
different radii. The approach is based on the iterative variation of the radius of
each element in order to obtain the optimal design. The target of the optimization
is the achievement of a required level of utilization, which specifies the level of
usage of the material for each element (utilization is equal to zero when the
maximum stress inside an element is null, and equal to 1 when the maximum
stress equals the maximum admissible stress, e.g. the yield stress).
The method was investigated in a few configurations of a cantilever beam,
considering six different cell types and two load conditions. As a result, cell types
were classified as a function of relative density and compliance/stiffness in the
different load conditions. The main limit of the study concerns the limited number
of tests performed and the absence of case studies and experimental tests.
In this work, a number of test cases have been assessed in order to evaluate the
behavior of the method during the iterations, and the effects of different loads and
cell dimensions. These results will be the basis for the development of guidelines
for parameter setup as a function of the load/constraint configuration and of the
compliance/stiffness requirements. Finally, a visual comparison between the
proposed method and a topology optimization approach is shown.
Optimization of lattice structures … 215

2 Design Method

The proposed design method (fig. 1) is aimed at substituting a solid model with
cellular structures, obtaining a wire model computed by a generative modeling
approach [9]. A finite element (FE) model is built on the wire model and then
analyzed [10]. A dedicated iterative optimization procedure was developed in
Python [11] in order to obtain an optimized geometric model.
By repeating side by side a regular unit cell of specified dimension, a wire
model is obtained. Each type of unit cell is defined by a number of edges, and
consequently the wire model is a collection of lines connected at vertices called
nodes.
Each edge of the wire model is a beam with circular cross-section in the FE
model. The initial radius is the same for all the beams, and is computed so as to
ensure a desired value of the utilization index for the most stressed beam. This
index specifies the level of usage of the material of an element according to EN
1993-1-1 [12]. To complete the FE model, material, loads and constraints must be
defined according to the functional requirements of the solid model.
The most important result of the FE analysis is the computation of the
utilization of each beam (Ui = utilization of the i-th beam), needed in the
optimization step. The goal of the optimization is to obtain a Ui of all beams close
to a target utilization Ut. In order to account for the AM process features, a
minimum radius (Rmin) for each beam must be defined; moreover, a maximum
radius (Rmax) is computed considering the cell dimension.
More in detail, the optimization procedure consists of an iterative modification
of the radius Ri of each beam (therefore defining a new FE model) and involves
new results of the FE analysis. Each new radius Rni is defined as:

Rni = Ri · Ui / Ut (1)

if Rni > Rmax then Rni = Rmax (2)

if Rni < Rmin then Rni = Rmin (3)

The iterative procedure continues until the Ui of each beam satisfies the following
equation:

Ut − x·Ut < Ui < Ut + x·Ut (Ut > 0, x < 1), (4)

where x defines the range of admissible utilizations Ui (e.g. x=0.1 means
Ut±10%).
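The update and stopping rules of Eqs. (1)-(4) can be sketched as a plain loop. Here `utilization_fn` is a placeholder standing in for the FE analysis (Karamba in the paper), and the termination test exempts beams pinned at Rmin or Rmax; this exemption is our interpretation, not stated in the paper, since otherwise lightly loaded beams clamped at Rmin could never satisfy Eq. (4):

```python
def optimize_radii(radii, utilization_fn, u_t=0.5, x=0.10,
                   r_min=0.25, r_max=5.0, max_iter=100):
    """Iterative beam-radius optimisation sketch following Eqs. (1)-(4).

    utilization_fn(radii) stands in for the FE analysis and must return
    the utilisation U_i of each beam; it is supplied by the caller.
    Returns the final radii and the number of iterations used.
    """
    for it in range(max_iter):
        u = utilization_fn(radii)
        # Eq. (4): stop when every U_i falls inside Ut +/- x*Ut,
        # except beams already clamped at the radius bounds.
        done = all(
            (1 - x) * u_t < ui < (1 + x) * u_t
            or (r == r_min and ui < u_t)
            or (r == r_max and ui > u_t)
            for ui, r in zip(u, radii))
        if done:
            return radii, it
        # Eqs. (1)-(3): scale each radius by Ui/Ut, then clamp.
        radii = [min(max(r * ui / u_t, r_min), r_max)
                 for r, ui in zip(u, radii)]
    return radii, max_iter
```

With a toy utilization model such as Ui = s_i / Ri^1.5, the loop settles into the Ut ± 10% band within a handful of iterations.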
Finally, the optimized geometrical model is computed: a cylinder having the
optimized radius and spherical caps is constructed around each line of the wire
model. Then, a Boolean union is carried out over all cylinders. Spherical caps are
adopted in order to reduce stress concentrations and to avoid non-manifold entities
at the nodes, where several beams having different radii converge. A similar
approach was proposed by Wang et al. [13].
This modeling procedure shows limits, especially in the Boolean operations,
file size and fillets. To overcome these restrictions, a specific modeling procedure
was developed for the cubic cell. Starting from the results of the optimization
procedure, a simple mesh was modeled and the Catmull-Clark subdivision surface
[14] was then adopted to obtain a smooth mesh using Weaverbird [15]. This
approach can be extended to other cell types by defining specific methods for
creating a simple mesh model of the cell.

Fig. 1. The proposed method for modeling and optimizing lattice structures.

3 Test cases

A cantilever beam with dimensions 30x30x80 mm was studied. Six types of
cells (fig. 2) were studied: simple cubic (SC) [16], body centered cubic (BCC)
[16], reinforced body centered cubic (RBCC) [16], octet truss (OT) [17], modified
Gibson-Ashby (GAM) [18] and modified Wallach-Gibson (WG) [19]. The
mechanical properties of Polyamide 12 (PA 2200 by EOS GmbH) were adopted:
tensile modulus E=1700 MPa, yield strength 48 MPa, shear modulus G=630 MPa,
density 930 kg/m3 (Amado-Becker et al. 2008).

Fig. 2. Cell types: a) SC, b) BCC, c) RBCC, d) OT, e) GAM, f) WG.

The behavior of the method during the iterations has been investigated on the
six cell types, adopting a cell dimension of 5 mm and a flexural load of 50 N.
The effect of the load has been studied on a 5 mm BCC cell subjected to a
flexural load ranging from 10 N to 200 N, with steps of 10 N. The cell dimension
effect has been investigated on a BCC cell with edge lengths of 2.5 mm, 5 mm
and 10 mm. A comparison between our method and topology optimization has
been performed on the SC cell with edge lengths of 2.5 mm and 5 mm under a
flexural load of 50 N. The topology optimization problem has been solved using
Millipede, an add-on for Grasshopper [20].
The convergence conditions adopted are: Ut=0.5, x=0.10 (0.45<Ui<0.55),
Rmin = 0.25 mm, Rmax = 5 mm.
The relative density ρ is defined as:

ρ = Vo/Vc (5)

where Vo is the volume of the optimized structure and Vc is the volume of the
cantilever (Vc = 30x30x80 = 72000 mm3). The volume has been computed
without considering the overlapping of the beam ends (i.e. without performing
any Boolean union).
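Under this convention, Eq. (5) reduces to summing plain cylinder volumes over the beam list; a small sketch, with names of our own choosing:

```python
import math

def relative_density(radii, lengths, v_design=30 * 30 * 80):
    """Relative density per Eq. (5): rho = Vo / Vc, with Vo summed as
    plain cylinders, ignoring overlaps at shared nodes as stated in
    the text.  v_design defaults to the 30x30x80 mm cantilever volume."""
    v_o = sum(math.pi * r * r * l for r, l in zip(radii, lengths))
    return v_o / v_design
```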

4 Results and discussion

The behavior of the method during the iterations is shown in fig. 3 and
summarized in tab. 1 for the convergence conditions. The maximum and
minimum utilizations of the beams show that convergence can be obtained with a
low number of iterations for the BCC and RBCC cells. These two cells show a
clear trend toward convergence, while the other cells have an irregular trend in
the maximum and minimum utilization values and consequently in the
convergence of the method (fig. 3a). The numbers of beams and nodes reflect the
problem complexity: the simplest cells are SC, WG and BCC (tab. 1). BCC
shows the lowest relative density in the studied conditions,
while GAM shows the highest (fig. 3b). It should be underlined that the relative
density has a contribution linked to the beams with minimum radius;
consequently, in other configurations, different types of cell may produce a lower
relative density. WG, RBCC and BCC show the highest stiffness, while GAM has
the highest compliance (fig. 3c).
Generally, it is possible to see that a higher density is related to a lower
compliance in the same topological configuration (fig. 3b,c). This aspect is
evident for the WG cell, in which it is possible to see two configurations: the first
around the 20th iteration and the second beyond the 60th iteration.
Other convergence criteria could be adopted in order to obtain a lower number
of iterations, with no significant difference in relative density and displacement.
For example, using as convergence criteria Ui<0.55 and a variation of
displacement between two consecutive iterations of less than 0.1%, the GAM cell
converges within 12 iterations. Similarly, using as convergence criteria Ui<0.55
and a variation of relative density between two consecutive iterations of less than
0.1%, the WG cell converges within 24 iterations.
The results relevant to the load variation, studied on a BCC cell, are
summarized in fig. 4. The proposed method found a solution up to a load of 140
N. In order to increase the maximum load value, it is possible to change the
convergence criteria or to modify the approach for computing the new radii. In
this range the relative density is almost proportional to the load, while the
displacement has a quadratic behavior for loads ranging between 20 and 140 N.
The number of iterations to convergence is between 16 and 25 for loads less than
or equal to 120 N, and increases to 40-50 iterations for higher loads.
Adopting different cell dimensions (fig. 5), a higher stiffness is obtained with
smaller cell dimensions. This can be related to the increased number of beams
with minimum radius. For the same reason, on increasing the cell dimension the
relative density shows a decreasing trend. For the given conditions, the iterations
needed increase together with the cell dimension. Finally, it is possible to see a
strong reduction of the problem complexity (number of beams) on increasing the
cell dimension.
Fig. 6 shows a visual comparison between our approach and topology
optimization. A similar behavior can be seen particularly in the portions close to
the constraint (left) and to the loaded surface (right). These similarities could be
related to the homogenization procedure used in the topology optimization
problem [21], in which the design space is filled with an artificial composite
material made of cells with holes.
In brief, the results shown in this paper and in [8] can be used to derive
guidelines for cell selection and parameter setup: when stiffness is the design
target, RBCC and BCC cell structures are recommended. SC shows the lowest
complexity. Increasing the load, the relative density and displacement increase.
Reducing the cell dimension, the relative density, stiffness and number of beams
increase. BCC is suggested when the goal is a low relative density with few
iterations. Higher values of the range of admissible utilizations allow faster
convergence.
Fig. 3. Behavior of the proposed method: a) utilization, b) relative density and c) displacement
as a function of the iterations under flexural load.
Fig. 4. Relative density as a function of the flexural load for the BCC cell.

Fig. 5. Behavior of the proposed method: a) relative density and displacement and b) number of
beams and iterations as a function of the cell dimension under flexural load for the BCC cell.

Future work will address the evaluation of further configurations, the
investigation of methods for simplifying the geometric modeling procedure, and
the experimental testing of the method on components of practical interest.
Moreover, different optimization criteria will be studied.

Table 1. Model configuration and convergence conditions under flexural load.

Cell type            SC      BCC     RBCC    OT      GAM     WG
Beams                2212    6820    10276   14736   17280   4890
Nodes                833     1409    3365    2789    12324   1193
Iterations           57      19      18      77      98      71
Displacement [mm]    4.422   3.311   3.22    4.196   6.474   2.685
Relative density     0.0766  0.0473  0.0547  0.0816  0.1005  0.0624

Fig. 6. Proposed method on a cubic cell (a, b) vs. topology optimization (c).
References

1. Gibson I. Rosen D. and Stucker B. Additive Manufacturing Technologies: 3D Printing, Rapid


Prototyping, and Direct Digital Manufacturing, 2015 (Springer-Verlag New York).
2. Gibson L.J. and Ashby M.F. Cellular solids: structure and properties, 1997 (Cambridge uni-
versity press).
3. Ultralight Cellular Materials,
http://www.virginia.edu/ms/research/wadley/celluar-materials.html (access 2016/04/26)
4. Chu J. Engelbrecht S. Graf G. and Rosen D.W. A comparison of synthesis methods for cellu-
lar structures with application to additive Manufacturing. Rapid Prototyping Journal, 2010,
16(4), 275-283.
5. Nguyen J. Park S.I. Rosen D.W. Folgar L. and Williams J. Conformal Lattice Structure De-
sign and Fabrication. In International Solid Freeform Fabrication Symposium, Austin, August
2012, 138-161.
6. M. Alzahrani, S.K. Choi and D. W. Rosen. Design of Truss-like Cellular Structures Using
Relative Density Mapping Method. Materials and Design, 2015, 85, 349-360.
7. Tang Y. Kurtz A. and Zhao Y.F. Bidirectional Evolutionary Structural Optimization (BESO)
based design method for lattice structure to be fabricated by additive manufacturing. Com-
puter-Aided Design, 2015, 69, 91-101.
8. Savio G. Gaggi F. Meneghello R. and Concheri G. (2015). Design method and taxonomy of
optimized regular cellular structures for additive manufacturing technologies. In International
Conference on Engineering Design, ICED’15, Vol 4, Milan, July 2015, pp.235-244 (Design
Society, Glasgow, Scotland).
9. Grasshopper, http://www.grasshopper3d.com/ (access 2016/04/26).
10. Karamba, http://www.karamba3d.com/ (access 2016/04/26).
11. Rhino Developer Docs,
http://developer.rhino3d.com/guides/rhinopython/what_is_rhinopython/ (access 2016/04/26).
12. Preisinger C. Karamba User Manual for Version 1.1.0. 2015.
13. Wang H. Cheng Y. and Rosen D.W. A Hybrid Geometric Modeling Method for Large Scale
Conformal Cellular Structures. In ASME Computers and Information in Engineering Confer-
ence, Long Beach, California, September 2005, pp. 421-427.
14. Catmull E. and Clark J. Recursively generated B-spline surfaces on arbitrary topological
meshes. Computer-Aided Design, 1978, 10(6), 350-355.
15. Piacentino G. Weaverbird Beta 0.9.0.1. http://www.giuliopiacentino.com/weaverbird/ (access 2016/04/26).
16. Luxner M.H. Stampfl J. and Pettermann H.E. Finite element modeling concepts and linear
analyses of 3D regular open cell structures. Journal of Materials Science, 2005, 40, 5859-
5866.
17. Deshpande V.S. Fleck N.A. and Ashby M.F. Effective properties of the octet-truss lattice material. Journal of the Mechanics and Physics of Solids, 2001, 49, 1747-1769.
18. Roberts A.P. and Garboczi E.J. Elastic properties of model random three-dimensional open-
cell solids. Journal of the Mechanics and Physics of Solids, 2002, 50, 33-50.
19. Wallach J.C. and Gibson L.J. Mechanical behavior of a three-dimensional truss material. In-
ternational Journal of Solids and Structures, 2001, 38(40-41), 7181-7196.
20. Panagiotis M. and Sawako K. Millipede http://www.sawapan.eu/ (access 2016/05/02).
21. Hassani B. and Hinton E. Homogenization and structural topology optimization: theory,
practice and software, 1999 (Springer-Verlag London).
Standardisation Focus on Process Planning and
Operations Management for Additive
Manufacturing

Jinhua XIAO1, Nabil ANWER2, Alexandre DURUPT1, Julien LE


DUIGOU1 and Benoît EYNARD1,*
1
Sorbonne Universités, Université de Technologie de Compiègne, Department of Mechanical
Systems Engineering, UMR UTC/CNRS 7337 Roberval, CS 60319, 60203 Compiègne
Cedex, France
2
LURPA, ENS Cachan, Univ. Paris-Sud, Université Paris-Saclay, 94235 Cachan, France
* Corresponding author. Tel.: +33 (0)3 44 23 79 67; fax: +33 (0) 3 44 23 52 29. E-mail
address: benoit.eynard@utc.fr

Abstract The work presented in this paper focuses on process planning and operations management for Additive Manufacturing (AM) through standards such as ISO 10303 and ISO 14649, ISO 15531, ISO/CD 18828 and the Unified Manufacturing Resource Model (UMRM). We combine these standards to integrate process implementation, manufacturing management and control, and information flows. The objective of this work is to standardize the manufacturing process for AM. In addition, the UMRM is introduced to develop a unified manufacturing resource service platform, which can provide the required information regarding machine tools to automate decision making in process planning and operations management.

Keywords: Additive Manufacturing, Process Planning, Operations Management,


Unified Manufacturing Resource Model, Standardisation

1 Introduction

Additive Manufacturing (AM) processes differ from traditional manufacturing processes in two main respects: the first is related to material processing (additive versus subtractive), and the second relies on the integrated digital chain and the related data models, which include not only geometric data but also manufacturing information, such as process planning and operations management, and which still lack standardized information for process specification and planning that would allow the development of advanced AM operations management [1, 2]. The objective of standards for process planning and operations management is to allow efficient data exchange, with better verification and validation, simulation, optimization and more feedback [3, 4]. Hence, the work presented in this paper is based on three important ISO standards whose primary objective is to enhance data exchange for process planning and operations management: ISO 10303 (STEP) [5] and ISO 14649 (STEP-NC) [4], ISO 15531 (MANDATE) [6], and ISO/CD 18828 [7].

© Springer International Publishing AG 2017 223
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_23
224 J. Xiao et al.
The combination of international standards is a good way of enhancing the interoperability of the information systems used in manufacturing, making information exchange easier and more efficient. ISO 10303 is a standard for the computer-interpretable representation of product data; it covers a wide variety of product types and describes standardized data models in several application protocols. The ISO 15531 MANDATE standard for the exchange of manufacturing management data has been developed to address information and knowledge management throughout the product life cycle. ISO/CD 18828, also known as the Production Planning Standardized Procedure for Production Systems Engineering, is a new emerging international standard for industrial data and manufacturing interfaces. These standards mainly cover manufacturing engineering, numerically controlled machines, and process measurement and control. Moreover, AM processes and operations management can be analysed with the unified manufacturing resource model (UMRM), which provides the required information regarding AM machine tools and auxiliary devices to automate process planning decisions. This work will further support process management by standardizing integrated manufacturing processes and operations management.
The next sections highlight process standards for manufacturing information systems, mainly ISO 10303 and ISO 14649, ISO 15531 and ISO 18828. Section 3 analyses the unified manufacturing resource model (UMRM) in order to enhance the operations management of AM machine tools. Section 4 gives conclusions and perspectives.

2 Process standards for manufacturing information system

In a manufacturing information system, enhancing interoperability between computer-aided applications is a compelling need, because these applications share and exchange manufacturing data throughout the additive manufacturing process [8, 9]. When a computer-aided process planning (CAPP) application creates a process plan, it ideally must match machining operations with an appropriate machine tool.
In order to elaborate the process plan, the relevant information about machine tools should be known before processing instructions are sent for execution. Standards from the International Organization for Standardization (ISO) are one of the current solutions for facilitating data sharing and exchange in the manufacturing
Standardisation Focus on Process Planning ... 225

process. Our research is set in the context of product lifecycle management. The objective of process planning is to generate process sequences and parameters that minimize hazards. Significant work concerning data sharing and exchange is being done by ISO, which has launched a number of standardized data models. Table 1 presents the standards for process planning in manufacturing systems. Although every standard has a specific usage, all of them use the EXPRESS modelling language and permit building neutral data repositories [10].

Table 1. Standards for process planning in manufacturing system

ISO standards     Acronym   Objectives                               Usage
10303 and 14649   STEP-NC   To develop standardized data models      Process implementation
15531             MANDATE   To create normalized data models for     Manufacturing management
                            improving manufacturing management       and control
                            and information exchange
18828             (none)    To standardize the procedure for         Information flows for
                            production systems engineering           planning processes
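To make the table's common thread concrete, the sketch below mimics in Python the kind of neutral, entity-based data model that the EXPRESS language of these standards enables. All class, attribute and instance names here are illustrative inventions, not taken from any of the standards.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical entities loosely mirroring the EXPRESS style used by the
# standards in Table 1: each standard defines typed entities with explicit
# attributes, so data can be exchanged between tools in a neutral form.
@dataclass
class Resource:
    name: str
    resource_type: str          # e.g. "machine", "worker", "tool"

@dataclass
class Operation:
    name: str
    resource: Resource          # the resource that performs the operation

@dataclass
class ProcessPlan:
    part_id: str
    operations: List[Operation] = field(default_factory=list)

    def sequence(self) -> List[str]:
        """Return the ordered list of operation names (the 'operation list')."""
        return [op.name for op in self.operations]

machine = Resource("am-machine-01", "machine")   # illustrative machine name
plan = ProcessPlan("bracket-01")
plan.operations.append(Operation("slice_model", machine))
plan.operations.append(Operation("fuse_layer", machine))
print(plan.sequence())   # -> ['slice_model', 'fuse_layer']
```

Because every attribute is explicitly typed and named, another application can reconstruct the same plan from an exchanged file without tool-specific knowledge, which is the interoperability benefit the standards aim for.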

2.1 ISO 10303-238 and ISO 14649 (STEP-NC)

ISO 10303-238 and ISO 14649 define a new data model that links CAD/CAM systems and CNC machines, remedying the shortcomings of ISO 6983 by specifying machining processes rather than machine tool motion [4]. As ISO 14649 provides a comprehensive model of the manufacturing process, it can also be used for multi-directional data exchange with all other information technology systems. An advantage of ISO 14649 is that it supersedes the reduction of data to simple switching instructions, such as linear and circular movements. It can describe the machining operations to execute on the workpiece and run them on different machine tools or controllers [8]. The standard is used to implement the manufacturing process in complex environments. It has also been extended by other standards for representing geometry, features, process plans, and manufacturing data using NC machine tools. The ISO and ASME standards are emerging as the result of an international effort for representing manufacturing resources. STEP-NC provides a full description of the part and of the manufacturing process [11, 12, 13].
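The contrast with ISO 6983 described above can be sketched as follows. The dictionary keys, feature names and operation names are simplified illustrations, not the actual STEP-NC schema.

```python
# Illustrative contrast between ISO 6983 (motion-level instructions) and
# ISO 14649 / STEP-NC (process-level workingsteps). The structures below
# are simplified sketches, not the actual standard schemas.

# ISO 6983 style: a flat list of switching/motion instructions.
gcode_program = [
    "G00 X0 Y0 Z5",       # rapid move
    "G01 Z-2 F100",       # linear feed move
    "G02 X10 Y10 I5 J0",  # circular move
]

# STEP-NC style: the program describes WHAT is machined, not tool motion.
step_nc_program = {
    "workpiece": "bracket-01",
    "workingsteps": [
        {"feature": "planar_face", "operation": "plane_finish_milling"},
        {"feature": "closed_pocket", "operation": "pocket_rough_milling"},
    ],
}

# Because the machining intent is explicit, the controller (rather than a
# post-processor) can decide the actual tool paths for each machine.
features = [ws["feature"] for ws in step_nc_program["workingsteps"]]
print(features)  # -> ['planar_face', 'closed_pocket']
```

The key difference visible here is that the G-code list is meaningless without the machine it was post-processed for, while the workingstep list remains interpretable by any conforming controller.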

2.2 ISO 15531 (MANDATE)

ISO 15531 is an international standard for the computer-interpretable represen-


tation and exchange of industrial manufacturing management data. The objective
is to provide a neutral mechanism capable of describing industrial manufacturing


management data throughout the production process. Its description makes it suit-
able not only for neutral file exchange but also as a basis for implementing and
sharing manufacturing management databases and archiving [14, 15].
ISO 15531 does not standardise the manufacturing process itself; its aim is to provide standardised data models. The objectives of MANDATE are thus slightly different from those of a product model: the scope of the MANDATE model is the manufacturing system (machines, workers, tools, etc.), so MANDATE includes neither the modelling of the resource shape nor the description of its usage.

Fig. 1 MANDATE process information model in manufacturing planning

Ideally, it addresses operational planning and has a simulation capability. It is made up of a variety of processes, linking together business planning, production planning, production scheduling, material requirements planning, capacity requirements planning, and the execution support systems for capacity and material. Fig. 1 presents the MANDATE process information model in manufacturing planning. MANDATE specifies the information structure and the data exchange at the interfaces between the different functions.
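A toy illustration of the linkage between these planning functions: production planning supplies demand, routing data supplies effort per unit, and capacity requirements planning checks the result against resource availability. All figures and names are invented for illustration.

```python
# Toy illustration of two planning functions that MANDATE links together:
# production planning provides demand, capacity requirements planning
# checks it against resource availability. All figures are invented.

production_plan = {"bracket": 40, "housing": 10}     # units to build
hours_per_unit = {"bracket": 0.5, "housing": 2.0}    # routing data
available_machine_hours = 35.0                       # resource data

required = sum(qty * hours_per_unit[p] for p, qty in production_plan.items())
print(required)                              # 40*0.5 + 10*2.0 = 40.0 hours
print(required <= available_machine_hours)   # False -> plan must be revised
```

The point of a shared data model is that the demand, routing and resource figures come from different functions but are expressed in one structure, so such checks can be automated at the interfaces.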

2.3 ISO 18828

ISO 18828 makes use of the following terms defined in ISO 15531: manufacturing, process, process planning, production control and resource. ISO 18828-3 provides additional information that focuses on information flows. The main information flow for an operation or a process plan includes two information objects. The first object is referred to as the 'preliminary information for the operation list' and includes the prerequisites for process planning. The second information object is referred to as the 'operation list'. As shown in Fig. 2, based on preliminary information compiled from various relevant sources, the concept planning phase generates an initial conceptual operation list. To start the concept planning phase, various pieces of preliminary information are required to define an operation list. During the rough planning phase, the conceptual operation list is detailed further in accordance with the relevant subsequent planning process steps and ultimately results in a rough operation list. During the detailed planning phase, the rough operation list is used to develop the details of the manufacturing process steps and work content, and is developed further into an elaborated, complete operation list.

Fig. 2 Information flow of planning process and operation [7]
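The three-phase refinement described above can be sketched as a pipeline, each phase enriching the operation list produced by the previous one. The field names and operation details below are invented for illustration, not taken from ISO 18828.

```python
# Sketch of the ISO 18828-3 information flow described above: preliminary
# information feeds a conceptual operation list, which is refined into a
# rough and then a complete operation list. Field names are illustrative.

preliminary_info = {"part": "bracket-01", "process": "powder bed fusion"}

def concept_planning(info):
    # initial conceptual operation list built from preliminary information
    return [{"operation": "build part", "detail": None}]

def rough_planning(conceptual):
    # detail operations in accordance with subsequent planning steps
    return [{"operation": "build part", "detail": "PBF (illustrative)"}]

def detailed_planning(rough):
    # develop manufacturing process steps and work content
    steps = ["prepare powder bed", "fuse layers", "remove supports"]
    return [dict(op, steps=steps) for op in rough]

ops = detailed_planning(rough_planning(concept_planning(preliminary_info)))
print(len(ops[0]["steps"]))  # -> 3
```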

3 Operations management of AM machines

By analysing the functionality and usage of the three standards above, high integration and interoperability of manufacturing processes and operations for additive manufacturing (AM) can be fundamentally realised in complex environments. Current AM processes fall into several categories, as shown in Table 2 [16]. Industry has predominantly focused on powder bed fusion (PBF) and directed energy deposition (DED). These technologies use highly localised temperatures to deposit melted material on the build surface [17, 18, 19].
The additive manufacturing machining system representation can be classified into various resource domains, as shown in Fig. 3. One of the important objectives of additive manufacturing machining is to represent process capability. A unified manufacturing resource model (UMRM) is introduced to provide the required information regarding AM machine tools and auxiliary devices to automate process planning decisions [20].

Table 2. Standard terminology of additive manufacturing processes

ASTM AM Process              Description of operations
Material Extrusion           Material is selectively dispensed through a nozzle or orifice.
Material Jetting             Droplets of material are selectively deposited.
Binder Jetting               A liquid bonding agent is selectively deposited to join powder materials.
Sheet Lamination             Material sheets are bonded to form an object.
Vat Photopolymerisation      Liquid photopolymer in a vat is selectively cured by light-activated polymerization.
Powder Bed Fusion            Thermal energy selectively fuses regions of a powder bed.
Directed Energy Deposition   Focused thermal energy is used to fuse materials by melting as the material is deposited.
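As a compact restatement of Table 2, the seven categories can be held in a small lookup table; the one-line descriptions paraphrase the table.

```python
# The seven ASTM F2792 process categories from Table 2, expressed as a
# small lookup table. The one-line descriptions paraphrase the table.
AM_PROCESSES = {
    "material extrusion": "material dispensed through a nozzle or orifice",
    "material jetting": "droplets of material selectively deposited",
    "binder jetting": "liquid bonding agent joins powder materials",
    "sheet lamination": "material sheets bonded to form an object",
    "vat photopolymerisation": "liquid photopolymer cured by light",
    "powder bed fusion": "thermal energy fuses regions of a powder bed",
    "directed energy deposition": "thermal energy melts material as deposited",
}

# The two melt-based categories on which industry has predominantly focused:
MELT_BASED = {"powder bed fusion", "directed energy deposition"}
print(len(AM_PROCESSES))  # -> 7
```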

Fig. 3 Additive manufacturing machining system resource


3.1 AM machine elements

An important point of modern design and manufacturing is to ensure that effective manufacturing decisions are made as early as possible in process planning, in order to eliminate or reduce decision mistakes. Many efforts have been made towards an integration platform for computer-aided design (CAD), computer-aided process planning (CAPP) and computer-aided manufacturing (CAM) [21]. Automatic process planning methodologies and manufacturing resource modelling have largely advanced computer-integrated manufacturing, which includes the determination of raw materials, the available machine tools and the selection of the operation sequence. The machine tool model describes the configuration of the overall machine tool structure, the geometric shape of the mechanical units and the kinematic relationships between them [22]. However, the representation of the machine tool is not supported by the STEP-NC information model, so it is necessary to propose a modified UMRM to describe the machine tool functionality and process capability of AM machines.

3.2 AM operations

AM operations can be considered as an accumulation of operations that add the desired material to build the final product. Many similar operations exist for fabricating a part by depositing material, such as welding [23]. An AM machine tool is used to execute the desired manufacturing operations in the manufacturing process.
The unified manufacturing resource model (UMRM) has a data model that provides machine-specific data in the form of an EXPRESS schema and acts as a complementary part to represent various machine tools in a standardised form. Fig. 4 presents the UMRM of the machine tool (a) and the mechanical machine elements (b) for AM systems. The machine tool model is a conceptual representation of the machine tool and provides a logical framework for representing its functionality and sequential process operations in manufacturing systems [24]. This machine tool representation can thus be utilised to represent machine tool functionality, the planning process and sequential operations. The machine tool is considered as an assembly of various mechanical machine elements and auxiliary devices linked with each other. The purpose of modelling each mechanical element of the machine tool is to capture machine capability and to give the entire manufacturing system higher integration and interoperability by standardising process planning and operations.
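A minimal sketch of the machine-as-assembly idea described above, assuming illustrative class and attribute names rather than the actual UMRM EXPRESS schema:

```python
from dataclasses import dataclass, field
from typing import List

# Sketch of the UMRM idea: a machine tool modelled as an assembly of
# mechanical elements and auxiliary devices, from which process capability
# can be queried. Names are illustrative, not the real UMRM schema.
@dataclass
class MachineElement:
    name: str
    kind: str                    # e.g. "axis", "energy_source"

@dataclass
class AuxiliaryDevice:
    name: str                    # e.g. a powder feeder or gas supply

@dataclass
class AMMachineTool:
    name: str
    elements: List[MachineElement] = field(default_factory=list)
    devices: List[AuxiliaryDevice] = field(default_factory=list)

    def capabilities(self) -> List[str]:
        """Derive a capability summary from the assembled elements."""
        return sorted({e.kind for e in self.elements})

machine = AMMachineTool(
    "generic-pbf-machine",
    elements=[MachineElement("z-axis", "axis"),
              MachineElement("laser-unit", "energy_source")],
    devices=[AuxiliaryDevice("powder-feeder")],
)
print(machine.capabilities())  # -> ['axis', 'energy_source']
```

The design choice illustrated here is the one the section argues for: capability is not stored directly on the machine but derived from its standardised elements, so a CAPP application can query any machine model the same way.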
Fig. 4 The UMRM of AM system for machine tool (a) and mechanical machine element (b)

4 Conclusions and future work

This paper has focused on research issues in process planning and operations management dedicated to additive manufacturing. The work presented is based on three important ISO standards whose primary objective is to enhance data exchange for process planning and operations management: ISO 10303 and ISO 14649, ISO 15531, and ISO/CD 18828. We analysed the fundamental characteristics of these standards to address process implementation, manufacturing management and control, and information flows for planning processes. The unified manufacturing resource model is then used to provide machine-specific data in the form of an EXPRESS schema and to represent various machine tools in a standardised form. This machine tool representation is used to represent machine tool functionality, the planning process and sequential operations. The future developments and applications of this work will promote the integration and standardisation of hybrid (additive/subtractive) manufacturing systems in an industrial context.
Acknowledgments This work has been supported by the Doctoral Program of Chinese Scholar-
ship Council.

References

1. Krishnan, N., and Sheng, P. S. Environmental versus conventional planning for machined
components. CIRP Annals-Manufacturing Technology, 2000, 49(1), 363-366.
2. Kim, D. B., Witherell, P., Lipman, R., and Feng, S. C. Streamlining the additive manufactur-
ing digital spectrum: A systems approach. Additive Manufacturing, 2015, 5(1), 20-30.
3. Danjou, C., Le Duigou, J., and Eynard, B. Closed-loop Manufacturing, a STEP-NC Process
for Data Feedback: A Case Study. Procedia CIRP, 2016, 41(1), 852-857.
4. Danjou, C., Le Duigou, J., and Eynard, B. Closed-loop manufacturing process based on STEP-NC. International Journal on Interactive Design and Manufacturing, 2015, 1(1), 1-13.
5. Pratt, M. J. Introduction to ISO 10303—the STEP standard for product data exchange. Journal
of Computing and Information Science in Engineering, 2001, 1(1), 102-103.
6. Cutting-Decelle, A. F., Young, R. I., Michel, J. J., Grangel, R., Le Cardinal, J., and Bourey, J.
P. ISO 15531 MANDATE: a product-process-resource based approach for managing modu-
larity in production management. Concurrent Engineering, 2007, 15(2), 217-235.
7. Industrial automation systems and integration — Standardized procedure for production sys-
tems engineering — Part 2: Reference process for seamless production planning, DRAFT
INTERNATIONAL STANDARD ISO / DIS 18828-2, 2016 (ISO).
8. Eynard B., Bosch-Mauchand M., Integrated Design and Smart Manufacturing, Concurrent
Engineering: Research and Applications, 2015, 23(4), 281-283.
9. Cutting-Decelle, A. F., Barraud, J. L., Veenendaal, B., and Young, R. I. Production informa-
tion interoperability over the Internet: A standardised data acquisition tool developed for in-
dustrial enterprises. Computers in Industry, 2012, 63(8), 824-834.
10. López-Ortega, O., and Moramay, R. A STEP-based manufacturing information system to
share flexible manufacturing resources data. Journal of Intelligent Manufacturing, 2005,
16(3), 287-301.
11. Rauch, M., Laguionie, R., Hascoet, J. Y., and Suh, S. H. An advanced STEP-NC controller
for intelligent machining processes. Robotics and Computer-Integrated Manufacturing, 2012,
28(3), 375-384.
12. Amaitik, S. M., and Kiliç, S. E. An intelligent process planning system for prismatic parts us-
ing STEP features. The International Journal of Advanced Manufacturing Technology, 2007,
31(9-10), 978-993.
13. Srinivasan, V. Standardizing the specification, verification, and exchange of product geome-
try: Research, status and trends. Computer-Aided Design, 2008, 40(7), 738-749.
14. Industrial Automation Systems and Integration – Industrial Manufacturing Management Data – General Overview: Part 1, ISO TC184/SC4, ISO IS 15531-1, 2004 (ISO).
15. Ray, S. R., and Jones, A. T. Manufacturing interoperability. Journal of Intelligent Manufac-
turing, 2006, 17(6), 681-688.
16. ASTM. Standard terminology for additive manufacturing technologies. ASTM F2792, December 2015, pp. 1-3 (ASTM International, West Conshohocken, PA).
17. Flynn, J. M., Shokrani, A., Newman, S. T., and Dhokia, V. Hybrid additive and subtractive
machine tools–Research and industrial developments. International Journal of Machine Tools
and Manufacture, 2016, 101(1), 79-101.
18. Simchi, A., Petzoldt, F., and Pohl, H. On the development of direct metal laser sintering for
rapid tooling. Journal of Materials Processing Technology, 2003, 141(3), 319-328.
19. Brown, C., Lubell, J., and Lipman, R. Additive manufacturing technical workshop summary
report. NIST Technical Note 1823, 2013.
20. Vichare, P., Nassehi, A., Kumar, S., and Newman, S. T. A unified manufacturing resource
model for representing CNC machining systems. Robotics and Computer-Integrated Manu-
facturing, 2009, 25(6), 999-1007.
21. Xu, X. W., and He, Q. Striving for a total integration of CAD, CAPP, CAM and CNC. Ro-
botics and Computer-Integrated Manufacturing, 2004, 20(2), 101-109.
22. Amaitik, S. M., and Kiliç, S. E. An intelligent process planning system for prismatic parts us-
ing STEP features. The International Journal of Advanced Manufacturing Technology, 2007,
31(9-10), 978-993.
23. Eiamsa-ard, K., Nair, H. J., Ren, L., Ruan, J., Sparks, T., and Liou, F. W. Part repair using a
hybrid manufacturing system. In Proceedings of the Sixteenth Annual Solid Freeform Fabri-
cation Symposium, vol. 1, Missouri, August 2005, pp.425-433 (SFF Symposium Proceed-
ings, Austin).
24. Liu, Y., Guo, X., Li, W., Yamazaki, K., Kashihara, K., and Fujishima, M. An intelligent NC
program processor for CNC system of machine tool. Robotics and Computer-Integrated
Manufacturing, 2007, 23(2), 160-169.
Comparison of some approaches to define a
CAD model from topological optimization in
design for additive manufacturing.
DOUTRE Pierre-Thomas1,2, MORRETTON Elodie1,3, VO Thanh Hoang1, MARIN
Philippe1*, POURROY Franck1, PRUDHOMME Guy1, VIGNAT Frederic1
1
Univ. Grenoble Alpes, G-SCOP, F-38000 Grenoble, France
CNRS, G-SCOP, F-38000 Grenoble, France
2
POLY-SHAPE, 235 rue des Canesteu ZI La Gandonne 13300 Salon de Provence – France
3
Zodiac Seat France, ZI La Limoise, Rue Robert Maréchal, 36100 Issoudun, France

* Corresponding author. E-mail address: philippe.marin@grenoble-inp.fr

Abstract: Topological optimization is often used in the design of lightweight structures. Additive manufacturing makes it possible to manufacture complex shapes and to exploit the full potential of this tool. However, the result of topology optimization is a discrete representation of the optimal topology, requiring designers to 'manually' create a CAD model. This process can be very time consuming and heavily penalizes the efficiency of the design process. In this paper, several possible approaches for obtaining a CAD model from topological optimization results are proposed. Based on case studies, the benefits and drawbacks of these approaches are discussed in order to help engineers choose their approach.

Keywords: Surface reconstruction; additive manufacturing; topological optimi-


zation; polygonal model.

1 Introduction

Topological optimization is a method for finding the best distribution of material within a defined spatial design domain, under specified constraints. This distribution is associated with a specific objective [1]. It often results in very complex geometries which are hard to produce by conventional manufacturing processes. At the same time, the recent development of additive manufacturing technologies is leading to radical changes in design activities. It is often claimed that manufacturing of complex shapes becomes possible with virtually no extra cost. As a consequence, topological optimization can be considered a tool of major interest in design for additive manufacturing [2].

© Springer International Publishing AG 2017 233


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_24
234 D. Pierre-Thomas et al.

Fig. 1. Topological optimization process: (a) the initial model, (b) the density map, (c) the topology after threshold choice.
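The threshold step of Fig. 1 can be sketched with a toy density field; the density values and thresholds below are invented for illustration.

```python
# Minimal sketch of the threshold step in Fig. 1: topology optimization
# returns a density value per element of the discretized domain; elements
# above a chosen threshold are kept as material. Values are invented.
densities = [0.05, 0.92, 0.40, 0.88, 0.15, 0.75]   # one value per element

def keep_material(densities, threshold):
    """Indices of elements retained after thresholding the density map."""
    return [i for i, rho in enumerate(densities) if rho >= threshold]

print(keep_material(densities, 0.5))   # -> [1, 3, 5]
# A lower threshold keeps more material, giving a heavier but stiffer part:
print(keep_material(densities, 0.3))   # -> [1, 2, 3, 5]
```

This is why the paper speaks of a "threshold choice": the same density map yields different discrete topologies depending on the cut-off the designer selects.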

Since topological optimization involves computations on a spatially discretized geometrical domain, the resulting optimal geometry is proposed as a polygonal model (Figure 1). To meet customers' requirements and facilitate manufacturing, there is a need to transform this tessellated model into a new geometric model (referred to in this paper as the "CAD model") giving higher surface continuity. Currently, few existing CAD systems incorporate such a conversion tool.
The aim of this paper is to compare different possible approaches to generate a
CAD model from topological optimization results.
To answer this question, a state of the art study was first conducted in order to
identify from scientific literature and from engineering practices the main possible
approaches. Section 2 presents a short synthesis of this study. These different ap-
proaches are then compared through a case study in section 3, and section 4 draws
a comparison of the approaches against several criteria, showing benefits and
drawbacks of each approach for the designers.

2 State of art

CAD model reconstruction has been the focus of many research works in different domains. The main examples are reverse engineering, which typically involves generating a parametric CAD model from a set of points issued from the 3D scanning of an existing part, and finite element analysis, which may sometimes require generating a CAD model of a deformed mesh. Hence, a variety of approaches and tools have been identified, which can be classified into two different philosophies. In the first one, the designer uses the polygonal model issued from the optimization software as a background layer; the CAD model is then generated by hand, tracing the background layer with different CAD features. In the second philosophy, the user works with automatic surface or solid generation tools applied directly to the polygonal model.
Within the first philosophy, there are two main approaches. The first is to use conventional CAD features, i.e. those based on classical extrusion operations. The created geometries are quite simple, so creating a complex shape becomes a time-consuming process [3]. In the second, freeform surface manipulations are used to create the CAD model [4]. These operations rely on NURBS technology, and some CAD software natively based on these mathematical models has recently emerged (e.g. Evolve1 and Spaceclaim [3]). Using these manual approaches, designers have the opportunity to freely interpret the computed optimal topology, and to make any adjustment they want in order to integrate additional design or manufacturing constraints. But these approaches are very time consuming, which is

1 www.altair.com
Comparison of some approaches to define … 235
their major drawback, even if this highly depends on the designer's skills in using the CAD tools.
Considering the second philosophy, three main methods were identified, each of them involving some automatic tools. For geometry reconstruction from 3D scanning, several steps are necessary: cleaning the point cloud, creating the polygonal model, and automatically fitting surfaces to it. To create the polygonal model, the "shrink wrapping" algorithm [5] can be used. This is a way to build a new polyhedral shell as a skin around the original mesh; the technique simultaneously smooths the shape and closes small holes. It uses the principle of a plastic membrane composed of triangles that is wrapped around the original object. This membrane is deformed to come close to the object, with the objective of reducing the local distortion energy. Starting from the mesh, [6], [7] and [8] propose to cut it into surface patches; an algorithm of surface creation is then applied to each patch, and finally these surfaces are merged. Volpin [9] follows the same process, but a mesh simplification is done beforehand. On the other hand, this reconstruction can be done with other methods, as suggested by [10], which proposes to perform a surface interpolation for each patch and then to fit a NURBS surface. This method was also applied to CAD reconstruction from topological optimization results by [11]. An interesting idea comes from the work of Koch [12], who suggests manually creating the functional surfaces afterwards.
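A toy one-dimensional sketch of the shrink-wrapping principle from [5]: membrane points are repeatedly smoothed against their neighbours (reducing distortion energy) and attracted toward the target object (wrapping closer). The smoothing and attraction weights are arbitrary illustration values; real implementations operate on a triangle membrane in 3D.

```python
# Toy 1-D sketch of the "shrink wrapping" principle: each step (a) smooths
# a membrane point toward its neighbours' average and (b) attracts it
# toward the target object. Weights are arbitrary illustration values.
def shrink_wrap_step(membrane, target, smooth_w=0.5, attract_w=0.3):
    new = []
    for i, p in enumerate(membrane):
        left = membrane[max(i - 1, 0)]
        right = membrane[min(i + 1, len(membrane) - 1)]
        smoothed = p + smooth_w * ((left + right) / 2 - p)        # less distortion
        new.append(smoothed + attract_w * (target[i] - smoothed))  # wrap closer
    return new

target = [0.0, 1.0, 0.0, 1.0]       # "object" heights
membrane = [2.0, 2.0, 2.0, 2.0]     # initial wrapping membrane
for _ in range(20):
    membrane = shrink_wrap_step(membrane, target)
print(max(abs(m - t) for m, t in zip(membrane, target)) < 0.5)  # -> True
```

Note the trade-off the section describes: the smoothing term prevents the membrane from reproducing every bump of the target, which is exactly how the algorithm closes small holes while staying close to the object.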
In addition to these different approaches, this review of the literature raised some important criteria for comparing the approaches, especially the execution speed and the capability to produce a complex shape [4], as well as the geometric accuracy, the ease of use, and the repeatability of the result [13].
Through this state-of-the-art analysis, we can see that most articles describe a general approach. They use different examples, which does not allow a comparison between the different approaches. Moreover, only algorithms and methods are described; there is no development on approach performance, use and implementation in the design phase, nor on the associated tools. So, in order to draw a more explicit comparison between these methods, they will now be implemented within a common case study: the CAD reconstruction of a mounting bracket from topology optimization results.

3 Case study

The part on which the different reconstruction methods are applied is a mounting bracket [14]. A 10 kN uniformly distributed load is applied normal to the surface (Figure 2.a). Topology optimization was performed with the INSPIRE2 software with the objective of maximizing stiffness while respecting a maximum mass constraint. The bracket dimensions are 95 x 29 x 27 mm.

2 www.altair.com
3.1 First approach: taking inspiration from topology

The optimization result (Figure 2.b) is used for the proposed approaches as a
background layer. This means that the shape coming from topological computa-
tion is just considered as an inspiration source to provide ideas on where the de-
signer should put material when designing the reconstructed part geometry.

• Using Standard CAD features (App. 1a)


In this section, a conventional CAD tool (Catia3) is used to redesign the part. First, an assembly is created to enable the superposition of the topological result and the new geometry. Then the designer uses classical protrusion and cut operations to build up the model. Finally, sharp edges are rounded to improve the connections between the various volumes that constitute the part. Big differences between the topological result and the final geometry can be observed in Figure 2.c and Figure 3. If the designer wants to stay closer to the polygonal model, he will need more conventional operations and more time; it may become a really complex and time-consuming task.


Fig.2. (a) Loads and constraints applied on the bracket, (b) Topological optimization result, (c)
Superposition of STL and CAD model

This result can differ depending on the designer and the design strategy. Figure 4 shows the part model obtained with a different strategy by a different designer using the same software.

Fig.3. CAD model with CATIA
Fig.4. CAD model with CATIA Generative shape design

In this last case, a surface sweep function is used to build up the part model,
and the designer decided to integrate manufacturing constraints such as a
minimum diameter and a minimum angle, so that the part can be manufactured
without supports. The two CAD models are very different from each other and
from the topology optimization result. With this design method it is difficult
to generate complex shapes because conventional tools are not appropriate.

Comparison of some approaches to define … 237

• Using Direct Modeling (App. 1b)


In this section, PolyNURBS technology is used to design a CAD part from the
polygonal model. PolyNURBS is a “direct modeling” approach that allows the user
to “push and pull” a three-dimensional surface in order to virtually sculpt the
shape of the part without having to manage parameters or a feature tree. The
principle of this tool is to move points of the surface's control polygon by
handling vertices, edges and faces (Figure 5).
This CAD model result (Figure 6) is very different from the previous ones.
Since the tool used in this approach allows more flexibility in shape design,
it gives the designer more freedom in shape interpretation. Such a tool
significantly changes the way mechanical engineers design parts. The part is
redesigned with fewer basic features, which leads to uncommon shapes.

Fig.5. Manipulation in Evolve
Fig.6. CAD Model with Evolve 2015

• Synthesis of the first approach


This approach uses the topological optimization result as a background layer.
Depending on the designer's interpretation and the available tools, the final
CAD shape can be more or less close to the topological optimization result.
Manufacturing constraints can also be integrated, and after reconstruction a
finite element analysis on the STEP or native CAD file can easily be performed.
After this shape reconstruction, a shape or size optimization can be carried out.

3.2 Second approach: polyhedron smoothing and automatic surface reconstruction

In this second approach, the final shape of the part is no longer built “freely”
by the designer. The polygonal model from topological optimization is used as
the basis for automatic algorithms that build a set of surface patches. These
surfaces are fitted to the cloud of points built on the triangle nodes coming
from the previous steps. In the current case study, this surface fitting is
performed automatically with a surface reconstruction algorithm (here, the one
provided by CATIA). But as the polyhedral model coming from topological
optimization is generally very rough and coarse, it cannot be used directly by
a CAD modeler to automatically build a fitting surface. In this approach, a
variety of mesh manipulation tools can be used to decrease the roughness of the
polygonal model. To do this, three variants are presented.

In the first variant (App. 2a), the shrink wrapping approach [5] is used (here,
with the Magics software, http://software.materialise.com/magics). The new CAD
model (Figure 7 (a)) is very close to the topology optimization result, but it
is not smooth enough from a perception point of view.
The idea of the second variant (App. 2b) is to generate a uniform-density point
cloud from the STL file, and then to remesh this cloud using an appropriate
smoothing algorithm. In our example, these manipulations were carried out in
MeshLab (http://meshlab.sourceforge.net/) using the Poisson surface
reconstruction algorithm (Figure 7 (b)).
In the third variant (App. 2c), smoothing is performed with the STL handling
module of the CAD software (here, Catia), driven by a deviation parameter
(Figure 7 (c)).

Fig.7. Final CAD model (a) with App. 2a (b) with App. 2b (c) with App. 2c

• Synthesis of this second approach


Automatic surface reconstruction does not allow shape interpretation by the
designer. The time for the overall process depends on the algorithm efficiency
more than on the strategy chosen by the designer. In the case study presented
here, the result is manufacturable, but integrating manufacturing constraints
would require additional operations. Also, the large number of elementary
surfaces generated by the automatic surface reconstruction makes a finite
element analysis difficult. Moreover, a parametric optimization is not possible,
as no parameters are available from this approach.

4 Analysis and discussion

The previous case study makes it possible to compare the five implemented
approaches against several criteria (see Table 1). Some of those criteria were
suggested by the literature (identified by an asterisk in Table 1), and some
others emerged from the case study.
Table 1. Comparison of 5 CAD reconstruction approaches from the previous case study.

Criteria App.1a App.1b App.2a App.2b App.2c


Execution time (in hours) * 1.5 to 5 1.5 1.5 2.5 2
Sensitivity on the execution time (in hours) +/- 5 +/- 3 +/- 1 +/- 1 +/- 1
Time control ++ ++ - - -
Result shape quality ++ ++ -- - +
Repeatability of the result * -- -- + ++ ++
Capability to create complex surfaces easily* - + ++ ++ ++
Capability to create basic surfaces (plane, cylinder) ++ + -- -- --
Capability to be accurate in the part dimensions* ++ - -- -- --
Capability to integrate client requirements easily ++ + -- -- --
Capability to integrate manufacturing constraints ++ - -- -- --
Number of software tools needed 1 1 2 3 2
Number of elementary surfaces created * 795 520 6900 7200 6200
File size (MB) 3 8 64 66 70

We can point out that the execution time highly depends on the designer's
expertise with the CAD or mesh manipulation software. Choosing one of the
approaches depends on the client requirements and on the project time.
Regarding the part shape, the approaches enable more or less complex surfaces,
but with different quality levels. Approach 1b can create complex and smooth
surfaces; the type 2 approaches also create complex shapes, but with a lower
quality and many more surfaces. However, these type 2 approaches, which easily
create complex surfaces, have difficulties creating more basic surfaces such as
planes or cylinders when needed, especially for functional surfaces. In such
cases, we recommend manually merging basic volumes with the semi-automatically
created geometry. Another point is that when a part is designed, the aim is to
meet client requirements. Integrating them requires steering the model creation,
which type 1 approaches allow far better than type 2 approaches. The last two
criteria, the number of elementary surface patches and the generated file size,
are both related to the ease of handling the resulting CAD model. Obviously,
the smaller the CAD file, the easier the handling. It should also be noticed
that too many surfaces often lead to a more expensive FEA simulation phase,
since auto-meshing will generate many unnecessary elements. Type 1 approaches
prove to be highly preferable to type 2 ones against these last criteria.
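To make such a comparison operational, the qualitative ratings of Table 1 can be turned into a weighted score reflecting a given project's priorities. The following sketch is purely illustrative: the criterion subset, the numeric mapping of the ++/-- ratings and the weights are assumptions, not part of the paper.

```python
# Map the qualitative ratings of Table 1 to numeric scores (assumed mapping).
SCORE = {"++": 2, "+": 1, "-": -1, "--": -2}

# A subset of Table 1 (ratings copied from the paper).
ratings = {
    "App.1a": {"complex_surfaces": "-", "basic_surfaces": "++",
               "repeatability": "--", "manufacturing_constraints": "++"},
    "App.1b": {"complex_surfaces": "+", "basic_surfaces": "+",
               "repeatability": "--", "manufacturing_constraints": "-"},
    "App.2b": {"complex_surfaces": "++", "basic_surfaces": "--",
               "repeatability": "++", "manufacturing_constraints": "--"},
}

# Hypothetical project weights: this client cares mostly about
# repeatability and manufacturing constraints.
weights = {"complex_surfaces": 1.0, "basic_surfaces": 1.0,
           "repeatability": 2.0, "manufacturing_constraints": 2.0}

def weighted_score(r):
    """Weighted sum of the numeric ratings for one approach."""
    return sum(weights[c] * SCORE[r[c]] for c in weights)

ranked = sorted(ratings, key=lambda a: weighted_score(ratings[a]), reverse=True)
```

Changing the weights to favour complex-surface capability instead would push the type 2 approaches up the ranking, which matches the trade-off discussed above.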

5 Conclusion and perspectives

In this paper, several possible approaches to generate a CAD model from
topological optimization results are identified, formalized, and tested on the
typical case study of designing a lightweight structure. There is no universal
approach for CAD part reconstruction. Moreover, this operation is
time-consuming, even if the effort depends on the designers' expertise with the
different tools and on the part complexity. However, although many topological
optimization loops are often necessary in the design process for additive
manufacturing of a part, this geometry reconstruction task does not have to be
performed after each of these loops. A single reconstruction at the end of the
optimization process may be sufficient, and our results can be used as a first
guideline for the designer when choosing and implementing a reconstruction
approach.
Finally, these results raise a fundamental question: knowing that additive
manufacturing machines currently take STL files (polyhedral geometry) as input
data, can we avoid this time-consuming step of building CAD surfaces? While
from a technical point of view the answer is probably yes in most situations,
emancipating design processes from a traditional CAD model would be a
considerable change in mindsets and in industrial practices.

References

[1] D. Brackett, I. Ashcroft, and R. Hague, Topology optimization for additive manufacturing, in
Proceedings of the Solid Freeform Fabrication Symposium, 2011, 348–362.
[2] P.T. Doutre et al., Optimisation topologique : outil clé pour la conception des pièces
produites par fabrication additive, in AIP PRIMECA La Plagne conference, 2015, 73, 1–6.
[3] S. Yang and Y. F. Zhao, Additive manufacturing-enabled design theory and methodology: a
critical review, Int J Adv Manuf Technol, 2015, 80, 327–342.
[4] H. K. Ault and A. D. Phillips, Direct Modeling: Easy Changes in CAD?, in 70th Midyear
Conference, 2016, 99–106.
[5] L. P. Kobbelt, A Shrink Wrapping Approach to Remeshing Polygonal Surfaces, 1999, 18(3).
[6] R. R. Martin, Reverse engineering geometric models-an introduction, Comput. Des., 1997,
29(4), 255–268.
[7] M. Vieira and K. Shimada, Surface mesh segmentation and smooth surface extraction
through region growing, Comput. Aided Geom. Des., 2005, 22,771–792.
[8] R. R. Martin, T. Varady, and P. Benko, Algorithms for reverse engineering boundary
representation models, Comput. Des., 2001, 33, 839–851.
[9] O. Volpin, A. Sheffer, M. Bercovier, and L. Joskowicz, Mesh simplification with smooth
surface reconstruction, Comput. Aided Geom. Des., 1998, 30(11), 875–882.
[10] W. Ma and P. He, B-spline surface local updating with unorganized points, 1998, 30(11),
853–862.
[11] P. Tang and K. Chang, Integration of topology and shape optimization for design of
structural components, in Struct Multidisc Optim 22, 2001, 65–82.
[12] P. R. Koch, FE-optimization and design of additive manufactured structural metallic parts for
telecommunication satellites, in Paris Space week conference, 2015.
[13] M. Berger et al., State of the Art in Surface Reconstruction from Point Clouds, State art
Rep., 2014.
[14] B. Vayre, “Conception pour la fabrication additive, application à la technologie EBM,”
Université Grenoble Alpes, 2014.
Review of Shape Deviation Modeling for
Additive Manufacturing

Zuowei ZHU1, Safa KEIMASI1, Nabil ANWER1*, Luc MATHIEU1 and Lihong QIAO2

1 LURPA, ENS Cachan, Univ. Paris-Sud, Université Paris-Saclay, 94235 Cachan, France
2 School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China
* Corresponding author. Tel.: +33-(0)147402413; fax: +33-(0)147402220. E-mail address:
anwer@lurpa.ens-cachan.fr

Abstract Additive Manufacturing (AM) is becoming a promising technology capable
of building complex customized parts with internal geometries and graded
material by stacking up thin individual layers. However, a comprehensive
geometric model for Additive Manufacturing is not mature yet, and dimensional
and form accuracy and surface finish are still a bottleneck for AM regarding
quality control. In this paper, an up-to-date review is drawn on the methods
and approaches that have been developed to model and predict shape deviations
in AM and to improve the geometric quality of AM processes. A number of
concluding remarks are made, and the Skin Model Shapes paradigm is introduced
as a promising framework for the integration of shape deviations in product
development and in the digital thread for AM.

Keywords: Additive manufacturing, Geometric deviation, Skin model shapes,


Geometric modeling.

1 Introduction

Additive Manufacturing (AM), one of the most frequently used methods for rapid
prototyping nowadays, was first explored and applied in the automotive,
aerospace and medical industries, and it is considered to be one of the pillars
of the fourth industrial revolution. Different from traditional machining, in
which parts are made by removing material from a larger stock through different
processes, AM fabricates volumes layer by layer from their three-dimensional
CAD model data.
Additive Manufacturing and its different technologies have been reviewed
comprehensively by many authors. A classification of additive manufacturing
technologies according to their characteristics is shown in Table 1.
© Springer International Publishing AG 2017 241
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_25
242 Z. Zhu et al.

Table 1. Process categories of additive manufacturing as classified by ASTM [1].

Process type | Description | Related Technologies
Binder jetting | Liquid bonding agent selectively deposited to join powder | Powder bed and inkjet head (PBIH), plaster-based 3D printing (PP)
Material jetting | Droplets of build material selectively deposited | Multi-jet modeling (MJM)
Powder bed fusion | Thermal energy selectively fuses regions of powder bed | Electron beam melting (EBM), selective laser sintering (SLS)
Directed energy deposition | Focused thermal energy melts materials as deposited | Laser metal deposition (LMD)
Sheet lamination | Sheets of material bonded together | Laminated object manufacturing (LOM), ultrasonic consolidation (UC)
Vat photopolymerization | Liquid photopolymer selectively cured by light activation | Stereolithography (SLA), digital light processing (DLP)
Material extrusion | Material selectively dispensed through nozzle or orifice | Fused deposition modeling (FDM)

A brief illustration of the digital chain of an AM process is shown in Figure 1.
The process can be divided into an input phase, a build phase and an output
phase. In the input phase, the CAD model of the part is designed and converted
into the STereoLithography (STL) file format, which is readable by AM machines
and provides the geometry information. This STL file is basically an
approximation of the designed part obtained by triangulation, which causes
deviations in the final part. In the build phase, process parameters such as
the energy source, layer thickness, build direction, supports and material
constraints are set in the machine and the part is fabricated layer by layer.
The output phase is an imperative part of the process, in which procedures such
as support removal, cleaning, heat treatment and NC machining are executed to
ensure the final quality of the part.
[Process flow: CAD → STL conversion → file transfer to machine → machine setup
→ build → removal → post-process → application; grouped into input phase, build
phase and output phase]

Fig. 1. Typical digital chain of an AM process.

Factors arising from each phase may introduce geometric deviations in the final
part, including the quality of the input file, machine errors, build
orientation, process parameters, material shrinkage and staircase effects due
to layer thickness. The control of geometrical accuracy remains a major
bottleneck in the application of AM.
The Skin Model Shapes (SMS) paradigm, which stems from the theoretical
foundations of Geometrical Product Specification and Verification, has been
proposed as a comprehensive framework to capture product shape variability in
the different phases of the product lifecycle [2]. It enables the consideration
of geometric deviations that are expected, predicted or already observed in
real manufacturing processes. In-depth research into the integration of thermal
effects for tolerance analysis has paved the way for its adaptation to
deviation modeling in the context of additive manufacturing [3].
In this paper, a review is given of the methods developed to model and predict
shape deviations in AM and to improve the geometric quality of AM processes.
Some concluding remarks are made, and the Skin Model Shapes paradigm is
introduced as a promising framework for modeling shape deviations in AM.

2 Review of shape deviation modeling for AM

In order to model shape deviations and improve geometrical accuracy in AM
processes, a number of researchers have in recent years proposed different
models and approaches addressing specific error sources as well as specific AM
processes. Since the AM build itself is almost fully automatic, most of these
studies have been devoted to improving the design, among which two major
categories of approaches can be distinguished. One category focuses on changing
the CAD design by compensating the shape deviations based on predictive
deviation models. The other category focuses on modifying the input files or
improving the slicing techniques of AM processes.

2.1 Compensation of shape deviations

Research on deviation compensation has been conducted for different purposes.
The compensation models can be briefly classified into machine error
compensation and part shrinkage compensation.
Tong et al. [4,5] propose a parametric error model, which models all the
repeatable errors of an SLA (Stereolithography) machine and an FDM (Fused
Deposition Modeling) machine with generic parametric error functions. The
coefficients of the functions can be estimated through regression on
measurement data gathered at given points. The model can then be used to
analyze individual deviations in the Cartesian Coordinate System (CCS), based
on which appropriate compensations can be made on the input files to minimize
the shape deviations of the final products. However, since the deviations are
estimated at individual points, the continuity of the product geometry is not
considered, and the accuracy of the estimation depends largely on the selected
AM machine and measurement points, which limits generality.
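The principle of such a parametric error model can be sketched with a toy one-dimensional case: fit an error function to measured deviations by least squares, then shift the nominal coordinates in the opposite direction. The linear error form and the probe data below are hypothetical; Tong et al. use richer generic error functions.

```python
def fit_linear_error(xs, errs):
    """Closed-form least-squares fit of err(x) ~ a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    me = sum(errs) / n
    b = (sum((x - mx) * (e - me) for x, e in zip(xs, errs))
         / sum((x - mx) ** 2 for x in xs))
    a = me - b * mx
    return a, b

def compensate(x, a, b):
    """Shift a nominal coordinate opposite to the predicted error."""
    return x - (a + b * x)

# Hypothetical probe data: a machine whose x-axis error grows linearly,
# err(x) = 0.01 + 0.002 * x (units: mm).
xs = [0.0, 10.0, 20.0, 30.0, 40.0]
errs = [0.01 + 0.002 * x for x in xs]
a, b = fit_linear_error(xs, errs)
```

Writing the compensated coordinate `compensate(20.0, a, b)` (19.95 mm here) into the input file lets the machine's own repeatable error bring the feature back to the nominal 20 mm.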
A series of studies conducted by Huang [6,7,8,9,10] and his research team has
been dedicated to developing a predictive model of geometric deviations that is
able to learn from the deviation information obtained from a certain number of
tested product shapes and to derive compensation plans for new, untested
products. They aim at establishing a generic methodology that is independent of
shape complexity and of the specific AM process. Their first attempt [6,7]
consisted in modeling the in-plane shape deviations induced by specific
influential factors in the MIP-SLA and FDM processes, based on which they
developed a generic approach to model and predict the in-plane deviation caused
by shape shrinkage along the product boundary and to derive optimal
compensation plans [8]. The proposed shrinkage model consists of two main
parts: a systematic shrinkage that is considered constant, and a random
shrinkage that can be predicted from experimental products using statistical
approaches. To develop this model, the Polar Coordinate System (PCS) is first
used to represent the shape, and shrinkage is defined as a parametric function
denoting the difference between the nominal shape and the actual shape at
different angles. Secondly, experiments are conducted and deformations are
observed to derive the statistical distribution of the parameters of the
function, from which the shrinkage function can be defined and compensation
plans can be made. Figure 2 illustrates the shrinkage model: a point P on the
nominal shape is represented in the PCS as r(θ, r0, z), with r and θ denoting
its radius and angle, and z denoting the z-coordinate of the 2D plane in which
the PCS lies. P', the final position of P after shrinkage, can then easily be
represented by reducing r by a certain Δr, something that would be quite
difficult to identify in the CCS. Compared with the work of Tong et al., this
model reduces the complexity of deviation modeling by transforming in-plane
geometric deviations in the CCS into a functional profile defined in the PCS.


Fig. 2. In-plane shrinkage model in the Polar Coordinate System.
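The compensation idea behind this PCS model can be sketched as follows: given a predicted shrinkage profile Δr(θ), the nominal radius is enlarged by that amount so the as-built boundary lands on the nominal one. The shrinkage function used here (a constant systematic term plus a small lobed harmonic) and its parameters are purely illustrative, not the statistical model of Huang et al.

```python
from math import cos, pi

def predicted_shrinkage(theta, r0, alpha=0.005, beta=0.001, k=2):
    """Hypothetical shrinkage profile in the PCS: a constant systematic
    term alpha*r0 plus a small k-lobed harmonic term."""
    return alpha * r0 + beta * r0 * cos(k * theta)

def compensated_radius(theta, r0):
    """Enlarge the nominal radius by the predicted shrinkage so that the
    as-built radius, after shrinking, lands back on the nominal one."""
    return r0 + predicted_shrinkage(theta, r0)

# Compensated boundary of a nominal circle of radius 10, sampled at 8 angles.
r0 = 10.0
profile = [compensated_radius(i * 2 * pi / 8, r0) for i in range(8)]
```

Because the whole profile is a function of θ, a single 1D compensation function replaces the per-point x/y corrections that would be needed in the CCS.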

In a closely related study, Huang et al. [11] extend this approach from
cylindrical shapes to polyhedrons. They propose to treat an in-plane polyhedron
as a part carved out of its circumscribed circle. A novel cookie-cutter
function, integrated into the cylindrical basis model, determines how the
polyhedron is trimmed from its circumscribed circle. This function is defined
as a periodic waveform that can be modified according to the polyhedron shape.
Later, this model was further extended to freeform shapes [12]. The freeform
shape is approximated either by a polygon, using the Polygon Approximation with
Local Compensation (PALC) strategy, or by an addition of circular sectors,
using the Circular Approximation with Selective Cornering (CASC) strategy. Both
of the strategies can
be easily implemented based on the previous models. Moreover, in [13] they
propose a novel spatial deviation model under the Spherical Coordinate System
(SCS), in
which both in-plane and out-of-plane errors are incorporated in a consistent math-
ematical framework. Based on the above-mentioned shape deviation models,
appropriate compensation plans are derived through experimentation using a
stereolithography process. Looking over all the publications of Huang and his
co-authors, we can conclude that their research on shape deviation modeling for
AM processes is generic and comprehensive, covering both 2D and 3D deviations
and shapes of different complexities. The methodologies have also been
validated through extensive experiments, showing effective predictability of
the shape shrinkage deviations. However, in order to deduce the exact functions
of these models for compensation, measurements have to be made on a certain
number of experimental parts to obtain the shape deviation information. This
means that these models are only applicable once manufactured products are
available; they lack a consideration of the overall digital chain of the AM
process and the ability to predict possible deviations without experimental
data.

2.2 Modification of input files

Apart from the error compensation approaches, which seek to reduce geometric
errors by learning from experimental data, a number of researchers have
proposed to eliminate the errors in the input files of AM processes. The
modification of STL files and the improvement of slicing techniques are the two
mainstream research topics. This research is motivated by the fact that AM does
not work on the original CAD model but uses the STL file, in which the nominal
part surface is approximated by a triangular mesh representation. A “chordal
error”, interpreted as the Euclidean distance between the STL facet and the CAD
surface, is introduced during the translation from CAD to STL, as can be seen
in Figure 3(a). Besides, a “staircase error” occurs due to the slicing of the
STL file when building the part layer by layer, as shown in Figure 3(b). The
maximum cusp height has been adopted as an accuracy measurement parameter for
AM processes.

Fig. 3. (a) 2D illustration of the chordal error (b) 2D illustration of the staircase effect.
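Both error sources have simple closed-form magnitudes in elementary cases, which the following sketch computes: the sagitta of a chord approximating a circular arc (chordal error), and the common cusp-height estimate t·cos(angle) for the staircase effect. The circle example and the angle convention are assumptions chosen for illustration.

```python
from math import cos, pi

def chordal_error_circle(radius, n_facets):
    """Maximum chordal error (sagitta) when a circle of the given radius
    is approximated by a regular n-facet polygon, as in an STL export."""
    return radius * (1 - cos(pi / n_facets))

def cusp_height(layer_thickness, surface_angle):
    """Staircase cusp height for a surface whose normal makes
    surface_angle (radians) with the build direction: t * cos(angle)."""
    return layer_thickness * cos(surface_angle)

# Refining the tessellation shrinks the chordal error.
coarse = chordal_error_circle(10.0, 12)
fine = chordal_error_circle(10.0, 120)
```

A vertical wall (normal perpendicular to the build direction) has zero cusp height, while near-horizontal surfaces approach the full layer thickness, which is why the staircase effect dominates on shallow slopes.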

A notable breakthrough in reducing the chordal error is the Vertex Translation
Algorithm (VTA) proposed by Navangul et al. [14], in which multiple points are
selected on an STL facet and the chordal error is computed as the distance
between each point and its corresponding point on the NURBS patch of the CAD
surface. The point with the maximum chordal error is then identified and
translated to the CAD surface; three new facets are generated by connecting the
translated point with the vertices of the facet and added to the STL file,
while the original facet is deleted. Figure 4 gives an illustration of this
algorithm. A facet isolation algorithm (FAI) is also introduced to determine
the points to be modified by extracting the STL facets corresponding to the
features of the part. This algorithm improves the STL file quality by
iteratively modifying the STL facets until the chordal errors are minimized.
However, a number of iterations are usually required to satisfy the specified
tolerance parameters, and each iteration consumes a significant amount of
computation time and enlarges the file size. Similarly, the Surface-based
Modification Algorithm (SMA) proposed by Zha et al. [15] modifies the STL file
by adaptively and locally increasing the facet density until the geometrical
accuracy is satisfied. The individual part surfaces to be modified are selected
by estimating the average chordal error and cusp height error of the surface.
The modification is also applied to certain points of each STL facet of the
selected surface according to predefined rules. However, SMA will likely
increase the STL file size exponentially, so it is only preferable for
high-accuracy part models with complex features and multiple surfaces, as a
result of the tradeoff between smaller file size and higher accuracy.

Fig. 4. Illustration of the VTA algorithm.
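One VTA step can be sketched as follows, with a unit sphere standing in for the NURBS surface so that the closest-point query is trivial: find the sample point of maximum chordal error, project it to the surface, and replace the facet by three facets sharing the projected vertex. The single-centroid sampling is a simplification of the real algorithm's multi-point sampling.

```python
from math import sqrt

def closest_on_sphere(p, r=1.0):
    """Project a point onto a sphere of radius r centred at the origin
    (a stand-in for the NURBS closest-point query of the real algorithm)."""
    n = sqrt(sum(c * c for c in p))
    return tuple(c * r / n for c in p)

def chordal_error(p, r=1.0):
    """Distance from a sample point to its closest point on the surface."""
    q = closest_on_sphere(p, r)
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def vertex_translation_step(facet, samples, r=1.0):
    """One VTA step: find the sample point with the largest chordal error,
    project it onto the surface, and replace the facet by three new facets
    that share the projected vertex."""
    worst = max(samples, key=lambda p: chordal_error(p, r))
    new_vertex = closest_on_sphere(worst, r)
    a, b, c = facet
    return [(a, b, new_vertex), (b, c, new_vertex), (c, a, new_vertex)]

# A facet spanning three points of the unit sphere, sampled at its centroid.
facet = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
centroid = tuple(sum(v[i] for v in facet) / 3.0 for i in range(3))
new_facets = vertex_translation_step(facet, [centroid])
```

Each step puts one more vertex exactly on the design surface, which is why iterating the algorithm drives the chordal error down at the cost of a growing facet count.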

Instead of modifying the whole STL file, Kunal [16] proposes to minimize the
errors by modifying each 2D slice of the STL file. The STL contour and the
designed NURBS contour of each slice are captured, and new points are generated
on each triangle chord of the STL contour to verify whether the chordal error
threshold is exceeded in this chord. If so, the corresponding chord points are
translated to the NURBS contour until the chordal error falls below the
threshold, and a new STL contour is formed by connecting the new points. This
approach can be seen as a 2D version of the VTA algorithm that focuses on
altering the part at the manufacturing level. The modification is done on each
slice, thus calling for a large computation effort. Moreover, if any changes
are made to the slicing plan, the whole process has to be repeated, which
reduces its generality and constrains its application.
To reduce the staircase error, some research on adaptive slicing has been
conducted. Instead of uniform slicing, which slices the STL file with a
constant slice thickness, adaptive slicing seeks to slice the file using
variable slice thicknesses to achieve the desired surface quality while at the
same time ensuring a decreased build time. Octree data structures have been
adopted by Siraskar et al. [17] to accomplish the adaptive slicing of the
object. A method termed Modified Boundary Octree Data Structure is used to
convert the STL file of an object into an Octree data structure by iteratively
subdividing a universal cube enclosing the STL file into small node cubes
according to defined subdivision conditions. The height values of the final
cubes can then be taken as slice thicknesses. This approach has been shown
through virtual manufacturing to ensure the geometrical accuracy of the
manufactured part, but it is quite limited in real practice due to the lack of
proper support for adaptive slicing in mainstream AM machines. To overcome this
limitation, a clustered slice thickness approach has been introduced [18], in
which clustered strips of varying layer thicknesses are calculated manually
using the minimum slice thickness, with each clustered band of uniform slices
considered as a separate part built on top of the previous one along the build
direction. A KD-tree data structure is also adopted in this study to subdivide
the bounding box of the STL file to determine the slice thickness, with the
cusp error threshold used to decide whether a cube should be further
subdivided. The adaptive slicing approaches are theoretically useful for
reducing build time and increasing part accuracy, but the main challenge
remains the lack of direct support in AM machines.

2.3 Evaluation and tolerance verification of shape deviations

The deviations resulting from the proposed models need to be verified with
respect to tolerance specifications, so as to provide significant feedback for
the modification of the design. Current studies focus on evaluating the
deviations through analysis of the STL file: the STL file is sliced and the
points on each slice contour are sampled for the evaluation. In [19],
verification methods are proposed for both dimensional and geometric tolerances
in AM processes. For dimensional tolerances, the Least Squares (LSQ) fitting
method is used to derive a substitute geometry from the extracted points, the
dimension of which is then measured and verified. For geometric tolerances, a
Minimum Zone (MZ) fitting method is used to derive the two separating nominal
features that enclose all the extracted points with a minimum distance, and
their distance is compared with the tolerance value. However, this study
overlooks the staircase effect, and the MZ method is not realistic for shapes
with complex geometries. In a series of works [14,15,16], similar virtual
manufacturing methods based on the STL file are adopted to evaluate shape
deviations, in which the contour points of each slice are offset in the build
direction by one layer thickness to form a virtual layer, thus accounting for
the staircase effect. The profile error of complex shapes is evaluated by
calculating the maximum distance between the contour points and their closest
corresponding points on the design surface.
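The two fitting steps can be sketched for a 2D slice contour: a least-squares circle fit (here the algebraic Kåsa formulation, one possible LSQ variant) for the dimensional check, and the width of the annular band containing all points as a simple stand-in for a true minimum-zone roundness evaluation. The lobed test contour is synthetic.

```python
from math import cos, sin, pi, hypot

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [v] for row, v in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def lsq_circle(points):
    """Kasa least-squares circle fit: x^2 + y^2 = a*x + b*y + c."""
    A = [[sum(x * x for x, y in points), sum(x * y for x, y in points), sum(x for x, y in points)],
         [sum(x * y for x, y in points), sum(y * y for x, y in points), sum(y for x, y in points)],
         [sum(x for x, y in points), sum(y for x, y in points), float(len(points))]]
    rhs = [sum((x * x + y * y) * x for x, y in points),
           sum((x * x + y * y) * y for x, y in points),
           sum(x * x + y * y for x, y in points)]
    a, b, c = solve3(A, rhs)
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, (c + cx * cx + cy * cy) ** 0.5

def roundness_band(points, cx, cy):
    """Width of the annular zone (about the fitted centre) containing all
    points -- a simplified stand-in for a true minimum-zone evaluation."""
    radii = [hypot(x - cx, y - cy) for x, y in points]
    return max(radii) - min(radii)

# Synthetic slice contour: a nominally 5 mm radius circle with a 3-lobed form error.
pts = [((5 + 0.02 * cos(3 * t)) * cos(t), (5 + 0.02 * cos(3 * t)) * sin(t))
       for t in [i * 2 * pi / 36 for i in range(36)]]
cx, cy, r = lsq_circle(pts)
band = roundness_band(pts, cx, cy)
```

The fitted radius would be checked against the dimensional tolerance and the band width against the roundness tolerance; a true MZ evaluation would also optimize the centre, not just reuse the LSQ one.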

2.4 Discussion

In this section, two major categories of shape deviation modeling methods have
been reviewed; a comprehensive overview of these methods is given in Table 2.
The validity of these methods has been proved through experiments and
simulations. However, each of them covers only certain phases of the AM process
and lacks the ability to model deviations from an overall view of the product
lifecycle.

Table 2. Overview of the reviewed methods.

References Dimensionality Geometric Model Main Characteristics


[4,5] 2D, 3D discrete Machine errors of FDM and SLA
[6,7] 2D continuous Modeling with PCS
[8-12] 2D continuous Shape shrinkage, freeform shapes
[13] 2D, 3D continuous Modeling with SCS
[14,15] 3D discrete Modifying STL facets
[16] 2D discrete Modifying 2D slice contours
[17,18] 3D discrete Adaptive slicing

3 The Skin Model Shapes paradigm for AM

The main contributions of the Skin Model Shapes have been highlighted recently
in different applications, such as assembly, tolerance analysis, and motion
tolerancing. The generation of Skin Model Shapes can be divided into a
prediction stage and an observation stage, depending on the information
available from product design and manufacturing processes [2]. Geometric
deviations can be classified into systematic and random deviations: systematic
deviations originate from characteristic errors of the manufacturing process,
while random deviations occur due to inevitable fluctuations of material and
environmental conditions. Notably, in the observation stage, geometric
deviations are modeled by extracting the statistical information of deviations
from a training set of observations gathered from manufacturing process
simulations or measurements, so that possible deviations in newly manufactured
parts can be effectively predicted [20]. Figure 5 shows the creation of Skin
Model Shapes in both the prediction and observation stages.
[Flowchart: prediction stage (nominal model → systematic deviations → random
deviations) and observation stage (observations → apply (K)PCA → sampling
scores), both checked against specifications before yielding Skin Model Shapes]

Fig. 5. The creation of Skin Model Shapes in the prediction and observation stages [19].
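The observation stage of Figure 5 — learning deviation modes from a training set of measured parts, then sampling plausible new shapes — can be sketched as follows. This is a minimal illustration using plain PCA (via SVD) on synthetic deviation data, not the (K)PCA pipeline of [19,20] itself; all sizes and noise levels here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: n_parts parts, each a deviation vector over n_points
# surface points (in practice these come from measurements or simulations).
n_parts, n_points = 50, 200
modes = rng.normal(size=(3, n_points))                 # three "true" deviation modes
scores = rng.normal(size=(n_parts, 3)) * [0.3, 0.1, 0.05]
train = scores @ modes + 0.01 * rng.normal(size=(n_parts, n_points))

# PCA on the training deviations via SVD of the centred data.
mean = train.mean(axis=0)
u, s, vt = np.linalg.svd(train - mean, full_matrices=False)
n_modes = 3
std = s[:n_modes] / np.sqrt(n_parts - 1)               # score std dev per mode

# Sample a new Skin Model Shape: mean deviation + random combination of modes.
new_scores = rng.normal(size=n_modes) * std
new_deviation = mean + new_scores @ vt[:n_modes]
print(new_deviation.shape)
```

The sampled vector would then be applied to the nominal surface points to obtain one predicted Skin Model Shape instance.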

Compared with the above-mentioned approaches, the Skin Model Shapes paradigm can model both 2D and 3D deviations, either by prediction based on assumptions and virtual or real experiments, or by learning from observation data gathered from manufactured samples. It covers the overall digital chain of the AM process; although specific methodologies for its application in AM remain to be developed, it is a suitable and promising modeling framework for AM processes.

4 Conclusion

Geometrical accuracy is an important concern for AM processes. In this paper, the current research status of shape deviation modeling for AM processes has been reviewed. The major modeling methods, categorized as deviation compensation methods and input file modification methods, have been discussed, and their advantages and limitations highlighted. The Skin Model Shapes paradigm has been introduced as a promising modeling method for AM. In further studies, the authors will focus on adapting the Skin Model Shapes paradigm to the AM process and on developing a comprehensive deviation modeling framework to support the whole digital chain.

Acknowledgments This research has benefitted from the financial support of the China Scholarship Council (first author).

References

1. ASTM F2792-12a, Standard Terminology for Additive Manufacturing Technologies, ASTM International, West Conshohocken, PA, 2012.
2. Anwer N., Ballu A., and Mathieu L. The skin model, a comprehensive geometric model for engineering design. CIRP Annals-Manufacturing Technology, 2013, 62(1), 143-146.
250 Z. Zhu et al.

3. Garaizar, O. R., Qiao, L., Anwer, N., and Mathieu, L. Integration of Thermal Effects into
Tolerancing Using Skin Model Shapes. Procedia CIRP, 2016, 43, 196-201.
4. Tong K., Amine Lehtihet E., and Joshi S. Parametric error modeling and software error
compensation for rapid prototyping. Rapid Prototyping Journal, 2003, 9(5), 301-313.
5. Tong K., Joshi S., and Amine Lehtihet E. Error compensation for fused deposition model-
ing (FDM) machine by correcting slice files. Rapid Prototyping Journal, 2008, 14(1), 4-14.
6. Xu L., Huang Q., Sabbaghi A., and Dasgupta T. Shape deviation modeling for dimensional
quality control in additive manufacturing. In ASME 2013 International Mechanical Engi-
neering Congress and Exposition, IMECE 2013, San Diego, November 2013, pp.
V02AT02A018-V02AT02A018.
7. Song S., Wang A., Huang Q. and Tsung F. Shape deviation modeling for fused deposition
modeling processes. In IEEE International Conference on Automation Science and Engi-
neering, CASE 2014, Taipei, August 2014, pp.758-763.
8. Huang Q., Zhang J., Sabbaghi A., and Dasgupta T. Optimal offline compensation of shape
shrinkage for three-dimensional printing processes. IIE Transactions, 2015, 47(5),431-441.
9. Huang Q., Nouri H., Xu K., Chen Y., Sosina S., and Dasgupta T. Statistical Predictive
Modeling and Compensation of Geometric Deviations of Three-Dimensional Printed
Products. Journal of Manufacturing Science and Engineering, 2014, 136(6), 061008.
10. Sabbaghi A., Dasgupta T., Huang Q., and Zhang J. Inference for deformation and interfer-
ence in 3D printing. The Annals of Applied Statistics, 2014, 8(3), 1395-1415.
11. Huang Q., Nouri H., Xu K., Chen Y., Sosina S., and Dasgupta T. Predictive modeling of
geometric deviations of 3d printed products-a unified modeling approach for cylindrical
and polygon shapes. In IEEE International Conference on Automation Science and Engi-
neering, CASE 2014, Taipei, August 2014, pp.25-30.
12. Luan H. and Huang Q. Predictive modeling of in-plane geometric deviation for 3D printed
freeform products. In IEEE International Conference on Automation Science and Engi-
neering, CASE 2015, Gothenberg, August 2015, pp.912-917.
13. Jin Y., Qin S.J., and Huang Q. Out-of-plane geometric error prediction for additive manu-
facturing. In IEEE International Conference on Automation Science and Engineering,
CASE 2015, Gothenberg, August 2015, pp. 918-923.
14. Navangul G., Paul R., and Anand S. Error minimization in layered manufacturing parts by
stereolithography file modification using a vertex translation algorithm. Journal of Manu-
facturing Science and Engineering, 2013, 135(3), 031006.
15. Zha W. and Anand S. Geometric approaches to input file modification for part quality im-
provement in additive manufacturing. Journal of Manufacturing Processes, 2015, 20, 465-
477.
16. Sharma K. Slice Contour Modification in Additive Manufacturing for Minimizing Part Er-
rors. Electronic Thesis or Dissertation. University of Cincinnati, 2014. OhioLINK Elec-
tronic Theses and Dissertations Center. 03 April 2016.
17. Siraskar N., Paul R., and Anand S. Adaptive Slicing in Additive Manufacturing Process
Using a Modified Boundary Octree Data Structure. Journal of Manufacturing Science and
Engineering, 2015, 137(1), 011007.
18. Panhalkar N., Paul R., and Anand S. Increasing Part Accuracy in Additive Manufacturing
Processes Using a kd Tree Based Clustered Adaptive Layering. Journal of Manufacturing
Science and Engineering, 2014, 136(6), 061017.
19. Moroni G., Syam W.P., and Petrò S. Towards early estimation of part accuracy in additive
manufacturing. Procedia CIRP, 2014, 21, 300-305.
20. Anwer N., Schleich B., Mathieu L., and Wartzack, S. From solid modelling to skin model
shapes: Shifting paradigms in computer-aided tolerancing. CIRP Annals-Manufacturing
Technology, 2014, 63(1), 137-140.
Design for Additive Manufacturing of a non-assembly robotic mechanism
F. De Crescenzio1 and F. Lucchi1
1 Department of Industrial Engineering, University of Bologna
* Corresponding author. Tel.: +39 0543 374447;
E-mail addresses: francesca.decrescenzio@unibo.it, f.lucchi@unibo.it

Abstract
The growing potential of additive manufacturing technologies is currently being boosted by their introduction in the direct manufacturing of ready-to-use products or components, regardless of their shape complexity. Taking advantage of this capability, a whole set of new solutions can be explored, related to the possibility of directly manufacturing joints or mechanisms as unibody structures.
In this paper, the preliminary design of a robotic mechanism is presented. The component is designed to be manufactured as a unibody structure by means of an additive manufacturing technology. The Fused Deposition Modelling technique is used to print the mechanical arm as a single component, composed of different functional parts already assembled in the CAD model. Soluble support material is commonly used to support undercuts; in this case it is also deposited in the space between two adjacent parts of the same component, in order to allow the relative motion and the kinematic connection between them. The design process considers component optimization in relation to both the specific manufacturing technique and the interaction between the different parts of the same component, in order to guarantee the proper relative motions.
The conceived mechanism consists of a robotic structure in which the mechanical arm is bound to a base and connected to a plier on the opposite side.
The effect of clearance between all the kinematic parts is evaluated in order to assess the mechanism's degree of mobility in relation to the manufacturing process and to component tolerances and geometry.

Keywords: Design Methods, Additive Manufacturing, unibody mechanism manufacturing, clearance assessment, Fused Deposition Modeling technique

1 Introduction

Additive Manufacturing (AM) and Rapid Prototyping (RP) technologies and techniques concern the layer-based additive fabrication of components and prototypes [1].

© Springer International Publishing AG 2017 251


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_26

Such techniques are mainly used in product development and manufacturing with the aim of reducing time to market and optimizing costs and the whole design and fabrication chain. 3D printing allows parts, components, sub-components and tools to be made in a wide variety of materials, in a simple way, without tooling, assembly lines or supply chains. In this perspective, AM changes the paradigms of classical industrial manufacturing and development, as well as the design processes and procedures [2].
On the other hand, high customization of products and services, delivery time and green design are currently the new challenges to take into consideration in innovative product development. In this context, 3D printers are being used to economically create custom, improved, ready-to-use products, regardless of geometric complexity. Such features make AM techniques particularly attractive for small fabrication batches or low production volumes.
3D printing technology is evolving rapidly with regard to manufacturing time, improved materials, and reduced costs. The flexibility to build a wide range of products and shapes implies a profound change to supply chains and business models.
The design procedures and requirements are changing as well, since innovations in product design are introduced in relation to AM paradigms. First of all, tolerances and dimensions depend on the layer thickness, the building machine properties, the specific technology used and the process parameters. Furthermore, the possibility to design and 3D print the internal structure is a key feature for optimizing the balance between the volume of material used, component weight, and final product resistance and performance. In this perspective, AM gives the designer precise control of the deposition of the material, with the possibility of modelling component internal structures layer by layer.
In this paper the authors explore the effect of clearance parameters in the design and manufacturing of a case study for which the unibody mechanism approach is particularly relevant, due to the number and complexity of the joints needed. If no assembly phases are required, production times and costs can be reduced. Furthermore, the mechanism's performance can be improved, reducing errors in part positioning and fixing. In the proposed case study the clearance is assessed in order to determine the optimal play between the parts that makes each pair posable and/or movable, depending on the force applied to the joint.

1.1 Joint Design for 3D Additive Manufacturing

Design For Manufacturing (DFM) approaches aim to integrate manufacturing rules and aspects within product design, optimizing object performance and features according not only to the product requirements, but also to the selected manufacturing process. Many research activities aim to combine DFM principles with AM techniques, defining new design approaches, methodologies and manufacturability evaluations to substantially improve final product quality [3].
One of the most challenging innovations brought by the use of AM in robotics and product manufacturing is the removal of the need for component assembly [4]. This has been demonstrated to reduce both time and costs in product manufacturing, even if the design phases have to be improved in some respects, such as increasing joint accuracy and reducing coupling clearance, which in 3D printed mechanisms can be too large, reducing movement precision. A clearance assessment for kinematic pairs is proposed in [5], where empirical tests are performed on revolute pairs, prismatic pairs and cylindrical pairs, and functional relationships between extraction force and clearance and between moment and clearance are developed. Many studies explore novel joint designs, reducing pair clearance and increasing stability. In this respect, in [6] a novel joint has been designed with the introduction of add-on markers on the surfaces, reducing joint clearance and improving stability and rotation performance. Some authors test the impact of drum-shaped rollers on the dynamic behavior of the mechanism, reducing instability effects [7].
Unibody 3D printed mechanical joints can find application as functional articulations for biomedical case studies. Such articulated models should be posable: the joints have to hold a defined position, with internal friction withstanding gravity, and without adjacent parts fusing during the deposition process. In this respect, [8] proposes a design workflow for 3D printed posable articulated models without any assembly requirements.

2 Preliminary Design of a Robotic Mechanism

The proposed case study is related to the conceptual design of the main body of a flexible robotic arm printed by means of the FDM technique. In this paper we do not yet consider the electronic actuation. The arm's purpose is to support the moving of small objects, simulating the shoulder and elbow mechanism. The design requirements are mainly related to the small dimensions of the arm, and the coupling parts should give the arm extremity the capability to reach every point within the volume defined by the arm length. Furthermore, the slenderness S (1) is defined as the ratio between the sum of the lengths of all arm components (li) and the average area of the arm sections (Am):

S = (Σ li) / Am      (1)
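Equation (1) can be evaluated directly; the link lengths and average section area below are illustrative values only, not the actual arm dimensions (which the paper does not report here).

```python
def slenderness(link_lengths_mm, avg_section_area_mm2):
    """Slenderness S = (sum of link lengths l_i) / (average section area A_m), Eq. (1)."""
    return sum(link_lengths_mm) / avg_section_area_mm2

# Illustrative values only: three links (mm) and a 120 mm^2 average section.
S = slenderness([40.0, 35.0, 25.0], 120.0)
print(S)
```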

The robotic arm is divided into two parts, whose movements permit the motion of an object in 3D space: the mechanical arm and the pliers. The focus of the activity is to determine the optimal design of the structure of both parts, considering manufacturing them as unibody components. The optimal joint play is assessed as a balance between the friction necessary to make a position stable and the clearance needed to allow movements.
The robotic mechanism movements are divided into sub-tasks: the mechanical arm's movements, the pliers' movements, and the pliers' opening and closing. A kinematic chain has been defined as follows: 3 rotating pairs with parallel lines of action are used for the mechanical arm movements; 2 rotating pairs with perpendicular lines of action are used for the pliers' movements; and 2 gearwheels are used for the pliers' opening and closing. The main selection parameters are precision and ease of positioning, component resistance, compactness, and a simple architecture.
The preliminary design of the mechanical arm and of the pliers has been developed, introducing the selected kinematic chains (Figure 1 and Figure 2).
The aim of the research activity is to optimize the sizing of the rotating pairs to be manufactured as unibody structures, in order to allow movement within the joint (clearance effect), with proper positioning between all the parts and with the ability to hold any position of the arm extremity as a stable equilibrium position (posable effect).
The final solution is prototyped with the Fused Deposition Modeling (FDM) technique as a demonstration of the developed concept.

Fig. 1. Mechanical arm structure.

Fig. 2. Pliers structure.



3 Clearance Assessments

The preliminary design phase of the robotic mechanism leads to the general dimensioning of the components, defining the coupling types and their connections. All the rotating pairs are sized by a nominal value corresponding to the shaft diameter. The hole diameters are selected by assessing the clearance effect between the surfaces.
An experimental test plan is defined and investigations are performed on the basis of the reference joints. The nominal diameter of each shaft joint is taken as the reference for each test, and the diameter of the hole is increased by discrete values, in order to determine the minimum play that allows the posable effect, in relation to the specific manufacturing technology.
All the defined test specimens are manufactured with the Stratasys Fortus 250 3D printer, based on the Fused Deposition Modeling (FDM) technique, which builds parts up layer by layer by heating and extruding a thermoplastic filament (ABS). Each tested joint (a shaft/hole coupling) is treated and prototyped as a single part; the support material is used to position the two sub-components during the manufacturing phase and is then removed with a basic solvent at 70 °C.
The hole diameter is increased considering the minimum clearance necessary to obtain two separate movable parts after the manufacturing and post-processing procedures.

Table 1. Pliers joints: test set-up

Rotating pair   Nominal shaft diameter   Hole diameter   Clearance
RP1             13 mm                    13.2 mm         0.1 mm
                                         13.3 mm         0.15 mm
                                         13.4 mm         0.2 mm
                                         13.6 mm         0.3 mm
                                         13.8 mm         0.4 mm
RP2             15 mm                    15.2 mm         0.1 mm
                                         15.3 mm         0.15 mm
                                         15.4 mm         0.2 mm
                                         15.6 mm         0.3 mm
                                         15.8 mm         0.4 mm
RP3             3 mm                     3.2 mm          0.1 mm
                                         3.3 mm          0.15 mm
                                         3.4 mm          0.2 mm
                                         3.6 mm          0.3 mm
                                         3.8 mm          0.4 mm

The test plan is conceived as represented in Table 1 and Table 2, considering the pliers and the mechanical arm dimensions respectively. Five specimens are designed and manufactured for each of the four reference nominal shaft diameters; the respective hole diameters are increased by 0.2 mm, 0.3 mm, 0.4 mm, 0.6 mm and 0.8 mm, corresponding to radial clearances from 0.1 mm to 0.4 mm. In total, 20 sample joints are designed and prototyped. The selected geometry and the prototyped models are represented in Figure 3.

Table 2. Mechanical arm joints: test set-up

Rotating pair   Design shaft diameter   Hole diameter   Clearance
RA1             15 mm                   15.2 mm         0.1 mm
                                        15.3 mm         0.15 mm
                                        15.4 mm         0.2 mm
                                        15.6 mm         0.3 mm
                                        15.8 mm         0.4 mm
RA2             13 mm                   13.2 mm         0.1 mm
                                        13.3 mm         0.15 mm
                                        13.4 mm         0.2 mm
                                        13.6 mm         0.3 mm
                                        13.8 mm         0.4 mm
RA3             7 mm                    7.2 mm          0.1 mm
                                        7.3 mm          0.15 mm
                                        7.4 mm          0.2 mm
                                        7.6 mm          0.3 mm
                                        7.8 mm          0.4 mm

Fig. 3. Designed and prototyped joint samples.

The optimal clearance values are determined as a balance between the possibility to move the parts of each joint and the posable factor. Since the case study has only a research purpose, the assessment is performed manually; for further development on a real case study, the actual forces and strains would have to be considered and applied to each specimen.
The selected clearance values are then used to optimize the sizing of the robotic mechanism.

4 Results

Table 3. Clearance assessment: results

Component       Design shaft diameter   Hole diameter   Clearance   Joint movable   Joint posable
Rotating pair   15 mm                   15.2 mm         0.1 mm      No              -
RA1 - RP2                               15.3 mm         0.15 mm     No              -
                                        15.4 mm         0.2 mm      Yes             Yes
                                        15.6 mm         0.3 mm      Yes             No
                                        15.8 mm         0.4 mm      Yes             No
Rotating pair   13 mm                   13.2 mm         0.1 mm      No              -
RA2 - RP1                               13.3 mm         0.15 mm     No              -
                                        13.4 mm         0.2 mm      Yes             Yes
                                        13.6 mm         0.3 mm      Yes             No
                                        13.8 mm         0.4 mm      Yes             No
Rotating pair   7 mm                    7.2 mm          0.1 mm      No              -
RA3                                     7.3 mm          0.15 mm     Yes             Yes
                                        7.4 mm          0.2 mm      Yes             No
                                        7.6 mm          0.3 mm      Yes             No
                                        7.8 mm          0.4 mm      Yes             No
Rotating pair   3 mm                    3.2 mm          0.1 mm      No              -
RP3                                     3.3 mm          0.15 mm     Yes             Yes
                                        3.4 mm          0.2 mm      Yes             No
                                        3.6 mm          0.3 mm      Yes             No
                                        3.8 mm          0.4 mm      Yes             No

Fig. 4. Final prototyped model.

The manufactured sample joints are used to assess the necessary clearance. The optimal values are selected as the reference to design and manufacture the resulting robotic mechanism. The results are reported in Table 3.
The selected hole diameters are 15.4 mm, 13.4 mm, 7.3 mm and 3.3 mm. The preliminary design of the mechanical arm is optimized with the defined values and prototyped (Figure 4).
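The selection rule implied by Table 3 — take the smallest tested clearance for which the joint is both movable and posable — can be sketched as follows. The dictionary mirrors two rows of Table 3; a joint that is not movable is recorded here as not posable either (blank in the original table).

```python
# Outcomes from Table 3: clearance (mm) -> (movable, posable).
RESULTS = {
    "RA1-RP2 (15 mm)": {0.1: (False, False), 0.15: (False, False),
                        0.2: (True, True), 0.3: (True, False), 0.4: (True, False)},
    "RA3 (7 mm)":      {0.1: (False, False), 0.15: (True, True),
                        0.2: (True, False), 0.3: (True, False), 0.4: (True, False)},
}

def optimal_clearance(outcomes):
    """Smallest tested clearance that is both movable and posable, else None."""
    feasible = [c for c, (movable, posable) in outcomes.items() if movable and posable]
    return min(feasible) if feasible else None

for joint, outcomes in RESULTS.items():
    print(joint, optimal_clearance(outcomes))   # 0.2 mm and 0.15 mm respectively
```

Applied to all four joints, this rule reproduces the selected hole diameters of 15.4 mm, 13.4 mm, 7.3 mm and 3.3 mm.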

5 Discussion and further developments

In this paper, an experimental procedure is proposed to obtain posable structures from parts assembled in a Computer Aided Design environment and manufactured as unibody structures by means of Additive Manufacturing. The described approach is based on the use of a reference shaft diameter and a progressive variation of the hole diameters for each joint of the designed structure. The experiments have been conducted on a Fused Deposition Modelling machine with soluble support technology. Soluble or removable support is the main assumption for the implementation of the proposed workflow. The same approach could be implemented with powder-based AM technologies, such as powder-bed 3D printing or SLS (Selective Laser Sintering). Further developments are related to a deeper investigation of the connection between a joint's capability to sustain an active force and the joint clearance. The results show different behavior in relation to the nominal shaft diameter: future investigations will address this aspect and the manufacturing process parameters as well.
This study is intended to contribute to the knowledge related to the creation of non-assembly structures, as this kind of structure may open the way toward several new and challenging applications of additive manufacturing, such as robotics and the reconfiguration of layouts in product development.

References

1. Wong, Kaufui V., and Aldo Hernandez. A review of additive manufacturing. ISRN Mechanical Engineering, 2012.
2. Wei Gao, Yunbo Zhang, Devarajan Ramanujan, Karthik Ramani, Yong Chen, Chris-
topher B. Williams, Charlie C.L. Wang, Yung C. Shin, Song Zhang, Pablo D.
Zavattieri. The status, challenges, and future of additive manufacturing in engineering.
Computer-Aided Design, Volume 69, December 2015, Pages 65-89, ISSN 0010-4485,
http://dx.doi.org/10.1016/j.cad.2015.04.001.
3. Olivier Kerbrat, Pascal Mognol, Jean-Yves Hascoët. A new DFM approach to com-
bine machining and additive manufacturing. Computers in Industry, Elsevier, 2011,
pp.684-692. doi:10.1016/j.compind.2011.04.003

4. A. Bruyas, F. Geiskopf and P. Renaud, Toward unibody robotic structures with inte-
grated functions using multimaterial additive manufacturing: Case study of an MRI-
compatible interventional device, Intelligent Robots and Systems (IROS), 2015
IEEE/RSJ International Conference on, Hamburg, 2015, pp. 1744-1750. doi:
10.1109/IROS.2015.7353603
5. Shrey Pareek, Vaibhav Sharma and Rahul Rai. Design for additive manufacturing of
kinematic pairs. The International Solid Freeform Fabrication Symposium, The Uni-
versity of Texas at Austin, Texas, USA, August 2014.
6. Xuan Song and Yong Chen. Joint design for 3-D printing non-assembly mechanisms. ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Volume 5: 6th International Conference on Micro- and Nanosystems; 17th Design for Manufacturing and the Life Cycle Conference. Chicago, Illinois, USA, August 12-15, 2012. ISBN: 978-0-7918-4504-2. doi:10.1115/DETC2012-71528.
7. Yonghua Chen, Chen Zhezheng. Joint analysis in rapid fabrication of non-assembly
mechanisms. Rapid Prototyping Journal, 2011, 17(6):408-417. doi:
10.1108/13552541111184134.
8. Calì J., Calian D.A., Amati C., Kleinberger R., Steed A., Kautz J., and Weyrich T. 3D-printing of non-assembly, articulated models. ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH Asia 2012, 31(6), November 2012, Article No. 130. doi:10.1145/2366145.2366149.
Process parameters influence in additive
manufacturing

T. Ingrassia*, V. Nigrelli, V. Ricotta, C. Tartamella

Università degli Studi di Palermo, Dipartimento di Ingegneria Chimica, Gestionale, Informatica, Meccanica, Viale delle Scienze - 90128 Palermo, Italy
* Corresponding author. Tel.: +39 091 23897263; E-mail address: tommaso.ingrassia@unipa.it

Abstract Additive manufacturing is a rapidly expanding technology. It allows the creation of very complex 3D objects by adding layers of material, in contrast to traditional production systems based on the removal of material. The development of additive technology initially produced a generation of additive manufacturing techniques restricted to industrial applications, but their extraordinary degree of innovation has allowed the spread of household systems. Nowadays, the most common domestic systems produce 3D parts through a fused deposition modeling process. Such systems have low productivity and usually make objects with limited accuracy and unreliable mechanical properties.
These side effects can depend on the process parameters.
The aim of this work is to study the influence of some typical parameters of the additive manufacturing process on the characteristics of the prototypes. In particular, the influence of the layer thickness on shape and dimensional accuracy has been studied. Cylindrical specimens have been created with a 3D printer, the Da Vinci 1.0A by XYZprinting, using ABS filaments.
Dimensional and shape inspection of the printed components has been performed following a typical reverse engineering approach. In particular, the point clouds of the surfaces of the different specimens have been acquired through a 3D laser scanner. Afterwards, the acquired point clouds have been post-processed, converted into 3D models and analysed to detect any shape or dimensional difference from the initial CAD models. The obtained results may constitute a useful guideline for choosing the best set of process parameters to obtain printed components of good quality in a reasonable time while minimizing the waste of material.

Keywords: Additive Manufacturing, Reverse Engineering, Process Parameters, 3D Printing.

© Springer International Publishing AG 2017 261


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_27

1 Introduction

The term Additive Manufacturing (AM) groups all the technologies that allow 3D objects to be obtained by sequentially adding very thin material layers.
The development of additive technology initially produced a generation of AM systems restricted to industrial applications, but their extraordinary degree of innovation has allowed them to spread to a large public.
Nowadays, the terms Additive Manufacturing and 3D Printing have become common among very different users who are, quite often, not specialized in the engineering field. The spread of low-cost 3D printers, in fact, has brought this technology also to domestic users who, with no particular technical skill, can produce 3D objects at home in a short time and at reduced cost.
The most common domestic systems produce 3D parts through a Fused Deposition Modeling (FDM) [1-3] process. This technology is based on the concept that any 3D object can be theoretically subdivided into multiple sections or thin layers through a slicing procedure. Consequently, any 3D object can be created by superimposing many physical layers, one after the other. Using the FDM methodology, a filament of thermoplastic material is brought to the plastic state through a heated extruder and, just after, is deposited on a support plane following a path that defines one of the multiple layers into which the object to be produced has been virtually subdivided. To achieve the correct shape of each single layer, the extruder usually moves along a horizontal plane. The superimposition of the different layers is obtained through vertical movements of the support plane or of the extruder. Unfortunately, low-cost FDM 3D printers have low productivity and usually make objects with low dimensional and geometric accuracy [3-6]. These limits, of course, are due to the hardware and software solutions but also to the mechanical components which, because of the reduced cost of the printer, cannot be high-performance. Anyway, some of these side effects can be effectively limited by choosing suitable printing process parameters [7-10].
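The slicing step described above also fixes the number of layers, and hence roughly the build time: halving the layer thickness doubles the layer count. A minimal sketch, where the 30 mm specimen height is an assumed value for illustration (the actual dimensions are given only in Figure 2):

```python
import math

def layer_count(part_height_mm, layer_thickness_mm):
    """Number of slices needed to cover the part height (last layer may be partial)."""
    return math.ceil(part_height_mm / layer_thickness_mm)

height = 30.0  # assumed specimen height in mm, for illustration only
for t in (0.2, 0.05):            # the two layer thicknesses tested in this paper
    print(t, layer_count(height, t))
```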
The aim of this work is to investigate the dimensional and geometric accuracy of a common low-cost 3D printer and to study how some process parameters affect the geometrical and dimensional characteristics of the prototypes. In particular, accuracy analyses have been performed on cylindrical specimens obtained using different values of the layer thickness. The inspection of the physical prototypes has been performed through a typical reverse engineering approach [11-13], using a 3D laser scanner.

2 Materials and Methods

The analysed prototypes have been built using a Da Vinci 1.0A 3D printer (Fig. 1). The Da Vinci 1.0A can use both ABS and PLA filaments, has a maximum build volume of 20x20x20 cm and is managed by the proprietary "XYZware" software. Unfortunately, this software does not allow the printing parameters to be edited individually and accurately. For this purpose, the software packages "Slic3r" and "Simplify3D" have been used. The ".gcode" files created with these packages, containing the imposed set of printing parameters, have been imported into XYZware to launch the printing phase.

Fig. 1. The Da Vinci 1.0A 3D printer


As said previously, the quality of AM prototypes can strongly depend on a series of process parameters [7,9].
This paper presents some results of a more extensive research activity aimed at finding a correlation between the main printing parameters (layer thickness, extrusion width, printing speed, slicing direction and extrusion temperature) and the shape and dimensional accuracy of printed objects.
At this stage, preliminary results regarding the layer thickness, which represents the thickness of each slice, will be presented.
The survey has been conducted on the cylindrical feature shown in Figure 2.

Fig. 2. Dimensions of the analysed cylinder and two prototypes printed with 0.2 mm (on the left) and 0.05 mm (on the right) layer thickness values

Prototypes made using two different values of the layer thickness (0.2 mm and 0.05 mm, Fig. 2) have been tested. As regards the other printing parameters, the default values of the Da Vinci printer have been used (Table 1), and the slicing direction has been imposed along the z-axis.

Table 1. Main printing parameters

Parameter            Value
Nozzle diameter      0.4 mm
Print speed          5 mm/s
Bed temperature      90 °C
Nozzle temperature   210 °C
Fill density         25%
Wall thickness       0.7 mm
Some attempts have also been made to evaluate different slicing directions. Unfortunately, in no case was an acceptable prototype obtained (Fig. 3). These printing failures reveal a remarkable technical limit of the printer.

Fig. 3. Cylindrical specimens obtained with the slicing direction along the x-axis

2.1 3D acquisition and CAD modelling

The dimensional and shape accuracy inspection of the printed components has been performed following a typical reverse engineering approach [14-15]. All the prototypes have been acquired with a triangulation-based 3D laser scanner [16-18] by Hexagon Metrology (Fig. 4). This system has adjustable line lengths and is able to acquire point clouds at high speed (150,000 points per second) with a good accuracy level (0.013 mm).

Fig. 4. Hexagon Metrology 3D laser scanner HP-L-20.8


The acquired point clouds (Fig. 5a) have been post-processed and converted into NURBS surfaces (Fig. 5b), obtaining models that differ by less than 0.035 mm from the point clouds. In the final step of the process, the NURBS surfaces have been converted into CAD solid models (Fig. 5c).
Subsequently, each prototype CAD model has been aligned with the nominal CAD model (Fig. 6).

Fig. 5. a) Point cloud, b) NURBS surfaces and c) CAD model of a printed cylinder

The alignment has been obtained using iterative algorithms [7,18] to minimize the distance between the comparison points. To this aim, two different approaches have been used sequentially: at first, using a reduced number of comparison points, the models have been roughly aligned; afterwards, during the fine adjustment phase, the alignment has been optimized using a higher number of comparison points. Once all the CAD model alignments have been completed, dimensional and shape inspections of the printed prototypes have been performed.

Fig. 6. Aligned (acquired and nominal) CAD models.

3 Dimensional inspection

The dimensional inspection has been performed by measuring the deviation
values, defined as the shortest distance from the acquired model to any point
on the nominal CAD model. Root Mean Square Errors (RMSE) have also been
estimated, in order to gather information about the dimensional accuracy of the
printed objects.
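Given point sets sampled from the acquired and nominal models, both quantities reduce to a nearest-neighbour query; a minimal sketch (names illustrative). Note that a KD-tree returns unsigned distances, whereas the deviations in table 2 are signed (negative values mean the prototype lies below the nominal surface), so the sign would come from the nominal surface normals.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_and_rmse(acquired_pts, nominal_pts):
    """Shortest distance from each acquired point to the nominal model,
    summarised as the maximum deviation and the RMSE."""
    d, _ = cKDTree(nominal_pts).query(acquired_pts)
    return d.max(), np.sqrt(np.mean(d ** 2))
```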
The obtained results are summarized in table 2.

Table 2. Layer thickness values and comparison results of the prototypes

Prototype  Layer thickness (mm)  Deviation (mm)  RMSE (mm)
1  0.2  -0.589  0.324
2  0.2  -0.550  0.254
3  0.2  -0.641  0.318
4  0.05  -0.524  0.298
5  0.05  -0.663  0.297
6  0.05  -0.609  0.324

Figure 7 shows coloured maps related to the deviation values distributions over
the CAD models of the prototypes. Thanks to these maps, it can be noticed that
large differences between the acquired models and the nominal CAD one occur
along the z-axis. All the prototypes, in fact, are lower than the nominal cylinder.

Fig. 7. Deviation (mm) maps of printed cylinders


The obtained results show that the maximum deviation (absolute) value ranges
from 0.524 mm to 0.663 mm, while the RMSE varies between 0.254 mm and
0.324 mm. The prototype 4 has the lowest values of deviation, whereas the lowest
RMSE has been estimated for the prototype 2.
Afterwards, following the previously described alignment procedure, a
comparative survey among the prototypes has also been carried out to evaluate the
repeatability of the results. In particular, the prototypes 1-2-3 (created using the
same set of printing parameters) have been compared two by two. The comparison
results are summarized in the deviations maps of figure 8.

Fig. 8. Comparison between printed cylinders: deviations (mm) maps


Similar values (about 0.280 mm) of the maximum deviation have been
measured for the three comparative analyses. The largest value, equal to
0.288 mm, has been found between the prototypes 1 and 2.
These results demonstrate a moderate quality of the printer with regard to the
repeatability of the produced objects.

4 Circularity and cylindricity control

Circularity and cylindricity controls of the specimens have also been
performed.
As regards the circularity, a qualitative analysis has been made. In particular,
cross sections (along the x-y plane of fig. 2) at half height of the cylinders have
been extracted and compared with the nominal circular section.
Figure 9 shows the obtained results. It can be noticed that the cross sections of
all prototypes are not perfectly circular but have a pseudo-elliptical shape, thus
demonstrating a repetitive circularity error.

Fig. 9. Comparison of cross sections (x-y plane)
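A quantitative complement to this qualitative comparison is to fit a least-squares circle to each extracted cross section and report the radial spread (out-of-roundness). A sketch using the Kåsa linear fit; the function names are illustrative, not the paper's software.

```python
import numpy as np

def fit_circle(x, y):
    """Kasa least-squares circle fit: returns centre (a, b) and radius r."""
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

def out_of_roundness(x, y):
    """Difference between max and min radial distance to the fitted centre."""
    a, b, _ = fit_circle(x, y)
    r = np.hypot(x - a, y - b)
    return r.max() - r.min()
```

For a pseudo-elliptical section like those in figure 9, the out-of-roundness equals the difference between the two semi-axes.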


The cylindricity control has been made by finding, for each specimen, the
tolerance region, that is the zone bounded by two concentric cylinders (“inner”
and “outer”) within which all the points of the cylindrical surface lie (fig. 10).

Fig. 10. Cylindricity tolerance zone


For each prototype, the inner and outer cylinders have been found and their
diameters evaluated to calculate the cylindricity tolerance zone as
Tc = (Douter − Dinner)/2.
Cylindricity control data are summarized in table 3. The best results, in terms of
lowest tolerance, have been found for the prototypes 4 and 5, printed using a 0.05
mm layer thickness. All the obtained data demonstrate a poor quality of the printer
concerning the geometric accuracy.

Table 3. Cylindricity control data (nominal diameter = 20 mm)

Prototype  Outer cylinder diameter (mm)  Inner cylinder diameter (mm)  Cylindricity tolerance Tc (mm)
1  20.130  18.727  0.702
2  20.370  18.808  0.781
3  20.242  18.673  0.785
4  20.180  18.834  0.673
5  20.238  18.908  0.665
6  20.238  18.804  0.717
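The tolerance column follows directly from the reported diameters via Tc = (Douter − Dinner)/2; a quick numerical check of table 3 (values transcribed from the table above):

```python
# (outer diameter, inner diameter, reported Tc) for prototypes 1-6, from Table 3
rows = [
    (20.130, 18.727, 0.702),
    (20.370, 18.808, 0.781),
    (20.242, 18.673, 0.785),
    (20.180, 18.834, 0.673),
    (20.238, 18.908, 0.665),
    (20.238, 18.804, 0.717),
]
tolerances = [(d_out - d_in) / 2.0 for d_out, d_in, _ in rows]
# every computed value matches the reported one to within rounding
assert all(abs(tc - rep) < 1e-3 for tc, (_, _, rep) in zip(tolerances, rows))
```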

5 Conclusions

In this work, the results of a study about the influence of some parameters of
AM processes have been presented. A systematic survey has been performed to
evaluate the geometric and dimensional accuracy of a low-cost 3D printer. In
particular, some cylindrical specimens have been printed using different values of
the layer thickness. These prototypes have been acquired through a classical
reverse engineering approach and converted into CAD models. Afterwards, these
models have been compared with the nominal CAD model to evaluate the
maximum deviation values and the root mean square errors. Circularity and
cylindricity controls have also been performed.
Analysing the obtained results, it can be noticed that, as regards the
dimensional accuracy, comparable values of the maximum deviation have been
calculated for the prototypes printed with 0.2 mm and 0.05 mm layer thickness.

Also for the RMSE, similar values have been found for the 0.05 mm and 0.2 mm
layer thickness prototypes.
No remarkable influence of the layer thickness on the circularity has been
highlighted. All the specimens, in fact, have shown some irregularities of the cross
sections, which have a pseudo-elliptical shape.
With regard to the cylindricity control, prototypes printed with a 0.05 mm layer
thickness have obtained, on average, better results than the 0.2 mm ones.
As a general rule, considering the used printer, the selected printing parameters
and the geometrical feature analysed, it can be stated that the best arrangement
could be obtained using a 0.05 mm layer thickness.
The discussed results have demonstrated that, using low-cost printers, the
achievable geometric and dimensional accuracies are quite poor. Moreover, it has
been observed that a suitable choice of the printing parameters can significantly
modify the quality of the printed object.

References

1. Bikas, H., Stavropoulos, P., Chryssolouris, G., Additive manufacturing


methods and modeling approaches: A critical review, 2016, International
Journal of Advanced Manufacturing Technology, 83 (1-4), pp. 389-405
2. Guo, N., Leu, M.C., Additive manufacturing: Technology, applications
and research needs, 2013, Frontiers of Mechanical Engineering, 8 (3), pp.
215-243
3. Lanzotti, A., Del Giudice, D.M., Lepore, A., Staiano, G., Martorelli, M.,
On the geometric accuracy of RepRap open-source three-dimensional
printer, 2015, Journal of Mechanical Design, Transactions of the ASME,
137 (10), art. no. 101703
4. Cerniglia, D., Montinaro, N., Nigrelli, V., Detection of disbonds in multi-
layer structures by laser-based ultrasonic technique, 2008, Journal of
Adhesion, 84 (10), pp. 811-829
5. Boschetto, A., Bottini, L.,Accuracy prediction in fused deposition
modelling, 2014, International Journal of Advanced Manufacturing
Technology, 73 (5-8), pp. 913-928
6. Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D., Numerical study of
the components positioning influence on the stability of a reverse
shoulder prosthesis (2014) International Journal on Interactive Design
and Manufacturing, 8 (3), pp. 187-197
7. Lanzotti, A., Martorelli, M., Staiano, G., Understanding process
parameter effects of reprap open-source three-dimensional printers
through a design of experiments approach, 2015, Journal of
Manufacturing Science and Engineering, Transactions of the ASME, 137
(1), art. no. 011017

8. Boschetto, A., Bottini, L., Design for manufacturing of surfaces to


improve accuracy in Fused Deposition Modeling, 2016, Robotics and
Computer-Integrated Manufacturing, 37, art. no. 1357, pp. 103-114
9. Vaezi, M., Chua, C.K., Effects of layer thickness and binder saturation
level parameters on 3D printing process, 2011, International Journal of
Advanced Manufacturing Technology, 53 (1-4), pp. 275-284
10. Dai, X., Xie, H., Constitutive parameter identification of 3D printing
material based on the virtual fields method, 2015, Measurement: Journal
of the International Measurement Confederation, 59, pp. 38-43
11. Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D., A multi-technique
simultaneous approach for the design of a sailing yacht, 2015,
International Journal on Interactive Design and Manufacturing, DOI:
10.1007/s12008-015-0267-2
12. Cappello, F., Ingrassia, T., Mancuso, A., Nigrelli, V., Methodical
redesign of a semitrailer, 2005, WIT Transactions on the Built
Environment, 80, pp. 359-369
13. Nalbone, L., et al., Optimal positioning of the humeral component in the
reverse shoulder prosthesis, 2014, Musculoskeletal Surgery, 98 (2), pp.
135-142.
14. Cerniglia, D., Ingrassia, T., D'Acquisto, L., Saporito, M., Tumino, D.,
Contact between the components of a knee prosthesis: Numerical and
experimental study, 2012, Frattura ed Integrita Strutturale, 22, pp. 56-68
15. Ingrassia, T., Nigrelli, V., Design optimization and analysis of a new rear
underrun protective device for truck, 2010, Proceedings of the 8th
International Symposium on Tools and Methods of Competitive
Engineering, TMCE 2010, 2, pp. 713-725
16. Ingrassia, T., Mancuso, A., Virtual prototyping of a new intramedullary
nail for tibial fractures, 2013, International Journal on Interactive Design
and Manufacturing, 7 (3), pp. 159-169
17. Jecić, S., Drvar, N., The assessment of structured light and laser scanning
methods in 3d shape measurements. 4th International Congress of
Croatian Society of Mechanics September, 18-20, 2003
18. Tóth, T., Živčák, J., A comparison of the outputs of 3D scanners, 2014,
Procedia Engineering, 69, pp. 393-401
Multi-scale surface characterization in additive
manufacturing using CT

Yann QUINSAT1*, Claire LARTIGUE1, Christopher A. BROWN2, Lamine HATTALI3

1 LURPA, ENS Cachan, Univ. Paris-Sud, Université Paris Saclay, 94235 Cachan, France
2 Surface Metrology Lab, WPI, USA
3 FAST, Univ. Paris-Sud, Université Paris Saclay, 91405 Orsay, France
* Corresponding author. Tel.: +33 147 402 213; fax: +33 147 402 000. E-mail address:
yann.quinsat@lurpa.ens-cachan.fr

Abstract In additive manufacturing, the part geometry, including its internal


structure, can be optimized to answer functional requirements by optimizing pro-
cess parameters. This can be performed by linking process parameters to the re-
sulting manufactured geometry. This paper deals with an original method for sur-
face geometry characterization of printed parts (using Fused Filament Fabrication
FFF) based on 3D Computer Tomography (CT) measurements. From 3D meas-
ured data, surface extraction is performed, giving a set of skin voxels correspond-
ing to the internal and external part surface. A multi-scale analysis method is pro-
posed to analyse the relative internal area of the total surface obtained at different
scales (from sub-voxel to super-voxel scales) with different process parameters.
This analysis turns out to be relevant for filling strategy discrimination.

Keywords: CT measurement, additive manufacturing, multi-scale analysis

1 Introduction

Fig. 1. Example of internal structure

In additive manufacturing, the weight of parts can be optimized to support a


given load by designing an internal structure, including porosity, using a given
filling strategy (figure 1). Relations between strength and filling strategies are re-
quired for product and process design optimization. Our preliminary study has

© Springer International Publishing AG 2017 271


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_28
272 Y. Quinsat et al.

shown the significant influence of FFF (Fused Filament Fabrication) process pa-
rameters on both the mechanical and fracture properties of the manufactured parts.
Within the context of fracture study, a series of bending samples has been car-
ried out by considering variable filling parameters such as layer thickness, filling
mode, and density. All the specimens presented a small cracking directly issued
from manufacturing (figure 2(a)). Following cracking initiation, a 3-point bending
test was performed with the view to determining KC, critical stress intensity factor
corresponding to fracture initiation. Results displayed in table 1 clearly highlight
the great influence of the process parameters on the value of Kc. Therefore, the re-
lations between strength and filling strategies are necessary for product and pro-
cess design optimization.
Table 1. Values of Kc (from the DOE defined in table 3)

Tests 1 2 3 4 5 6 7 8 9
Kc (MPa.sqrt(m)) 0.54 1.27 1.68 0.99 2.03 0.78 0.55 0.68 0.61

In the paper, we propose a first approach to determine the relation between the
filling strategy and the resulting surface topography, both internal and external,
which can be linked to strength. Surface topography is thus considered the key el-
ement between the manufacturing process and the part function.

(a) 3-point bending setup (b) Typical load evolution vs displacement


Fig. 2. Description of the cracking tests

The study of 3D surface topography obtained by FFF is a problem well-


addressed in the literature. The most classical approximation consists in modeling
the external surface as a juxtaposition of elliptical pipes (or tubes) [1]. The influ-
ence of scanning and deposition directions on external topography during filling
has been studied in [2,3]. Zeng et al. show the influence of the process on local
curvatures in a multi-scale analysis [4]. An approach based on a finite element
analysis is proposed in [5] in order to predict 3D surface topography issued from
FFF based on a few measured points. Meanwhile, Pupo et al. [6] establish the link
between process parameters and the obtained topography in the case of sintering
of metallic powders. Hence, most studies address the influence of process parame-
ters on external surface topography, but few works have studied the effect of pro-
cess parameters on porosity. One difficulty lies in measuring the internal part sur-
face. Computer Tomography (CT) now allows non-destructive 3D measurements
of internal and external part geometry with uncertainties down to micrometers [7,
8, 9].

Fig. 3. The 3 different filling strategies: (a) Hilbert curve (b) Honey curve (c) ZigZag curve

Data issued from tomography represent the different levels of X-ray attenuation
of the part for each point of the measured volume. The information is stored as a
numerical value (gray level) in a volume element called voxel [10]. Surface ex-
traction from these 3D data (also referred to as 3D edge detection) is the first nec-
essary stage for metrology applications [11, 12], and is strongly linked to the
voxel size which represents the tomography resolution [13]. Some studies show
that uncertainty in surface extraction can be reduced by making use of a sub-voxel
resolution method which numerically improves CT resolution [12, 10]. Uncertainty
can achieve 1/10 of the voxel size when sub-voxel resolution is used [14],
which makes CT suitable for dimensional metrology applications.
Within this context, this paper deals with a method for surface characterization
in additive manufacturing to discriminate the filling strategy based on CT meas-
urements. The objective is to propose a tool to discriminate the internal geometry
based on the relative internal area evaluation. The originality of this approach lies
in the use of a multi-scale discrimination method, which analyses the relative in-
ternal area of the total surface, including internal surfaces of pores from CT meas-
urements. For the experimentation, parts were made using a 3D printer (Ultimaker
2) with three different filling strategies as displayed in figure 3. For each speci-
men, the same layer thickness ep = 0.25 mm was used. This paper is organized as
follows. Section 2 is dedicated to the presentation of our method for CT data
treatment. In section 3, the surface area, including that of the pores, is character-
ized as a function of scale. The method application is proposed in section 4.

(a) Example of one measured part (b) Data representation


Fig. 4. Measurement from CT

2 CT data treatment

Data obtained by CT measurements correspond to a set of images defined in 3


perpendicular directions (see figure 4(b)) which constitute a set of voxels. For
metrology applications, the part geometry is defined from the contours of its surface
(both external and internal). Ontiveros et al [15] propose a review of surface ex-
traction methods in relation with metrology applications. Among all the methods,
threshold methods are commonly used for industrial applications: a gray level val-
ue is defined as a limit to differentiate the air and the material [15].
To conduct a surface characterization by multi-scale analysis, an automatic CT
data treatment method is required. Hence, a threshold method, close to the algo-
rithm proposed in [17], and using Matlab© Image Processing Toolbox, has been
developed. The first step is then image binarizing according to a threshold value
determined with the well-known Otsu’s method [16]. Otsu’s thresholding chooses
the threshold to minimize the intraclass variance of the thresholded black and
white pixels. This step is followed by a morphological operation on the binary im-
age which removes interior pixels and keeps the outline of the shapes. As surface
extraction is the key point of surface characterization, and the basis for the evalua-
tion of metrological quantities, we propose to assess our method.
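The two-step pipeline just described (Otsu binarisation, then a morphological operation that removes interior pixels and keeps the outline) can be sketched slice-by-slice with NumPy/SciPy rather than the Matlab toolbox used in the paper. This is an illustrative re-implementation, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, bins=256):
    """Gray level minimising the intra-class variance of the thresholded image
    (equivalently, maximising the between-class variance)."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    levels = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # probability of the "dark" class
    m = np.cumsum(p * levels)         # cumulative mean gray level
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (m[-1] * w0 - m) ** 2 / (w0 * (1.0 - w0))
    return levels[np.nanargmax(between)]

def extract_outline(img):
    """Binarise with Otsu, then keep only the boundary pixels of the material."""
    material = img > otsu_threshold(img)
    return material & ~ndimage.binary_erosion(material)
```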
For this purpose, a gauge block made in ZrO2 is measured. This gauge ensures
a flatness defect of 0.10 μm and a parallelism defect between its two faces of 0.10
μm. The calibrated distance between the 2 faces is 1.3 mm with an uncertainty of
± 0.12 μm at 20°C. CT measurements are performed with a Zeiss Metrotom for
which the voxel size is 5.9μm×5.9μm ×5.9μm. Finally, the measured volume con-
sists of 200×300×1132 voxels. Our extraction method is applied leading to 2 sets
of points, each one corresponding to one of the 2 gauge faces (figure 5). To test
the interest of the sub-voxel technique, the extraction method is applied consider-
ing a smaller voxel-size, 30% of the original voxel-size. This is simply obtained
by a sub-voxel linear interpolation.
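With SciPy, such a sub-voxel refinement amounts to resampling the gray-level volume with linear (order-1) interpolation; a sketch (a zoom factor of 1/0.3 yields voxels at 30% of the original size, as used here):

```python
import numpy as np
from scipy import ndimage

def refine_voxels(volume, size_ratio=0.3):
    """Resample the gray-level volume so that the new voxel size is
    'size_ratio' times the original one, by trilinear interpolation."""
    return ndimage.zoom(volume, zoom=1.0 / size_ratio, order=1)
```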
Assessment is thus performed considering two quality indicators, as proposed
in [18]: noise and trueness. Noise accounts for the measurement dispersion around
each extracted face (a plane in practice), and trueness measures the difference be-
tween the calibrated distance and the measured distance obtained after face extrac-
tion (plane-plane distance). Results, displayed in table 2, are consistent with the
gauge quality, which illustrates the efficiency of our surface extraction method. It
can also be observed that both the noise and the trueness decrease when using
a smaller voxel-size. This interesting result shows that sub-voxel refinement, as
suggested by numerous studies [19,15,10], actually improves surface extraction.

Table 2. Assessment of the surface extraction method

Voxel-size Noise (Face 1) (μm) Noise (Face 2) (μm) Trueness (μm)


original size 3 3.3 25.5
30% of original size 2.4 2.7 21.2
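Both indicators can be computed by least-squares plane fitting on the two extracted point sets: noise is the dispersion of the points about their fitted plane, and trueness is the difference between the measured plane-to-plane distance and the calibrated 1.3 mm. A minimal sketch with illustrative names (noise on face 2 is taken about its mean projection rather than a second full fit, assuming near-parallel faces):

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through pts: unit normal n and offset d (n . p = d)."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]                         # direction of least variance
    return n, float(n @ centroid)

def noise_and_trueness(face1, face2, calibrated):
    n, d1 = fit_plane(face1)
    noise1 = float(np.std(face1 @ n - d1))     # dispersion about face-1 plane
    proj2 = face2 @ n
    noise2 = float(np.std(proj2))              # dispersion of face 2 along n
    trueness = abs(abs(proj2.mean() - d1) - calibrated)
    return noise1, noise2, trueness
```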

Fig. 5. Contour extraction for the gauge block: (left) Gauge Measurement, (middle) Contour ex-
traction (original voxel-size) (right) Contour extraction (30% of the original voxel- size)

3 Surface characterization by multi-scale analysis

To link process parameters with the resulting surface, a multi-scale discrimina-


tion method is proposed to analyze the relative internal area of the total surface.
The surface area, including that of the pores, is characterized as a function of
scale. Prior to the description of our multi-scale analysis method, the calculation
of the surface area based on the identification of the skin voxels is first detailed.

3.1 Relative area calculation

The area of the part surface can be defined from the 3D extracted contours,
these contours defining a kind of skeleton structure. Hence, the skeleton is a 3D
element. To simplify the representation, an example of the skeleton's construction
in 2D is reported in figure 6.

(a) Measurement (b) Image binarizing (c) Skeleton structure


Fig. 6. Surface extraction in the XY-plane

Indeed, the operation of surface extraction leads to the identification of all the
voxels containing a portion of surface (internal or external) referred to as skin
voxels, Vskin (figure 7(b)). The relative internal area is calculated as the sum of all
the areas of the skin voxel faces:
A = Σ_{i ∈ Vskin} Σ_{j=1..6} A_{i,j} · δ_{i,j}

where δ_{i,j} = 1 if face j of skin voxel i is in contact with the air, and
δ_{i,j} = 0 otherwise.
The area so calculated depends on the dimensions of the studied zone. To avoid
this problem, the relative internal area Ar is introduced, defined by Ar =A/Ab,
where Ab is the area of the big voxel bounding the studied zone (figure 7(a)). Con-
sidering a 2D representation of a specimen measurement, the set of skin voxels
corresponds to the white voxels in figure 7(c).

(a) Original voxel map (b) Skin voxel representation (c) Skin voxel extraction
Fig. 7. Voxel representation of surface extraction
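The face-counting sum described above can be implemented directly on a boolean voxel array: for each of the six face directions, count material voxels whose neighbour in that direction is air. A minimal 3D sketch (illustrative; the relative area Ar would then divide this total by the area Ab of the bounding voxel):

```python
import numpy as np

def exposed_face_area(solid, voxel_size=1.0):
    """Sum of the areas A_ij of skin-voxel faces in contact with the air:
    a face counts when a material voxel's neighbour along that axis is air."""
    padded = np.pad(solid, 1, constant_values=False)   # surround the part with air
    faces = 0
    for axis in range(3):
        for shift in (1, -1):
            neighbour = np.roll(padded, shift, axis=axis)
            faces += int(np.count_nonzero(padded & ~neighbour))
    return faces * voxel_size ** 2
```

Internal pores contribute to the total exactly like the external surface, which is the point of the relative internal area measure.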

3.2 Multi-scale analysis

In order to conduct a multi-scale analysis, the surface area is characterized as a


function of scale [20]. The voxel size defines the scale, and we propose to vary
this value to have scales smaller and larger than the original dimension related to
the measurement scale. The value of the material density belonging to the consid-
ered voxel is obtained by linear interpolation. The influence of such a variation on
the surface extraction is illustrated in figure 9.

Fig. 9. Surface extraction as a function of scale (defined from the initial voxel size) – from left to
right: measurement at scale 0.3, scale 1, and scale 3; surface extraction at scale 0.3, scale 1, and scale 3

The parameter Ar can thus be calculated as a function of scale. The scale is chosen
as the area of the voxel surface for a considered voxel size. The presented method
proposes to vary this voxel size (i.e. the scale) and to evaluate the corresponding
internal surface area, the scale varying from sub-voxel to meta-voxel. The relative
internal area evolves, increasing linearly on a decreasing logarithmic scale to a
plateau (see figure 10). The plateau begins at the scale of the original measured
voxel (identified by a vertical blue line in the figure). Results highlight that, in this
study, the numerical sub-voxel treatment (scales < 10−2 in the present case) does
not provide additional information. In the same way, meta-voxels are too far from
reality. When the size of the voxel is close to the size of the test part, the relative
internal area becomes less than 1, which is not realistic. The study can thus be
restricted to the zone in which the relative internal area is above 1.
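The scale sweep itself can be sketched by resampling the gray-level volume at each candidate voxel size (interpolating below the measurement scale, coarsening above it), re-thresholding, and recomputing the relative area. Everything here is illustrative; in particular, Ab is taken as the total surface area of the bounding box, which may differ from the paper's exact convention.

```python
import numpy as np
from scipy import ndimage

def _face_count(solid):
    """Number of material-voxel faces in contact with air (6 directions)."""
    padded = np.pad(solid, 1, constant_values=False)
    return sum(int(np.count_nonzero(padded & ~np.roll(padded, s, axis=a)))
               for a in range(3) for s in (1, -1))

def relative_area_vs_scale(density, scales, threshold=0.5):
    """Relative internal area Ar for each voxel-size multiplier in 'scales'
    (<1 = sub-voxel, >1 = meta-voxel). Returns a list of (scale, Ar) pairs."""
    out = []
    for s in scales:
        solid = ndimage.zoom(density, zoom=1.0 / s, order=1) > threshold
        area = _face_count(solid) * s ** 2
        shape = np.array(solid.shape) * s          # bounding-box dimensions
        ab = 2.0 * (shape[0] * shape[1] + shape[1] * shape[2] + shape[0] * shape[2])
        out.append((s, area / ab))
    return out
```

A fully dense block gives Ar = 1 at every scale, which is the baseline against which the internal (pore) surface raises the curve.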
[Plot: relative internal area (y-axis) versus scale in mm² (x-axis, logarithmic, from 10^-4 to 10^1)]

Fig. 10. Evolution of the relative internal area as a function of the scale

4 Application and discussion

The interest of our surface characterization method is illustrated through 2 ap-


plications. The first one concerns the use of such a method to discriminate filling
modes, and the second one more particularly addresses the influence of process
parameters on resulting printed surfaces.

4.1 Filling mode discrimination

For this application, 3 different filling modes are used as described in figure 3
with the same process parameters: thickness is set to 0.25 mm, and the filling is
imposed to 60 %. The original voxel sizes are 37.8μm × 37.8μm × 37.8μm, and
the bounding voxel size is set to 5mm × 5mm × 5mm. The multi-scale analysis is
applied to the CT measurement obtained for the 3 specimens. Results, displayed in
figure 11, show that the evolution of the relative internal area is different depend-
ing on the filling mode. This is particularly distinct at low scales.
The linear decrease with the increase of the scale is in turn greater when the
original relative internal area is the largest, corresponding to the zig-zag strategy.
This analysis turns out to be relevant for filling strategy discrimination: multi-
scale analysis of the relative internal area enables printed surface characterization.

Fig. 11. Filling mode comparison using multi-scale analysis

4.2 Study of process parameters

The second illustration aims at studying the influence of process parameters on the
manufactured surface. The zig-zag mode is considered for this study. The process
parameters retained are level thickness, direction of filling, and filling rate. A de-
sign of experiment is proposed according to a table L9 with 3 factors for 3 levels
(table 3).

Table 3. Design of experiment.

Factor Level1 Level2 Level3


Thickness (mm) 0.15 0.2 0.25
Direction (°) 0 30 45
Filling rate (%) 60 80 100
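The nine runs can be generated from the standard Taguchi L9(3^4) orthogonal array, using its first three columns for the three factors; the run-to-specimen numbering below is an assumption, not taken from the paper.

```python
from itertools import combinations

# Standard L9 orthogonal array (first three columns, levels coded 0..2)
L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]

thickness = (0.15, 0.20, 0.25)   # mm
direction = (0, 30, 45)          # degrees
filling = (60, 80, 100)          # percent

runs = [(thickness[a], direction[b], filling[c]) for a, b, c in L9]

# Orthogonality check: every pair of columns contains each of the
# 9 level combinations exactly once.
for c1, c2 in combinations(range(3), 2):
    assert len({(row[c1], row[c2]) for row in L9}) == 9
```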

Fig. 12. Multi-scale analysis for different process parameters



The 9 manufactured specimens are measured using CT and a multi-scale analysis


of the relative internal area is afterwards performed, leading to the results reported
in figure 12. As displayed in the figure, the value of the relative internal area at the
low scales differs depending on the considered specimen. Furthermore, in the
linear zone, the slope also varies for each specimen. This study confirms the rele-
vance of using the relative internal area to discriminate surface topographies ob-
tained using different process parameters. It is interesting to notice that, due to
process uncertainties, specimens for which the filling rate is 100% present some
pores, which explains why the relative internal area is not constant for these
specimens.

5 Conclusion

The set of skin voxels defines the 3D surface topography, both internal and ex-
ternal. To link process parameters to the manufactured surface, a multi-scale dis-
crimination method is proposed which analyses the relative internal area of the to-
tal surface. The surface area (area of all the skin voxels), including that of the
pores, is characterized as a function of scale. To show the relevance of multi-scale
relative internal area analysis, 2 different applications are proposed. The first one
clearly highlights the interest of our approach to discriminate filling modes when
using FFF. For the second application, the filling mode is the same (zig-zag
mode), and the study focuses on the influence of some process parameters (level
thickness, filling direction, and filling rate) on the printed surface. This study con-
firms the relevance of using the relative internal area to discriminate surface to-
pographies obtained with different process parameters. Hence, the geometry and
the structure of the part can be analyzed according to process parameters to an-
swer a given part function. This work is a first approach to determine the relation-
ship between mechanical properties and surface topography. We propose to use
the relative internal surface area to discriminate the internal geometry. According
to multiscale analysis, further work will focus on linking strength and internal rel-
ative area. A functional correlation could be found by regressing the relative inter-
nal areas at each scale versus the mechanical properties.
Acknowledgments The authors would like to thank Zeiss Industrial Metrology, LLC, which
provided all the CT measurements.

References
1. Ahn, D., Kweon, J.H., Kwon, S., Song, J., Lee, S.: Representation of surface roughness in
fused deposition modeling. Journal of Materials Processing Technology 209(15-16), 5593 –
5600 (2009)
2. Galantucci, L., Lavecchia, F., Percoco, G.: Experimental study aiming to enhance the surface
finish of fused deposition modeled parts. CIRP Annals - Manufacturing Technology 58(1),
189 – 192 (2009)

3. Pandey, P.M., Reddy, N.V., Dhande, S.G.: Improvement of surface finish by staircase machin-
ing in fused deposition modeling. Journal of Materials Processing Technology 132(1-3), 323
– 331 (2003)
4. Zeng, Y., Wang, K., Wang, B., Brown, C.: Multi-scale evaluations of the roughness of surfac-
es made by additive manufacturing. In: ASPE - 2014 Spring Topical Meeting (2014)
5. Jamiolahmadi, S., Barari, A.: Surface topography of additive manufacturing parts using a fi-
nite difference approach. Journal of Manufacturing Science in Engineering 136(4), 1–8
(2014)
6. Pupo, Y., Monroy, K.P., Ciurana, J.: Influence of process parameters on surface quality of
CoCrMo produced by selective laser melting. The International Journal of Advanced Manufac-
turing Technology 80(5-8), 985–995 (2015)
7. Wang, J., Leach, R.K., Jiang, X.: Review of the mathematical foundations of data fusion tech-
niques in surface metrology. Surface Topography: Metrology and Properties 3(2), 023,001
(2015)
8. Chiffre, L.D., Carmignato, S., Kruth, J.P., Schmitt, R., Weckenmann, A.: Industrial applica-
tions of computed tomography. CIRP Annals - Manufacturing Technology 63(2), 655 – 677
(2014)
9. Bartscher, M., Hilpert, U., Goebbels, J., Weidemann, G.: Enhancement and proof of accuracy
of industrial computed tomography (ct) measurements. CIRP Annals - Manufacturing Tech-
nology 56(1), 495 – 498 (2007)
10. Yage-Fabra, J., Ontiveros, S., Jimnez, R., Chitchian, S., Tosello, G., Carmignato, S.: A 3d
edge detection technique for surface extraction in computed tomography for dimensional me-
trology applications. CIRP Annals - Manufacturing Technology 62(1), 531 – 534 (2013)
11. Dewulf, W., Kiekens, K., Tan, Y., Welkenhuyzen, F., Kruth, J.P.: Uncertainty determination
and quantification for dimensional measurements with industrial computed tomography.
CIRP Annals - Manufacturing Technology 62(1), 535 – 538 (2013)
12. Lifton, J.J., Malcolm, A.A., McBride, J.W.: On the uncertainty of surface determination in x-
ray computed tomography for dimensional metrology. Measurement Science and Technology
26(3), 035,003 (2015)
13. Kruth, J., Bartscher, M., Carmignato, S., Schmitt, R., Chiffre, L.D., Weckenmann, A.: Com-
puted tomography for dimensional metrology . CIRP Annals - Manufacturing Technology
60(2), 821 – 842 (2011)
14. Carmignato, S.: Accuracy of industrial computed tomography measurements: Experimental
results from an international comparison. CIRP Annals - Manufacturing Technology 61(1),
491 – 494 (2012)
15. Ontiveros, S., Yage, J., Jimnez, R., Brosed, F.:Computer tomography 3d edge detection
comparative for metrology applications. Procedia Engineering 63, 710 – 719 (2013). The
Manufacturing Engineering Society International Conference, MESIC 2013
16. Otsu, N.: A threshold selection method from gray-level histograms. IEEE Transactions on
Systems, Man, and Cybernetics 9(1), 62–66 (1979)
17. Shahabi, H., Ratnam, M.: Simulation and measurement of surface roughness via grey scale
image of tool in finish turning. Precision Engineering 43, pp.146 – 153(2016)
18. Mehdi-Souzani, C., Quinsat, Y., Lartigue, C., Bourdet, P.: A knowledge database of qualified
digitizing systems for the selection of the best system according to the application. CIRP
Journal of Manufacturing Science and Technology pp. – (2016)
19. Jiménez, R., Comps, C., Yague, J.: An optimized segmentation algorithm for the surface extrac-
tion in computed tomography for metrology applications. Procedia Engineering 132, 804 –
810 (2015). MESIC Manufacturing Engineering Society International Conference 2015
20. Brown, C.A., Johnsen, W.A., Hult, K.M.: Scale-sensitivity, fractal analysis and simulations.
International Journal of Machine Tools and Manufacture 38(5-6), 633 – 637 (1998)
Testing three techniques to elicit additive
manufacturing knowledge

Christelle GRANDVALLET*, Franck POURROY, Guy PRUDHOMME, Frédéric VIGNAT

G-SCOP Laboratory, 46 avenue Félix Viallet, 38031 Grenoble, FRANCE

*Corresponding author. Tel.: +33 (0)4 76 82 52 79; fax: +33 (0)4 76 57 46 95. E-mail address: Christelle.Grandvallet@grenoble-inp.fr
Abstract Additive manufacturing (AM) has enabled the building of parts with new shapes and geometrical features. As this technology modifies current practices, new knowledge is required for designing and manufacturing properly. To help experts create and share this knowledge through formalization, this paper focuses on testing three knowledge elicitation techniques. After defining knowledge concepts, we present the state of the art in knowledge elicitation and a methodology. A case study on support creation for AM points out the assets and limits of the techniques, the different types of knowledge elements captured per technique, and some contradictions between experts. We finally propose collective tools for a better elicitation and formalization of AM knowledge.

Keywords: Additive manufacturing; Knowledge management; Elicitation methods; Knowledge elicitation tools; Individual elicitation.
1 Introduction

Additive Manufacturing (AM) technologies have modified the practices of engineers and researchers. Among them, recent metallic AM technologies have developed rapidly, but design and manufacturing rules are still under formalization. This paper focuses on testing three techniques with a limited number of experts in order to check whether they can facilitate the elicitation of AM knowledge not yet formalized. It is part of our ongoing research work, which aims to make AM knowledge accessible and, acting as a knowledge intermediary, to assist the knowledge producers [1] in the elicitation, structuration and sharing of AM knowledge within a larger community.
A first section on the State of the Art (SoA) explains some concepts of knowledge management and elicitation methods. A case study in the next section
presents the test of three elicitation methods - unstructured interview, semi-
structured interview and limited information task - with an AM activity. It is fol-

© Springer International Publishing AG 2017 281


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_29
282 C. Grandvallet et al.

lowed by a deep analysis of the results, a conclusion, and a proposal for progressing toward a new method and perspectives.

2 State of the art review and research challenges

2.1 About knowledge representation and AM

The few research works about AM knowledge that can be found in the literature are more related to knowledge management systems (KMS) than to elicitation techniques (e.g. Gardan [2]). Although the KMS approach proves to be a promising solution to specific engineering problems, it relies on a set of explicit and well-defined design and manufacturing rules, and such rules still remain fuzzy, even unknown, in the young field of AM. The knowledge elicitation approach could contribute to making AM knowledge more visible and to fostering the emergence of design and manufacturing rules. To start with the basics, one can make a distinction between knowledge, information and data. Wilson [3] defines data as “everything outside the mind that can be manipulated in any way”, and data become information once “the data are embedded in a context of relevance to the recipient”. Our position is to consider that information stands outside the individual and that knowledge is rooted in people’s heads. This position is shared by Nonaka and Takeuchi [4], who wrote that “in a strict sense, knowledge is created by individuals”, which highlights the dynamic aspect of knowledge. Knowledge is also characterized by two opposite but coexisting dimensions, namely tacit and explicit knowledge. Citing Polanyi [5], Nonaka and Takeuchi explain that “whereas explicit or "codified" knowledge refers to knowledge transmittable in formal, systematic language, tacit knowledge is personal, context-specific, and hard to formalize and communicate.” Mougin [6] studied the structuration of information in a knowledge transfer between workers of an organization. His structure emphasizes the progression of knowledge elements from tacit to explicit states. The first one is a “knowledge fact”, defined as “an observable element that can be captured by video-recording or direct observation of professional situations”. The second, a “knowledge footprint”, is intentional and can be an oral or written element expressed by a person. The next structure is a “knowledge object”, more formalized, such as for example an internal report. Then it could become a “packaged knowledge object” once it is processed and codified into a repository. In an AM context, our challenge is then to observe dynamically-constructed knowledge and use this structuration model to identify tacit and explicit knowledge elements thanks to elicitation techniques.
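Mougin’s four-stage progression, as summarized above, can be rendered as a small data model. This is purely our illustrative sketch (the class and attribute names are our own, not Mougin’s):

```python
from dataclasses import dataclass
from enum import IntEnum

class Maturity(IntEnum):
    """Mougin's progression from tacit to explicit knowledge elements."""
    FACT = 1       # observable in a professional situation (video, observation)
    FOOTPRINT = 2  # intentional oral or written expression
    OBJECT = 3     # formalized element, e.g. an internal report
    PACKAGED = 4   # processed and codified into a repository

@dataclass
class KnowledgeElement:
    content: str
    maturity: Maturity
    source_expert: str = "unknown"

    def promote(self) -> "KnowledgeElement":
        """Move the element one step toward the explicit end of the scale."""
        nxt = Maturity(min(self.maturity + 1, Maturity.PACKAGED))
        return KnowledgeElement(self.content, nxt, self.source_expert)

e = KnowledgeElement("teeth length eases support removal",
                     Maturity.FOOTPRINT, "E2")
print(e.promote().maturity.name)  # → OBJECT
```

The ordering of the enum encodes the idea that each elicitation or formalization step moves an element one notch toward the packaged, reusable end of the scale.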
Testing three techniques to elicit additive … 283

2.2 A knowledge elicitation methodology

Knowledge elicitation is “the process of collecting from a human source of knowledge, information that is thought to be relevant to that knowledge” [7]. Elicitation methods are used to help experts express their knowledge, in order to structure, share and transfer it for reuse within a community of workers. The elicitation step is thus at the beginning of the knowledge cycle. One method for knowledge elicitation was developed in 2007 by Milton [8] for knowledge acquisition. His guide aims to capture experts’ knowledge in 47 steps and 4 phases for formalization into a KMS. A great many elicitation techniques are proposed for capturing knowledge (Fig. 1). For this purpose, knowledge is classified from basic/explicit to deep/tacit, as well as into conceptual and procedural knowledge.

Fig. 1. Milton’s techniques for capturing different knowledge types.

In the AM domain, conceptual explicit knowledge could be expressed for example by “I know that supports have a thermal impact on the heat flow during part production”. Procedural knowledge would refer to “I know how to create a support structure within my CAD/CAM system”.
Among the techniques, the ones on the left side are interviews that tackle more explicit knowledge. Unstructured interviews and semi-structured interviews are the first tools to use at the beginning of a project. Towards the right side, deeper knowledge should be captured. In our AM context we expect to get a mixture of knowledge, whether basic or deep, because of the youth of the field. Our idea is also to consider any trace of knowledge as defined by Mougin. Based on these assumptions and on the SoA, we define our research question as follows: what would be the benefits and limits of using such elicitation techniques in the new domain of AM, where knowledge is under construction? To attempt to answer this question we implemented the following case study.
3 Case Study

3.1 The activity of support structure creation

The scope of our analysis is the design and creation of support structures for metallic parts built with EBM (Electron Beam Melting) technology. An example of supports is provided in Fig. 2.

Fig. 2. Example of a turbine built with supports with EBM technology.

This activity is in fact critical because it is closely interlinked with the characteristics of the parts as well as with the fabrication parameters, and hence influences the final quality of the parts. Supports are indeed used for thermal and/or mechanical reasons: they are needed to remove heat from the part and to support overhanging surfaces. It is then important to capture where and how to place them on the part, and this relates to both procedural and conceptual knowledge.

3.2 Approach for elicitation of AM knowledge

We identified three researchers in our laboratory, acknowledged for their expertise in support creation. Each of them has a specific profile, summarized in Table 1. The number of experts was limited because the objective of this experiment was to test the usability and relevance of the elicitation techniques in a field under construction.

Table 1. Panel of experts.

Code Type of specialization


E1 Designer using CAD software for AM
E2 Engineer specialized in support structure creation
E3 Engineer more experienced with the AM process

Three techniques were chosen from Milton’s list, namely: Unstructured Interview (UI), Semi-Structured Interview (SSI) and Limited Information Task (LIT) (see Table 2). According to Milton, it is better to start elicitation with UI and SSI
to get basic and explicit knowledge. The LIT technique was specifically chosen as
it prompts the expert to provide three pieces of information that he considers as
crucial in order to perform a defined task.

Table 2. Presentation of selected elicitation techniques.


Name | Elicitation technique | Characteristics according to Milton | Example of questions asked in the case study
UI | Unstructured interview | A freeform chat with the expert to get basic knowledge of the domain. | Tell me how you create supports.
SSI | Semi-structured interview | Based on a predefined questionnaire sent beforehand to the expert. Written responses are reviewed and completed during the interview. | What are supports used for? What are the main steps to create supports? Which problems do you encounter? Why do you undertake these activities? What still remains difficult or confusing?
LIT | Limited information task | Based on one request, repeated until the expert does not have anything to add: to give orally 3 pieces of information essential to the execution of a complex task. | If you had to create supports with only 3 pieces of information, which ones would you choose first to execute your task? Followed by: Now, if you have 3 more pieces of information, which ones would you use?

The techniques were tested individually with the researchers, as shown in Table 3. At the beginning of each elicitation session, the rules of the exercise as well as its objectives were presented to the participant.

Table 3. Experts and individual elicitation techniques


E1 E2 E3
UI X X
SSI X X
LIT X X

During the UI, questions were open enough to start a free discussion. It lasted at least one hour. We took notes on a laptop and tape recordings were made; initial notes were then completed on knowledge footprint forms. For the SSI, a questionnaire requested an explanation of the activity of support creation, its objectives, the steps to achieve them, as well as the problems identified and the proposed solutions. Once filled in, the questionnaire was reviewed during the interview and completed by a question and answer session of around one hour. As far as the LIT is concerned, it was used with E2 and E3, as they were more experienced than E1, and repeated until the expert did not have anything to add. It lasted from 20 to 40 minutes.
4 Results

The study of the three elicitation techniques, together with the notes and tape recordings taken during the sessions, revealed interesting knowledge elements (see the synthesis of results in Table 4). Most notably, our analysis highlighted parameters and criteria connected with part quality. They were encapsulated in knowledge facts and footprints.

Table 4. Knowledge footprints captured during elicitation session

Tool Elicited knowledge


UI - Support typology choice
- Support teeth length to facilitate removal
- Offset selection between support and surface
- Manufacturing and cooling time onto grain microstructure
SSI - Supports help to stiffen the part, dissipate heat, remove the part from
the start plate
- Where to place the supports and how
- Supports depend on part orientation, part balancing in the chamber,
melting behavior of the powder, machine parameters
LIT - Part orientation
- Angle between plate and overhanging surface
- Length of supports
- Tolerance of the surfaces to support
- Collision between supports
- Roughness induced by the machine parameters

Going into details, the unstructured interview enabled us to get an overview of the overall AM context and to understand who does what in this collaborative work. References to scientific articles and the SoA were mentioned. The elicited knowledge came from various scientific domains (EBM materials, design, or manufacturing). The knowledge footprints were captured in the form of parameters and criteria, mostly expressed in terms of problems still unsolved: which typology of support to add (contour, web, etc.); which teeth length to choose to facilitate support removal; which offset to select between the support and the surface… Other parameters were discussed, not directly linked to support creation but more related to part quality: e.g. the influence of the manufacturing and cooling time on the grains’ microstructure, or the difficulty of removing non-consolidated powder from a part containing supports. Moreover, a 3D file of a part and its supports was shown by one expert with the Magics software in order to back up his explanations. Such a demo, which we can typically define as a knowledge fact, was a
means to express less explicit knowledge. It could be similar to a scenario, although not set up by the knowledge engineer.

As far as the SSI is concerned, it had the advantage of asking the expert to clarify his activity objectives, the sequential steps required in support creation, the inputs and outputs of specific tasks, the problems and solutions, the problems not yet solved, and the links with other activities in the whole AM process. Attempts at definitions were provided as knowledge footprints. To our question “what is your objective?”, replies were “to know the functionality of supports, where and how to place them”. Answers to the question “what are supports used for?” were less straightforward, as supports may have several functions: they help to stiffen the part, to remove heat, to unstick the part from the start plate… but these functions are interdependent with many elements, e.g. the part orientation and balancing in the chamber, the powder melting behavior, the machine parameters… These oral and written knowledge elements were transcribed afterwards and gathered under a series of various parameters that completed the keywords and footprints captured during the other types of interviews. The experts concluded in the interview that they must still find “some operating rules through more trials and experiences”. This shows that the experts’ level of knowledge maturity is low.

Lastly, the LIT technique allowed us to acquire concepts and precise knowledge elements related to support creation and to have them sorted by priority. The main footprints captured about support creation were the following: part orientation; angle between the start plate and the overhanging surface; dimension and thickness of the various surfaces to support; length of the supports; tolerance of the surfaces to support; risk of collision between supports; possible roughness induced by the manufacturing parameters. These elements are considered critical knowledge footprints as they were identified by the experts as crucial for the knowledge creation process. This tool revealed hesitations and highlighted some contradictions and disagreements between the experts when comparing which piece of information is more important than another. LIT helped to elicit more detailed knowledge related to the support creation process. It had the advantage of ranking the above-mentioned support parameters according to the experts’ priority, i.e. avoiding part defects. It appeared indeed of highest importance to take care of the part surface quality and to obtain a good geometrical quality, the right dimensions, the right mechanical behavior or a good microstructural quality. These knowledge elements did not, however, always intersect between experts: they either converged or diverged. This reveals that the researchers' knowledge is still under construction, and that they have different levels of maturity.
5 Conclusion

Our research focused on the individual elicitation of the knowledge of experts specialized in support structure creation for AM metallic parts. To do so, three tools were selected and tested: the unstructured and semi-structured interviews, and the limited information task. The question was where and how to place the supports. A first analysis of the results led to the conclusion that many knowledge elements were elicited, although the experts could not provide comprehensive or straight answers due to the immature stage of AM knowledge. Knowledge facts arose when a scenario was proposed by an expert, which can be explored as a potential tool. More importantly, our work brought to light a list of support parameters and criteria which emerged on the fly as knowledge footprints. Support parameters were defined in terms of density, form and placement. Criteria concerned the part quality - defined by its surface quality, geometry, dimensions, microstructure appearance or mechanical behavior - as well as the process duration and cost. This conclusion leads us to propose, as a perspective, to consider support parameters according to their degree of influence on part quality. We therefore recommend completing our elicitation techniques and opening our research to more collective elicitation tools such as the one proposed by Stenzel et al. [9]. A matrix crossing the critical process parameters with performance criteria would serve as an elicitation tool at the heart of the debate. This would have the advantage of bringing together people with different levels of expertise and opinions. The idea would be to elicit knowledge elements during this collective session so as to get “shared knowledge footprints” and thus foster the knowledge maturing process.

References

1. Markus LM. Toward a Theory of Knowledge Reuse: Types of Knowledge Reuse Situations and Factors in Reuse Success. J Manag Inf Syst, 2001, 18:57–93.
2. Gardan N. Knowledge Management for Topological Optimization Integration in Additive Manufacturing. Int J Manuf Eng, 2014.
3. Wilson TD. The nonsense of knowledge management. Inf Res, 2002, 8: paper 144.
4. Nonaka I, Takeuchi H. The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press, 1995.
5. Polanyi M. The Tacit Dimension. London, Routledge & K. Paul, 1967.
6. Mougin J, Boujut J-F, Pourroy F, Poussier G. Modelling knowledge transfer: A knowledge dynamics perspective. Concurrent Engineering, 2015, 23(4), 308–319.
7. Cooke NJ. Varieties of knowledge elicitation techniques. Int J Hum-Comput Stud, 1994, 41:801–849.
8. Milton NR. Knowledge Acquisition in Practice: A Step-by-step Guide. Springer Science & Business Media, 2007.
9. Stenzel I, Pourroy F. Integration of experimental and computational analysis in the product development and proposals for the sharing of technical knowledge. Int J Interact Des Manuf, 2008, 2(1), 1–8.
Topological Optimization in Concept Design:
starting approach and a validation case study

Michele BICI1*, Giovanni B. BROGGIATO1 and Francesca CAMPANA1

1 Dip. Ingegneria Meccanica e Aerospaziale – Sapienza Università di Roma
* Corresponding author. Tel.: +39-06-44585253; fax: +39-06-4881250. E-mail address: michele.bici@uniroma1.it

Abstract Nowadays, the most up-to-date CAE systems include structural optimization toolboxes. This demonstrates that topological optimization is a mature technique, although it is not yet a well-established design practice. It can be applied to increase performance in lightweight design, but also to explore new topological arrangements. This is done through a proper definition of the problem domain, which means defining functional surfaces (interface surfaces with specific contact conditions), preliminary external lengths, and geometrical conditions related to possible manufacturing constraints. In this sense, its applicability extends to all kinds of manufacturing, although its extreme solutions can be obtained in Additive Manufacturing. In this paper, we aim to present the general applicability of topological optimization in the design workflow together with a case study, developed according to two design intents: the lightweight criterion and the conceptual definition of an enhanced topology. It demonstrates that this method may help to decrease the design effort, which, especially in the case of additive manufacturing, can be reallocated to other kinds of product optimization.

Keywords: Topological Optimization; Additive Manufacturing; Design Intent; Conceptual Design; Lightweight Design

1 Introduction

Design in engineering is the ability to plan a product that can fulfil its function while respecting various constraints. The development of design principles, methods and technologies has caused this concept to evolve into the search for the best compromise among possible designs, in compliance with constraints and demands. Every field, every part of the product lifecycle and every “design for” defines an “optimum”, and the final product is an ideal merge of all these optimized solutions and configurations. The design phase and product process planning
determine the “time to market”. It means a large amount of costs without immedi-

© Springer International Publishing AG 2017 289


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_30
290 M. Bici et al.

ate profit. However, the distribution of design and production costs is not linear: during the first phases of “time to market”, i.e. in the design phase, more relevant design modifications are possible with a lower impact in terms of costs, as long as no manufacturing investments have been planned and made. The technological process severely constrains the design in terms of cost and technical feasibility, so it is necessary to speak of product-process integrated design and manufacturability-oriented design. In general, it is recommended to use the principles of Concurrent Design [1].
In a context like this, the latest developments of Additive Manufacturing (AM) technologies acquire an increasing relevance. In fact, Additive technologies generally allow adding material where necessary, layer by layer, obtaining forms not otherwise producible [2]. Obviously, AM also has its own technological constraints, so that it is impossible to produce everything with every AM technology [3]. Nevertheless, it is important to understand that the design dynamics change. In the AM field, the possibility of producing complex shapes and various features, regardless of process, reduces the effort of component modifications during the design phase. AM allows designers to move their focus from the executive design to the concept one, with a strong reduction of the “design for” rules related to the process technology. This is why Topological Optimization (TO) procedures assume a substantial role, allowing, already in the concept phase, a component whose volume is distributed as a function of its load and use conditions. This is made possible by an extremely consistent mathematical formulation that is able to explore a very generic design space for a given set of loads and boundary conditions, so that the resulting layout satisfies a prescribed set of performance targets.
In this paper, we discuss TO, presenting its definition and application during design (Sections 2 and 3) and highlighting, through a case study described in Section 4, its suitability not only for lightweight design but also as a new tool for conceptual design according to a specific design intent.

2 Topological Optimization in brief

TO is a structural optimization technique whose evolution started in the second half of the last century. According to Bendsøe [4], structural optimization pertains to the topology, shape and size of a component. Topology extends the concepts of shape or geometry by including the capability of adding and deleting volumes, which basically means changing the space connectivity via opening-closing operations (Figure 1). Shape and size optimization looks for mathematical conditions able to minimize (or maximize) a structural design objective through geometrical parameters like feature sizes (thickness, length, perimeter, ...) or geometric shape, without changes in the topology (that is, without opening/closing connections). In this sense, a rectangular plate with a circular hole has a different shape from a triangular one with an elliptical or hexagonal hole, but they are topologically equivalent, since they can be transformed one into the other through a transformation map (homeomorphism) [5].

Fig. 1. Space connectivity examples (on the left); example of shape modification and topology
change via “opening operator” (on the right - in clockwise, from the upper left shape, defor-
mations are in blue and space reduction in the first object depicted in cyan).
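The notion of topological equivalence can be made concrete with the Euler characteristic χ = V − E + F, which is identical for the two plates described above (one hole each). The following sketch, our own illustration on coarse pixel grids rather than anything from the paper, computes χ for simple binary material layouts:

```python
import numpy as np

def euler_characteristic(img):
    """chi = V - E + F for the cubical complex spanned by the filled
    pixels (True = material) of a binary image."""
    ys, xs = np.nonzero(img)
    faces = set(zip(ys.tolist(), xs.tolist()))
    verts, edges = set(), set()
    for (y, x) in faces:
        # the pixel's four corner vertices...
        verts.update({(y, x), (y, x + 1), (y + 1, x), (y + 1, x + 1)})
        # ...and its four edges, keyed by start vertex and direction
        edges.update({((y, x), 'h'), ((y + 1, x), 'h'),
                      ((y, x), 'v'), ((y, x + 1), 'v')})
    return len(verts) - len(edges) + len(faces)

rect = np.ones((8, 12), dtype=bool)            # solid plate: chi = 1
holed = rect.copy(); holed[3:5, 4:8] = False   # plate with one hole: chi = 0
tri = np.tril(np.ones((10, 10), dtype=bool))   # triangular plate
tri[6:8, 2:4] = False                          # ...with one hole: chi = 0
print(euler_characteristic(rect),
      euler_characteristic(holed),
      euler_characteristic(tri))  # → 1 0 0
```

The rectangular and triangular plates with one hole share χ = 0 despite their different shapes, mirroring the homeomorphism argument in the text; an opening or closing operation is precisely what changes χ.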

At the beginning, the structural problem was investigated by looking for mathematical conditions able to delete or maintain material in each volume element of the assumed design space [4]. This leads to the microstructural or material approach which, together with the macrostructural approach, represents the most general and well-posed definition of the problem [6]. SIMP, which stands for Solid Isotropic Material with Penalization, and the Level Set Method are two relevant examples of micro- and macrostructural approaches [4,7]. Other approaches with very intuitive formulations (e.g. hard-kill methods like Evolutionary Structural Optimization) can be seen as heuristic, despite the many efforts made to extend or strictly define their field of applicability [8].
Microstructure methods are also called density-based methods. Under the hypothesis of linear elastic behavior and assuming a FEM discretization, they are defined as:

  min f(ρ, u)
  subject to  K(ρ) u = F(ρ)
              gᵢ(ρ, u) ≤ 0,  i = 1, …, m
              with ρ ∈ [0, 1]        (1)
where the objective function, f, can be the total design-space mass, the natural frequencies, or a cost function depending on stresses or compliance. u represents the nodal displacement vector; K(ρ) is the stiffness matrix as a function of the density factor ρ, which is defined between 0 (void) and 1 (bulk). F represents the applied nodal loads and gᵢ is the set of m constraints.
The design variables, which seek to minimize the material inside the design space, concern the element stiffness matrices, generically named K_el, which, suitably assembled, define the stiffness matrix K. Each of them is a function of ρ through:

  K_el = ρ^p K_el^0        (2)

where p stands for the penalty weight and K_el^0 is the stiffness matrix of the fully dense element. The factor ρ^p transforms the problem from discrete to continuous, since p > 1 penalizes ρ values far from 1. SIMP sets p = 3. Each element of the mesh contributes to the component stiffness via (1);
thus, if an element does not prove effective, the penalization decreases its relevance until the element is deleted.
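To make Eqs. (1)-(2) concrete, the sketch below runs a SIMP-style loop on a deliberately tiny surrogate: a serial chain of spring elements with a tip load, penalization ρ^p with p = 3, and a standard optimality-criteria update under a volume constraint. The model and all names are our own assumptions, not a fragment of any CAE tool:

```python
import numpy as np

def simp_stiffness(rho, k0=1.0, p=3.0):
    """SIMP interpolation of Eq. (2): element stiffness k = rho^p * k0,
    where k0 is the stiffness of the fully dense element."""
    return rho**p * k0

# Surrogate of Eq. (1): n spring elements in series under a tip load F.
# Compliance C = F^2 * sum(1/k_i); minimize C s.t. mean(rho) <= volfrac.
n, F, k0, p, volfrac = 10, 1.0, 1.0, 3.0, 0.4
rho = np.full(n, volfrac)

for _ in range(50):
    k = simp_stiffness(rho, k0, p)
    # Compliance sensitivity: always negative, since adding material stiffens.
    dC = -p * rho**(p - 1) * k0 * F**2 / k**2
    # Optimality-criteria update; bisection on the Lagrange multiplier
    # enforces the volume constraint.
    lo, hi = 1e-9, 1e9
    while hi - lo > 1e-8 * (lo + hi):
        lam = 0.5 * (lo + hi)
        rho_new = np.clip(rho * np.sqrt(-dC / lam), 1e-3, 1.0)
        if rho_new.mean() > volfrac:
            lo = lam      # too much material: increase the multiplier
        else:
            hi = lam
    rho = rho_new

# The chain is symmetric, so the optimum stays at the uniform density volfrac;
# intermediate densities are inefficient: rho = 0.5 yields only 1/8 stiffness.
print(rho.round(3), simp_stiffness(0.5))
```

The last line illustrates why p > 1 penalizes intermediate densities: an element at half density carries half the mass but contributes only 0.5³ = 1/8 of the stiffness, so the optimizer is driven toward the 0/1 layouts the formulation is meant to produce.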
As reported in the literature, checkerboard patterns and mesh dependency are the major mathematical drawbacks of this formulation. Filtering techniques and mathematical relaxation of the optimization problem are two possible remedies [9]. Both of them contribute to achieving a "well-posed" optimization problem, providing a reliable approach to perform TO via CAE. Filtering techniques reduce the set of possible solutions by excluding, via filters, unphysical solutions; many filters can be applied, as described, for example, in [10]. The mathematical relaxation of the minimization problem consists of adding new design variables. This is achieved by putting aside the concept of a solid isotropic distribution of material and defining an assigned microstructure of voids for each element. The new design variables can be represented by the sizes of the void areas (hole-in-cell approach) or by the configuration of a layered structure (layered structures of different ranks). The solution is then found by the so-called homogenization techniques [4].
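As a hedged illustration of such filtering (the radius, grid and weighting are our own choices; real CAE implementations differ), the classic linear "cone" density filter can be sketched as follows; applied to a worst-case checkerboard field, it collapses the oscillation:

```python
import numpy as np

def density_filter(rho, rmin=1.5):
    """Linear 'cone' density filter: each element density becomes a
    weighted average of its neighbors within rmin element-widths,
    with weight max(0, rmin - distance)."""
    ny, nx = rho.shape
    out = np.zeros_like(rho, dtype=float)
    r = int(np.ceil(rmin)) - 1
    for i in range(ny):
        for j in range(nx):
            wsum, acc = 0.0, 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        w = max(0.0, rmin - float(np.hypot(di, dj)))
                        wsum += w
                        acc += w * rho[ii, jj]
            out[i, j] = acc / wsum
    return out

checker = np.indices((8, 8)).sum(axis=0) % 2.0   # worst-case 0/1 pattern
smoothed = density_filter(checker)
# The 0-to-1 oscillation collapses while the mean density is preserved.
print(float(np.ptp(checker)), float(np.ptp(smoothed)) < 0.5)  # → 1.0 True
```

Because the weights are normalized per element, the filter preserves the overall amount of material while removing the element-to-element oscillation that the unfiltered formulation can exploit.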
The macrostructural approach consists of boundary variation methods. They are based on implicit functions able to describe what happens on the edge of the design space [8]. In this way, changes of topology are linked to the distribution of the contours of the implicit function Φ(x). In the conventional Level Set Method, this problem is described by a Hamilton-Jacobi-type equation that can be solved through sensitivity analysis on an assigned grid in the design-space domain [10]. As already mentioned, the original formulation of these methods assumes linear elastic behavior. Generalizations to non-linear problems have been made, see for example [8]. This reference also contains an interesting overview of the possible fields of application, which range from MEMS to biomedical, civil structure and multiphysics applications.

3 Practical use and impact on the CAD modeling workflow

TO is available in many well-known FEM software packages (e.g. Ansys, Optistruct, Inspire, Nastran). Their practical use starts from a proper definition of the problem domain, in terms of a preliminary envelope of the volume and its lengths, functional surfaces (interface surfaces with specific contact conditions), and geometrical conditions related to possible manufacturing constraints (e.g. symmetry, draft angle). In this sense, TO is applicable to all kinds of manufacturing, although in AM even TO solutions reached without specific manufacturing constraints can be produced.
The preliminary envelope of the volume can be a geometrical entity or a dense FEM mesh, which has to be divided into design space and fixed volumes (e.g. functional surfaces or manufacturing constraints). Although this distinction is a quite natural concept, it can be implemented in different ways according to the adopted
software. Starting from a mesh, it asks for a selection of two sets of element. Us-
ing CAD based software (e.g. Inspire), it may ask for splitting the volume in more
than one set. As a consequence, contact conditions must be given not only among
different parts of an assembly, according to FEM procedure, but also among dif-
ferent volumes of the same component. Contacts are part of the load conditions
that may be split into loads and Degree Of Freedom (DOF) constraints. Also in
this case, CAD-based software tend towards load applications not directly to mesh
elements but on geometry entity, hiding the mesh loading results. It may reduce
the knowledge of the effective load/constraint conditions that are applied, so that
careful checks are required to evaluate the compliance of the model. Concerning
loads, more than one operative condition can be analyzed and studied. The optimi-
zation can be applied on different loadcases, looking for a compromise solution.
From the computational point of view, not so many input are required. The ob-
jective function and the constraints, usually taken from compliance, mass, natural
frequency or a combination of them, … Mesh size must be rather uniform and
dense enough to reach the proper sensitivity to the element deletion, generally it
must be equal or smaller than that used in a good structural analysis evaluation. In
many cases a preliminary run is required to check the goodness of the model.
The optimal solution is provided in terms of a final volume or a density-factor
contour plot. It must be checked against safety factors or other design requirements.
If it succeeds, further CAD activities are necessary: surface smoothing (since the
computation mesh is rather rough after the optimization, due to its constant element
length), small-area deletion, or pocket closure if additional manufacturing
constraints are considered. In this scenario, new CAD technologies (e.g. curve and
surface modeling, synchronous modeling) represent a relevant aid to reduce time;
nevertheless, the need for robust data exchange and a common user interface among
CAD/CAE/CAM tools may represent one of the major drawbacks of the practical use of
TO.
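The ingredients listed above — a stiffness objective, a mass constraint, and density-like design variables updated iteratively — can be illustrated with a deliberately minimal sketch. This is a hypothetical one-dimensional optimality-criteria example, not the SIMP/FEM formulation used by commercial tools such as Inspire; the function name and all numeric values are illustrative only.

```python
import math

def oc_topopt_1d(loads, lengths, E=70e9, a_max=1e-4, vol_frac=0.5,
                 x_min=0.01, iters=60):
    """Optimality-criteria sizing of a cantilever bar made of n segments.

    Segment i has cross-section x[i] * a_max, where x[i] is a 'density' in
    [x_min, 1]. Segment i carries the sum of the point loads applied beyond
    it, so compliance is C = sum(N_i^2 * L_i / (E * a_max * x_i)).
    We minimize C subject to sum(x_i * L_i) <= vol_frac * sum(L_i).
    """
    n = len(loads)
    # internal force in segment i: sum of the loads applied at nodes i..n-1
    N = [sum(loads[i:]) for i in range(n)]
    x = [vol_frac] * n
    total_l = sum(lengths)
    for _ in range(iters):
        # analytic sensitivity dC/dx_i = -N_i^2 * L_i / (E * a_max * x_i^2)
        dc = [-(N[i] ** 2) * lengths[i] / (E * a_max * x[i] ** 2)
              for i in range(n)]
        # bisection on the Lagrange multiplier to meet the volume constraint
        lo, hi = 1e-12, 1e12
        while hi - lo > 1e-9 * hi:
            lam = math.sqrt(lo * hi)
            x_new = [min(1.0, max(x_min,
                     x[i] * math.sqrt(-dc[i] / (lam * lengths[i]))))
                     for i in range(n)]
            if sum(x_new[i] * lengths[i] for i in range(n)) > vol_frac * total_l:
                lo = lam
            else:
                hi = lam
        x = x_new
    compliance = sum(N[i] ** 2 * lengths[i] / (E * a_max * x[i])
                     for i in range(n))
    return x, compliance
```

At the fixed point the update makes the section areas proportional to the internal force, which is the stress-uniform (maximum-stiffness) design for this toy problem — the same "max stiffness under a mass constraint" logic used in the tests below, only in one dimension.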

4. Case study

The goal of the case study is to give evidence of the workflow defined in Section 3
by using a commercial TO software (in the specific case solidThinking Inspire
2015) and to validate this design approach by comparing the results to those
previously achieved through a conventional “trial and error” design process.
For this reason we develop two tests from a case study related to a suspension
wishbone attachment of the Formula SAE car, named Gajarda, designed by the
“Sapienza Corse” team. The attachment is used to connect the uniball joint to the
monocoque chassis of the car at the end of each suspension’s A-arm, as shown in
the red circles of Figure 2.
294 M. Bici et al.

Fig. 2. Gajarda in two versions: the 2012 (on the left) and the 2013 (on the right). Red circles
highlight some of the positions where the suspension wishbone attachments are located.

The attachment is made of an aluminum alloy characterized by: E = 70 GPa,
ρ = 2700 kg/m³, σ_yield = 260 MPa, ν = 0.33. It has been developed and modified
passing from the 2012 car version to the 2014 one, to reduce mass while preserving
functionality and strength. Figure 3 shows the design evolution made during these
years, reducing the total mass from 0.061 kg to 0.031 kg.

Fig. 3. Suspension wishbone attachment: interface surfaces (on the left) and shape evolution
made by “Sapienza Corse” Formula SAE team from 2012 to 2014

According to the design intent, the interface surfaces are (Figure 3 on the left):
counterbore holes that must be provided on the base, to allow the connection with
the wall of the chassis; a pocket that allows the insertion and assembly of the ball
joint with various angular orientations of the arm; two aligned holes for the lock-
ing pin. The two tests are defined as follows:
Test 1. Starting from the shape and the geometry of the 2012 version, we look
for an optimal design reducing the original mass by 50% (comparable with the
reduction obtained in the 2014 version) and then to 20% of its initial value, maximizing stiffness.
Test 2. With the aim of decoupling the optimization process from the designer's
choices, we give as input for the design space just the component's envelope
volume. It has been defined as cylindrical, since it should be made of almost
axial-symmetric features: the actual component is manufactured by machining, and
the capability of fastening in different positions is required.
Concerning the loads scheme, in the actual configuration they are applied
through a locking pin that is inserted in the two aligned holes of the component. In
the TO models, to take into account bending effects due to the pin deflection, they
are applied on the middle point of the axis between the two holes and transferred
to the corresponding cylindrical surfaces by a connector (thus defining the most
severe condition of bending), as shown in Figure 4. On the left, the three load
conditions can be seen at the middle of the axis between the two holes for the
locking pin. The central vertical plane has been defined as a manufacturing
constraint since, as already said, the design intent is also to maintain component
symmetry in both test cases. Fixed volumes are associated with the bolt interfaces
necessary to link the component to the frame. The DOFs involved in these assembly
constraints are shown in red. Figure 4 also shows, for the two tests, the design
space in brown and the fixed volumes, constrained as non-design space (in blue).

Fig. 4 Test 1 (on the top) and Test 2 (on the bottom). Brown volumes are design spaces, blue
volumes constrained areas, in red loads and constraints on DOF

In both tests, we have chosen to keep the number of fixing holes at three,
differently from the design team, which increased the number of connections while
reducing their diameter, again with a view to mass reduction.

5. Results and discussion

Test 1 aims to investigate whether, through TO, it is possible to include
well-established design intents that are mainly imposed by the designer's
knowledge. For this reason, we set up the optimization to obtain comparable
solutions, starting from the virtual model of the 2012 attachment.

Fig. 5. Test 1: mass reduction@50% (at the top); mass reduction@75% (at the bottom)

The optimization problem has been defined as a maximum-stiffness search with a
constraint of 50% on the total mass. Figure 5 shows the final achievements for
Test 1. Mass reduction @50% gives a total mass of 0.033 kg, in accordance with the
imposed constraint. Starting from this solution, an enhanced one has been
investigated by moving the mass-reduction constraint up to 75%, taking care that
the hypothesis of linear elastic behavior is not violated. Figure 5, at the bottom,
shows this result, which sets the mass to 0.017 kg, basically sharpening and
smoothing the volume around the pin hole.
Test 2 aims to investigate the ability of TO when starting from the most general
domain definition. In doing so, the TO results are decoupled from the designer's
knowledge, to better assess the potential of TO as a concept design tool. Imposing
the same load and constraint conditions (connected to the grey parts of non-design
space) as in Test 1, the optimization has been launched looking for the minimum
achievable mass. Figure 6 shows the results. They correspond to an 83% mass
reduction, starting from the CAD value of 0.14 kg, which leads to a final mass of
0.024 kg.
In both tests, TO finds improved solutions. Nevertheless, the Test 1 mass
reduction @75% represents a shape refinement rather than a topological
optimization, since it does not change the geometric connectivity of the component.
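The reported mass figures can be cross-checked with simple arithmetic, using only the numbers quoted in this case study:

```python
# Reported masses (kg) from the case study
m_2012, m_2014 = 0.061, 0.031   # manual design evolution (Fig. 3)
m_env, m_test2 = 0.14, 0.024    # Test 2: envelope CAD mass and TO result

# Manual redesign 2012 -> 2014 removed roughly half the mass
manual_reduction = (m_2012 - m_2014) / m_2012   # about 0.49

# Test 2: about 83% reduction of the 0.14 kg envelope, as reported
test2_reduction = (m_env - m_test2) / m_env     # about 0.83
```

Both ratios match the percentages stated in the text to within a percent.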

Fig. 6. Test 2: Final solution

Figure 7.a shows the von Mises equivalent stress of the original team's design,
computed by FEM in Ansys; Figure 7.b shows the results of the two improved
solutions investigated in Test 1 (mass reduction @50%) and Test 2, respectively
(computed in Inspire). Obviously, the most stressed areas are those with the
greatest amount of material, around the pin hole and at the bolt locations. The
original design shows a stress below the yield stress (about 200 MPa). Similar
stress values are found via TO, since the red areas must be considered outliers
due to stress concentrations.

Fig. 7. Von Mises equivalent stress: a) Sapienza Corse’s model; b) Test 1 mass reduction @50%
and Test 2 (with the connector for load transfer not hidden).

Figure 7.b also shows the von Mises equivalent stress for Test 2 (note that in
this case the locking pin seems to be present, since it is graphically rendered by
the connector used in the load definition, as discussed in Section 4). This last
result seems to exhibit a more uniform stress distribution, since a smoother
change of the model boundaries is found when starting from a larger volume.
Moreover, it must be pointed out that stress concentrations are always found near
the non-design space at the bolts. This is another confirmation of the necessity
of a subsequent smoothing, e.g. rounding, near abrupt changes of section or shape.
It can be seen as a shape and size refinement that is necessary for the next
detailed design.

6. Conclusions

Concerning the ability to capture the design intent, both tests are well-posed in
terms of starting domain and loads. This confirms the suggestion of Bendsoe to
use TO as “a creative sparring partner” [4] or as a design speed-up tool. In the
case of TO as a concept design tool (Test 2), a total mass of 0.024 kg is found.
Compared to the 0.031 kg of the final Sapienza Corse component, this is an
interesting result for TO. So why did Sapienza Corse not achieve the same result?
Basically because of cost and of designing for a non-additive manufacturing
process. Thus, in our opinion, it is clearly demonstrated that, via CAD
modifications, the preliminary shape design made by TO can be easily adapted in
the next design step (detailed design) according to other design requirements.
This leads to an “intrinsically” lightweight or compliant design that can be made
heavier or stiffer afterwards. In this sense, even without adopting Additive
Manufacturing, TO allows one to overcome the classical workflow of “rough concept
first, then the optimization” towards “optimal concept design first,
manufacturing constraints, if any, later”.

Acknowledgments The research work reported here was made possible by the kind help of the
whole “Sapienza Corse” team of the University of Rome “La Sapienza” (Italy).

References

1. Ulrich, K.T., Eppinger, S.D., Product Design and Development, 2003, McGraw-Hill Education (India) Pvt Limited.
2. Gibson, I., Rosen, D.W., Stucker, B., Additive Manufacturing Technologies: Rapid Prototyping to Direct Digital Manufacturing, 2010, Springer.
3. Oropallo, W., Piegl, L.A., Ten challenges in 3D printing, Engineering with Computers, January 2016, Volume 32, Issue 1, pp. 135–148.
4. Bendsoe, M.P., Sigmund, O., Topology Optimization: Theory, Methods, and Applications, 2003, Springer.
5. Kelley, J.L., General Topology, New York: Springer-Verlag, 1975.
6. Eschenauer, H.A., Olhoff, N., Topology optimization of continuum structures: A review, Appl. Mech. Rev., 54(4), pp. 331–390, July 2001.
7. Deaton, J.D., Grandhi, R.V., A survey of structural and multidisciplinary continuum topology optimization: post 2000, Structural and Multidisciplinary Optimization, January 2014, Volume 49, Issue 1, pp. 1–38.
8. Tanskanen, P., The evolutionary structural optimization method: theoretical aspects, Computer Methods in Applied Mechanics and Engineering, Volume 191, Issues 47–48, 22 November 2002, pp. 5485–5498.
9. Roubíček, T., Relaxation in Optimization Theory and Variational Calculus, Berlin: Walter de Gruyter, 1997, ISBN 3-11-014542-1.
10. Dunning, P.D., Kim, H.A., Mullineux, G., Introducing loading uncertainty into topology optimization, AIAA J., 2011, 49(4), pp. 760–768.
Section 2.2
Advanced Manufacturing
Simulation of laser-sensor digitizing for on-machine part inspection

Nguyen Duy Minh PHAN, Yann QUINSAT and Claire LARTIGUE1*

1 LURPA, ENS Cachan, Univ. Paris-Sud, Université Paris-Saclay, 94235 Cachan, France
* Corresponding author. Tel.: +33-147-402-986; fax: +33-147-402-200. E-mail address:
lartigue@lurpa.ens-cachan.fr

Abstract: Integrating measurement operations for on-machine inspection in a
5-axis machine tool is a complex activity requiring a significant limitation of
measurement time in order not to penalize the production time. When using a
laser-plane sensor, time optimization must be done while keeping the quality of
the acquired data. In this paper, a simulation tool is proposed to assess a given
digitizing trajectory. This tool is based on the analysis of sensor configurations
relative to the geometry of the studied part.

Keywords: In situ measurement, Laser plane sensor, Digitizing quality, Trajectory simulation

1 Introduction

Integrating inspection procedures within the production process involves rapid


decision-making regarding the conformity of parts. In the case of on-machine in-
spection for instance, part geometry measurements are performed in the same
phase as the machining operations without removing the part from its set-up,
which facilitates comparing the machined part to its CAD model for the conformi-
ty analysis. This also contributes to reducing the time allocated to measurement.
Within this context, a few recent studies have focused on the use of laser-plane
sensors to carry out on-machine inspection, as they have a great ability to measure
deviations of machined parts within a time consistent with on-machine inspection
[1,2]. In the particular case of the milling process, laser-plane sensors can be inte-
grated in the machine tool, the sensor replacing the cutting tool. As the measure-
ment operation is performed while the process is stopped, one of the main issues
concerns sensor path-planning. In fact, the time allocated to measurement must be
minimized to preserve global production time, but the quality of the acquired data
must be sufficient to measure potential deviations.

© Springer International Publishing AG 2017 303


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_31
304 N.D.M. Phan et al.

Fig. 1. Laser-sensor in the machine-tool

The integration of a laser-sensor in a machine-tool is an issue little addressed
in the literature. The fact that sensor accessibility is increased, considering
the 5 degrees of freedom plus the spindle rotation, is an opportunity. Indeed, as
for lasers mounted on industrial robots, this gives the sensor the possibility to
scan an object from any direction, even along curved paths [3]. Wu et al. [4] propose a
method of sensor path planning for surface inspection on a 6 degree-of-freedom
robot that automatically adapts its trajectory to the complex shape of the object by
continuously changing the viewing direction of the scanner mounted on the robot.
Each viewpoint of the planned path must however satisfy several constraints: field
of view, scanning distance, view angle and overlap. Within the context of sensor
path planning for part inspection, it is quite classical to impose some constraints to
the sensor relatively to the part to be measured. In [5], authors introduce the con-
cept of visibility (local and global) to generate a sensor trajectory well-adapted to
the control of complex parts. Prieto et al. propose to keep the sensor as normal as
possible to the surface, while obeying a criterion of quality depending on both the
scanning distance and the sensor view angle [6]. Son et al. include additional con-
straints such as the number of required scans and the checking of occlusions [7].
Yang and Ciarallo use a genetic algorithm to obtain a set of viewing domains and
a list of observable entities for which the errors are within an acceptable tolerance
[8]. The approach developed in [9] relies on the representation of the part surface
as a voxel map, for which the size of each voxel is defined according to the
sensor field of view (fov). To each voxel, a unique viewpoint is associated as a
function of visibility and quality criteria, leading to a set of admissible
viewpoints that ensure surface digitizing with a given quality. Mavrinac et al. [10] formalize the
search of sensor viewpoints under constraints for 3D inspection using an active
triangulation system. Each viewpoint (sensor configuration) is assessed thanks to a
performance function that results from the combination of constraint functions to
be respected. This interesting approach seems valuable to assess the validity of a
sensor trajectory prior to its optimization.
This paper deals with laser-sensor trajectories well-adapted for on-machine in-
spection on a 5-axis milling machine-tool. First, we propose to define an original
description format of the sensor trajectory directly interpretable by the CNC of the
machine-tool. The laser-plane sensor takes the place of the cutting tool in the
spindle. This original format takes advantage of the additional degree of freedom
given by the spindle rotation. The sensor trajectory is a series of ordered view-
points that must satisfy a set of constraints (visibility, quality, number of view-
points, overlaps, etc.). Prior to the stage of trajectory optimization, it might be in-
teresting to have a tool assessing given trajectories according to the constraints to
be satisfied. In this direction, the second part of the paper presents a method for
simulating the digitizing from a given trajectory.

2 Sensor trajectory in 5-axes

A description format must be defined that can be interpreted by the CNC of the
machine-tool. The study applies to laser-plane sensor types for which the field
of view (fov) is planar (2D) (figure 2). The sensor trajectory consists in a set
of ordered sensor configurations (each configuration defining a viewpoint), i.e.
a set of positions and orientations. The sensor orientation is given by the couple
of vectors $\vec{v}_c$, the director vector of the light-beam axis, and
$\vec{v}_L$, the director vector of the digitizing line (figure 2). By analogy
with cutting-tool trajectories, for which the cutter location point (CL point) is
the tool extremity [11], the sensor position is defined through the point $C_E$
which positions the digitizing line: $\overrightarrow{C_0 C_E} = d^* \cdot \vec{v}_c$.
Therefore, in the part frame, the sensor trajectory is a set of configurations
$(C_E; \vec{v}_c; \vec{v}_L)$ expressed as a set of coordinates
(X, Y, Z, I, J, K, I*, J*, K*) (figure 2). This trajectory is expressed in the
machine-tool frame thanks to the Inverse Kinematics Transform, leading to
(X, Y, Z, A, C, W), where A and C are the classical angles for a RRTTT machine
tool and W allows the spindle indexation. This additional degree of freedom is
particularly interesting to orient the laser beam relative to the surface.

Sensor position C_E in the part frame:
X Y Z I J K I* J* K*
-28 -2 59 0 0 1 0.701 0.701 0
-28 -1 59 0 0 1 0.701 0.701 0
…

↓ Inverse Kinematics Transform ↓

X Y Z A C W
10 -1 10 0 0 45
11 10 0 0 0 45

Fig. 2. Parameters defining the sensor trajectory for on-machine inspection

Indeed, the sensor trajectory is classically defined to satisfy a set of
constraints, generally visibility and quality constraints, leading to the ordered
series of sensor configurations $(C_E; \vec{v}_c)$. The width of the scanning
line, which can be assimilated to the width of the cutting tool, varies as a
function of the scanning distance d* (see figure 2). The additional degree of
freedom given by the spindle indexation W permits orienting $\vec{v}_L$. It could
thus be possible to plan a sensor trajectory so that the width of the scanning
line is maximized by modifying the sensor indexation.
After describing the parameter setting for the definition of a digitizing
trajectory on a 5-axis CNC machine-tool, a simulation tool is proposed with the
aim of assessing the quality of the acquired data.
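As an illustration of the Inverse Kinematics Transform step, the sketch below converts one part-frame configuration into machine angles. The A/C/W conventions used here are an assumption (one plausible choice for an A–C machine), not the actual transform of a specific machine-tool; it does, however, reproduce the example rows of figure 2, where v_c = (0, 0, 1) and v_L = (0.701, 0.701, 0) give A = 0°, C = 0°, W = 45°.

```python
import math

def ikt_ac(I, J, K, Is, Js, Ks):
    """Sketch of an Inverse Kinematics Transform for an A-C (RRTTT) machine.

    (I, J, K) is the light-beam axis v_c and (Is, Js, Ks) the digitizing-line
    axis v_L, both unit vectors in the part frame. The angle conventions below
    are illustrative assumptions; a real IKT depends on the machine kinematics
    (Ks is unused in this simplified sketch).
    """
    # tilt A measured from the part-frame Z axis
    A = math.degrees(math.acos(max(-1.0, min(1.0, K))))
    # C is undefined when the beam is along Z (sin A = 0); set it to 0 then
    if abs(math.sin(math.radians(A))) < 1e-9:
        C = 0.0
    else:
        C = math.degrees(math.atan2(I, -J))
    # spindle indexation W orients the digitizing line in the laser plane
    W = math.degrees(math.atan2(Js, Is)) - C
    return A, C, W
```

With the first row of figure 2, `ikt_ac(0, 0, 1, 0.701, 0.701, 0)` returns the (A, C, W) = (0, 0, 45) configuration listed in the machine-frame table.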

3 Digitizing simulation

Considering a given sensor trajectory, a simulator has been developed to assess
this trajectory with regard to a set of constraints. As reported in the
literature, the most usual constraints are visibility and digitizing quality. The
laser scanner is characterized by its actual fov, which corresponds to the area
of the scanning plane that is visible by the camera. The fov is thus defined by
its height, H, and its widths Lmin and Lmax, each width corresponding to the
minimal and maximal height in the fov. Hence, a portion of the part surface is
visible if it belongs to the fov so defined (figure 4). However, as the sensor
moves from one configuration $(C_{Ei}; \vec{v}_{ci})$ to another
$(C_{Ei+1}; \vec{v}_{ci+1})$, a portion of surface is digitized if it belongs to
the swept volume created by the displacement of the laser plane from the first
configuration to the second, as displayed in figure 4.
The orientation of the sensor relative to the surface and the digitizing distance
characterize the quality of the acquired data. Therefore, the digitizing quality
is evaluated according to both the scanning distance d and the view angle α, as
proposed in previous studies [5,13]. The parameters defining the digitizing
conditions are summarized in figure 4 and table 1.

Fig. 4. Parameters of the laser sensor and digitizing volume between 2 configurations

To develop our approach, we take advantage of the formalism proposed in [4,10],
in which the constraints to be verified are expressed as a combination of
functions. Such a formalism is rather flexible, as it permits adding or removing
constraints according to the complexity we want to introduce in trajectory
generation. The CAD model of the part is tessellated through an STL format. This
gives a mesh defined by a set $S_T$ of n triangular facets $T_j$. Each facet
$T_j$ is defined by 3 vertices denoted $V_j^k$, and the normal vector to the
facet is denoted $\vec{n}_j$ (see table 1).

Table 1. Parameters for surface digitizing

Mesh parameters | Digitizing parameters
$S_T$, set of n triangular facets | $V_{C_{Ei} C_{Ei+1}}$, swept volume between 2 configurations
$T_j$, facet j, $T_j \in S_T$, $j \in [1, n]$ | $d_E$, distance from the bottom to $C_{Ei}$ in the fov
$S_v$, set of vertices | H, distance from the bottom to the top of the fov
$V_j^k$, vertex of facet $T_j$, $k \in [1, 3]$ | $\alpha_{max}$, limit view angle of the sensor
$\vec{n}_j$, normal to facet $T_j$ | $d_{V_j^k}$, distance from the bottom of the fov to vertex $V_j^k$

In a first approach only 2 functions are considered: the visibility function and
the quality function. These functions are applied to the tessellated model.

3.1 Visibility function

The visibility function is used to determine the facets which we denote as seen
by the laser sensor. As the trajectory is a set of sensor configurations, the visibility
function is defined for each trajectory segment, i.e. between two successive con-
figurations. For each facet Tj of the CAD model, the function is defined as a com-
bination of two functions as expressed in equation 1:

$F_V^*(T_j) = F_V(T_j) \cdot F_{s\alpha}(T_j)$    (1)

The swept-facet function $F_V(T_j)$ checks whether the facet belongs to the
volume swept by the laser beam between 2 configurations, $V_{C_{Ei} C_{Ei+1}}$.
A facet belongs to the swept volume if all its vertices belong to the swept volume:

$F_V(T_j) = \begin{cases} 1, & \text{if } \forall k \in [1,3],\ V_j^k \in V_{C_{Ei} C_{Ei+1}} \\ 0, & \text{otherwise} \end{cases}$    (2)

Generally, the view angle is limited [12]. If the angle between the normal vector
to the facet and $\vec{v}_c$ exceeds the maximal view angle $\alpha_{max}$, the
facet is not seen. This is expressed by:

$F_{s\alpha}(T_j) = \begin{cases} 1, & \text{if } \vec{n}_j \cdot \vec{v}_c \ge \cos(\alpha_{max}) \\ 0, & \text{otherwise} \end{cases}$    (3)
At the end of this stage, when the whole sensor trajectory is considered (i.e.
when all the trajectory segments defined by $(C_{Ei}; \vec{v}_{ci})$ and
$(C_{Ei+1}; \vec{v}_{ci+1})$ are considered), all the facets verifying
$F_V^*(T_j) = 1$ are characterized as seen and define the set
$S_T^s = \{T_j \in S_T,\ F_V^*(T_j) = 1\}$.

3.2 Quality function

The visibility of a facet does not ensure the digitizing quality. Indeed,
numerous studies point out the importance of the digitizing distance and of the
view angle on the digitizing noise, factors that strongly influence the
digitizing quality [5, 9, 12, 13]. Quality is ensured when the digitizing noise
is less than a threshold, generally given by the user as a function of the
considered application. This involves admissible ranges for both the digitizing
distance and the view angle, allowing the definition of the quality function as
follows:

$F_{ws}(T_j) = F_V^*(T_j) \cdot F_{wsd}(T_j) \cdot F_{ws\alpha}(T_j)$    (4)

In equation (4), $F_{wsd}$ and $F_{ws\alpha}$ account for the quality in terms of
digitizing distance and view angle, respectively. A facet is said to be well-seen
in terms of digitizing distance if all its vertices belong to the admissible
range of digitizing distances $I_{ad}$:

$F_{wsd}(T_j) = \begin{cases} 1, & \text{if } \forall k \in [1,3],\ d_{V_j^k} \in I_{ad} \\ 0, & \text{otherwise} \end{cases}$    (5)

A facet is said to be well-seen in terms of view angle if the angle between the
normal vector to the facet and $\vec{v}_c$ belongs to the admissible range of
view angles defined by $\alpha_1$ and $\alpha_2$:

$F_{ws\alpha}(T_j) = \begin{cases} 1, & \text{if } \cos(\alpha_1) \le \vec{n}_j \cdot \vec{v}_c \le \cos(\alpha_2) \\ 0, & \text{otherwise} \end{cases}$    (6)

At the end, all the facets verifying $F_{ws}(T_j) = 1$ are characterized as
well-seen and define the set $S_T^{ws} = \{T_j \in S_T^s,\ F_{ws}(T_j) = 1\}$.
All the other seen facets are tagged as poorly-seen and in turn define a set
$S_T^{ps} = \{T_j \in S_T^s,\ F_{ws}(T_j) = 0\}$, with
$S_T^s = S_T^{ws} \cup S_T^{ps}$. The facets that are not seen define the set
$S_T^{ns}$, complementary to $S_T^s$ in $S_T$: $S_T = S_T^s \cup S_T^{ns}$.
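Equations (4)–(6) and the resulting partition into the well-seen, poorly-seen and not-seen sets can be sketched as below. The facet representation (a dict carrying the visibility-stage outcome, the vertex distances in the fov and the normal) is an illustrative assumption; the numeric defaults anticipate the ranges identified in section 4.1.

```python
import math

def classify_facets(facets, vc, i_ad=(20.0, 50.0),
                    alpha1_deg=60.0, alpha2_deg=0.0):
    """Partition facets into well-seen / poorly-seen / not-seen sets.

    Each facet is a dict with 'seen' (the value of F_V* from the visibility
    stage), 'dists' (distances d_Vk of its three vertices in the fov, in mm)
    and 'normal' (unit vector). Well-seen (eq. 4-6) requires all vertex
    distances in I_ad and cos(alpha_1) <= n_j . v_c <= cos(alpha_2).
    """
    c1 = math.cos(math.radians(alpha1_deg))  # cos(alpha_1)
    c2 = math.cos(math.radians(alpha2_deg))  # cos(alpha_2)
    st_ws, st_ps, st_ns = [], [], []
    for f in facets:
        if not f['seen']:                    # F_V*(Tj) = 0 -> not-seen set
            st_ns.append(f)
            continue
        f_wsd = all(i_ad[0] <= d <= i_ad[1] for d in f['dists'])   # eq. (5)
        dot = sum(f['normal'][k] * vc[k] for k in range(3))
        f_wsalpha = c1 <= dot <= c2                                # eq. (6)
        (st_ws if f_wsd and f_wsalpha else st_ps).append(f)        # eq. (4)
    return st_ws, st_ps, st_ns
```

A seen facet passing both quality tests lands in the well-seen set; a seen facet failing either test is poorly-seen; the rest are not-seen.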
4 Results and discussion

The objective here is to validate our simulator by comparing the digitizing
obtained with the simulator to the actual digitizing, considering various
trajectories. The simulator is tested using a case study and the laser sensor
Zephyr KZ25 (www.kreon3d.com). Although most of the sensor parameters are given
by the manufacturer, a protocol of sensor qualification is required to identify
the actual sensor parameters, such as the dimensions of the fov or the limit
view angle, but also to identify the quality parameters that define the
admissible ranges of digitizing distances and view angles.

4.1 Sensor parameters

First, the dimensions of the fov are identified by simply measuring a reference
plane. As the intersection of the reference plane and the laser beam is a line,
the height H of the fov is identified by observing whether the line is visible
in the CCD. The experiment gives H = 50 mm. According to the protocol defined in
[13], the evolution of the digitizing noise, denoted δ, is identified as a
function of the digitizing distance and the view angle. The digitizing noise
accounts for the dispersion of the measured points with respect to a reference
element, and it is usually evaluated by measuring a reference plane surface for
different digitizing distances and various view angles.

Fig. 5. Noise in function of the scanning distance (a) and the view angle (b).

The evolution of the digitizing noise as a function of the digitizing distance
exhibits a significant decrease of the noise from the bottom position to the top
position in the fov (figure 5a). On the other hand, the evolution of the noise as
a function of the view angle does not show a significant trend (figure 5b).
However, it can be pointed out that the maximal view angle is equal to
$\alpha_{max} = 60°$, and that for the whole range [0°; 60°] the noise remains
less than 0.015 mm. Considering that value as the quality threshold
$\delta_{ad}$, the admissible range of digitizing distances $I_{ad}$ is defined
as [20; 50] mm. Those two intervals guarantee a digitizing noise less than
$\delta_{ad}$ = 0.015 mm.

4.2 Simulator tests

The sensor trajectories used to test our simulator are classical pocket-type tra-
jectories. For the first tests, the sensor orientation is constant, and the trajectory
consists in a set of points CE defined at a constant altitude z (figure 6). To assess
the simulator, the simulated digitizing is compared to the actual one. For this pur-
pose, actual digitizing was carried out using a Coordinate Measuring Machine
(CMM) equipped with a motorized indexing head, which enables the scanner to be
oriented according to repeatable discrete orientations. We choose to assess our
simulator using a CMM because a 3-axis Cartesian CMM has fewer geometrical
defects than a machine-tool; this does not change anything in the principle of
our simulator. On the CMM, the orientations of the sensor are given
by the two rotational angles A and B. Therefore, the trajectories expressed in the
part coordinate system (for the simulation) must first be expressed in the CMM
coordinate system (figure 6).
Trajectory in the part coordinate system (_A0B90):
1  -40.84  20  -47.15   0 0 -1   0 1 0
2  199.77  20  -47.15   0 0 -1   0 1 0

Trajectory in the machine coordinate system (orientation A = 0°, B = 90°):
1  -842.47  -363.29  -272.92   0  90
2  -601.87  -363.29  -272.92   0  90
(the z = +30 mm trajectory uses the same orientation A = 0°, B = 90°)

Fig. 6. Scanning trajectories for test (A = 0°; B = 90°, z= 0 mm).

Different trajectories for various digitizing distances and sensor orientations
have been tested. Only the results associated with one orientation (A = 0°;
B = 90°) and two different distances (z = 0 and z = 30 mm) are commented in this
paper. The algorithm is applied to the tessellated CAD model of the part, and
facets are classified in the corresponding set according to the visibility and
quality functions proposed in section 3. To simplify the representation, a color
code is adopted: well-seen facets are green, poorly-seen facets are orange, and
not-seen facets are red (table 2). On the other hand, the actual digitizing gives
a point cloud which is registered onto the mesh model. For each facet, a cylinder
is created, whose basis is the triangle defining the facet and whose height is
the maximal measurement error.
Table 2. Results for actual and simulated digitizing.

(left: simulation of the digitizing; right: actual digitizing)

A = 0°; B = 90°; z = 0 mm

A = 0°; B = 90°; z = 30 mm

The set of digitized points belonging to the cylinder so defined corresponds to
the actual digitized facet. To compare the actual digitizing to its simulation,
we have to characterize each facet according to the visibility and quality
functions in the same way. In this direction, we consider that a facet is
not-seen if the density of points associated with the facet is less than
5 points/mm²; the facet color is red. For each facet, the geometrical deviations
between the digitized points and the facet are calculated. The associated
standard deviation accounts for the actual digitizing noise. If the noise is
greater than the threshold $\delta_{ad}$ = 0.015 mm, the facet is tagged as
poorly-seen and its color is set to orange. Conversely, if the noise is less than
$\delta_{ad}$, the facet is tagged as well-seen and its color is green. The
results displayed in table 2 bring out the good similarity between simulation and
actual digitizing. This is particularly marked for the trajectory z = 0. However,
some differences exist for which the simulator underestimates the digitizing: a
whole area that appears red in the simulation is green in the actual digitizing
(on the left of the part for the trajectory z = 30 mm, for instance). This is
likely due to the fact that the digitizing noise was evaluated using an artefact
with a specific surface treatment that makes the surface very absorbing, whereas
the part is coated with a white powder that matifies the surface; digitizing is
thus facilitated. Nevertheless, the simulator turns out to be an interesting
predictive tool prior to sensor trajectory planning.
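The facet characterization from the actual point cloud (density threshold of 5 points/mm², noise threshold δ_ad = 0.015 mm) can be sketched as below. The association of points to a facet (the cylinder construction) is assumed to have been done beforehand; the function name and data layout are illustrative.

```python
import math

def tag_facet(vertices, points, density_min=5.0, noise_max=0.015):
    """Tag one facet from actual digitized points (section 4.2 criteria).

    vertices: three 3D points of the triangle; points: digitized points
    already associated with this facet. Returns 'red' (not-seen), 'orange'
    (poorly-seen) or 'green' (well-seen).
    """
    a, b, c = vertices
    u = [b[k] - a[k] for k in range(3)]
    v = [c[k] - a[k] for k in range(3)]
    cross = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
    area = 0.5 * math.sqrt(sum(x * x for x in cross))  # facet area, mm^2
    # not-seen: fewer than 5 points per mm^2 on the facet
    if area == 0 or len(points) / area < density_min:
        return 'red'
    n = [x / (2.0 * area) for x in cross]              # unit normal
    # signed deviation of each point from the facet plane, along the normal
    dev = [sum(n[k] * (p[k] - a[k]) for k in range(3)) for p in points]
    mean = sum(dev) / len(dev)
    std = math.sqrt(sum((d - mean) ** 2 for d in dev) / len(dev))
    # standard deviation of the deviations vs. delta_ad = 0.015 mm
    return 'orange' if std > noise_max else 'green'
```

A densely sampled, low-noise facet comes out green; a dense but noisy one orange; a sparsely sampled one red, mirroring the color code of table 2.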

5 Conclusion

Within the context of on-machine inspection using laser-plane digitizing systems,
sensor trajectory planning is a challenge. To ensure the efficiency of the
measurement, it is necessary to minimize measurement time while ensuring the
quality of the acquired data. The presented work proposes a description format of
a sensor trajectory well-adapted to on-machine inspection on 5-axis machine-tools.
Given a digitizing trajectory, a simulation tool of the acquired data quality is
presented. After an actual digitizing, a good similarity between simulation and
actual digitizing can be observed. The simulator is thus an interesting
predictive tool that can be used to assist in finding the best strategy to
digitize the part with a quality consistent with the geometrical deviations
obtained in milling.

References
1. L. Dubreuil, Y. Quinsat, C. Lartigue, Multi-sensor approach for multi-scale machining defect
detection, Joint Conference on Mechanical, June 2014, Toulouse, France, Research in Interactive
Design, Vol. 4.
2. F. Poulhaon, A. Leygue, M. Rauch, J.-Y. Hascoet, F. Chinesta, Simulation-based
adaptative toolpath generation in milling processes, Int. J. Machining and Machinability of
Materials, 2014, 15(3/4), pp. 263–284.
3. S. Larsson, J.A.P. Kjellander, Path planning for laser scanning with an industrial
robot, Robotics and Autonomous Systems, 2008, 56(7), pp. 615–624.
4. Q. Wu, J. Lu, W. Zou, D. Xu, Path planning for surface inspection on a robot-based
scanning system, IEEE International Conference on Mechatronics and Automation (ICMA),
2015, pp. 2284–2289.
5. A. Bernard, M. Véron, Visibility theory applied to automatic control of 3D complex parts
using plane laser sensors, CIRP Annals-Manufacturing Technology, 2000, 49(1), pp. 113–118.
6. F. Prieto, H. Redarce, R. Lepage, P. Boulanger, Range image accuracy improvement
by acquisition planning, Proceedings of the 12th Conference on Vision Interface (VI'99),
Trois-Rivières, Québec, Canada, 1999, pp. 18–21.
7. S. Son, H. Park, K.H. Lee, Automated laser scanning system for reverse engineering
and inspection, International Journal of Machine Tools and Manufacture, 2002, 42(8),
pp. 889–897.
8. C.C. Yang, F.W. Ciarallo, Optimized sensor placement for active visual inspection,
Journal of Robotic Systems, 2001, 18(1), pp. 1–15.
9. C. Lartigue, Y. Quinsat, C. Mehdi-Souzani, A. Zuquete-Guarato, S. Tabibian, Voxel-
based path planning for 3D scanning of mechanical parts, Computer-Aided Design and
Applications, 2014, 11(2), pp. 220–227.
10. A. Mavrinac, X. Chen, J.L. Alarcon-Herrera, Semiautomatic model-based view planning
for active triangulation 3-D inspection systems, IEEE/ASME Transactions on Mechatronics,
2015, 20(2), pp. 799–811.
11. S. Lavernhe, Y. Quinsat, C. Lartigue, Model for the prediction of 3D surface topography in
5-axis milling, International Journal of Advanced Manufacturing Technology, 2010, 51,
pp. 915–924.
12. M. Mahmud, D. Joannic, M. Roy, A. Isheila, J.-F. Fontaine, 3D part inspection path planning
of a laser scanner with control on the uncertainty, Computer-Aided Design, 2011, 43,
pp. 345–355.
13. C. Mehdi-Souzani, Y. Quinsat, C. Lartigue, P. Bourdet, A knowledge database of qualified
digitizing systems for the selection of the best system according to the application, CIRP
Journal of Manufacturing Science and Technology, 2016, DOI: 10.1016/j.cirpj.2015.12.002.
Tool/Material Interferences Sensibility to
Process and Tool Parameters in Vibration-
Assisted Drilling

Vivien BONNOT1*, Yann LANDON1 and Stéphane SEGONDS1


1
Université de Toulouse; CNRS; USP; ICA (Institut Clément Ader), 3 rue Caroline Aigle,
31400 Toulouse, France.
* Corresponding author. Tel.: +33 561 17 10 72. E-mail address: vivien.bonnot@univ-tlse3.fr

Abstract Vibration-assisted drilling is a critical process applied to high-value
products such as aeronautic parts. This process performs discontinuous cutting and
improves the drilling behavior of some materials, including chip evacuation, heat
generation, mean cutting force... Several research papers have illustrated the differences
between vibration-assisted and conventional drilling, hence demonstrating that
conventional drilling models may not apply. In this process, the cutting conditions
evolve drastically along the trajectory and the tool radius. The tool/material inter-
ferences (back-cutting and indentation) have been proven to contribute significantly to the
thrust force. A method properly describing all rigid interferences is detailed. A lo-
cal analysis of the influence of the tool geometry and process parameters on in-
terferences is presented. The distribution of interferences on the tool surfaces is high-
lighted, and the presence of back-cutting far away from the cutting edge is
confirmed. A comparison is performed in conventional drilling between the pre-
dicted shape of the interferences on the tool surfaces and the real shape of a used
tool. The most interfering areas of the tool surfaces are slightly altered to simulate
a tool grind; the interference results are compared with those of the original tool geometry,
and a significant interference reduction is observed.

Keywords: Vibration-assisted drilling, analytical simulation, interferences,


sensibility analysis.

1 Introduction

The drilling process is performed in the fabrication of countless industrial prod-
ucts. As an example, hundreds of thousands of holes are necessary to assemble aero-
nautic structures. These holes are drilled at the end of the manufacturing pro-
cess on high-value parts. Considering the economic risk, the drilling process has to
© Springer International Publishing AG 2017 313
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_32

be highly reliable and efficient. The vibration assistance improves the drilling be-
havior by forcing the fragmentation of the chip, hence facilitating its evacuation.
This results in improved reliability and efficiency of the process.
The process adds axial vibrations to the conventional drilling trajectory. As a re-
sult, the cutting and interference conditions drastically evolve along the tool radi-
us and the trajectory. Knowing these conditions is required to master the process
[1][2]. Several thrust force models have been proposed, for cutting [3][4], back-
cutting [5], and indentation [3]. Le Dref [3] segmented the tool cutting edge to ap-
ply a different model on each segment, based on the gaps in local cutting conditions.
Ladonne [6] proposed a dynamic model including the tool holder, the source of the
vibrations. Bondarenko [5] considered the back-cutting surfaces as a succession of
cutting edges erasing the material. This study illustrates an alternative approach by
taking the entire tool geometry into consideration.
In this article, the studied process is detailed, and the presence of interferences
far from the cutting edge is illustrated. The inputs (the data used to describe the tool and tra-
jectory parameters) are first described, alongside the outputs: instantaneous and integral
measurements of the interferences. Then, the first interference results in conven-
tional drilling are presented. Subsequently, the influence of the tool and trajectory
parameters with vibrations is estimated around a fixed set of entry parameters. Fi-
nally, the most interfering areas on the tool surface are identified; these areas are
then corrected to simulate a tool grinding, and the interferences are re-evaluated.

2 Process interferences and simulation details

2.1 Difference between clearance angle and clearance profile

Several technologies include vibrations in drilling; each may be categorized using
the following indicators: self-generated [7][8] / forced [9][3] vibrations, high [9]
/ low [3] frequency, high/low amplitude (opposed to frequency), and also more
complex processes using different technologies simultaneously [10]. Under high
frequency and low amplitude, several studies do not even consider interferences
[11][12]. This study mainly focuses on high-amplitude, low-frequency forced
vibrations; such conditions are obtained using a system inside the tool holder in-
cluding a sinusoidal bearing. Under these conditions, the cutting is discontinuous
and interferences are proven to have a significant influence on the process behavior.
Nevertheless, the following strategy may apply to any of the previous categories.

Z_tool(θ) = (f/2π)·θ + (a/2)·sin(W·θ)        (1)

Equation (1) above [13] describes the tool trajectory. The left term expresses
the forward movement: f is the feed rate and θ is the tool angular position. The right
term expresses the oscillations: a is their amplitude and W is their frequency
(osc/rev), closely tied to half the number of lobes in the sinusoidal
bearing. Given this equation and the number of tool teeth, one may represent the
cutting profile (the edge trajectory compared with the corresponding hole bottom left by the
previous cut), as represented in Figure 1.

Fig. 1. Tooth trajectory profiles.
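As an illustration, the discontinuity of the cut can be checked directly from Eq. (1): for a two-tooth drill, the uncut height at a given angular position is the difference between the current tooth trajectory and the surface left by the previous tooth half a revolution earlier. The following sketch, using the parameter values adopted later in this study, is a simplified stdlib-only illustration (function names are ours, not the authors' simulator), and it ignores the fact that the reference surface should be the minimum over all previous passes:

```python
import math

def z_tool(theta, f=0.2, a=0.2, W=1.5):
    """Axial tool position (mm) from Eq. (1): feed term plus oscillation term."""
    return f / (2 * math.pi) * theta + a / 2 * math.sin(W * theta)

def uncut_height(theta, f=0.2, a=0.2, W=1.5, teeth=2):
    """Instantaneous uncut height between a tooth and the surface left by the
    preceding tooth, one tooth pitch earlier in the rotation."""
    dtheta = 2 * math.pi / teeth
    return z_tool(theta, f, a, W) - z_tool(theta - dtheta, f, a, W)

# Sample one revolution: a negative height means the tooth leaves the
# material, i.e. the cut is interrupted and the chip is fragmented.
heights = [uncut_height(2 * math.pi * i / 360) for i in range(360)]
print(min(heights) < 0)   # True -> discontinuous cutting
```

With f = 0.2 mm/rev, a = 0.2 mm and W = 1.5 osc/rev, the oscillation term dominates the feed per tooth, so the tooth periodically exits the material, which is exactly the fragmentation regime targeted by the process.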

Usually, interferences are evaluated by comparing the conventional clear-
ance angle with the trajectory angle. The conventional clearance angle can only describe
the local geometry behind the cutting edge; such an approach considers the clearance
profile to be linear in cylindrical coordinates. Figure 2 illustrates the real cy-
lindrical clearance profiles for several radial positions; these profiles have been
generated for illustration using a tool CAD model.

Fig. 2. Tool CAD Model – Clearance Profiles.

The cutting edge is at the 180° mark. The conventional clearance angle can be
measured from the tangent to these profiles at 180°. The difference between the tangent
and the actual profile becomes significant far away from the cutting edge. Interferences
will also occur far away from that edge. This illustrates the benefits of
considering the entire clearance geometry.
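The drift between the tangent at the cutting edge and the full clearance profile can be illustrated numerically. The profile below is purely hypothetical (a linear clearance plus a small quadratic term), chosen only to show how a tangent-based extrapolation diverges from the real geometry far from the edge; it is not extracted from the tool CAD model of Figure 2:

```python
# Hypothetical clearance profile in cylindrical coordinates:
# phi is the angle behind the cutting edge (deg), z the axial drop (mm).
def profile(phi_deg):
    # linear clearance near the edge plus a quadratic term far from it
    return 0.004 * phi_deg + 0.0002 * phi_deg ** 2

# Conventional clearance angle: slope of the tangent taken at the edge.
tangent_slope = (profile(1.0) - profile(0.0)) / 1.0

for phi in (10, 60, 120):
    linear = tangent_slope * phi                  # tangent extrapolation
    print(phi, round(profile(phi) - linear, 3))   # deviation grows with phi
```

The deviation grows quadratically with the angular distance from the edge, so an interference evaluation based only on the conventional clearance angle misses material that the full profile reveals.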

2.2 Simulation inputs/outputs details

The geometrical evaluation of interferences is based on a Z-level analysis. Addi-
tionally, at each evaluation step, a cutting and an interference volume are removed,
and the geometrical characteristics of the interference for the current step are recorded. The
simulation considers a list of entry parameters. The process parameters are the feed
rate, the oscillation frequency and amplitude, and the tool rotation frequency N. The
tool parameters are angular and distance measurements that can easily be obtained
on a tool. These are used to create a CAD model [13]; the local parameters as
described by the standard may be retrieved through a geometrical analysis of the
CAD model, but they cannot conveniently be used to specify the global
geometry. An extraction of points describing the tool cutting edge and clearance
surface is performed, then the Z-level analysis is carried out: the relative point positions are
analyzed along the tool trajectory and the interfering volumes are extracted. The
following homogeneous rotation matrix is used to move the tool points along the
tool trajectory.

        | cos(θ)  -sin(θ)   0   0                          |
    R = | sin(θ)   cos(θ)   0   0                          |        (2)
        |   0        0      1   (f/2π)·θ + (a/2)·sin(W·θ)  |
        |   0        0      0   1                          |
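The transform of Eq. (2), which combines the rotation about the tool axis with the axial translation of Eq. (1), can be sketched as follows. This is a minimal stdlib-only illustration of moving one extracted tool point along the trajectory, not the authors' implementation:

```python
import math

def tool_transform(theta, f=0.2, a=0.2, W=1.5):
    """Homogeneous 4x4 matrix of Eq. (2): rotation about the tool axis plus
    the axial translation given by Eq. (1)."""
    z = f / (2 * math.pi) * theta + a / 2 * math.sin(W * theta)
    c, s = math.cos(theta), math.sin(theta)
    return [[c,  -s,  0.0, 0.0],
            [s,   c,  0.0, 0.0],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

def move_point(R, p):
    """Apply a 4x4 homogeneous transform to a 3D point (x, y, z)."""
    x, y, z, w = (sum(R[i][j] * v for j, v in enumerate((*p, 1.0)))
                  for i in range(4))
    return (x, y, z)

# A point sampled on the clearance surface, expressed in the tool frame,
# followed along one full revolution of the trajectory.
p = (2.0, 0.0, -0.5)
print(move_point(tool_transform(2 * math.pi), p))
```

Applying this transform to every extracted point at each angular step yields the successive tool positions from which the Z-level analysis extracts the interfering volumes.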

The tool parameters list is exhaustive; as most of them do not impact interfer-
ences, two parameters that situate a specific point P (Figure 2) will be analyzed.
The location of this point influences the clearance profile. H is the height be-
tween this point and the tool tip, and θt is the angular sector between this point and
the tool nose, measured around the tool tip in the xy plane. This study will analyze
the interference behavior locally around the following set of values (Table 1).

Table 1. Set of parameters used in the analysis.

f a W N H θt
0.2 mm/rev 0.2 mm 1.5 osc/rev 2000 rpm 2.5 mm 45°

The outputs are data characterizing the interference volume and how it was
generated. This volume can be presented in two ways: the volume VPart as re-
moved from the part, or the volume VTool as removed by the tool clearance surface.
The first represents the evolution of the interferences over the chip formation cy-
cle, while the latter illustrates the interfering areas on the tool. Three scalar
measurements taken on these volumes are compared: the global volume Vt (which
is the same for both representations), the maximum height Hmt of the tool volume,
and the maximum height Hmp of the part volume, given in Table 2. The simulation
also allows the extraction of the evolution of two instantaneous measurements: the inter-
ference flow-rate (additional volume at each calculation step, divided by the time
step) Q [mm3.s-1] and the projected surface S [mm2]. The subsequent paragraphs
detail the results under the conditions of Table 1.

Table 2. Results in terms of global volume and heights.

Vt            Hmt        Hmp
0.0287 mm3    0.190 mm   0.133 mm

The total interference profile on the tool (Figure 3) highlights that interferences
are concentrated at the common edge between the first and second clearance sur-
faces. This gives the first insights for improving the interference behavior. Furthermore,
the interference flow-rate (Figure 4) reaches its maximum earlier than the project-
ed surface; this gives an insight into the evolution of the interfering conditions,
initially concentrated on a small surface with intense penetration, and then spread
over a larger surface with lower penetration.

Fig. 3. Integral interferences on the tool (left, VTool) and on the part (right, VPart) corresponding to one
chip/tooth; the black line represents the cutting edge of one tooth.

Fig. 4. Evolution of interference flow-rate Q and projected interference surface S.



2.3 Simulation under classical drilling conditions

The results under classical conditions are coherent (Figure 5). The interference
flow-rate profile and the projected surfaces are invariant over time, and the distribu-
tion of the integral interference volume on the tool is similar to the one that can be ob-
served on a used tool: namely, the maximum interfering radius next to the cutting
edge is half of its counterpart on the common edge between the first and second clear-
ance faces (a similar observation can be made under vibratory conditions on the
tool distribution in Figure 3). This phenomenon may only be observed when the integral
clearance geometry is considered, as it is generated by the rotation of the angled clearance
face.

Fig. 5. Aluminum interference residues from conventional drilling, measured (a), and the correspond-
ing cumulated interferences simulated on the tool at the same position (b); different scales, simi-
lar patterns.

3 Evaluation of parameters influence over interferences


characteristics

In order to evaluate the influential parameters around the described conditions, the lo-
cal partial variabilities of the outputs were evaluated. The results are presented in
Table 3. The analysis was conducted on three of the process parameters and two of
the tool parameters. For clarity, units are not detailed.

Table 3. Results of local partial variabilities of the outputs.

df da dW dH dθt
(mm/rev) (mm) (osc/rev) (mm) (deg)
dVt / d… (mm3)/(…) 0.403 0.0195 0.0886 -0.062 0.0004
dHmt / d… (mm)/(…) 1.161 -0.019 0.214 -0.0043 0.0008
dHmp / d… (mm)/(…) 0.668 -0.002 -0.082 0.0001 0.00001

In order to interpret these results, these local variabilities are used con-
sidering a 5% variation in most entry parameters. The percentage variation of the outputs can
then be expressed (Table 4). A 1% variation was considered for the oscillation frequency,
as it corresponds to the uncertainty of the frequency according to the results of Jallageas
[14].

Table 4. Variabilities of the outputs considering parameters variation.

              df 5%      da 5%   dW 1%      dH 5%   dθt 2%
              (mm/rev)   (mm)    (osc/rev)  (mm)    (deg)
dVt (mm3)     14%        < 1%    < 1%       20%     1.4%
dHmt (mm)     7%         < 1%    < 1%       < 1%    < 1%
dHmp (mm)     5%         < 1%    < 1%       < 1%    < 1%

The feed rate and the height of the CAD parametric point have a significant in-
fluence on the interfered volume around these local values. These results may
drastically change under other local conditions, closer to or further away from chip
fragmentation, and must be taken as an example. For instance, the amplitude will
have an influence at some point, as it determines the fragmentation.
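The feed rate entry of Table 4 can be recovered from the local variability of Table 3 by first-order propagation: the percentage variation of an output is approximately its partial variability times the parameter step, divided by the nominal output. The sketch below is a quick sanity check for the df column, not the authors' exact procedure (the other columns follow the same logic):

```python
# First-order estimate of an output's percentage variation from its local
# partial variability (Table 3), illustrated for the feed rate column.
Vt = 0.0287            # nominal interfered volume [mm^3] (Table 2)
f = 0.2                # nominal feed rate [mm/rev] (Table 1)
dVt_df = 0.403         # local variability dVt/df [(mm^3)/(mm/rev)] (Table 3)

delta_f = 0.05 * f                      # a 5 % variation of the feed rate
pct = dVt_df * delta_f / Vt * 100.0     # percentage variation of Vt
print(round(pct))                       # -> 14, the Table 4 entry for dVt/df
```

The same propagation applied to the amplitude column (0.0195 · 0.01 / 0.0287) stays below 1%, consistent with Table 4.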

4 Influence of tool geometry, tool grind to reduce interference


volume

Considering the previous results regarding the cartography of the integral interference
on the tool, most of the interference is carried by the common edge between the
first and second clearance surfaces. The CAD model allows us to easily modify that
edge by changing the height of the red marked point, namely changing the value
of H. The vertical distance between the tool tip and the considered point has been
increased by 0.1 mm. As expected, the interfered volume is reduced significantly
(18%) for a minimal tool modification (Table 5). However, the maximum heights
remain unchanged. The maximum interference height on the part remains tied to the
process parameters. As for the maximum interference height on the tool, its invar-
iance suggests that the interfered volume is reduced mostly through the interfering sur-
face.

Table 5. Results in terms of global volume and heights after tool modification.

Vt grind Hmt grind Hmp grind


0.0234 mm3 0.190 mm 0.133 mm

Conclusion

This study demonstrated the importance of considering the integral geometry of
the tool while evaluating interferences. The feed rate and the edge between
the first and second clearance faces significantly influence the interference vol-
ume; however, the absence of influence of the vibration amplitude in our local
analysis highlights that the influence of process/tool parameters
may vary greatly depending on the local values. Finally, a slight change in the
tool clearance face can have a drastic impact on the interference volume, and thus
on the thrust forces. Further process testing must be conducted with different tool
geometries to corroborate these results.

References

1. L. ZHANG, L. WANG, X. WANG, "Study on vibration drilling of fiber reinforced plastics
with hybrid variation parameters method", Composites: Part A, Elsevier, 34 (2003) 237–244
2. X. WANG, L.J. WANG, J.P. TAO, "Investigation on thrust in vibration drilling of fiber-
reinforced plastics", J. of Materials Processing Technology, Elsevier, 148 (2004) 239–244
3. J. LE DREF, "Contribution à la modélisation du perçage assisté par vibration et à l'étude de
son impact sur la qualité d'alésage. Application aux empilages multi-matériaux.", Ph.D. Thesis,
Université de Toulouse, 2014
4. O. PECAT, I. MEYER, "Low Frequency Vibration Assisted Drilling of Aluminium Alloys",
Advanced Materials Research, Trans Tech Publications, 779 (2013) 131–138
5. D. BONDARENKO, "Etude mésoscopique de l'interaction mécanique outil/pièce et contribution
sur le comportement dynamique du système usinant", Ph.D. Thesis, 2010
6. M. LADONNE, M. CHERIF, Y. LANDON, J.Y. K'NEVEZ, O. CAHUC, C. DE
CASTELBAJAC, "Modelling The Vibration-Assisted Drilling Process: Identification Of
Influencial Phenomena", Int. J. of Advanced Manufacturing Technology, Vol 40, 1-11, 2009
7. N. GUIBERT, H. PARIS, J. RECH, C. CLAUDIN, "Identification of thrust force models for
vibratory drilling", Int. J. of Machine Tools & Manufacture, Elsevier, 49 (2009) 730–738
8. G. MORARU, "Etude du comportement du système 'Pièce-Outil-Machine' en régime de
coupe vibratoire", Ph.D. Thesis, 2002
9. A. BOUKARI, "Modélisation des actionneurs piézoélectriques pour le contrôle des systèmes
complexes", Ph.D. Thesis, 2010
10. K. ISHIKAWA, H. SUWABE, T. NISHIDE, M. UNEDA, "A study on combined vibration
drilling by ultrasonic and low-frequency vibrations for hard and brittle materials", Precision
Engineering, Elsevier Science, 22 (1998) 196–205
11. L.-B. ZHANG, L.-J. WANG, X.-Y. LIU, H.-W. ZHAO, X. WANG, H.-Y. LUO, "Mechanical
model for predicting thrust and torque in vibration drilling fiber-reinforced composite materials",
Int. J. of Machine Tools & Manufacture, Pergamon, 41 (2001) 641–657
12. J.A. YANG, V. JAGANATHAN, R. DU, "A new model for drilling and reaming processes",
Int. J. of Machine Tools & Manufacture, Pergamon, 42 (2002) 299–311
13. S. LAPORTE, J.Y. K'NEVEZ, O. CAHUC, P. DARNIS, "A Parametric Model Of Drill Edge
Angles Using Grinding Parameters", Int. J. of Forming Processes, 10.4, 411-428, 2007
14. J. JALLAGEAS, J.Y. K'NEVEZ, M. CHERIF, O. CAHUC, "Modeling and Optimization of
Vibration-Assisted Drilling on Positive Feed Drilling Unit", Int. J. of Advanced Manufacturing
Technology, Vol 67, 1205-1216, 2012
Implementation of a new method for robotic
repair operations on composite structures

Elodie PAQUET 1, Sébastien GARNIER 1, Mathieu RITOU 1, Benoît FURET 1,
Vincent DESFONTAINES 2

1. UNIVERSITY OF NANTES: Laboratoire IRCCyN (UMR CNRS 6597), IUT de Nantes, 2
avenue du Professeur Jean Rouxel, 44470 Carquefou
2. EUROPE TECHNOLOGIES, 2 rue de la fonderie, 44475 Carquefou Cedex
* Corresponding authors. E-mail addresses: elodie.paquet@univ-nantes.fr,
sebastien.garnier@univ-nantes.fr, mathieu.ritou@univ-nantes.fr, benoit.furet@univ-
nantes.fr, v.desfontaines@europechnologies.com

Abstract

Composite materials are nowadays used in a wide range of applications in the aero-
space, marine, automotive, surface transport and sports equipment markets. For
example, all of an aircraft's composite parts have the potential to incur damage and
therefore require repairs. These impacts can affect the mechanical behavior of the
structure in different ways: adversely, irreversibly and, in some cases, through
progressive damage. It is therefore essential to intervene quickly on these parts to
make the appropriate repairs without immobilizing the aircraft for too long.
The scarfing repair operation involves machining or grinding away successive ply
layers from the skin to create a tapered or stepped-dish scarf profile around the
damaged area. After the scarf profile is machined, the composite part is restored
by applying multiple ply layers with the correct thickness and orientation to re-
place the damaged area. Once all the ply layers are replaced, the surface is heated
under a vacuum to bond the new material. The final skin is ground smooth to re-
store the original design of the part. Currently, the scarfing operations are per-
formed manually. These operations involve high costs due to the required precision,
health precautions and a lack of repeatability. In these circumstances, the use of
automated solutions for the composite repair process could bring accuracy and
repeatability and reduce the repair time. The objective of this study is to provide
a methodology for an automated repair process of composite parts representative
of primary aircraft structures.

Keywords: Robotic machining, Composite repair, Repair of structural composite


parts, machining process.

© Springer International Publishing AG 2017 321


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_33

1 Introduction

Composite materials are nowadays used in a wide range of applications in the aero-
space, marine, automotive, surface transport and sports equipment markets [1].
For example, all of an aircraft's composite parts have the potential to incur damage and
therefore require repairs. These impacts can affect the mechanical behavior of the
structure in different ways: adversely, irreversibly and, in some cases, through
progressive damage. It is therefore essential to intervene quickly on these parts to
make the appropriate repairs without immobilizing the aircraft for too long.

There are two main repair techniques, referred to as scarf and lap
(see Figure 1). In the scarf technique, the repair material is inserted into the lami-
nate in place of the material removed due to the damage. In the lap technique, the
repair material is applied either on one or on both sides of the laminate over the
damaged area [2].

Fig. 1. Stepped lap and scarf main repair techniques

To perform and automate these repair operations on CFRP components, a light-
weight, portable manipulator based on a collaborative robot has been designed and de-
veloped during the "COPERBOT" project.

Fig. 2 Robotic solution developed for composite material repair in the COPERBOT project

The aim of this project is the development of an integrated process chain for a fast,
low-cost, automated and reproducible repair of high-performance fiber composite
structures with a collaborative robot.

This platform will be mountable on aircraft structures even in the field, which allows
repairs without disassembly of the part itself. Consequently, a faster, more
reliable and fully automated composite repair method becomes possible for the
aeronautical and nautical industries. The objective of this article is to propose a new
method to automate the repair process of composite parts, for example monolithic
CFRP laminate plates representative of primary aircraft structures.

This article is based on industrial examples from the collaborative “COPERBOT”


project.

1 Development of a robotic repair method for composite structure

The repair sequence for an impact on a composite structure consists of the steps listed
below:

1. Setting laws for scarfing: define the laws for scarfing according to the
characteristics of the repaired part (stacking sequence, ply thickness...)
2. 3D and NDT scanning of the damaged area: reconstruct the damaged area in 3D
3. Scarf or stepped lap profile machining of the damaged area: remove the broken plies
4. Cleaning: ensure optimal bonding
5. Ply cutting: cut the plies for the repair
6. Draping: strengthen the damaged area
7. Polymerization: guarantee the pressure/vacuum conditions
8. Finishing: recover the initial surface condition
9. 3D and NDT scanning of the affected area: check the geometry and quality of the
repaired area

Automated operations
Fig. 3 Repair sequence for an impact on a composite structure

The work carried out in the COPERBOT project is limited for the moment to
repair tests on monolithic composite prepreg Hexply 8552 - AGP280-5H with a
draping plan [0,45,0,45,0,0,45,0,45,0]. This type of plate is representative of the
materials, thicknesses and stacking sequences found in primary aircraft structures
such as aircraft radomes.

2 Surface generation by 3d-scanning

Most composite structures present in an airplane, such as the radome, have curved
shapes. It is therefore necessary to perform a 3D scan of the surface in the area to
be repaired, in order to recover the surface normal and adjust the machining
trajectory.

Fig. 4. Example of a stepped lap on a convex part.

The first step in our robotic repair method is to reconstruct the surface, to prepare
the stepped lap trajectory of the damaged area, with a laser sensor mounted on the
6th axis of the robot. The method adopted to reconstruct the surface is to scan
it with the robot following a regular mesh defined by three points given by the
operator. By combining the position of the robot and the information given by a
distance sensor (a line laser), we can then reconstruct the surface of the damaged area.
Three typical examples are shown in Fig. 5:

Fig. 5. Surface reconstructed by laser sensor fixed on a robot
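The combination of robot position and laser reading described above can be sketched as a chain of homogeneous transforms. The function below is an illustrative stdlib-only sketch: the matrix layout and the assumption that the laser measures along the sensor z axis are ours, not the actual COPERBOT interfaces:

```python
def to_base_frame(flange_pose, sensor_offset, distance):
    """Express one laser measurement in the robot base frame.

    flange_pose and sensor_offset are 4x4 homogeneous matrices (robot flange
    in the base frame, sensor in the flange frame); the laser is assumed to
    measure along the sensor z axis.
    """
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]

    sensor_in_base = matmul(flange_pose, sensor_offset)
    point = [0.0, 0.0, distance, 1.0]          # measured point, sensor frame
    return [sum(sensor_in_base[i][j] * point[j] for j in range(4))
            for i in range(3)]

# Trivial check: with both poses at the identity, a 150 mm reading lies
# 150 mm along the base z axis.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
print(to_base_frame(identity, identity, 150.0))   # -> [0.0, 0.0, 150.0]
```

Accumulating such points over the regular scan mesh yields the cloud from which the damaged surface is reconstructed.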



3 3D Scarf calculation and milling trajectory.

From the data recovered on the reconstructed surface, machining path trajectories
are calculated to create the appropriate scarf geometry on the surface. This patch
needs to be draped mathematically on the surface, otherwise it would not fit
later in the scarf, especially for parts with a smaller radius [2][5]. Based on
this 3D scarf definition, the final milling trajectory is calculated, taking into
account different cutter types (shank or radius) as well as the stability of the part
during the milling process. Two typical trajectories are shown in Fig. 6:

Fig. 6 Two trajectories for a stepped lap.
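Aligning the milling trajectory with the curved skin requires the surface normal at each node of the reconstructed mesh. A minimal sketch, assuming a regular height grid (the sample grid below is hypothetical, not scan data from the project), estimates the unit normal by central finite differences:

```python
def normal(grid, i, j, step):
    """Unit normal of a height grid z = grid[i][j] via central differences."""
    dzdx = (grid[i + 1][j] - grid[i - 1][j]) / (2 * step)
    dzdy = (grid[i][j + 1] - grid[i][j - 1]) / (2 * step)
    n = (-dzdx, -dzdy, 1.0)
    norm = sum(c * c for c in n) ** 0.5
    return tuple(c / norm for c in n)

step = 1.0                                     # mesh pitch [mm]
# Hypothetical convex cap, mimicking a shallow curved skin.
grid = [[0.01 * (x * x + y * y) for y in range(5)] for x in range(5)]
print(normal(grid, 2, 2, step))
```

Orienting the tool axis along this normal at every trajectory point is what allows the scarf steps to follow the curvature of the part.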

4 Stepped lap milling

To evaluate the optimum conditions for different repair materials, two types of
tools and three types of material were used. The repairs were made by the stepped lap
technique [9]. The selected cutting conditions are listed in the table below:

Tools:                PCD Ø 10 mm     Carbide tool Ø 10 mm
Rotation speed        19250 rpm       12000 rpm
Cutting speed         604.45 m/min    376.8 m/min
Feed per revolution   0.25 mm/rev     0.25 mm/rev
Cutting depth         0.10 mm         0.10 mm
Width of each step    20 mm           20 mm

Fig. 7 Cutting conditions selected for tests.
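The cutting speeds in the table follow from the peripheral speed formula Vc = π·D·N; the table's figures appear to have been computed with π rounded to 3.14, which explains the small difference from the values obtained with full precision below:

```python
import math

def cutting_speed(diameter_mm, rpm):
    """Peripheral cutting speed Vc [m/min] = pi * D * N."""
    return math.pi * diameter_mm / 1000.0 * rpm

print(round(cutting_speed(10, 19250), 1))   # PCD tool, ~604.45 m/min in the table
print(round(cutting_speed(10, 12000), 1))   # carbide tool, ~376.8 m/min in the table
```

For instance, 3.14 · 10 mm · 12000 rpm / 1000 gives exactly the 376.8 m/min listed for the carbide tool.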



The test results recommend using a polycrystalline diamond (PCD) tool for the
machining of the stepped lap. This type of cutter is designed to withstand the abrasive
nature of the composite material.

Fig. 8 Stepped lap composite part produced by the robot

To limit the defects created by the machining forces, two types of parameters
were tested to determine the most suitable for our application: on one side, those
associated with the tool; on the other, those related to the cutting conditions (feed per
revolution, cutting speed, direction of the fibers in relation to the feed direction...) [2].
The machining paths were chosen with a view to subsequently studying the influence of
fiber orientation on the cutting forces [5].

5 Metrological control of the surfaces obtained by stepping, to optimize the
process conditions

Optical 3D measurement was used to control the removed ply depths and the surface
roughness obtained by robotic machining on our test plates.

Fig. 9 Metrological controls of the stepped lap realized in a composite part produced by the robot

Micrographic examination of the surface topography and profile control
by a coordinate measuring machine on the machined steps show an accuracy
of a tenth of the machined depth on each step for both tests. Robotic
scarfing performed by the step technique with a PCD tool achieves a low roughness (Ra
of approximately 21) without delamination at the contour of the step
levels.

Fig. 10 Analysis of the surface quality obtained by the PCD tool.

The tests have shown that the quality of the automatic repair is at least
as good as that of a repair manually executed by skilled repairmen.
Even for simple repairs, the robotic scarfing process has been shown to be
two times more efficient than a manual process.

6 Conclusions

This article presents a scientific view of the problem of composite repairs
and proposes robotized solutions to achieve standardized excavation of the damaged area.
Through testing, we found that PCD tools associated with certain operating
conditions make it possible to achieve the desired quality level for the preparation of the
repair area. The approach of 3D surface scanning and path projection was validated
by measuring the quality of the realized scarfs. The analysis of the scarfing tests
performed on the robot makes it possible to consider finalized cobot-type robotic
solutions for the repair of composite parts on ships or airplanes, through interventions
directly on the operation sites.

However, additional tests must be conducted to validate the proposed
methodology, including the mechanical characterization of the repaired interface
and the analysis of the structural strength of the repair through strength and fatigue
testing of various repaired specimens.

7 Acknowledgements

We want to thank Mrs. Rozenn POZEVARA (R&D Composites Project Manager)
at EUROPE TECHNOLOGIES for providing us with the necessary material for
testing, and also the ET/IRCCyN partnership for the "COPERBOT" project,
funded by BPI France, which uses robotic means from the Equipex ROBOTEX
project, as well as the members of the M02P-Robotics team and CAPACITY SAS for
testing and metrological analyses.

8 References

1. B. FURET, B. JOLIVEL, D. LE BORGNE, "Milling and drilling of composite materials for
the aeronautics", Revue internationale JEC Composites, No. 18, June-July 2005
2. A. EDWIN, E. LESTER, "Automated Scarfing and Surface Finishing Apparatus for Complex
Contour Composite Structures", American Society of Mechanical Engineers, Manufacturing
Engineering Division, MED 05/2011; 6.
3. S. GOULEAU, S. GARNIER, B. FURET, "Perçage d'empilages multi-matériaux : composites
et métalliques", Mécanique et Industries, 2007, vol. 8, No. 5, p. 463-469.
4. A. MONDELIN, B. FURET, J. RECH, "Characterisation of friction properties between a laminated
carbon fibres reinforced polymer and a monocrystalline diamond under dry or lubricated
conditions", Tribology International, Vol. 43, p. 1665-1673, 2010.
5. B. MANN, C. REICH, "Automated repair of fiber composite structures based on 3d-scanning
and robotized milling", Deutscher Luft- und Raumfahrtkongress, 2012.
6. C. DUMAS, S. CARO, M. CHERIF, S. GARNIER, M. RITOU, B. FURET, "Joint stiffness
identification of industrial serial robots", Robotica, 2011, pp. 1-20, [hal-00633095].
7. C. DUMAS, A. BOUDELIER, S. CARO, B. FURET, S. GARNIER, M. RITOU, "Development
of a robotic cell for trimming of composite parts", Mechanics & Industry 12, 487–494
(2011), DOI: 10.1051/meca/2011103
8. A. BOUDELIER, M. RITOU, S. GARNIER, B. FURET, "Optimization of Process Parameters in
CFRP Machining with Diamond Abrasive Cutters", Advanced Materials Research (Volume
223), 774-783 (2011), DOI: 10.4028/www.scientific.net/AMR.223.774
9. A.A. BAKER, "A Proposed Approach for Certification of Bonded Composite Repairs to
Flight-Critical Airframe Structure", Applied Composite Materials, DOI: 10.1007/s10443-010-
9161-z
10. B. WHITTINGHAM, A.A. BAKER, A. HARMAN, D. BITTON, "Micrographic studies
on adhesively bonded scarf repairs to thick composite aircraft structure", Composites: Part
A 40 (2009), pp. 1419–1432
11. A.J. GUNNION, I. HERSZBERG, "Parametric study of scarf joints in composite structures",
Composite Structures, Volume 75, Issues 1-4, September 2006, pp. 364-376
12. C. BONNET, G. POULACHON, J. RECH, Y. GIRARD, J.P. COSTES, "CFRP drilling: Fundamental
study of local feed force and consequences on hole exit damage", International Journal
of Machine Tools and Manufacture, 2015, 94, pp. 57-64.
CAD-CAM integration for 3D Hybrid
Manufacturing

Gianni Caligiana1, Daniela Francia1 and Alfredo Liverani1


1 University of Bologna, v.le Risorgimento 2, Bologna, 40136, Italy
* Corresponding author. Tel.: +390512093352; fax: +390512093412. E-mail address: d.francia@unibo.it

Abstract Hybrid Manufacturing (HM) is oriented to combining the advantages of additive manufacturing, such as few limits in shape reproduction, good customization of parts, distributed production, minimization of production costs and minimization of waste materials, with the advantages of subtractive manufacturing in terms of finishing properties and accuracy of dimensional tolerances. In this context, our research group presents a design technique based on data processing that switches between additive and subtractive procedures in order to optimize the cost and time of product manufacturing.
Component prototyping may be performed by combining different stages (addition, rough milling, fine milling, deposition…) with different parameters and heads/nozzles, and the system is able to work with different materials both in additive and in milling operations.
The present paper introduces different strategies, or in other terms, different combinations of machining features (additive or subtractive) and different materials to complete a prototype model or mold. The optimization/analysis software is fully integrated in a classic CAD/CAM environment to better support the design and engineering processes.

Keywords: Hybrid manufacturing; CAD; CAM; Process design; Multimaterial manufacturing.

1 Introduction

During the last decade, intensive research efforts in Rapid Prototyping (RP) focused on Additive Manufacturing (AM) techniques because of their efficiency in terms of time and cost reduction in product development and manufacturing. AM enables the production of complex structures directly from 3D CAD models in a layer-by-layer process using metals, polymers, and composite materials.

© Springer International Publishing AG 2017 329


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_34

A large number of additive processes are now available [1]. They differ in the way layers are deposited to create parts and in the materials that can be used. Among them, a promising technique is 3D Printing, which has its roots in ink-jet printing technology: a printer head lays down small beads of material, which harden immediately to form layers. A thermoplastic filament or metal wire wound on a coil can be unreeled to supply material to the extrusion nozzle head. The nozzle head heats the material and turns the flow on and off. Each layer deposited can be seen as a thinly sliced horizontal cross-section of the eventual object, and each cross-section can be extremely detailed.
However, the performance of 3D Printers (3DPs) often has to be verified: dimensional accuracy, feature size, geometric and dimensional tolerances and surface roughness are weak points of 3DPs [2, 3].
On the other hand, subtractive processes have several advantages that overcome the limits mentioned above. In subtractive processes, a piece of raw material is cut into a desired final shape by a controlled material-removal process using machine tools.
A promising approach is to combine the two processes, in order to gain the advantages of both additive and subtractive techniques, depending on the piece to be produced. Automation is the key to making subtractive prototyping competitive with additive methods, but it has to face the translation of the CAD model into tool paths for milling machinery.
Since the 1990s, the concept of the hybrid 3D printer has emerged, merging additive and subtractive techniques in one machine. Combining the benefits of milling and 3D printing in one unit, these machines may break through barriers experienced by design engineers, especially those inherent to the limitations in surface finish and precision of 3DPs alone. Hybrid 3DPs (H3DPs) produce pieces ready to go right out of the machine, with no need for a separate milling operation, and guarantee dimensional accuracy and quality standards that are difficult to achieve otherwise.
One remaining limitation of H3DPs is their build volume: usually, they can produce components ranging in size from several centimeters up to about one meter.
Besides, a bottleneck in the integration of CAD and CAM systems, from the 1990s to now, has been their implementation in open source environments [4-5]. The open source development model encourages collaborative work to enhance CAD-CAM/CNC integration tools, and several efforts tend towards the development of more efficient integration platforms in open environments.
In this context, the challenge is to overcome the constraints of common hybrid 3D printers and to optimize the interchange of additive and subtractive techniques by means of automatic tools and management software that can be implemented in an open environment.
This goal motivated our research group to design a 3D hybrid-layered manufacturing printer, among the largest in build volume, comprising both a milling unit and a layer-deposition unit. In order to integrate CAD and CAM communication, management software has been developed, starting from CAD and CAM software available in the open source environment.
Starting from the analysis of the requirements concerning the dimensions and accuracy of a piece, this approach evaluates the possible manufacturing combinations of additive and subtractive technologies, seeking the ideal one in terms of processing time, processing waste and materials employed [6].
Generally, a typical sequencing of the proposed process development can be summarized as follows:

1. when a new piece is assigned, the part features have to be recognized and classified before the manufacturing process begins, depending on its function: it could be a mold or a model part [7];
2. when the part is a mold, an inner core is prepared by means of a rough cut of a starting block and, upon this support, the software manages the deposition of material in order to complete the part up to its final shape;
3. when the part is a model, the deposition of rough layers of foam is set up, in order to obtain a piece near to the final shape, to be later refined by milling up to the desired shape of the part;
4. later operations, by means of spray techniques, may follow in order to finish parts with a desired surface material or to paint them;
5. the manufactured part, whether it is a mold or a model, can, if needed, be further refined by milling operations in order to achieve good finishing properties.
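The sequencing above can be sketched in a few lines of Python. This is only an illustration of the branching logic, not the authors' software; the function name and step descriptions are hypothetical:

```python
# Illustrative sketch of the hybrid-process sequencing described above.
# "+" marks an additive step, "-" a subtractive one.

def plan_sequence(part_kind: str, needs_coating: bool = False,
                  needs_finishing: bool = True) -> list:
    """Return an ordered list of hybrid manufacturing steps."""
    if part_kind == "mold":
        # rough-cut an inner core, then deposit the external skin on it
        steps = ["- rough milling of inner core block",
                 "+ deposition of external material"]
    elif part_kind == "model":
        # deposit rough foam layers, then mill down to the final shape
        steps = ["+ rough foam layer deposition",
                 "- milling to target shape"]
    else:
        raise ValueError(f"unknown part kind: {part_kind}")
    if needs_coating:
        steps.append("+ spray coating / painting")          # step 4 above
    if needs_finishing:
        steps.append("- fine milling / surface finishing")  # step 5 above
    return steps

print(plan_sequence("mold"))
```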
The following paragraphs describe the design technique in more detail, starting from the method adopted, moving to the description of the equipment, and finally presenting an estimate of the gains in time and cost yielded by this promising technique.

2 The Method

The novel design approach we propose is targeted at exploiting the benefits of both additive and subtractive manufacturing. Our aim is to perform data processing able to switch between additive and subtractive procedures, enabling the manufacturing of products of any shape and combinations of different materials for the optimal manufacturing of products, in terms of cost and time reduction, also viable for small-quantity production. The data processing is implemented exploiting the open source environment.
In this section, we describe the sequence of operations that can be interchanged in order to optimize the piece manufacturing by means of hybrid techniques, taking as target the realization of a mold.

The mold can be realized in two parts: an inner core, which can be roughly shaped, and an external surface that has to be carefully defined.
This makes it possible to reduce the external material to the minimum necessary and to maximize the internal material core, in order to save material costs and weight. When such an optimization can be adopted, additive and subtractive technologies can be combined to perform the manufacturing process. The inner support of the mold is prepared by milling a raw block up to the desired shape and, upon it, a minimal deposition of the external material is calculated, in order to reduce the time and cost of the total operation. A further finishing of the surfaces of the mold can be provided.
Figure 1 shows an example of how the procedure, starting from a model, leads to the definition of a mold made of different parts. The mold in the figure is made of two different materials and is realized through two different manufacturing processes.

Fig. 1. The hybrid manufacturing for mold application.

The driving concept is that a part, even one of complex shape, rather than being produced as a whole, can be realized as the decomposition of an external thin surface deposited upon a pre-prepared support. As input for the manufacturing, the H3DP requires a CAD model of the mold, from which it extracts the CAD model of an appropriate support. Figure 2 shows the support generation phase in which, from the geometric model G1, the geometric model G2 is calculated.

Fig. 2. The geometric model extraction for the inner support.

This support can be milled starting from a raw block made of a filling material, such as polystyrene [8].
Then, as shown in Figure 3, it is possible to complete the mold shape by additive manufacturing. In order to add material to the support up to the final shape, the RP machine requires a further CAD model, which can be obtained by comparing the final shape of the mold to the inner core support geometry.
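The extraction of the support model G2 and of the deposition volume can be sketched with a voxel representation, a deliberate simplification of the CAD-based procedure described above (grid size and skin thickness here are illustrative):

```python
import numpy as np

# Voxel-based sketch of the geometry split described above: G1 is the
# final mold volume, G2 the inner support obtained by shrinking G1 by
# the skin thickness; the material to deposit is the difference G1 - G2.

def erode(volume: np.ndarray, layers: int) -> np.ndarray:
    """Naive morphological erosion: a voxel survives only if its whole
    6-neighbourhood stays inside the solid, repeated `layers` times."""
    v = volume.copy()
    for _ in range(layers):
        core = v.copy()
        core[1:, :, :]  &= v[:-1, :, :]
        core[:-1, :, :] &= v[1:, :, :]
        core[:, 1:, :]  &= v[:, :-1, :]
        core[:, :-1, :] &= v[:, 1:, :]
        core[:, :, 1:]  &= v[:, :, :-1]
        core[:, :, :-1] &= v[:, :, 1:]
        v = core
    return v

g1 = np.zeros((20, 20, 20), dtype=bool)
g1[2:18, 2:18, 2:18] = True          # final mold volume G1 (a block)
g2 = erode(g1, layers=2)             # inner support G2 (milled from foam)
deposit = g1 & ~g2                   # external skin added by deposition
```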

Fig. 3. The model for the layer manufacturing process.

However, after the layered manufacturing, the piece obtained could require further finishing operations, as shown in Figure 4, in order to meet roughness values that layer deposition cannot guarantee.

Fig. 4. The surface roughness produced by layered deposition must be removed to attain the final shape of the mold.

Thus, the mold is obtained by sequences that switch between additive and subtractive manufacturing. The sequence of these operations can be summarized as in Figure 5, where the symbol + refers to additive manufacturing and the symbol – to subtractive manufacturing.

Fig. 5. The sequences of the hybrid manufacturing process


Otherwise, for the production of model parts, additive layered manufacturing can be employed to roughly form a part, to be later refined by milling in order to reach the final desired shape, with good tolerances and roughness accuracy. In any case, the integration between additive and subtractive manufacturing strictly depends on the integration between the CAD and CAM tools that support the manufacturing processes.

For this purpose, our research group also developed control software able to simulate CNC machining on the block, in order to detect errors, potential collisions, or areas of inefficiency. This makes it possible to correct errors before the program is loaded on the CNC machine, thereby eliminating manual prove-outs.
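One of the error classes such a pre-load simulation can catch — a programmed move outside the machine envelope — can be illustrated with a toy G-code checker. This is a sketch only, not the authors' control software; the envelope matches the 5000 × 3000 × 2000 mm working volume quoted later in the paper:

```python
# Toy G-code dry-run check: parse linear moves (G0/G1) and flag any
# coordinate outside the machine envelope before the program is loaded.

ENVELOPE = {"X": (0.0, 5000.0), "Y": (0.0, 3000.0), "Z": (0.0, 2000.0)}

def check_gcode(lines):
    errors = []
    for n, line in enumerate(lines, start=1):
        words = line.split(";")[0].split()          # strip comments
        if not words or words[0] not in ("G0", "G1"):
            continue                                # ignore other commands
        for w in words[1:]:
            axis, value = w[0], float(w[1:])
            lo, hi = ENVELOPE.get(axis, (float("-inf"), float("inf")))
            if not lo <= value <= hi:
                errors.append((n, axis, value))
    return errors

program = ["G0 X100 Y100 Z50", "G1 X5100 Y200 ; out of range", "M30"]
print(check_gcode(program))   # → [(2, 'X', 5100.0)]
```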

3 The Equipment

The equipment arranged in our laboratories is able to work as an additive and subtractive manufacturing system at the same time; our research group developed the software that supports this system. It is open source software able to translate and interconnect different programming languages in order to coordinate the different functions of the system: it includes a 3D slicer and a CNC/CAM module, fully integrated with the CAD software. Thanks to open source CAD/CAM software, it is possible to design the CAD geometry, perform multi-physics simulations to optimize the design, and generate the G-code, ready for 3D printing and milling [9].
The hybrid 3D printing process begins with the modelling of a part by means of CAD software. This is open source software developed starting from the FreeCAD architecture. FreeCAD is one of the most promising open source 3D-CAD packages focused on mechanical engineering and product design. It is feature-based and parametric, with 2D sketch input and a constraint solver, and it supports B-rep, NURBS, Boolean operations and fillets.
The subtractive process is managed by the integration of a milling module based on the FreeMill architecture. FreeMill is a module for programming CNC mills and routers. It creates one type of tool path, called parallel milling, in which the cutter is driven along a series of parallel planes to machine the part geometry. It runs full cutting and material simulation of the tool path and outputs the G-code to the machine tool.
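What a parallel-milling strategy computes can be sketched as follows, assuming the part geometry has been sampled as a height map; the function and parameter names are hypothetical, not FreeMill's actual interface:

```python
import numpy as np

# Sketch of a parallel (zigzag) milling strategy: the cutter sweeps back
# and forth along X on parallel Y-planes, following the part height map.

def parallel_toolpath(heights: np.ndarray, step_mm: float,
                      clearance: float = 0.0):
    """heights[i, j]: part height at x = i*step, y = j*step.
    Returns the ordered (x, y, z) cutter locations."""
    path = []
    nx, ny = heights.shape
    for j in range(ny):                  # one pass per parallel plane
        xs = range(nx) if j % 2 == 0 else range(nx - 1, -1, -1)  # zigzag
        for i in xs:
            path.append((i * step_mm, j * step_mm, heights[i, j] + clearance))
    return path

hm = np.zeros((4, 3))
hm[1:3, 1] = 10.0                        # a raised feature on the stock
path = parallel_toolpath(hm, step_mm=5.0)
```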
The slicing module has been developed, starting from the software Slic3r, in order to give instructions to the RP machine to produce the desired part. It converts a digital 3D model into printing instructions for the 3D printer: it cuts the model into horizontal slices (layers), generates the tool paths to fill them and calculates the amount of material to be extruded.
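The amount of material to extrude follows from a simple volume balance per path segment, shown below as a back-of-the-envelope sketch (a common simplification: Slic3r's real flow model also accounts for the rounded road cross-section):

```python
import math

# The deposited road volume must equal the volume of filament fed in,
# so the extruder advance per segment follows from a volume balance.

def filament_length(path_mm: float, layer_h: float, road_w: float,
                    filament_d: float = 1.75) -> float:
    road_volume = path_mm * layer_h * road_w          # rectangular road
    filament_area = math.pi * (filament_d / 2) ** 2   # filament cross-section
    return road_volume / filament_area                # mm of filament to feed

# e.g. a 100 mm segment, 0.2 mm layers, 0.4 mm road width
e = filament_length(100.0, 0.2, 0.4)                  # ≈ 3.33 mm of filament
```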
The main purpose is to manage many aspects in a single environment and at low cost: to handle 3D printing and CAM operations in an economical environment, with open source tools extending the CAD and CAM programs. The research group extended FreeCAD's environment and integrated it with the 3D printing software (Slic3r) and the CAM module (FreeMill).
As a check on the communication between the different additive-subtractive phases, three different visualization modules have been inserted in the system for G-code visualization. They describe the 3D object to be produced in all its slicing steps: Repetier-Host, Colibrì and Openscam. The Rexroth MTX module, finally, emulates the entire slicing, thus allowing control of the process and, in case of errors, avoiding damage to the printer.
In order to interconnect the different software modules and to set the parameters required for production, ad hoc graphical interfaces have been designed by means of the Python programming language. The software is implemented on a 3-axis machine, shown in Figure 6, with head/nozzle replacement for fast switching between milling and additive manufacturing. The system spans a huge volume (5000 × 3000 × 2000 mm) and may also be equipped with a nozzle in order to spray a film coat on the surface. Through a very user-friendly interface, the user can choose a process, simulate it and then run the system.
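The glue logic between these modules can be pictured as a dispatcher that routes each step of a hybrid plan (as in Figure 5) to an additive or a subtractive back end. The handlers below are stand-ins, not the real Slic3r or FreeMill interfaces:

```python
# Hypothetical dispatcher sketch: each "+" step goes to the slicing
# back end, each "-" step to the CAM back end, and the resulting
# G-code fragments are concatenated into one program.

def slicer_step(step):      # stand-in for the Slic3r-based module
    return f"; additive G-code for: {step}"

def cam_step(step):         # stand-in for the FreeMill-based module
    return f"; milling G-code for: {step}"

def build_program(plan):
    """plan: list of ('+' or '-', description) tuples, as in Figure 5."""
    program = []
    for kind, step in plan:
        handler = slicer_step if kind == "+" else cam_step
        program.append(handler(step))
    return "\n".join(program)

plan = [("-", "rough mill inner core"),
        ("+", "deposit external skin"),
        ("-", "finish mold surface")]
print(build_program(plan))
```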

Fig. 6. The hybrid 3D printer in our laboratories and its management software interface.

4 Discussion

This section briefly discusses the convenience of adopting the innovative hybrid approach for products that entail constraints on the shape or dimensions of the pieces and that are traditionally obtained through laborious and time-consuming manufacturing. For example, boat hulls are items commonly made of fiberglass. Fiberglass parts are produced in molds through a manual process known as lay-up. In the best cases, advanced boatyards are able to manufacture the hull mold through a technique similar to the one we propose, but not assisted by automated systems. In particular, the inner support is realized by milling a polystyrene block and, upon it, a paste is manually deposited. After the deposition, a finishing of the external surface is required.
The alternative approach proposed in this paper aims to replace the labor-intensive and time-consuming hand-making process with the combination of two proven technologies that can guarantee shorter lead times and lower expense.

As detailed above, the mold can be arranged, through a hybrid approach, by a first rough machining of a support and, upon it, by the automatic deposition of fused material that reproduces a target shape with good precision. Furthermore, in order to meet accuracy standards, refinement operations can be performed. The hybrid 3D printer carries out all the additive-subtractive phases.
Figure 7 shows the traditional hand-made mold construction and, on the other side, the technique proposed for hybrid manufacturing.

Fig.7. The comparison between traditional and innovative manufacturing for the same kind of
product: a boat mold.

Table 1 collects some data about the two competing manufacturing approaches. The major costs are evaluated and compared, and the lead times have been estimated. The last row highlights the time and cost reductions that the new hybrid approach delivers, compared to the hand-made mold approach, for a race boat hull roughly 5 meters long. The estimate for a hand-made mold is €6500 and 86 hours of lead time. In contrast, the hybrid manufacturing yields a lead time of 56 hours and €4720 in cost, with evident savings.

Table 1. A rough valuation of time and costs of hand-made and automated manufacturing.

Resources          Hand-made   Hand-made   Hybrid-Manuf.   Hybrid-Manuf.
                   Costs (€)   Time (h)    Costs (€)       Time (h)
Raw material       3500        –           900             –
CAM & setup        100         4           –               –
Add/sub & setup    –           –           200             8
Labor time         1280        64          200             10
Machining cost     1620        18          3420            38
Total Expense      6500        86          4720            56
SAVINGS                                    28%             35%
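The savings in the last row follow directly from the totals:

```python
# Checking the savings quoted in Table 1 against the totals.
hand_cost, hand_time = 6500, 86      # hand-made mold: €, hours
hyb_cost, hyb_time = 4720, 56        # hybrid manufacturing: €, hours

cost_saving = 1 - hyb_cost / hand_cost   # ≈ 0.274 (quoted as 28 %)
time_saving = 1 - hyb_time / hand_time   # ≈ 0.349 (quoted as 35 %)
```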

5 Conclusions

The design approach presented in this paper aims to enhance the flexibility of production in terms of the size, accuracy and functionality of products, to reduce waste, to minimize handcrafted operations and to make the manufacturing speed affordable even on pieces of large dimensions.
Depending on the assigned part, additive and subtractive techniques can be interchanged. A part could be produced by additive deposition and then milled, in order to reach a more accurate shape or dimensions, or it can be prepared starting from a block of raw material, different from the material of the part, upon which the final material can be added. In this way, the shape can be obtained by depositing only a few layers upon an inner core. Depending on the attainable shape of the part and on its material, a spray technique can be adopted in order to realize a 3D deposition.
The present paper introduces and evaluates different strategies, or in other terms, different combinations of machining features (additive or subtractive) and different materials to complete a prototype model or mold, with evident reductions in time and cost compared to traditional manufacturing. The optimization/analysis software is fully integrated in a classic CAD/CAM environment to better support the design and engineering processes.

References

1. Guo N., Leu M.C. Additive manufacturing: technology, applications and research needs. Frontiers of Mechanical Engineering, 2013, 8(3), 215–243.
2. Hongbin L., Taiyong W., Jian S., Zhiqiang Y. The adaptive slicing algorithm and its impact
on the mechanical property and surface roughness of freeform extrusion parts. Virtual and
Physical Prototyping, 2016, 11 (1), 27-39.
3. Bassoli E., Gatto A., Iuliano L., Violante M.G. 3D Printing technique applied to rapid cast-
ing. Rapid Prototyping Journal, 2007, 13 (3), 148-155.
4. Chen C.-S., Wu J. CAD/CAM Systems Integration. Integrated Manufacturing Systems, 1994, 5 (4/5), 22-29.
5. Vinodh S., Sundararaj G., Devadasan S.R., Kuttalingam D., Jayaprakasam J., Rajanayagam
D. Agility through the interfacing of CAD and CAM. Journal of Engineering Design and
Technology, 2009, 7 (2), 143-170.
6. Bianconi, F., Conti P., Moroni S. An approach to multidisciplinary product modeling and
simulation through design-by-feature and classification trees. In Proc. of the 16th IASTED
Int. Conf. on Applied Simulation and Modelling, Palma de Mallorca, 2007, pp. 288-293.
7. Liverani A., Leali F., Pellicciari M. Real-time 3D features reconstruction through monocular
vision. International Journal on Interactive Design and Manufacturing, 2010, 4 (2), 103-112.
8. Cerardi, A., Caneri, M., Meneghello, R., Concheri, G. Mechanical characterization of poly-
amide porous specimens for the evaluation of emptying strategies in rapid prototyping. In
Proc. of the 37th Int. MATADOR 2012 Conference, Manchester, July 2012, pp. 299-302.
9. Liverani A., Ceruti A. Interactive GT code management for mechanical part similarity
search and cost prediction. Computer-Aided Design and Applications, 2010, 7 (1), 1-15.
Section 2.3
Experimental Methods in Product
Development
Mechanical steering gear internal friction:
effects on the drive feel and development of an
analytic experimental model for its prediction

Giovanni GRITTI1, Franco PEVERADA1, Stefano ORLANDI1, Marco GADOLA2, Stefano UBERTI2, Daniel CHINDAMO2, Matteo ROMANO2 and Andrea OLIVI1
1 ZF-TRW Active and Passive Safety Systems, 25063 Gardone V.T. (BS) Italy
2 Dept. of Mechanical and Industrial Engineering, University of Brescia, Italy
* Corresponding author. Tel.: +39-030-371-5663; E-mail address: daniel.chindamo@unibs.it

Abstract: The automotive steering system inevitably presents internal friction


that affects its response. This is why internal friction phenomena are carefully monitored both by OEMs and by vehicle manufacturers. An algorithm to predict the mechanical efficiency and the internal friction of a steering gear system has been developed by the ZF-TRW Technical Centre of Gardone Val Trompia and the University of Brescia, Italy. It focuses on mechanical steering gears of the rack-and-pinion type.
rack and pinion type. The main contributions to the overall friction have been
identified and modelled. The work is based on theoretical calculation as well as
on experimental measurements carried out on a purpose-built test rig. The model
takes into account the materials used and the gear mesh characteristics and en-
ables the prediction of the steering gear friction performance before the very first
prototypes are built.

Keywords: steering, friction, rack and pinion, steering feel, vehicle dynamics

1 Introduction

Car manufacturers tune the steering system very carefully in order to meet cus-
tomer requirements. The steering system has a primary impact on the tactile feel
perceived by the driver through his hands acting on the steering wheel. This per-
ception – often called “steering feel” – is considered to be vital “because steering is
the driver’s main line of communication with the car; distortion in this guidance
channel makes every other perception more difficult to comprehend” [1]. Accord-

© Springer International Publishing AG 2017 341


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_35

ing to [2], steering feel, or steering torque feedback, is widely regarded as an im-
portant aspect of the handling quality of a vehicle, as it is known to help the driver
in reducing path-following errors. Some authors even suggest that, apart from
eyesight, the driving action is mainly based on feedback communication through
the steering [3]. Friction in the mechanical steering gear plays a fundamental role
in the final behaviour of the system and can affect feel and feedback heavily; as
such they have to be finely tuned during the design and development phase. The
subject is examined in depth in [4].
This paper describes the main contributions to the steering gear friction and
how to model them. Given the lack of bibliography on the subject, an energy-
based approach was devised. It combines Coulomb friction with power loss con-
tributions within the system. This makes it possible to predict the performance in
terms of overall steering gear friction as a function of gear mesh design and mate-
rial characteristics. As no evidence was found of a similar concept applied to me-
chanical gears, the method can be considered innovative.
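As a generic illustration of such a torque balance (made-up numbers, not the ZF-TRW model, whose actual formulation follows in the paper), the steering wheel torque needed to keep the rack moving can be written as the load contribution at the pinion plus a Coulomb term opposing motion:

```python
import math

# Generic quasi-static torque balance at the pinion of a rack-and-pinion
# gear: load torque from the rack force plus Coulomb friction opposing
# the direction of motion. All numbers below are illustrative.

def steering_wheel_torque(f_rack_N: float, pinion_radius_m: float,
                          t_friction_Nm: float, omega_rad_s: float) -> float:
    load_torque = f_rack_N * pinion_radius_m
    friction = (t_friction_Nm * math.copysign(1.0, omega_rad_s)
                if omega_rad_s else 0.0)
    return load_torque + friction

# e.g. 5 kN rack load, 8 mm pinion radius, 1.2 N·m internal friction:
t = steering_wheel_torque(5000.0, 0.008, 1.2, omega_rad_s=0.5)  # 41.2 N·m
```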

2 How friction influences the drive feel

The steering system plays an important role in determining the driving feel, as it is the most direct linkage between the tyre contact patch and the driver.
Apart from the effects of tyre characteristics like the self-aligning torque, in
theory the steering system influence on driver perception should depend upon
steering geometry and servo assistance curves only. However, a typical steering
system, even in its simplest form (i.e. unassisted rack & pinion), features a gear
mesh and many other components sliding on each other, so it is inevitably subject
to friction and the related actions and forces. These phenomena play a significant
role in the system response but, as explained later in this paper, sometimes their
effect is welcome. The overall friction force in a steering gear is given by the con-
tribution of different sources, mainly the gear mesh and the sliding plastic compo-
nents which support the rack in order to achieve an ideal meshing condition. In
particular, the main contributions to be considered are: 1) static friction, 2) dy-
namic friction, 3) friction variations along rack travel. On top of that it is impor-
tant to underline that the overall friction of the steering system is not only given
by the steering gear box but it also includes friction in the suspension joints, in the
assistance system, and in the steering column bearings and joints.

2.1 Effects on unassisted steering and Electric Power Steering

The dynamic friction is the contribution which resists the movement of the steering system. For a given tie rod load, an increase in dynamic friction will lead to a higher steering wheel torque required to keep the system in motion.
The on-centre steering condition is a regime where steering friction (and possibly backlash) can have a large influence on the vehicle behaviour. In an on-centre manoeuvre, i.e. with very small steering wheel angles where the steering angle/steering torque curve is nearly flat, a high level of friction would mask the small steering wheel torque variations to be applied during any manoeuvre around the straight-ahead position. Another kind of issue is related to steer returnability.
When exiting from a turn, releasing the steering wheel should result in the steer-
ing wheel itself returning to its central position even without action from the
driver, bringing the vehicle back to the straight-ahead direction thanks to the self-
aligning actions acting on the tyre contact patch. A high dynamic friction, perhaps
combined with a non-sporty suspension/steering geometry (such as low castor an-
gle and/or longitudinal trail), can result in a residual angle at the end of the ma-
noeuvre. This tendency can easily lead to a very poor driving feel.
Finally, it should be stressed again that the steering wheel, as well as allowing
vehicle control, is the most relevant connection with the tyre contact patch. As a
matter of fact, it provides indications about the level of tyre grip and lateral load
along corners, or about road surface irregularities and imperfections. One of the
main friction effects is to work as a filter, therefore a high level of dynamic fric-
tion could filter out the information coming from the wheels, making steering feel
poor and deteriorating active safety as the driver is partially isolated from the
road.
On the other hand, a very low dynamic friction could transmit every vibration caused by an irregular road surface texture all the way up to the steering wheel, thus making the driving feel tiring and somewhat annoying.
The term static friction refers to the kind of friction that resists a change of state in a system at rest. It can be considerably higher than dynamic friction, and the main issues it can cause are often strictly related to the difference between the two kinds of friction, and to the transition from one to the other.
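The stick/slip distinction discussed here can be pictured with a simple Karnopp-style friction sketch; the static and dynamic levels and the velocity threshold below are illustrative, not measured values:

```python
# Karnopp-style sketch of static vs dynamic friction: below a small
# velocity threshold the friction force balances the applied force up
# to the static limit; once sliding, it drops to the dynamic level.

def friction_force(applied_N: float, velocity: float,
                   f_static: float = 12.0, f_dynamic: float = 8.0,
                   v_eps: float = 1e-3) -> float:
    if abs(velocity) < v_eps:                          # stuck
        return -max(-f_static, min(f_static, applied_N))
    return -f_dynamic if velocity > 0 else f_dynamic   # sliding

assert friction_force(5.0, 0.0) == -5.0    # holds the system at rest
assert friction_force(20.0, 0.0) == -12.0  # breakaway at the static limit
assert friction_force(20.0, 0.5) == -8.0   # drop to the dynamic level
```

The drop from the static to the dynamic level in the last two lines is exactly the torque drop perceived as the "emptying" effect described below.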
The most relevant effect of a high static friction can be felt during small cor-
rection manoeuvres around the on-centre position, probably on a straight road,
where the self-aligning actions are low or negligible, and consequently the steer-
ing wheel torque is down to a minimum. In this condition the static friction is ex-
perienced by the driver as a so-called “sticky feel”.
Another unpleasant effect due to a high difference between static and dynamic
friction-related forces is the so-called “emptying” of the steering wheel. This is
experienced when moving the steering wheel after a steady-state cornering ma-
noeuvre. In other words, the steering system is kept in a quiescent state along the
corner, but as soon as the steering wheel is moved, the change from static to dynamic friction results in a reduction of the torque required. The effect can be experienced both when increasing and when reducing the steering lock angle.
In the first case, if a driving path correction is required to negotiate a tighter
turning radius, as soon as the driver moves the steering wheel to increase the lock
angle the transition from static to dynamic friction will lead to a transient reduc-
tion of the torque required. This is in contrast with the driver's expectation, since normally
the steering effort is somehow proportional to how much the driver moves away
from the straight running condition, in order to overcome the self-aligning actions
related to tyre self-aligning moments and steering geometry.
On the other hand, when exiting from a steady-state cornering manoeuvre, as
soon as the steering wheel is moved from the typical mid-corner quiescent state
back towards the on-centre position, the torque reduction will be larger than ex-
pected because of the transition from static to dynamic friction, with a consequent
tendency to widen the path more than required.
As a matter of fact, when the difference between static and dynamic friction
forces is high, any small steering angle corrections to be normally performed dur-
ing driving, and requiring reversal of the steering velocity, will be inaccurate and
the car as a whole will be perceived as slightly unpredictable and inconsistent
with the driver’s inputs. Another effect related to an excessive difference between
static and dynamic friction is the generation of stick-and-slip phenomena, with the
excitation of vibrations in the steering system resulting in the generation of noise.
Assisted steering systems are affected by internal friction as well. The assistance can mitigate the negative effects of friction, but not eliminate them completely. In addition, the servo system itself can be adversely affected by the presence of friction.
The electric power steering works via an actuator which controls the move-
ment of the rack or pinion, depending on the torque and speed input measured on
the steering column. Each car model requires an appropriate tuning of the operat-
ing logic, which is also based on other data from the vehicle ECUs. One of the
tuning targets is to artificially filter the effects of friction (by means of an active
self-centering action for instance), although this may require a compromise on
other aspects of the steering feel. In any case the main effects of friction on an
electric power system should be considered added to those already present in a
simple manual steering system. First of all, a high dynamic friction requires a higher assistance level from the electric motor, which in turn has an impact on energy consumption and therefore on fuel consumption and emissions.
On the other side a very low dynamic friction, with a poor filtering action with
respect to road inputs, could induce instability problems with the generation of vi-
brations and discontinuity in the steering wheel torque. Again, an excessive drop
between static and dynamic friction could create problems during the servo assistance tuning phase as well. When calibrating the system, it is important not to neglect friction variation, both over time (i.e. the effect of wear) and in terms of part-to-part tolerance variation. That means the tuning should ensure a good
steering feel, independently of the inevitable effects of running and wear in terms of friction, and independently of the small product variability and tolerances which cannot be avoided, even with a very stable manufacturing process.
Mechanical steering gear internal friction ... 345

2.2 Hydraulic power steering

If compared to a column-assistance EPS, the friction in a hydraulic steering system comes mainly from the steering gear. By comparison with a standard mechanical steering gear, a hydraulic system presents additional sources of friction
mainly due to the hydraulic seals i.e. the proportional valve sealing rings, hydrau-
lic cylinder seals, and the hydraulic piston seal.
A very high dynamic friction could lead to the same self-centering issues of the
manual and electric systems, and to a decay of the feeling in the “almost straight”
driving. However, in this case a low dynamic friction could be critical as well.
Indeed, hydraulic systems are affected by vibrations and resonances, which can
be excited either by the pump, by the hydraulic pipes elasticity, by the propor-
tional valve torsion bar etc. Friction works as a damper against this kind of issues,
hence if it is not enough, noise and steering wheel vibrations can occur.
Another peculiar phenomenon related to friction in the hydraulic steering gears
is hysteresis. The assistance level of a hydraulic system depends upon the angular
misalignment between the two components of the proportional hydraulic valve:
the input shaft which is solidly connected to the steering column and the sleeve,
which is fixed to the pinion. Hysteresis is related to the friction between the two
components above, which have to work with a very narrow clearance in order to
ensure the correct flow of the hydraulic fluid. This leads to recurring contact; in
this case friction resists the relative rotation between the two valve components
therefore leading to a different assistance level depending on whether the steering
wheel torque is increasing or decreasing. In the loading phase friction resists valve
opening, hence the assistance level might be lower than expected, while when re-
leasing the steering wheel, friction resists the valve closing action. This can lead
to an unexpectedly high level of assistance. This problem is usually perceived as
unpleasant in S-shaped curves and changes of direction.

3. FRICTION SOURCES AND MEASUREMENTS

In order to simulate the operation of a steering gear, and to predict its effi-
ciency, the first step is to identify the various components dissipating energy
through friction. In the following pages the analysis will be focused on the stan-
dard rack and pinion mechanical steering gear type. In this case the contributions
to friction are: the rack and pinion mesh, the sliding zone between rack and bush,
the sliding zone between rack and yoke (see Figure 1).

The typical component-level test aimed at measuring steering gear friction per-
formance is the so-called returnability test. This test evaluates the resistance of the
steering system alone to self-centering actions offered by tyres and steer-
ing/suspension geometry. It is carried out by securing the steering box on the test
bench with the pinion shaft left free. The load required to move the rack along its
axis is evaluated by means of an actuator equipped with a load cell and fixed to a
tie rod. The load measured is the Returnability load R, that can be seen as the sum
of all the single contributions to friction:
R = Rg + Ry + Rb (1)
where Rg is the gear mesh contribution to the total Returnability load, Ry is the yoke liner contribution and Rb is the rack bush contribution.
This test (and other similar tests aimed at steering gear friction evaluation)
should be performed on a completely assembled steering gear. Needless to say it
is often useful to predict internal friction and its effects already in the design
phase and before any prototype is manufactured.

Fig. 1. Friction sources in a mechanical steering gear.

4. MODELING OF FRICTION SOURCES

The friction produced at the rack and pinion mesh interface can be evaluated
by estimating power dissipation due to friction in the gear teeth contact zone.
When in motion, the rack and pinion coupling is affected by sliding phenomena
between the teeth surfaces. There is always more than one pair of teeth in mutual
double flank contact. An energy approach has been used in order to estimate the
dissipation of the gear mesh. In a generic sliding system, the power loss caused by
frictional effects Nf is given by:
Nf = Ff · Vs (2)
where Ff is the friction force (normal contact force multiplied by the friction
coefficient) and Vs is the sliding speed.
For a steering gear mesh it is possible to use the same relationship. Ff is replaced by Rg and Vs is replaced by the rack speed Vr:
Nf = Rg · Vr (3)
If the sliding speeds and the contact forces between the teeth in contact are
known, it is possible to evaluate the power loss. Consequently, it is possible to
evaluate the gear mesh contribution to the Returnability load R by dividing the power loss by the linear speed of the rack:
Rg = Nf / Vr (4)
For a spur gear the sliding speed Vs is uniform along the instantaneous contact line, because the contact line is parallel to the gear axis; Vs becomes null at the pitch diameter only.
For a helical gear like the rack and pinion mesh, the path of contact is not par-
allel to the gear axis, so it is not possible to identify an instant pure sliding speed.
However, it is possible to compute the sliding speed integral along the path of
contact (e.g. for one tooth, see Figures 2, 3, and 4):
As = ∫[li, lo] Vs dl (5)

where li and lo are the inlet and outlet points of the tooth flank contact path and As is the instant sliding area.

Figs. 2, 3, 4. Sliding speed vector: decomposition on tooth flank and rack teeth plane.

In order to obtain the power loss, the sliding area should be multiplied by the
linear contact pressure along the contact line. For one tooth only it is:
Nf = Ff · Vs = ∫[li, lo] μg · pc · Vs dl = μg · pc · ∫[li, lo] Vs dl = μg · pc · As (6)

where μg is the friction coefficient between the meshing gear teeth and pc is the tooth linear contact pressure (the normal load on the tooth divided by the actual total length of the path of contact). pc is assumed constant, as demonstrated in [5].
At a given point on the contact line, the sliding speed is given by the vector difference between the rack speed Vr and the pinion tangential speed Vp. The sliding speed necessarily lies in the rack tooth flank plane πf, see Figure 2.
Vs can be calculated as the composition of two sliding speeds, the first (Vs1) in
a plane parallel to the rack teeth (Figure 3), and the second (Vs2) in the pinion
transversal plane (Figure 4), where:
Vs1 / sin(βhsg) = Vr / sin(90° + βr − βhsg) = Vpr / sin(90° − βr) (7)
Vr, βr (the angle between the rack tooth and the rack axis) and βhsg (the angle between the pinion axis and the rack axis, in a plane parallel to both) are constant at every point of the meshing, so Vs1 and Vpr (the rack speed projection on the pinion transversal plane) are necessarily constant too at every point of the contact paths. Taking Vpr into account allows the following considerations to be drawn in the pinion transversal plane (Figure 5, where ψ is the angular coordinate of the rack-pinion contact point, and αtp is the pressure angle in the pinion transversal plane).
Vs2 / sin(ψ) = Vpr / sin(90° + αtp − ψ) = Vp / sin(90° − αtp) (8)
where Vs2 = f(ψ). Finally, it is possible to calculate the sliding speed Vs as the vector sum:
Vs = Vs1 + Vs2 (9)
The power loss due to sliding friction is calculated by numerical integration
along the contact path. It is therefore possible to evaluate the Returnability load
contribution of the gear mesh (Rg). All the above in this paragraph is based on [6].
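The numerical integration described above can be sketched as follows. This is only an illustrative sketch: every numeric input (angles, contact pressure, speeds, contact-path limits) is a hypothetical placeholder, the contact path is parameterised by ψ for simplicity, and the composition of Vs1 and Vs2 is approximated as orthogonal.

```python
import math

# All numeric values below are hypothetical, chosen only to illustrate the
# procedure; real inputs come from the gear design and the look-up tables.
mu_g = 0.10          # gear-mesh coefficient of friction (look-up table)
p_c = 200.0          # tooth linear contact pressure [N/mm], constant as in [5]
V_r = 50.0           # rack speed [mm/s]
beta_r = math.radians(25.0)    # angle between rack tooth and rack axis
beta_hsg = math.radians(10.0)  # angle between pinion axis and rack axis
alpha_tp = math.radians(22.0)  # pressure angle, pinion transversal plane

# Vs1 and Vpr are constant along the contact path (eq. 7)
Vs1 = V_r * math.sin(beta_hsg) / math.sin(math.pi / 2 + beta_r - beta_hsg)
Vpr = V_r * math.sin(math.pi / 2 - beta_r) / math.sin(math.pi / 2 + beta_r - beta_hsg)

def sliding_speed(psi):
    """Sliding speed at angular coordinate psi: Vs2 from eq. (8), composed
    with Vs1 (eq. 9). The orthogonal composition is a simplifying assumption."""
    Vs2 = Vpr * math.sin(psi) / math.sin(math.pi / 2 + alpha_tp - psi)
    return math.hypot(Vs1, Vs2)

# Midpoint-rule integration of Vs along the contact path (eq. 5) -> sliding area
psi_in, psi_out = math.radians(5.0), math.radians(35.0)  # hypothetical limits
n = 1000
step = (psi_out - psi_in) / n
A_s = sum(sliding_speed(psi_in + (i + 0.5) * step) * step for i in range(n))

N_f = mu_g * p_c * A_s   # power loss for one tooth (eq. 6)
R_g = N_f / V_r          # gear-mesh contribution to Returnability (eq. 4)
print(round(R_g, 2))
```

In practice this computation is repeated for every tooth pair in contact and summed.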

Fig. 5. Decomposition of the sliding speed in the pinion transversal plane.

During the rack motion the yoke liner works in a pure sliding condition. The
yoke spring load balances the separation force given by the rack and pinion mesh-
ing. In this case the Coulomb friction model can be deemed appropriate to repre-
sent the system. The contribution to the total Returnability load given by the yoke
can be expressed as:
Ry = μy · Fy (10)
where Fy is the resultant force acting on the yoke liner and μy is the coefficient
of friction between the liner material and the rack itself to be taken from an ex-
perimental look-up table as described below.
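A speed-dependent coefficient of friction taken from an experimental look-up table can be handled with a simple interpolation. The table entries below are made-up placeholders; real values come from the bench tests described in Section 5.

```python
# Hypothetical look-up table: rack speed [mm/s] -> dynamic CoF of the yoke liner
speeds = [10.0, 25.0, 50.0, 100.0, 200.0]
cof    = [0.14, 0.12, 0.10, 0.09, 0.085]

def mu_y(v):
    """Linear interpolation in the experimental look-up table (clamped at the ends)."""
    if v <= speeds[0]:
        return cof[0]
    if v >= speeds[-1]:
        return cof[-1]
    for (v0, v1, m0, m1) in zip(speeds, speeds[1:], cof, cof[1:]):
        if v0 <= v <= v1:
            return m0 + (m1 - m0) * (v - v0) / (v1 - v0)

F_y = 900.0             # resultant force on the yoke liner [N], assumed
R_y = mu_y(75.0) * F_y  # yoke contribution, eq. (10)
print(round(R_y, 1))    # 85.5
```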
The direction of the separation force given by the gear mesh depends upon its
geometry. The separation force amount depends primarily upon mesh design, and
also upon the friction given by the three single sources. The friction generated between the rack and the bush depends upon the material of the bush itself and on the preload exerted by the housing on the bush and, in turn, on the rack; this preload derives from the design preload. However, apart from the effect of variations in the speed of the system, the Returnability load Rb given by the bush is constant.

5. EXPERIMENTAL MEASUREMENTS/LOOK-UP TABLES

As shown above, in order to model the different contributions to the total rack pull, some parameters have to be taken from a look-up table, filled by means of experimental tests performed on a purpose-built test bench. In order to predict Rg (the gear mesh contribution, see (1)) it is necessary to know the coefficient of friction (CoF, μg) between the rack and the pinion in the meshing zone. The CoF can be evaluated by performing a test very similar to the Returnability load test, where bush and yoke are replaced by a low-friction support with rolling bearings. This test has to be performed at different rack speeds, as the CoF depends upon the relative velocity between the contact surfaces. Once the average Returnability load has been determined, the calculation shown in Section 4 has to be reversed in order to compute the steel-on-steel CoF. A typical trend is shown in Figure 6.
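Reversing the calculation amounts to inverting eqs. (4) and (6): with bush and yoke on rolling supports, the measured load is Rg alone, so μg = Rg · Vr / (pc · As). A sketch with purely illustrative numbers:

```python
# All values are hypothetical, for illustration only.
R_g_measured = 45.0   # average Returnability load, bush and yoke on bearings [N]
V_r = 50.0            # rack speed during the test [mm/s]
p_c = 200.0           # tooth linear contact pressure [N/mm]
A_s = 120.0           # sliding-speed integral from the geometry model [mm^2/s]

# Invert R_g = N_f / V_r and N_f = mu_g * p_c * A_s  ->  mu_g
mu_g = R_g_measured * V_r / (p_c * A_s)
print(mu_g)   # 0.09375
```

Repeating this at each test speed yields the CoF-vs-rack-speed look-up table plotted in Figure 6.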

Fig. 6. Gear mesh coefficient of friction vs. rack speed.

Regarding the yoke, a specific test has been designed, where the pinion is replaced by a low-friction support with roller bearings. The same approach has been used to replace the rack bush. For the yoke, a proper support that allows the test preload to be controlled and monitored has been designed. The Returnability load measured in this way is then divided by the preload, to estimate the dynamic coefficient of friction of each material to be tested. The test is performed at different speeds in order to create a look-up table as above.
The rack bush contribution to the total Returnability load has to be evaluated directly. The test is once again very similar to the Returnability load test, and as such it can be performed on the same test rig. Both the yoke liner and the pinion are replaced by roller-bearing-based supports. The bush is supported in a housing of the same dimensions as the aluminium gearbox housing of the steering system; this allows the bush to be preloaded in the same way it is preloaded in the real steering gear system. The tests are performed at different speeds as usual, once again in order to obtain a comprehensive look-up table.
Solving the friction model requires a numerical approach with iterative computation cycles. Therefore, the creation of a dedicated tool based on MS Excel® was deemed necessary. When properly set with all the input parameters (meshing geometry, test speed, yoke and bush material, spring preload, possible resisting loads, etc.) it gives the total Returnability load split into each contribution, the pinion torque and both the direct and reverse efficiency values.
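The iterative computation cycle can be illustrated as a fixed-point loop: the mesh separation force depends on friction, which in turn depends on the separation force, so the contributions are recomputed until they stabilise. The relationships and constants below are deliberately simplified placeholders for the full eq. (1)-(10) model:

```python
# Hypothetical model constants, for illustration only
mu_g, mu_yoke = 0.09, 0.10   # gear-mesh and yoke CoF at the test speed
R_b = 25.0                   # rack-bush contribution (constant at a given speed) [N]
F_sep0 = 700.0               # separation force from mesh geometry alone [N]
k_fric = 0.15                # assumed growth of separation force with friction load

R_g, R_y = 0.0, 0.0
for _ in range(100):
    F_sep = F_sep0 + k_fric * R_g   # separation force, friction-dependent
    R_g_new = mu_g * F_sep          # placeholder for the eq. (4)-(6) computation
    R_y_new = mu_yoke * F_sep       # yoke load balances the separation force, eq. (10)
    if abs(R_g_new - R_g) < 1e-9 and abs(R_y_new - R_y) < 1e-9:
        break                       # contributions have stabilised
    R_g, R_y = R_g_new, R_y_new

R_total = R_g + R_y + R_b           # total Returnability load, eq. (1)
print(round(R_total, 1))            # 159.8
```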
Two comparisons between Returnability load measurements and respective
predictions based on the average measurement on a sample of 24 steering gears
are shown below. Gears 1 and 2 are components of different car models, each
with its peculiar geometry and materials. The 24 samples for each gear type basi-
cally encompass the whole manufacturing tolerance range.
Figure 7 shows a good fit with the simulation results. Figure 8 shows that average real-life and computed values along the rack travel appear to be consistent, while the correlation with peak-to-peak values is weaker, as the latter are influenced by parameters that were not yet considered (e.g. tooth shape errors, rack rolling due to gear mesh separation forces, and yoke clearance).

Figs. 7, 8. Model vs real-life Returnability measurements: gears 1 and 2 (left) and gear 1 (right).

6. RESULTS AND CONCLUSION

An experimental/analytical model was developed to predict friction forces in a mechanical steering gear. It is based on the power loss contributions given by the gear mesh, the yoke liner and the rack bush. A dedicated test bench was developed in-house.
A comparison of theoretical results with real-life measurements shows a good cor-
relation regarding mean values. Therefore, it is possible to predict friction effects
before the prototyping phase. As a matter of fact, this simulation tool is now a
standard within the design phase. This has led to development cost savings for
ZF-TRW and its customers, and to a more informed design process. Future model
developments will take parameters like rack rolling and yoke clearance into ac-
count, in order to achieve an improved correlation with peak-to-peak pull force
values as well.

References

1. D. Sherman in Car & Driver magazine, Dec. 2012.



2. N. Kim, D.J. Cole: A model of driver steering control incorporating the driver's sensing of
steering torque. Vehicle System Dynamics, 49(10), 2011, pp 1575-1596.
3. R.S. Sharp: Vehicle dynamics and the judgement of quality (pp 87-96), in J.P. Pauwelussen:
Vehicle performance – understanding human monitoring and assessment. Swets & Zeitlin-
ger, 1999.
4. F. Peverada, M. Gadola: Lecture notes on vehicle dynamics and design – steering systems,
University of Brescia, Italy, 2013.
5. G. Henriot: Ingranaggi, trattato teorico e pratico. Tecniche Nuove, 1977.
6. ISO 21771:2007; Gears -- Cylindrical involute gears and gear pairs -- Concepts and geometry.
Design of an electric tool for underwater
archaeological restoration based on a user
centred approach

Loris BARBIERI*, Fabio BRUNO, Luigi DE NAPOLI, Alessandro GALLO and Maurizio MUZZUPAPPA

Università della Calabria - Dipartimento di Meccanica, Energetica e Gestionale (DIMEG)


* Corresponding author. Tel.: +39-0984-494976; fax: +39-0984-0494673. E-mail address:
loris.barbieri@unical.it

Abstract
This paper describes a part of the contribution of the CoMAS project ("In situ
conservation planning of Underwater Archaeological Artifacts"), funded by the
Italian Ministry of Education, Universities and Research (MIUR), and run by a
partnership of private companies and public research centers. The CoMAS project
aims at the development of new materials, techniques and tools for the documen-
tation, conservation and restoration of underwater archaeological sites in their nat-
ural environment. This paper details the results achieved during the project in the
development of an innovative electric tool, which can efficiently support the restorers' work in their activities aimed at preserving the underwater cultural heritage in its original location on the seafloor. In particular, the paper describes the different steps to develop an underwater electric cleaning brush, which is able to perform a first rough cleaning of the submerged archaeological structures by removing the loose deposits and the various marine organisms that reside on their
surface. The peculiarity of this work consists in a user centred design approach
that tries to overcome the lack of detailed users’ requirements and the lack of
norms and guidelines for the ergonomic assessment of such kind of underwater
tools. The proposed approach makes a wide use of additive manufacturing tech-
niques for the realization and modification of prototypes to be employed for in-situ experimentation conducted with the final users. The user tests have been aimed at collecting data to support the iterative development of the prototype.

Keywords: Product Design, User centred design, Additive Manufacturing, Un-


derwater Applications.

© Springer International Publishing AG 2017 353


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_36
354 L. Barbieri et al.

1 Introduction

For a country like Italy, which possesses one of the richest artistic and archaeological heritages in the world, the restoration and preservation of archaeological and cultural artefacts and sites is a challenge that requires a significant use of resources. If these artefacts or sites are submerged, the efforts aimed at preserving the heritage present a very high degree of difficulty. The operations conducted on submerged sites have to follow an entirely different approach compared to emerged (terrestrial) sites, and this approach has yet to be defined, both for the lack of ad hoc devices and for the absence of specific methodologies.
UNESCO's guidelines [1] for the restoration and preservation of cultural heritage have been in force only since 2001, and they expressly provide for 'in situ conservation' as the first option, before any other intervention. Starting from this indication, in the last decade many surveys and interventions have been conducted in submerged archaeological sites, with the aim of creating the basis for 'underwater archaeological tourism'.
In order to maintain the state of conservation of the submerged structures, to delay as much as possible the proliferation of pest organisms and deposits (biofouling, limestone deposits, sediments, etc.) and to preserve the areas of interest, a new skilled professional has emerged: the underwater restorer. These professionals have to combine the restorer's skills with those of a professional diver, operating directly in archaeological sites and performing clean-up operations, maintenance and consolidation of the areas to be restored. For all these operations, underwater archaeologists use the same "terrestrial" tools - ice axes, scrapers, chisels, scalpels, sponges and sweeps - adapted to the new environment. Currently, devices designed specifically to support the work of underwater restorers are not available on the market.
Thanks to the CoMAS project [2] ("In situ conservation planning of Underwater
Archaeological Artifacts" – www.comasproject.eu), funded by the Italian Ministry
of Education, Universities and Research (MIUR) and run by a partnership of pri-
vate companies and public research centers, it has been possible to overcome the
limitations of the equipment previously adopted by underwater restorers for clean-
ing operations. In fact, the CoMAS project aimed at the development of new ma-
terials, techniques and tools [3,4] for the documentation, conservation and restora-
tion of underwater archaeological sites in their natural environment. Among these
tools, an electric cleaning brush has been developed and tested in order to satisfy the restorers' needs that arise during the subsequent phases of the cleaning work performed during the in-situ restoration process on the submerged artifacts.
The paper describes the user centred design (UCD) approach adopted for the de-
velopment of the electric underwater tool and details the various steps of the de-
sign process carried out with the ongoing support of end users for the testing of
the different prototypes. In particular, the testing activities have been performed
with the involvement of underwater restorers and professional divers and have
been carried out over the entire life-time of the CoMAS project in the Marine Pro-
tected Area - Underwater Park of Baiae (near Naples, Italy).

2 User centred design approach

In a typical UCD process [5, 6], there are three essential iterative steps which should be undertaken in order to incorporate the users' needs before proceeding with the implementation of the final design solution. The process starts with an analysis step that aims to understand and specify the context of use and define user requirements. The second step produces design solutions, which are tested and evaluated in the following stage. The third step is an empirical measurement stage, where user studies are carried out to collect objective and subjective usability data that allow engineers to evaluate how much the design differs from users' needs and desires. The process is iterated until the requirements are satisfied.
Our process follows the abovementioned UCD recommendations and thus requires the validation of the engineers' assumptions with the direct involvement of end users at every stage of the development process. Since users have no previous experience with underwater electric tools but only an idea of the desired object and, furthermore, given the absence both on the market and in the state of the art of electric underwater devices for in-situ conservation, it has been necessary to enrich the evaluation step with an entire set of experimental activities focused on the technical and mechanical requirements.
On the basis of these considerations, the UCD approach started with the definition of users' needs through direct communication and in-depth conversations with underwater restorers by means of focus groups and interviews. The focus group encouraged users to share their feelings, ideas and thoughts based on their know-how, while the interviews allowed personal experiences and attitudes to emerge. In both cases, the designers gathered a large amount of information that allowed them to have a better comprehension of the context of use and of the users' desires and needs. These needs have been interpreted and translated by the engineers into a preliminary set of usability and technical requirements that, due to the novelty of the product, were not sufficient for a complete determination of the design specification.
In order to overcome this shortcoming, a first prototype of the underwater tool has been developed and tested with different types of sensors, which allowed the engineers to acquire the large amount of experimental data necessary to perform an accurate product development process and an optimized design of the tool.
Four different physical prototypes have been developed, taking advantage of the
modern additive manufacturing and topology optimization techniques [7,8], and
tested, both in laboratory studies and in the underwater environment, throughout
the entire UCD process. The tests performed in the real operating conditions have
been carried out by end users to evaluate the tool in terms of functional and usabil-
ity requirements.

3 Electric tool design and testing

This section describes the development process of the electric underwater tool, which has been carried out in accordance with the UCD approach described in the previous section. The different prototypes have been manufactured by means of traditional
machining processes but also thanks to the adoption of additive manufacturing
techniques. In particular, Direct Metal Laser Sintering (DMLS) and Selective La-
ser Sintering (SLS) technologies have been used for the prototyping of metal and
polymer parts. The choice of the most suitable technology was dictated by the
analysis of the functional characteristics and the complexity of the geometry of
each component.

3.1 First prototype

The following image (Fig. 1) depicts the virtual mock-up of the first prototype of the electric tool. The tool is composed of two cylindrical aluminum cases, assembled by means of flanges, that house a 36V brushed motor with a maximum no-load speed of 1400 rpm.

Fig. 1. Virtual prototype.

The tool is powered by a 36V lead battery pack, mounted inside an external steel case that is placed on the seabed during the operations.
Above the flanges that assemble the two main parts, a waterproof cylindrical chamber with a diameter of approximately 10 cm has been placed in order to house the data logger that collects the data output of the sensors fitted to the tool. In particular, the sensors installed are three load cells to measure the axial forces, plus sensors capable of monitoring the engine operating parameters, such as the electric current drawn and the rotational speed, in order to make an estimation of the torque arising during the working operations.
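For a brushed DC motor, such a torque estimate is typically proportional to the current drawn above the no-load draw. A minimal sketch; the constants below are hypothetical placeholders, not the actual parameters of the 36V motor, and would come from a datasheet or bench calibration:

```python
# Hypothetical motor parameters, for illustration only
K_T = 0.065        # motor torque constant [N*m/A]
I_NO_LOAD = 1.2    # current drawn with no load on the shaft [A]

def estimated_torque(current_a):
    """Brushed-DC torque estimate: T = K_T * (I - I_no_load), clamped at zero."""
    return K_T * max(current_a - I_NO_LOAD, 0.0)

print(round(estimated_torque(9.2), 3))   # 0.52
```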
The calibration process has been carried out in the laboratory by means of a testing workbench specifically designed for this kind of sensor. In particular, as shown in the following image, the testing workbench has been configured for the measurement of the axial loads (Fig. 2a) and of the torque (Fig. 2b) acting on the mandrel.

Fig. 2. Laboratory tests. Workbench for the measure of axial force (a) and torque (b).

Once the laboratory tests had been accomplished, field trials were carried out with the participation of underwater restorers and certified deep-sea divers, who performed different kinds of experimentation with the electric tool on different materials (Fig. 3a), biofouling organisms (Fig. 3b) and conditions of use.

Fig. 3. Users testing on different kind of biofouling (a) and materials (b).

In particular, users focused their attention also on the usability and manoeuvrabil-
ity of the instrument under different buoyancy conditions (Fig. 4a) and on tool
switching operations (Fig. 4b).
358 L. Barbieri et al.

Fig. 4. Users testing usability (a) and the switching of the brushes (b).

The information acquired by the data logger has been processed and integrated with video by means of ad-hoc software. In particular, Figure 5 shows a frame of the software developed to support engineers in the interpretation of the data acquired during the tests. The software gives information about the place and time of the experiment, the average values of the main parameters involved during the test, and a graphical timeline representation of their actual values.
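The per-test average values can be obtained with a small post-processing step over the logged samples; the channel layout below is a hypothetical stand-in for the actual data-logger format:

```python
# Hypothetical logged samples: (time [s], axial force [N], current [A], speed [rpm])
samples = [
    (0.0, 12.0, 4.1, 1320.0),
    (0.1, 18.5, 5.0, 1280.0),
    (0.2, 22.0, 5.6, 1240.0),
    (0.3, 16.5, 4.9, 1290.0),
]

def channel_averages(rows):
    """Mean of each logged channel, excluding the timestamp column."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(1, len(rows[0])))

avg_force, avg_current, avg_rpm = channel_averages(samples)
print(round(avg_force, 2), round(avg_current, 2), round(avg_rpm, 1))   # 17.25 4.9 1282.5
```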
The analysis of the data acquired with the sensorized prototype allowed the engineers to gain a deeper knowledge of the technical and mechanical requirements to be satisfied in order to meet users' needs. In particular, it has been possible to define the weights, the working and reaction forces, and the ergonomic and functional operating characteristics of the instrument.
The results have shown that the manoeuvrability of the tool represents a critical issue that demands specific attention throughout the entire lifetime of the product development process. In fact, the efficacy of the instrument is tightly related to the direction and force applied by the user which, in such a difficult working environment, are in turn strongly affected by the ergonomic and usability characteristics of the device.

Fig. 5. Software for the integration and visualization of the information acquired by the data-
logger.

3.2 Second prototype

The underwater restorers' feedback and the results of the data gathered and analysed during the tests have been taken into account in order to redesign and improve the underwater tool. The following image shows the comparison between the first prototype (Fig. 6a) and the second one (Fig. 6b).

Fig. 6. The first prototype (a) compared to the second one (b).

The new tool is more compact and manageable. The back part of the tool has been redesigned in nylon in order to optimize its geometry and reduce its dimensions. The engine management system has been improved thanks to the adoption of an electronic controller card instead of the on-off switch button. The lead battery has been replaced with a longer-lasting 36V lithium battery. The adoption of a lithium battery allowed a 90% reduction in the dimensions of its waterproof case. Furthermore, according to the feedback provided by the underwater restorers, the handle has been redesigned in order to allow users to easily counteract the reaction forces and torques generated during the use of the instrument. The new handle, manufactured by water-jet cutting of an aluminum plate, is placed in the anterior part of the tool, near the mandrel, and presents a symmetrical handlebar with two grips incorporating the controls (Fig. 7a).

Fig. 7. The second physical prototype (a) during user testing (b) in underwater environment.

A first series of laboratory tests has been carried out on the second prototype, with hydrostatic tests at a maximum pressure of 4 bar and a duration of 60 minutes.
Subsequently, a second phase, consisting of extensive user studies, has been carried out in the testbed of the underwater archaeological park of Baiae. Here, underwater restorers have tested the tools on various submerged remains affected by different kinds of bio-fouling organisms. Fig. 7b shows a user during the removal of algae by means of the underwater device equipped with a hard nylon brush.

3.3 Final prototype

The testing activities carried out on the second prototype have made it possible to detect some critical aspects requiring particular attention, on which it was necessary to keep working in order to find improvements that better satisfy users' needs.
In particular, with regard to the handle design, if on one hand the large handle of the second prototype allowed users to easily counteract the reaction forces, on the other hand it exhausted the wrist muscles more quickly and did not allow precise control of the tool. For these reasons a third handle design has been developed, as depicted in Fig. 8a. The tool presents two large independent handles that allow the user to always work with straight wrists and a secure power grip. The first, U-shaped handle is form-fitting and ergonomic thanks to its curved shape, manufactured by means of 3D printing technologies. The second handle provides comfortable control of the switch placed on it and features a locking knob that allows its angle to be customized according to the direction of the force the user wants to exert.
Compared to the second prototype, the third design version also features a keyless chuck that makes changing the brushes faster and simpler.

Fig. 8. The third physical prototype (a) and the final design (b) of the electric tool.

The third prototype also underwent laboratory tests and field trials performed with
end users, whose feedback has been incorporated by the engineers into the final
design version of the electric underwater tool (Fig. 8b).
Design of an electric tool for underwater … 361

The final design presents other important improvements. The device is equipped
with a 4-pole brushless motor that doubles the performance of the previous one.
The motor control system has been improved too, thanks to the adoption of a
programmable electronic control unit. The back part of the tool has been
manufactured in aluminum to improve heat exchange, while the weight of the
battery case has been optimized thanks to the adoption of Delrin plastic.
Furthermore, the handle switch has been replaced with a magnetic one that
improves the user's comfort.

Fig. 9. The final prototype of the electric tool tested by final users.

The final tests have been performed in the area of Portus Iulius, where archaeological
structures (Fig. 9b), several mosaic floors (Fig. 9a) and opus signinum
floors lie on the seabed at a depth of 3-5 meters. The tool has been used by restorers
in different phases of the restoration work, both for the cleaning operations
to be performed on various construction materials and for the removal of specific
living organisms such as algae, sponges and molluscs.

4 Conclusions

The paper has presented a user-centered design approach adopted for the
development of an innovative underwater electric tool. This device is an outcome of the
CoMAS project and has been specifically developed to support underwater restorers
during their conservation and restoration activities in underwater archaeological
sites.
The development process has been carried out with the constant support and
feedback of end users, which has been of fundamental importance, especially during
the testing activities, to validate the functionality of the prototypes and to guide
design improvements. The four prototypes have been developed taking advantage
of the great versatility and the high capability to manage complex geometries
offered by additive manufacturing technologies.
The final users have expressed their full satisfaction with the results achieved
through the UCD process. The developed electric underwater tool is easy to use and
allows restorers to operate with better results in terms of speed and freedom.
The good results and the effectiveness of the described UCD approach have
encouraged the researchers and designers of the CoMAS project to apply the same
process to the development of a full set of electric underwater tools able to
support restorers in all the different activities involved in the mechanical
cleaning of submerged archaeological remains.

Acknowledgments The authors wish to express their gratitude to all the underwater restorers,
underwater operators and underwater instructors who have been actively involved in the design
process. Special thanks go to Roberto Petriaggi, former director of the Underwater Archaeological
Operation Unit at ISCR, for his support and scientific expertise. The authors would also like to
thank the Soprintendenza Archeologia della Campania for permission to conduct the experimentation
of the electric tool in the Baiae underwater archaeological site.
All the design activities have been carried out in the “CoMAS” Project (Ref.: PON01_02140 –
CUP: B11C11000600005), financed by the MIUR under the PON ’R&C’ 2007/2013 (D.D. Prot.
n. 01/Ric. 18.1.2010).

References

1. Unesco, 2001. Convention on the protection of the underwater cultural heritage, 2 November
2001. Retrieved 01/02/2016 from http://www.unesco.org
2. Bruno F., Gallo A., Barbieri L., Muzzupappa M., Ritacco G., Lagudi A., La Russa M.F.,
Ruffolo S.A., Crisci G.M., Ricca M., Comite V., Davide B., Di Stefano G., Guida R. The
CoMAS project: new materials and tools for improving the in-situ documentation, restoration
and conservation of underwater archaeological remains. Accepted for publication in the Ma-
rine Technology Society (MTS) Journal, 2016.
3. Bruno F., Muzzupappa M., Gallo A., Barbieri L., Spadafora F., Galati D., Petriaggi B.D.,
Petriaggi R. Electromechanical devices for supporting the restoration of underwater archaeo-
logical artifacts. In: OCEANS 2015-Genova. IEEE, 2015. p. 1-5.
4. Bruno F., Muzzupappa M., Lagudi A., Gallo A., Spadafora F., Ritacco G., Angilica A.,
Barbieri L., Di Lecce N., Saviozzi G., Laschi C., Guida R., Di Stefano G. A ROV for
supporting the planned maintenance in underwater archaeological sites. In: OCEANS 2015-
Genova. IEEE, 2015, p. 1-7.
5. Vredenburg K., Isensee S., Righi C. User-Centered Design: An Integrated Approach. Upper
Saddle River, NJ: Prentice Hall PTR, 2002.
6. ISO 9241-210:2010. Ergonomics of human-system interaction. Part 210: Human-centred de-
sign for interactive systems.
7. Muzzupappa M., Barbieri L., Bruno F., Cugini U. Methodology and tools to support
knowledge management in topology optimization. Journal of Computing and Information
Science in Engineering, 10(4), 2010, 044503.
8. Muzzupappa M., Barbieri L., Bruno F. Integration of topology optimisation tools and
knowledge management into the virtual Product Development Process of automotive compo-
nents. International Journal of Product Development, 2011, 14(1-4), 14-33.
Analysis and comparison of Smart City
initiatives

Aranzazu FERNÁNDEZ-VÁZQUEZ1* and Ignacio LÓPEZ-FORNIÉS1


1 Department of Design and Manufacturing Engineering, María Luna 3, Zaragoza, 50018, Spain.
* Tel.: +34-669-390-186; fax: +34 976 76 22 35; E-mail address: aranfer@unizar.es

Abstract: Complexity in cities is expected to become even higher in the short
term, which implies the need to face new challenges. The Smart City (SC) model
and its associated initiatives have become very popular for undertaking them, but
it is often not very clear what the model really means. Starting with a classification
of the initiatives developed under the SC model into two broad categories, according
to their approach to citizens, this paper aims to make a critical analysis of this
model of city, and to propose the development of new initiatives for it based on
Citizen-Centered Design methodologies. Living Labs, both as a methodology and as
an organization, appear in this context as an interesting choice for developing
initiatives with real citizen involvement along the entire design process, which is
expected to arise in later stages of this research.

Keywords: Smart City, Living Lab, Citizen Centered Design, Design methods.

1 Introduction

Over the last decades, cities have been facing new challenges that are expected to
become even bigger in the short term. The fact that 54% of the world's population
lives in cities [1], and the expectation that this will increase to 66% by 2050, are
incessantly repeated figures that appear in almost every paper or publication
regarding urban planning or cities [2][3][4]. These facts are usually used to highlight
the urgency with which new approaches must be developed to improve citizens'
conditions now and in the near future.
In this context, many models have emerged claiming to be the solution for the
upcoming challenges: eco-city, high-tech city or real-time city. One of the most
successful is the Smart City (SC), and many initiatives and much research have
developed around it in recent years. The objective of this study is to make a
critical analysis of different initiatives developed within this model, based on the
role of citizens in each one of them. Citizen involvement is a factor that can guarantee

© Springer International Publishing AG 2017 363


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_37
364 A. Fernández-Vázquez and I. López-Forniés

the success of the initiatives and their economic and social viability, which is of
major interest for all parties involved in the development of cities [5][6]. Based on
the results of this investigation, it is intended, in the following phase of this
research, to develop new initiatives for the SC based on citizens' interests, integrating
user-centered design methodologies.
It is clear that intensive research and numerous proposals have been developed
under the SC label lately, and yet there is no unique definition of SC,
and the indicators of the "smartness" of a city are still far from indisputable [7].
Nevertheless, the analysis of urban governance has appeared as a promising
approach for measuring the impact of innovation in urban daily processes [8], and
to this end it is interesting to analyse the role of citizens in the whole process.
Thus, analysing publications of the last fifteen years, more than one thousand
research articles can be found in Scopus with "smart cities" in their title. Among
those, two broad categories of SC initiatives can be established with regard to the
role of citizens:
• The first, more abundant in publications, comprises proposals that focus
on the integration of Information and Communication Technologies
(ICT) into city services and infrastructure. In general, they respond
to a top-down approach, in which the initiatives are mainly developed
by administrations and/or companies, with citizens as mere end users.
• The second, in some ways the opposite, includes initiatives that pose
a redefinition of the ICT approach and offer a user-centered design
focus. It responds to a bottom-up approach, in which citizen participation
is encouraged throughout the process.

2 Smart City models and initiatives based on ICT

2.1 Technological definitions of SC

The first approach defines the SC as a city that uses new ICTs innovatively and
strategically to achieve its aims. According to this definition, the Smart City is
characterized by its ICT infrastructures, which facilitate an increasingly smart,
interconnected and sustainable urban system [2].
The paradigm that supports the need for this ICT deployment is the Internet of
Things (IoT), which proposes a system in which a pervasive variety of devices is
able to interact without the intervention of people. In this context, the SC is
driven and enabled by interconnected objects placed in the urban space. Based on
technologies such as machine-to-machine (M2M) wireless sensing, Radio Frequency
Identification (RFID) and Wireless Sensor Networks (WSN), the IoT is expected to
contribute to a more efficient and
Analysis and comparison of Smart City initiatives 365

accurate use of resources [9], allowing access to large amounts of information
(Big Data) that can be processed by data mining techniques.
The futuristic concept of a SC where citizens, objects, utilities, etc. are seamlessly
connected using ubiquitous technologies is almost a reality, promising to significantly
enhance the living experience in 21st century urban environments [10].
Proposals undertaken with this approach have been developed within the fields of
transport, services and energy efficiency of cities, and all those related to big
data and data mining can be included within this approach too. Many of them have
also been supported, promoted and/or advertised by large ICT companies, such
as Endesa-Enel and IBM in Malaga (Spain), IBM in Songdo City (South Korea),
TECOM Investments in SmartcityMalta (Malta), Cisco Systems in Holyoke,
Massachusetts (USA) and Telefonica in Santander (Spain).
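As a minimal illustration of such an IoT sensing pipeline (our sketch, not part of any cited system: the sensor identifiers, kinds and values are invented), raw readings from heterogeneous urban sensors could be pooled and summarized before data mining techniques are applied:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical readings from interconnected urban sensors: (sensor_id, kind, value).
readings = [
    ("rfid-03", "parking_occupancy", 1),
    ("wsn-12", "air_quality_pm10", 34.0),
    ("wsn-12", "air_quality_pm10", 41.0),
    ("m2m-07", "energy_kwh", 2.4),
    ("rfid-03", "parking_occupancy", 0),
]

def aggregate(readings):
    """Group raw readings by kind and compute a simple per-kind average,
    a first step before heavier data-mining techniques are applied."""
    by_kind = defaultdict(list)
    for _sensor, kind, value in readings:
        by_kind[kind].append(value)
    return {kind: mean(values) for kind, values in by_kind.items()}

print(aggregate(readings))
# {'parking_occupancy': 0.5, 'air_quality_pm10': 37.5, 'energy_kwh': 2.4}
```

In a real deployment the aggregation step would of course run continuously over streamed M2M/WSN data rather than over an in-memory list.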

Fig. 1. Typical IoT approach Smart City representation [6]

But this point of view has not only been encouraged by companies: the European
Commission itself started promoting a SC model with a greater focus on energy
efficiency, renewable energy and green mobility than on citizens themselves [11].
This tendency has changed slightly in recent times, but not significantly yet.
This issue has also been the subject of much academic research, mainly within
the fields of Computer Science and ICT. Research has therefore focused
primarily on issues such as the architecture, protocols and infrastructure needed for
the deployment of this model, such as mobile crowd sensing (MCS) [12], or on
adaptations of previously existing architectures, such as the Extensible Messaging
and Presence Protocol (XMPP) [13], for developing new services for this city model.

2.2 ICT based SC initiatives: problems and redefinition

The previous definition of SC and its associated initiatives has, however, been
questioned [14][15][16][17]. On the one hand, it has been argued that while there
was no general consensus on the meaning of the SC term or on its describing
attributes, there has been an intensive "wiring" of cities and a collection
of big amounts of information, without consideration of some of the possible
associated problems, such as the need to ensure the privacy of participants when
data are collected by directly instrumenting human behaviour [14]. Accordingly,
"cities often claim to be smart, but do not define what this means, or offer any
evidence to support such proclamations" [15].
On the other hand, when analysing most of the initiatives developed within the
field of SC, it can be seen that the results only slightly resemble their ambitious
initial objectives. It appears difficult to "transform the higher level concepts
found in SC literature into actionable and effective policies, projects and
programs that deliver measurable value to citizens" [16]. With pressure growing
for cities to get even smarter, smart city claims have acquired a self-congratulatory
nature that is causing a kind of anxiety around the development of this model [17].

3 Smart City initiatives based on Citizens

In response to the problems arising from the predominant technological SC model,
a current of opinion has claimed that the design of a genuine smart city is only
possible through the emergence of smart citizens, who would be the ones conferring
the "smart" attribute on cities [18][19].
Instead of considering people as just another of the enabling forces of the
SC [20], these proposals have opted for the application of citizen-centric and
participatory approaches to the co-design and development of the Smart City. This
model is emerging as a new and specific type of SC, the Human Smart City [21].
In spite of that, most of the proposals in which the emergence of smart citizens
is supposedly intended have limited citizens' participation to the roles of data
provider [22] or tester of a pre-designed model or service [23], and have rarely
involved them in the entire process. The main exception, and the environment
that has made possible the emergence of projects in which citizens have played a
major role throughout the entire process, has been the experience of Living
Labs developed in the field of SC.

3.1 Living Labs: general definition and first SC experiences

Living Labs (LL) have been defined both as a research and development methodology
and as the organization created for its practice [24]; many times the term
also refers to the context or space in which it is developed.
As a methodology, LL is one in which innovations are created and validated in
collaborative, multi-contextual and multi-cultural empirical real-world
environments [25]. This approach seeks the involvement of users in every phase of the
process as the means to ensure their engagement with the services or products
developed, and it is performed through iterative cycles of proposal, development of
alternatives and testing at every stage of the process. Thereby, it can be considered
a User Centered Design (UCD) methodology for the way in which user involvement
is encouraged.
Referring to LL as an organization, many European cities have established their
own for developing new initiatives. The European organization that brings
together most of these LLs is the European Network of Living Labs (ENoLL) [26],
which was legally established as an international association in 2010, and has
since developed all kinds of initiatives for spreading its aims, methods and
objectives.

Fig. 2. Map of existing LL according to ENoLL Web Site [20]

From the beginning, LLs have focused on developing new business models,
mainly in technical and industrial contexts. Due to the lack of definition of the
SC and the difficulty for city leaders to identify the quantifiable sources of value
that ICT networks can generate for them, this focus has made LLs appear as an
ideal candidate to create an appropriate model for the implementation of the SC
[27][28].
These SC LLs have aimed at improving the governance of cities, promoting
proposals coming from citizens themselves and applying user-centered design
methodologies, such as co-design or service design [29][30][31].

3.2 Living Labs: problems regarding SC

Considering the experiences and studies developed, it is not so clear in which
category of methodologies LL could be included. Although it has been claimed to
be a user-driven methodology, one of the main problems of European LLs has
been the difficulty for citizens to forward their initiatives and ideas to the LL, so
users cannot be considered as those who actually run the innovation process.
Accordingly, LL could be better considered a methodology between User
Centered Design and Participatory Design. But much research is still needed
to define the characteristics and potential of LL methodologies [32].
Besides, it has been difficult to create a really consistent audience for these
initiatives, so that sometimes the results are not significant or do not allow
sufficient data to be obtained for processing. It has proven difficult, mainly in
countries with little tradition of citizen involvement such as Spain, to get citizens
implicated in those projects. As the common good, understood as the social benefit
achieved by citizens through active participation in politics and public services,
has not been internalized as desirable by society, the social benefit is ultimately
not achieved. Thus, many of the projects have remained in academia.
Finally, initiatives related to LL have still relied largely on the involvement of
an administration for their development, which on the one hand has limited their
scope of action given the economic crisis of recent years. On the other hand,
little attention has been paid to cost-effectiveness in LL projects, which can hinder
future sustainable financing by private stakeholders.

4 Summary and Benchmarking of SC initiatives

It can occasionally be confusing to distinguish between initiatives, as ICT-based
ones often seem to adopt a citizen-driven approach, for example by establishing a
distinction between so-called "hard" and "soft" domains and including under the
"soft" definition all those related to governance and people [33]. But a clear
distinction can be made between the two models by analysing the indicators shown
in Table 1. Some of these indicators, such as the leaders and drivers of the process
in each category, or their characteristic features, have been explained in the
previous sections.
The facts have been extracted from experiences reported by international
organizations, such as the previously mentioned ENoLL, or on city web pages. This
information has been complemented with searches in Scopus for the smart city
term in combination with "ICT", "citizen", "user" and, finally, "Living Labs".
These searches cover publications since 2013; after filtering out irrelevant
information, more than 200 articles were analysed to obtain the facts presented.
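The search-and-filter procedure described above can be sketched as follows (illustrative only: the query combinations mirror those named in the text, while the example records and the screening rule are invented for the sketch, not actual Scopus results):

```python
# Query combinations used in the Scopus searches described above.
BASE_TERM = "smart city"
COMBINATIONS = ["ICT", "citizen", "user", "Living Labs"]
queries = [f'TITLE("{BASE_TERM}") AND "{term}"' for term in COMBINATIONS]

# Hypothetical result records: (title, year, relevance flag set during screening).
records = [
    ("Smart city sensing platforms", 2014, True),
    ("Smart city marketing brochure", 2012, False),   # before 2013: dropped
    ("Citizen participation in smart cities", 2015, True),
    ("Unrelated grid computing survey", 2014, False), # screened out as irrelevant
]

def screen(records, from_year=2013):
    """Keep only publications from `from_year` on that passed manual screening."""
    return [title for title, year, relevant in records
            if year >= from_year and relevant]

print(len(queries), screen(records))
# 4 ['Smart city sensing platforms', 'Citizen participation in smart cities']
```

In the actual study the relevance flag corresponds to the manual filtering of the more than 200 retrieved articles, not to an automated rule.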

Table 1. Benchmarking of SC models.

Leaders and drivers:
  ICT based SC: ICT/Energy/Utility companies; City policy actors
  Citizen based SC: Neighbourhood associations; Small collectives

Beneficiary:
  ICT based SC: Companies, Authorities and Citizens (partially)
  Citizen based SC: Citizens and Involved collectives

Innovation base:
  ICT based SC: Technological based
  Citizen based SC: Open or collaborative innovation

Objectives & priorities:
  ICT based SC: Urban development; Infrastructure improvements; Efficient spending
  Citizen based SC: Social welfare; Common good; Engagement of citizens

Resources:
  ICT based SC: Public resources; Private investment
  Citizen based SC: Individual funds; Crowdfunding

Characteristic features:
  ICT based SC: Networks; ICT devices; Data collection
  Citizen based SC: Citizen participation; Open clouds and platforms; Social services

Pros:
  ICT based SC: Secured funding for projects; Big media power; Data mining resources
  Citizen based SC: Secured citizen engagement; Targeted initiatives; Focus towards Common good

Cons:
  ICT based SC: Poor citizen participation; Fuzzy goals; Private benefits
  Citizen based SC: Lack of funds; Poor communication power; Need for new tools/methods

Although Citizen based SC initiatives rely on co-creative and collective processes
with involved groups of people that can be autonomous, ICT features can
become a very strong support. It is only necessary to re-think the idea of the city we
are heading towards.

5 Conclusions and further research

The notion of Smart City on the one hand refers to cities that are increasingly
composed of and monitored by pervasive and ubiquitous computing, and, on the
other hand, to those whose economy and governance is being driven by innova-
tion, creativity and entrepreneurship, enacted by smart people.

However, there does not seem to be a clear way of linking the two ideas into
specific initiatives, and only the experiences that arose in the so-called "living labs"
could be considered close to having reached a proper convergence between the two
models, by involving citizens throughout the whole process while integrating ICT in a
proper way. But they are neither large in number nor homogeneous in characteristics
and scope, and have had limited citizen participation and involvement. Furthermore,
the dissemination of results has not been enough to promote similar initiatives,
and the dependence on administration involvement can hinder their future.
LL characteristics are nonetheless very promising from the designer's perspective,
as they allow the emergence of new processes that can develop real and better user
involvement in the SC. The integration of citizen-driven processes to foster
participation in the early stages of the initiatives, or the search for new communication
channels to allow better dissemination of results, are just two of the possible
research fields for the near future.
It is our intention to develop in the short term a pilot project in the field
of SC using LL design methods and citizen-driven processes. The participation
of citizens along the entire design process might ensure that the product or service
will meet a real need in a proper way, which is very interesting for companies
and administrations, thereby achieving the involvement of all stakeholders and
ensuring the viability of the initiatives. And since this would imply that user
participation would be sought throughout the process, the promotion of citizen
creativity and entrepreneurship would also be achieved.

References

1. United Nations. World Urbanization Prospects: The 2014 Revision. 2014. New York.
2. Kumar Debnath A., Chor Chin H., Haque M. and Yuen B. A methodological Framework for
benchmarking smart transport cities. Cities, 2014, 37, pp.47-56.
3. Jair Cabrera, O. Infraestructuras que dan soporte a ciudades inteligentes. CONACYT sympo-
sium for scholars and former grantees. 2012. Available at: http://docplayer.es/7437135-
Ponencia-oscar-jair-cabrera-bejar.html [last date of access: 18/04/2016]
4. Karadağ, T. An evaluation of the smart city approach. Doctoral Dissertation, 2013. Middle
East Technical University.
5. Macintosh, A. Using Information and Communication Technologies to Enhance Citizen En-
gagement in the Policy Process, in OECD, Promise and Problems of E-Democracy: Chal-
lenges of Online Citizen Engagement, OECD Publishing, Paris. 2004. DOI:
http://dx.doi.org/10.1787/9789264019492-3-en
6. De Lange M., De Waal M. Owning the city: New media and citizen engagement in urban
design. First Monday, [S.l.], Nov. 2013. ISSN 13960466, available at:
http://pear.accc.uic.edu/ojs/index.php/fm/article/view/4954/3786. Date accessed: 14/06/2016.
7. Manville, C et al. Mapping smart cities in the EU. 2014. Available at:
http://www.rand.org/pubs/external_publications/EP50486.html. Date accessed: 14/06/2016
8. Anthopoulos, L. G., Janssen, M., & Weerakkody, V. Comparing Smart Cities with different
modeling approaches. In Proceedings of the 24th International Conference on World Wide

Web Companion, May 2015, pp. 525-528, International World Wide Web Conferences Steer-
ing Committee.
9. Jin, J. Gubbi, J. Marusic, S. & Palaniswami, M. An information framework for creating a
smart city through internet of things. Internet of Things Journal, IEEE, 2014, 1(2), 112-121.
10. Dohler M., Vilajosana I., Vilajosana X. & Llosa J. Smart cities: An action plan. In
Barcelona Smart Cities Congress, Barcelona, Spain, December 2011.
11. Centre of Regional Science, Vienna UT. Smart cities – Ranking of European medium-sized
cities. Final Report. 2012. Available at: http://www.smart-cities.eu/press-ressources.html.
Date accessed: 18/04/2016.
12. Cardone G., Cirri A., Corradi A., Foschini L. The ParticipAct Mobile Crowd Sensing Living
Lab: The Testbed for Smart Cities. IEEE Communications Magazine, 2014, 52(10), 78-85.
13. Szabo R. et al. Framework for Smart City Applications based on Participatory sensing. In 4th
IEEE International Conference on Cognitive Infocommunications. Budapest, Hungary, 2013
14. Stopczynski A., Pietri R., Pentland A., Lazer D., Lehmann, S. Privacy in sensor-driven hu-
man data collection: A guide for practitioners. 2014. arXiv preprint arXiv:1403.5299.
15. Hollands R. Will the real smart city please stand up? Intelligent, progressive or
entrepreneurial? City, 2008, 12(3), 302-320.
16. Cosgrave E., Arbuthnot K., Tryfonas, T. Living labs, innovation districts and information
marketplaces: A systems approach for smart cities. Procedia Computer Science, 16, 2013,
pp. 668-677.
17. Allwinkle S., Cruickshank, P. Creating smart-er cities: An overview. Journal of urban tech-
nology, 2011, 18 (2), 1-16.
18. Department for Business Innovation & Skills, Smart Cities. Background paper, available at:
https://www.gov.uk/government/publications/smart-cities-background-paper, 2013. Date ac-
cessed: 14/06/2016.
19. Haque, U. (2012). Surely there's a smarter approach to smart cities?. Wired, 17, 2012-04.
20. TECNO - Cercle Tecnològic de Catalunya. Hoja de Ruta para la Smart City. Available from:
http://www.socinfo.es/contenido/semina-rios/1404smartcities6/03-ctecno_hoja_ruta_smart-
city.pdf. Date accessed: 18/04/2016.
21. Marsh J., Molinari F., Rizzo F. Human Smart Cities: A New Vision for Redesigning Urban
Community and Citizen’s Life. In Knowledge, Information and Creativity Support Systems:
Recent Trends, Advances and Solutions. 2016. pp. 269-278. (Springer International Publish-
ing).
22. https://smartcitizen.me/ [last date of access: 15/04/2016].
23. https://stormclouds.eu/ [last date of access: 15/04/2016].
24. Almirall, E., Lee, M., & Wareham, J. Mapping living labs in the landscape of innovation
methodologies. Technology Innovation Management Review, 2012, 2(9), 12.
25. Schumacher J., Feurstein, K. Living Labs – the user as co-creator. 2007.
26. http://www.openlivinglabs.eu/
27. Cosgrave E., Arbuthnot K., Tryfonas, T. Living labs, innovation districts and information
marketplaces: A systems approach for smart cities. Procedia Computer Science, 16. 2013, pp.
668-677.
28. Eskelinen, J., Garcia Robles, A., Lindy, I., Marsh, J., & Muente-Kunigami, A. Citizen-
Driven Innovation (No. 21984). The World Bank. 2015.
29. http://humansmartcities.eu/project/apollon/
30. http://my-neighbourhood.eu/
31. http://www.opencities.net/node/66
32. Dell'Era C., Landoni P. Living Lab: A Methodology between User-Centered Design and
Participatory Design. Creativity and Innovation Management, 2014, 23(2), 137-154.
33. Neirotti, P., De Marco, A., Cagliano, A. C., Mangano, G., & Scorrano, F. Current trends in
Smart City initiatives: Some stylised facts. Cities, 2014, 38, pp.25-36.
Involving Autism Spectrum Disorder (ASD)
affected people in design

Stefano Filippi* and Daniela Barattin

Polytechnic Department of Engineering and Architecture (DPIA), University of Udine, Italy


* Corresponding author. Tel.: +39-0432-558289; fax: +39-0432-558251. E-mail address:
stefano.filippi@uniud.it

Abstract. This research aims at moving from design for disabled people to design
led by disabled people. This is achieved by defining a roadmap suggesting how to
involve people affected by Autism Spectrum Disorder (ASD) in design. These
people could represent an added value given their uncommon reasoning
mechanisms. The core of the roadmap consists of tests involving groups of ASD and
neurotypical people. These tests are performed using shapes; the testers are asked
to interact with these shapes and to report the functions, meanings and emotions
they arouse. The outcomes are analyzed in terms of variety, quality, frequency and
originality, and elaborated in order to pursue unforeseen, innovative design
solutions.

Keywords: Design Activities, Autism Spectrum Disorder (ASD), Design by disabled people.

1 Introduction

Classic design activities consist of neurotypical people developing products for
neurotypical people and, recently, for disabled people as well. The literature
shows many examples of design for disabled people, referring on the one hand to
physical disabilities and ergonomic issues and on the other hand to cognitive
disabilities and the compatibility between the product and the human problem-solving
process. Ergonomic issues are debated, for example, in Casas et al. [1], aiming at
designing an intelligent system with a monitoring infrastructure that helps elderly
people with disabilities to overcome their handicap in performing household tasks.
The focus on cognitive disabilities is placed, for example, by Friedman and Bryen [2],
who define twenty guidelines for Web accessibility for people with different
disabilities. Dawe [3] describes interviews with young people with cognitive
disabilities and their families aimed at highlighting design aspects of assistive
technologies to implement in the product, like portability, ease of learning, etc.

© Springer International Publishing AG 2017 373


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_38
374 S. Filippi and D. Barattin

Up to now, design activities have always consisted in exploiting skill and


knowledge of neurotypical designers to develop products for disabled people. The
research described in this paper aims at subverting this by introducing the concept
of design by disabled people. Specifically, it proposes a roadmap that suggests
how to involve people affected by Autism Spectrum Disorder (ASD) in design ac-
tivities as effectively as possible. These people show specific characteristics like
schematic and practical reasoning, high sensibility and strong emotional answers
to external stimuli, and a very peculiar way to interpret the external world and to
interact with it. As described in [4], these characteristics make ASD people one of
the best candidates to give un-imaginable explorations of the design space and this
could lead to innovative design solutions. The roadmap should allow establishing
an effective collaboration with ASD people by considering them as active actors
in design activities and by assigning them a clear, well-recognized role in the
product development process. Among other consequences, all of this could in-
crease their chances of finding suitable job placements.
The involvement of ASD people exploits the "from shapes to functions" design
activities, where people interact with a set of shapes and express possible aroused
functions. This choice is motivated by several reasons. First, shapes are physical enti-
ties; therefore, ASD people do not need to use imagination in interpreting them (which
could be a problem for some of them). Second, these design activities allow for
considering and exploiting the meanings and emotions aroused by the interaction
with the products. As highlighted in [5, 6], meanings and emotions aroused during
interaction are fundamental to positively manage the product complexity and the
resource management during the product development process. Finally, pure
shapes allow ASD people to express their impressions and suggestions more
freely, without those limitations imposed by inner structures, materials, etc.
Neurotypical people will undergo the same activities and they will be considered
as controls. The outcomes of the two groups will be analyzed in terms of variety,
quality, frequency and originality.

2 Background

2.1 Autism Spectrum Disorder (ASD)

The Autism Spectrum Disorder (ASD) is an umbrella term that covers heteroge-
neous, complex and lifelong neurodevelopmental disorders that affect the way a
person communicates and relates to other people and to the world around him/her
[7, 8]. People affected by ASD are characterized by the "triad of impairments" [9].
The social/emotional impairments focus on difficulties in building friendships ap-
propriate for the age, in managing unstructured parts of the day, in predicting the
behavior of other people and in working cooperatively. Language/communication
impairments deal with the difficulties in processing and retaining information, in
sustaining a conversation, in understanding the body language (facial expression
and gesture), the jokes and sarcasm and the differences between literal and inter-
preted verbal expressions. The flexibility of thought impairments concern the dif-
ficulties in coping with changes in routine, on which they are overly dependent, in
imagining objects and concepts, in generalizing information and in managing em-
pathy [9-11].
In recent years, ASD people have started to be involved in design activities.
Frauenberger, Makhaeva and Spiel [4] are developing the "OutsideTheBox" pro-
ject. Thanks to participatory design activities, they work with ASD children aim-
ing at designing technological products suitable for their own needs and interests.
Malinverni et al. [12] exploit again participatory design activities to develop a ki-
netic-based game for ASD children to help them acquire simple abilities in so-
cial interaction. Lowe et al. [7] exploit participatory observations, co-design work-
shops, interviews and mapping tools to involve ASD adult people in designing
living environments to enhance their everyday life experiences at home.

2.2 "From shapes to functions" design activities

The "from shapes to functions" design activities are based on the generation and
analysis of fashionable shapes [13]. Their generation aims at arousing specific
emotions in the people interacting with them. The analysis of the shapes usually
consists of tests where users interact with these shapes using touch and sight. This
analysis aims at highlighting both the emotions aroused by the shapes and possible
product behaviors and the related functions to implement in products showing
those shapes afterwards. Alessi is an example of a company exploiting these design
activities. It produces iconic objects like household appliances, etc. [14].

3 Activities

The research activities define the roadmap that suggests how to involve ASD peo-
ple in design as effectively as possible. Figure 1 shows this roadmap; each activity
is described in the following.
Goal definition. The aim of the exploitation of the roadmap is to design a product
belonging to a specific application domain by exploiting the peculiarities of the
ASD people. The product will be suitable for both ASD and neurotypical people.
Output definition. The expected outcomes consist of a list of design solutions
generated by elaborating the functions, meanings and emotions aroused during in-
teraction. Variety, quality, frequency and originality are the parameters exploited
to generate the design solutions. Moreover, a comparison to the outcomes generat-
ed by a group of neurotypical people (the controls) highlights the different reason-
ing and understanding of the world of the two types of testers. Concerns also re-
gard the distribution as well as the presence of counter-posed outcomes inside the
ASD group and between the two groups.

Goal definition: design of a product belonging to a defined application domain, involving ASD people.
Output definition: list of design solutions (as elaboration of functions, meanings and emotions); four parameters for the analysis.
Input definition: people to involve; application domain.
Selection of design activities: participatory design exploiting the "from shapes to functions" design activities; simple and structured tests.
Generation of material: set of shapes; guide documents; forms for data collection.
Environment setup: relaxed and familiar environment; few people; video recording equipment.
Test execution: one tester at a time; execution of five steps; filling of forms.
Data collection and analysis: classification of data with respect to the tester type, the shape and the topic (function, meaning or emotion); analysis with respect to the four parameters (variety, quality, frequency and originality).
Drawing conclusions: formulation of design solutions for the specific application domain.

Figure 1. The roadmap for the effective involvement of ASD people in design.

Input definition. There are two important inputs to define: the people to involve
and the application domain where the shapes are considered. Considering the peo-
ple, two groups are selected, one composed of ASD people and the other of
neurotypical people. The number of participants of each group should be the same
in order to have the same influence on the design solutions. This number depends
on available time and resources, as well as on the expected quality of the re-
sults in terms of contents and statistical relevance. To make the activities feasible
and their management easier, ASD people must be able to understand the activi-
ties they will be called to perform and communicate their impressions easily, in
order to reduce the test duration and minimize the intervention of other people
than designers, like parents or psychologists, to solve possible misunderstandings
and/or problems. The age of the testers depends on the application domain. Fi-
nally, for a good characterization of both ASD and neurotypical people, preparato-
ry tests like Raven’s Standard Progressive Matrices [15] and Trail Making [16]
should be adopted. Considering now the application domain, its definition is need-
ed to lead the shape choice and make the analysis of the outcomes focusing on the
sole interesting aspects. This avoids considering shapes and generating results that
are not interesting for the specific application.
Selection of design activities. The selection of the design activities depends main-
ly on the characteristics of the people involved, especially regarding the ASD
people. Previous research suggests participatory design as the best testing ac-
tivity [4, 7, 9]. People are involved in performing tests and these tests can be ex-
ploited in different design phases [2, 4]. The roadmap shows a customized release
of participatory design suitable for performing the "from shapes to functions" ac-
tivities. People undergo the tests to highlight functions starting from fashionable
shapes in the concept generation phase of the product development process. This
kind of activity lets people express their thoughts freely because there are no
constraints due to inner structures, materials, etc., as typically happens when charac-
teristics of real products are involved. The test must show a clear structure to help
ASD people understand the sequence of the activities and the designers lead
the process as well as possible at all times. The activities must consist of
simple sub-activities where people interact with shapes exploiting sight and touch,
answering interview questions and filling in questionnaires. The interaction must be com-
pletely free, except for a precise timing marked by the designers. Questions must
be short and focused on specific interaction aspects. Verbal and written questions
must be suitable for all the people who could have different ways to communicate
their impressions. The tone of voice used for the verbal questions must be calm and
colloquial to create a relaxed environment.
Generation of material. The material to perform the design activities, the tests in
particular, consists of a set of shapes, the documents the testers will use as guides
and the forms the designers and testers will fill during the data collection. The
shape choice constitutes the most important decision to take. Several criteria are
proposed to select those shapes that should exploit at best the characteristics of the
ASD people. First of all, the shapes must be real instead of digital. The testers
must have the possibility to get in physical touch with the shapes because ASD
people could show difficulties in working with imagination [3, 4]. Second, these
shapes must be composed of simpler shapes that help ASD people to recall past
uses, moments when these uses happened and the related functions, meanings and
emotions. Examples of simple shapes are the ice cream cone, the telephone hand-
set, the door handle, etc. Although these shapes offer a clear and known basis to
start reasoning, they hardly limit ASD people's freedom of thinking; the un-
common links and relationships that the ASD people could see among these
shapes could suggest different functions and evoke unexpected/unusual meanings
and emotions. Third, the dimensions of the shapes must be chosen carefully be-
cause ASD people could find difficulties in mapping/scaling the shapes in their
mind with different dimensions than expected and the test results can be heavily
affected by this. The same could happen with colors. ASD people might consider
colors as an important aspect to evaluate and this can generate noise [17]. For this
reason, all shapes must have the same color in order to minimize the number of
variables to take care of. Fourth, ASD people often focus their attention on details
instead of on the shape as a whole and this behavior is quite different from that of
neurotypical people. Introducing details allows designers to keep ASD people fo-
cused and interested throughout the test. At the same time, the number of details
for each shape must be low, otherwise ASD people receive too many stimuli and
they could be too stressed to conduct the test in a natural way [9]. Fifth, since the
surface finishing of the shapes can have a deep impact on the emotions of ASD
people given their higher sensitivity, shapes with rough and/or irregular surface
finishing must be avoided [9, 17]. Sixth, ASD people are attracted by symmetry;
therefore, the exploitation of symmetrical shapes can be a good way to capture
their attention [17]. Finally, designers should propose a low number of shapes,
e.g., five at most. More shapes could compromise the quality of the results be-
cause an excessive cognitive workload in terms of attention and stress would be
required. Table 1 reports some examples of shapes referring to the application
domains where home appliances and stationery are developed. These shapes are
built on simpler ones, such as a bowl (shape a), a knob (c) or a needle (d); each of them
has from one (a and c) to three details (d) and the surfaces of all of them are
smooth and show the same, neutral color.

Table 1. Examples of shapes.

Application domain   Shape 1   Shape 2
Home appliances      (a)       (b)
Stationery           (c)       (d)

Together with the shapes, five documents must be prepared to perform the tests
and to make the information gathering easier. The first document contains the
claim the designers will say to the testers before the interaction with the shapes
takes place. This claim should be presented in a narrative way [12]. A generic ex-
ample that can be exploited in different application domains is "you are going to see
some objects. I will ask you to perform some actions with them. In the meantime,
please tell me your sensations out loud. Specifically, I would like to know if these
objects recall something to you, if you think they could be useful for doing some-
thing, and if they arouse particular emotions to you. Are you ready?". The second
document will be given to the testers; it describes the activities they are called to
perform. This document is especially important for ASD people since they do not
like improvisation and/or confusion and work better when following written and
visual instructions [2, 9, 17]. The third
document contains the questions that designers and psychologists will ask the
testers during the interaction with the shapes. These questions should resemble the
following: "is the object recalling something specific - a place, a moment, another ob-
ject, etc. - to you?"; "does the object suggest performing specific actions to you
(for example, if the object would recall a window handle, it could suggest the ac-
tion turn to open)?"; "do you think the object could be useful for doing some-
thing?"; "are you experiencing specific emotions while interacting with the ob-
ject?". The fourth document is a form used by designers to collect the answers of
the testers as well as personal comments about the testers' thinking out loud activi-
ties. Finally, the fifth document contains similar questions to those in the third one
and the testers are called to fill this document by themselves. Several empty spac-
es are present where the testers are free to add information in any format they like
(e.g., text, sketches, etc.).
Environment setup. The environment where the design activities will take place
must be suitable for ASD people. It should be relaxed [9] and somehow familiar
[2] in order to avoid possible causes of stress like interferences, noises, etc. For
example, a room with some games, a desk and a sofa would be suitable for chil-
dren because this would replicate their bedroom where they feel safe. For adults, a
mimic of a living room could be the best solution. Very few people should be pre-
sent during the test execution; one designer who leads the test and a psychologist
should be enough. For this reason, and for data collection and archiving, tests must
be recorded. This must be obviously performed with the testers' consent, but the
video recording equipment must be out of sight to avoid affecting the testers'
stress level.
Test execution. Once the testers have been identified, the application domain
defined, and all materials and the environment prepared, the test activities can
start. These activi-
ties are performed one tester at a time to avoid the testers influencing each other.
The activities should run as follows.
1. The designer introduces the test by reading the first document. Moreover,
he/she gives the tester the second document containing the list of the activities
to perform.
2. The designer places the first shape on the table and asks the tester to watch
it carefully, without touching it. After a short period (not more than 10 se-
conds), the designer starts to ask the questions contained in the third document
and uses the fourth document to annotate any comment and suggestion the test-
er should express spontaneously. Of course, the timing will need to be the same
for every tester.
3. After another short period (a bit longer than the previous one, but not more than
30 seconds), the designer invites the tester to touch/manipulate the shape. After
the same short period as in activity 2, the designer asks again the questions con-
tained in the third document. The designer carries on writing in the fourth doc-
ument any specific comments and suggestions the tester might express.
4. After the same short period as in activity 3, the designer takes the shape away and
gives the tester the fifth document to fill. The designer allows some minutes (3
to 5) for performing this task.
5. Once finished, activities 2 to 4 are repeated for all the other shapes.
Data collection and analysis. At the end of the tests, data are collected from
questionnaires, designers' notes and recorded videos. Data are classified against
the types of testers (ASD vs. neurotypical), the shapes and the three topics of in-
terest (functions, meanings and emotions). Functions, meanings and emotions are
analyzed against four parameters. The first parameter, the variety, focuses on the
number of functions highlighted for each shape and on the differences among the-
se functions. The same applies for meanings and emotions. The second parameter,
the quality, refers to the completeness of functions, meanings and emotions. The
level of detail, the quantity of information given and the clearness of the verbal
and written expressions shown by the specific tester are all covered by this param-
eter. The frequency, third parameter, indicates the level of importance of a func-
tion, a meaning or an emotion. If a specific shape suggests the same function to
many testers, this means that designers should consider this function as intrinsical-
ly connected to that shape. Finally, the originality, the fourth parameter, highlights
the presence of possible innovative functions, meanings and emotions. Functions
and meanings completely different from all the others could represent new inter-
pretations of a shape; an unexpected emotion could represent the possibility to at-
tract new people towards that shape. The suggestions freely expressed by the test-
ers are classified against the shape and the function they are related to and are
exploited in the following activities. At the end, the outcomes of the two groups of
testers are compared in order to highlight overlaps and/or differences in the way
the two groups interpret the shapes in terms of functions, meanings and emotions.
This integrates the results and helps in generating richer and more complete design
solutions.
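To make the quantitative parameters concrete, the frequency and originality computations can be sketched in a few lines of code. This is only an illustration, not part of the roadmap itself; the tester IDs, the function labels and the originality threshold below are all hypothetical:

```python
from collections import Counter

# Hypothetical elicited data: for one shape, the functions each tester suggested.
# Tester IDs and function labels are invented for illustration only.
elicited = {
    "ASD_01": ["scoop", "pour", "store"],
    "ASD_02": ["scoop", "shelter"],
    "NT_01": ["scoop", "pour"],
    "NT_02": ["pour", "store"],
}

def frequency(elicited):
    """Share of testers mentioning each function (the 'frequency' parameter)."""
    counts = Counter(f for funcs in elicited.values() for f in set(funcs))
    return {f: c / len(elicited) for f, c in counts.items()}

def originality(elicited, threshold=1):
    """Functions mentioned by at most `threshold` testers: candidate original items."""
    counts = Counter(f for funcs in elicited.values() for f in set(funcs))
    return sorted(f for f, c in counts.items() if c <= threshold)

print(frequency(elicited)["scoop"])   # mentioned by 3 of 4 testers: 0.75
print(originality(elicited))          # ['shelter']
```

The same counting can be repeated separately per group (ASD vs. neurotypical) to expose overlaps and differences between the two groups; variety and quality, being more qualitative, remain a judgment of the analysts.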
Drawing conclusions. Thanks to the previous analysis, the most important and
useful meanings, emotions and functions are highlighted. These meanings and
emotions enrich the contents, and so increase the chances that the functions they
belong to will be selected. After that, these enriched functions are elaborated to define the design
solutions. Obviously, these solutions will be formulated for the specific domain;
anyway, they could also be exploited in other application domains after a
suitable generalization.

4 Discussion

The proposed roadmap shows an ordered list of activities to perform. It is generic
and flexible enough to be adapted to every application domain and to add new pa-
rameters for a finer analysis of the outcomes. Moreover, the roadmap is thought to
best support the characteristics and needs of the ASD people in order to maxim-
ize the achievement of unforeseen and innovative design solutions. This mainly
regards the shape selection and the test execution. The roadmap has already re-
ceived a first positive judgment both from the psychologists who helped in its genera-
tion and from other professionals working in the ASD field. This research could
show interesting theoretical implications. For example, the TRIZ theory about sys-
tematic innovation [18] states that innovation can rely on searching for solutions in
domains completely different from the ones the designers are used to. But, all of
this is meant to happen by exploiting the same reasoning mechanisms. Here, on
the contrary, innovative design solutions are searched by exploiting different rea-
soning mechanisms; different application domains are eventually considered only
afterwards.
Having described the positive aspects of the research, some drawbacks need to be
pointed out as well. First of all, a real application in the field to confirm the cor-
rectness of the roadmap and support the effectiveness of its results is under way,
but the results of these activities are still missing. Moreover, comparisons with
similar existing methods have not been performed up to now. Finally, the last two
activities of the roadmap are completely left to the experience and knowledge of
designers and psychologists because no tools or help are given. If designers are
inexperienced in dealing with ASD people, and/or the psychologists are inexperienced
with this particular kind of design activities, the design so-
lutions could be incomplete or even wrong.

5 Conclusions

Some years ago, design activities started to focus on the generation of products for
disabled people. This research aims at giving some indications on how to involve
disabled people, in particular people affected by Autism Spectrum Disorder
(ASD), in order to let them directly design products both for neurotypical and dis-
abled people. The result is a roadmap, composed of several activities, that exploits
tests involving ASD and neurotypical people. These tests are based on the interac-
tion with specific shapes, aiming at collecting and analyzing pieces of information
about the functions, meanings and emotions those shapes arouse in the testers. All
these data should lead to the generation of innovative and unforeseen design solu-
tions to implement in new products. These results should allow assigning ASD
people a recognized role as active members of design teams; as a consequence,
this could have implications also regarding possible job placement. The current re-
lease of the roadmap has already been positively judged by psychologists and ex-
perts in the ASD field; nevertheless, it needs further validation to be effectively
exploited in real application domains. Moreover, the structure of the roadmap
must be checked against existing similar design activities. The last two activities
of the roadmap should exploit help or, even better, automatic tools to make their
execution easier. Finally, the interaction with the shapes should involve sen-
sory elements other than touch and sight, like sounds, tastes, colors, materials, etc., as
well as their combinations. In this way, the roadmap would become even more
generic and applicable in a wider set of application domains.

Acknowledgements. The authors would like to thank prof. Andrea Marini for his valuable help
in introducing them to the field of Autism Spectrum Disorder from the psychological point
of view.

References

1. Casas R., Marín R. B., Robinet A., Delgado A. R., Yarza A. R., McGinn J., Picking R. and
Grout V. User Modelling in Ambient Intelligence for Elderly and Disabled People. Comput-
ers Helping People with Special Needs, 2008, 5105 of the series Lecture Notes in Computer
Science, 114-122.
2. Friedman, M.G. and Bryen D.N. Web accessibility design recommendations for people with
cognitive disabilities. Technology and Disability, 2007, 19, 205–212.
3. Dawe M. Desperately Seeking Simplicity: How Young Adults with Cognitive Disabilities and
Their Families Adopt Assistive Technologies. In Conference on Human Factors in Compu-
ting Systems, CHI2006, Montreal, Canada, April 2006.
4. Frauenberger C., Makhaeva J. and Spiel K. Designing smart objects with autistic children.
Four design exposes. In conference on Human-Computer Interaction, CHI 2016, San Jose,
CA, USA, May 2016.
5. von Saucken C., Michailidou I. and Lindemann U. How to design experiences: macro UX ver-
sus micro UX approach. Design, User Experience, and Usability. Web, Mobile, and Product
Design, 2013, 8015, 130-139.
6. Desmet P.M.A. and Hekkert P. Framework of product experience. International Journal of De-
sign, 2007, 1(1), 57-66.
7. Lowe C., Gaudion K., McGinley C. and Kew A. Designing living environments with adults
with autism. Tizard Learning Disability Review, 2014, 19(2), 63 – 72.
8. Baron-Cohen S. Facts: Autism and Asperger syndrome. 2nd ed., 2008 (Oxford Univ. Press).
9. Daley L., Lawson S. and van der Zee E. Asperger Syndrome and Mobile Phone Behavior. In
International Conference on Human-Computer Interaction, HCI2009, San Diego, CA, USA,
July 2009, pp. 344-352.
10. Frauenberger C., Good J., Alcorn A. and Pain H. Supporting the design contributions of chil-
dren with autism spectrum conditions. In International Conference on Interaction Design and
Children, IDC'12, Bremen, Germany, June 2012, pp. 134-143.
11. American Psychiatric Association. Diagnostic and statistical manual of mental disorders (5th
ed.) DSM-5, 2013, Washington, D.C., USA.
12. Malinverni L., Mora-Guiard J., Padillo V., Mairena M. A., Hervás A. and Pares N. Participa-
tory Design Strategies to Enhance the Creative Contribution of Children with Special Needs.
In International Conference on Interaction Design and Children, IDC'14, Aarhus, Denmark,
June 2014, pp. 85-94.
13. Filippi, S. and Barattin, D. Definition of the form-based design approach and description of it
using the FBS framework. In International Conference on Engineering Design, ICED2015,
Milano, Italy, July 2015.
14. Alessi. The Italian factory of industrial design, 2016. Available online at www.alessi.com/en.
Retrieved 12/04/2016.
15. Hayashi M., Kato M., Igarashi K. and Kashima H. Superior fluid intelligence in children with
Asperger’s disorder. Brain and Cognition, 2008, 66, 306–310.
16. Reitan R. M. and Wolfson D. The Trail Making Test as an initial screening procedure for
neuropsychological impairment in older children. Archives of Clinical Neuropsychology,
2004, 19, 281–288.
17. Attwood, T. The Complete Guide to Asperger's Syndrome, 2006 (Jessica Kingsley Publish-
ers).
18. Altshuller G. and Rodman S. The innovation algorithm: TRIZ, systematic innovation and
technical creativity, 1999 (Technical Innovation Center, Inc, Worcester, MA).
Part III
Engineering Methods in Medicine

In recent years, engineering methods have been spreading more and more in the
medicine field. The research on new engineering techniques and tools for medical
applications has become a very current topic and, consequently, the new figure of
the biomedical engineer has become one of the fastest growing careers. The main
goal of biomedical engineers is to focus on the convergence of disease, technology
and sciences by applying an engineering approach to medicine and, for these reasons,
they work at the intersection of engineering, life sciences and healthcare. Biomedical
engineers, in fact, take principles from applied science, like mechanical and
computer engineering, and physical sciences and apply them to medicine. The
creation and application of new engineering technologies has modified, over the
last years, the classical medical approaches by making the management of various
disorders faster, less expensive, safer and with fewer side effects.
The papers presented in this chapter represent an updated report on biomedical
engineering research. The main advances in the use of engineering methods (like
imaging, numerical simulations, reverse engineering, CAD modelling, etc.) in
medicine are reported. In most cases, very interesting experimental case studies
concerning real problems, with a substantial degree of technological innovation,
are presented. All the contributions demonstrate that combining an engineering
approach with medical knowledge can help in the diagnosis, treatment and
prevention of the major diseases affecting our society. Of course, this chapter is a
very interesting tool for obtaining an understanding of the newest techniques and
research in medical engineering.

Samuel Gomes – UTBM

Tommaso Ingrassia - Univ. Palermo

Rikardo Minguez - Univ. Basque Country


Patient-specific 3D modelling of heart and
cardiac structures workflow: an overview of
methodologies

Monica CARFAGNI1* and Francesca UCCHEDDU1


1 Department of Industrial Engineering, via di Santa Marta, 3, 50139 Firenze (Italy)
* Corresponding author. Tel.: +39-055-2758731 ; fax: +39-055-2758755. E-mail address:
monica.carfagni@unifi.it

Abstract Cardiovascular diagnosis, surgical planning and intervention are among
the fields most affected by recent developments in 3D acquisition, model-
ling and rapid prototyping techniques. In the case of complex heart disease, to
support accurate surgical planning and intervention, an increasing number of
hospitals make use of physical 3D models of the cardiac structure, including
the heart, obtained using additive manufacturing start-
ing from the 3D model retrieved with medical imagery. The present work aims at
providing an overview of the most recent approaches and methodologies for creating
physical prototypes of patient-specific heart and cardiac structures, with particular
reference to most critical phases such as segmentation and aspects concerning
converting digital models into physical replicas through rapid prototyping tech-
niques. First, recent techniques for image enhancement to highlight anatomical
structures of interest are presented together with the current state of the art of
semi-automatic image segmentation. Then, most suitable techniques for prototyp-
ing the retrieved 3D model are investigated so as to draft some hints for creating
prototypes useful for planning the medical intervention.

Keywords: rapid prototyping; 3D modelling; 3D printing; medical imagery;
heart; cardiovascular diseases; surgical planning.

1 Introduction

The care and management of adult patients with congenital or acquired struc-
tural heart disease represents one of the most relevant areas of research in cardiol-
ogy, as documented by a rapid growth of studies in this vital area. Recent ad-
vancements in imaging technology, also borrowed from engineering [1-3], have
continued to raise awareness of hemodynamically significant intra-cardiac shunt

© Springer International Publishing AG 2017 387


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_39
lesions in adults. Given the widely ranging complexity of possible structural heart
defects, non-invasive imaging has become paramount in their treatment. Alt-
hough both two-dimensional (2D) imaging modalities such as echocardiography
and three-dimensional (3D) devices such as computed tomography and magnetic
resonance imaging (MRI) are undeniably valuable in the evaluation of adult pa-
tients with structural heart disease, these methods are still constrained by their
overall lack of realism and inability to be “physically manipulated”. Thus, such
techniques remain limited in their ability to effectively represent the complex
three-dimensional (3D) shape of the heart and its peripheral structures.
With the aim of providing accurate planning of the intervention, an increasing number of hospitals [4] make use of physical 3D models of the cardiac structure, obtained by additive manufacturing starting from the 3D model retrieved from medical imagery. In fact, the advent of 3D printing technology has provided a more advanced tool: an intuitive and tangible fabricated model that goes beyond a simple 3D-shaded visualization on a flat screen. For its use in medical
fields, the most important of the many advantages of 3D printing technology are
both the “zero lead time” between design and final production of accurate models
and the possibility of creating specific models resembling the actual structure of
the patient heart: in the clinical setting, the possibility of one-stop manufacturing
from medical imaging to 3D printing has accelerated the recent medical trend to-
wards “personalized” or “patient-specific” treatment.
According to recent literature, the most effective way for creating 3D models
starting from 2D medical imaging is based on the virtuous process cycle (starting
from 2D and 3D image acquisition and providing, in output, a model of the patient
heart) shown in Figure 1.

Fig.1. Patient-specific 3D modelling and printing workflow

Such an innovative process involves a number of steps, starting from medical imagery, with particular reference to (but not limited to) computed tomography
(CT), multi-slice CT (MCT) and magnetic resonance imaging (MRI) [5-8]. Ac-
quired images are then processed in order to segment regions of interest, i.e. heart
chambers, valves, aorta, coronary vessels, etc. These segmented areas are convert-
ed into 3D models, using tools like volume rendering or surface reconstruction
procedures. Given the increasing number of methods available to implement the above-mentioned process, the main aim of the present work is to provide an overview of methodologies dealing with patient-specific 3D modelling of the heart and cardiac structures. First, the main medical imaging systems for acquiring 2D and 3D data on the heart structure are introduced. Then, the most recent algorithms
for image enhancement and restoration are explored, and a brief overview of segmentation and classification algorithms is given. Section 5 briefly reviews the most promising techniques for the 3D heart model reconstruction process. Finally, in Section 6, some considerations regarding 3D printing of the heart structure are drafted.

2 Medical Imaging

Common types of medical imaging used for cardiac structure and heart analysis include the following:
(i) X-ray (e.g. radiography, computed tomography (CT)) - Thanks to recent advances, CT can provide detailed anatomical information on chambers, vessels, coronary arteries, and coronary calcium scoring. In cardiac CT, there are two imaging procedures: (1) coronary calcium scoring with non-contrast CT and (2) non-invasive imaging of coronary arteries with contrast-enhanced CT. Typically, non-contrast CT imaging exploits the natural density of tissues. As a result, tissues with different attenuation values, such as air, calcium, fat, and soft tissue, can be easily distinguished. Contrast-enhanced CT is used for imaging of coronary arteries with contrast material, such as a bolus or continuous infusion of a high concentration of iodinated contrast material.
(ii) magnetic resonance imaging (MRI) - An imaging technique based on detecting different tissue characteristics by varying the number and sequence of pulsed radio frequency fields, taking advantage of the magnetic relaxation properties of different tissues [9]. MRI measures the density of a specific nucleus, normally hydrogen, which is magnetic and largely present in the human body, including the heart [10], except for bone structures.
(iii) ultrasound - For cardiac usage, ultrasound is applied by means of an echocar-
diogram able to provide information on the four chambers of the heart, the heart
valves and the walls of the heart, the blood vessels entering and leaving the heart
and the pericardium.
(iv) nuclear (e.g., positron emission tomography - PET) - A PET scan is a very ac-
curate way to diagnose coronary artery disease and detect areas of low blood flow
in the heart. PET can also identify dead tissue and injured tissue that’s still living
and functioning.
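The attenuation-based tissue discrimination described for non-contrast CT in (i) can be illustrated with a toy classifier. The thresholds and function name below are illustrative assumptions, not clinical values, although 130 HU is the conventional calcium-scoring cutoff and air/water are defined as -1000/0 HU.

```python
def classify_hu(hu):
    """Map a CT attenuation value in Hounsfield units (HU) to a coarse
    tissue class. Thresholds are illustrative, not clinical values."""
    if hu < -200:
        return "air"            # air is defined as -1000 HU
    if hu < -20:
        return "fat"            # fat attenuates slightly less than water
    if hu < 130:
        return "soft tissue"    # water is defined as 0 HU
    return "calcium/bone"       # 130 HU: conventional calcium-scoring cutoff

print(classify_hu(-1000), classify_hu(50), classify_hu(400))
# → air soft tissue calcium/bone
```

In practice, such per-voxel rules are only a starting point: contrast-enhanced protocols shift blood-pool attenuation, which is exactly what makes the segmentation techniques of Section 4 necessary.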

3 Image Enhancement and Restoration

Digital medical imagery often suffers from different kinds of degradation, such
as artefacts due to patient motion or interferences, poor contrast, noise and blur
(see Figure 2). To improve the quality and visual appearance of medical images,
two main procedures are usually adopted, namely image restoration and image en-
hancement [11].

Image restoration algorithms primarily aim at reducing blur and noise in the processed image, which are naturally introduced by the data acquisition process. Denoising methods require estimating and modelling the blur and noise that affect the image, which depend on a number of factors, such as capturing instruments, transmission media, image quantization, discrete sources of radiation, etc. For example, standard digital images are assumed to have additive random noise modelled as Gaussian, speckle noise is observed in ultrasound images, whereas Rician noise affects MRI images [12].
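The additive-Gaussian model suggests a simple baseline restoration step: local averaging. The sketch below is an illustrative pure-Python example (names and the flat test phantom are ours), applying a 3x3 mean filter; real pipelines would use dedicated denoisers, and edge-preserving or Rician-aware filters for MRI.

```python
import random

def mean_filter3(img):
    """3x3 mean filter: a minimal denoiser for additive Gaussian noise.
    Border pixels are left unchanged for brevity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s / 9.0
    return out

# demo: corrupt a flat 5x5 phantom with additive Gaussian noise, then smooth
random.seed(0)
clean = [[100.0] * 5 for _ in range(5)]
noisy = [[p + random.gauss(0, 10) for p in row] for row in clean]
smooth = mean_filter3(noisy)
```

Averaging trades noise reduction for blur, which is why restoration is described above as estimating and modelling both effects rather than simply smoothing.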

Fig.2. Example of a cardiac CT.

Image enhancement techniques are mainly devoted to contrast enhancement, in order to extract, or accentuate, certain image features so as to improve the understanding of the information content and obtain an image more suitable than the original for automated image processing (e.g. for highlighting structures such as tissues and organs). In the literature, methods such as range compression, contrast stretching, and histogram equalization with gamma correction [13] are usually adopted to enhance the quality of medical images. In general, despite the effectiveness of each single approach, a combination of different methods usually achieves the most effective image enhancement result [14].
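Histogram equalization, one of the enhancement methods cited above, can be sketched in a few lines. The function below is an illustrative pure-Python version for images with integer grey levels, not a library API.

```python
def equalize(img, levels=256):
    """Global histogram equalization for a 2-D image whose pixels are
    integer grey levels in [0, levels)."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative distribution function of the grey levels
    cdf, acc = [], 0
    for c in hist:
        acc += c
        cdf.append(acc)
    cdf_min = next(c for c in cdf if c > 0)
    # map each grey level so the output histogram is roughly uniform
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[p] for p in row] for row in img]

# a low-contrast ramp confined to [100, 103] is stretched over [0, 255]
print(equalize([[100, 101], [102, 103]]))  # → [[0, 85], [170, 255]]
```

A gamma correction step, as mentioned above, would simply post-process the equalized values with `out = ((v / 255) ** gamma) * 255`.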

4 Segmentation and classification

Segmentation is the process of dividing an image into regions with similar properties such as grey level, colour, texture, brightness, and contrast. In medical imagery, the role of segmentation consists in identifying and subdividing different anatomical structures or regions of interest (ROI) in the images. As a result of the segmentation task, the pixels in the image are partitioned into non-overlapping regions belonging to the same tissue class. Disconnected regions can belong to the same class, whose number is usually decided according to prior knowledge of the anatomy. Some approaches [15] exploit texture content to perform image segmentation and classification: the aim of texture-based segmentation is to subdivide the image into regions having different texture properties, while in classification the aim is to classify regions which have already been segmented. A texture may be fine, coarse, smooth, or grained, depending upon its tone and structure, where tone is based on the pixel intensity properties of the primitives, while structure refers to the spatial relationships between primitives [16].
Automatic segmentation of medical images is a valuable tool for performing a tedious task with the aim of making it faster and, ideally, more robust than manual procedures. However, it is a difficult task, as medical images are complex in nature and often affected by intrinsic issues such as:
• partial volume effects, i.e. artefacts occurring when different tissue types mix together in a single pixel, resulting in non-sharp boundaries. Partial-volume effects are frequent in CT and MRI, where the resolution is not isotropic and, in many cases, is quite poor along one axis of the image (usually the Z or longitudinal axis running along the patient body);
• intensity inhomogeneity of a single tissue class that varies gradually in the image, producing a shading effect;
• presence of artefacts;
• similarity of grey values for different tissues.
Many different approaches have been developed for automatic image segmentation, which is still an active area of research. Classifications of existing segmentation methods have been attempted by several authors (e.g. [17]). As in other image analysis fields, automatic methods for medical image segmentation are classified as supervised or unsupervised, the main difference being the operator interaction required by the former throughout the segmentation process. Methods that identify regions of interest by labelling all pixels/voxels in the images/volume are known as volume identification methods. Conversely, approaches that recognise the boundaries of the different regions are called boundary identification methods [18]. Low-level techniques usually rely on simple criteria based on grey intensity values, such as thresholding, region growing, edge detection, etc. More complex approaches introduce uncertainty models and optimization methods, like statistical pattern recognition based on Markov Random Fields [19], deformable models [20], graph search [21], artificial neural networks [22], etc. Finally, the most advanced methods may incorporate higher-level knowledge, such as a-priori information, expert-defined rules, and models. Methods like atlas-based segmentation [23] and deformable models belong to this last group. For patient-specific applications in surgical planning, a fully automatic and accurate segmentation approach would be desirable to make the process fast and reliable. Unfortunately, anatomical variability and intrinsic image issues limit the reliability of fully automatic approaches, so at the end of the segmentation process operator interaction is still required for error correction. Interactive segmentation methods, employing for example manual segmentation of a small set of slices and automatic classification of the remaining volume using a patch-based approach [24], provide promising results and thus seem to open future research in this field.
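Among the low-level techniques listed above, region growing is easy to sketch. The following illustrative Python function (names are ours, not drawn from the cited works) grows a 4-connected region from a seed pixel under a grey-value tolerance:

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed` by repeatedly adding 4-connected pixels
    whose grey value differs from the seed value by at most `tol`."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    ref = img[sy][sx]
    mask = [[False] * w for _ in range(h)]
    mask[sy][sx] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny][nx]
                    and abs(img[ny][nx] - ref) <= tol):
                mask[ny][nx] = True
                queue.append((ny, nx))
    return mask

# toy "chamber" of bright pixels inside a dark background
img = [[10, 10, 10, 10],
       [10, 90, 95, 10],
       [10, 92, 10, 10]]
mask = region_grow(img, (1, 1), tol=10)
print(sum(row.count(True) for row in mask))  # → 3
```

The intrinsic issues listed above explain this method's limits: a single fixed tolerance fails under intensity inhomogeneity, and partial-volume pixels can "leak" the region across a blurred boundary.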

5 3D heart model reconstruction

After segmentation, a surface model can be generated by using, for instance, the marching cubes method [25] or other 3D contour extraction algorithms [26]. The resulting surface can be used as the starting point either for the generation of higher-order representations, such as non-uniform rational B-spline (NURBS) surfaces, or for mesh improvement using, for example, mesh-growing methods [27, 28], Delaunay meshing techniques [29], the Poisson surface reconstruction method [30] or other voxel-based methods [31, 32].
However, the retrieved 3D model is often not directly suitable for 3D printing, for reasons such as an excessive number of mesh elements and/or an incomplete topological structure. Therefore, topological correction, decimation, Laplacian smoothing, and local smoothing [33, 34] are needed to create a 3D model ready for 3D printing. In general, the accuracy of the 3D printed object depends on the combination of the accuracy of the medical images (whose slices should be as thin as possible), the appropriateness of the imaging process for 3D modelling, and the 3D printing accuracy of the system.
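Laplacian smoothing, one of the mesh-repair steps named above, can be sketched as follows. This is an illustrative minimal version (vertex positions plus an adjacency list; names are ours), not the implementation from the cited works.

```python
def laplacian_smooth(verts, neighbors, lam=0.5, iters=10):
    """Move each vertex a fraction `lam` toward the centroid of its
    topological neighbors, repeated `iters` times (mesh fairing)."""
    verts = [list(v) for v in verts]
    for _ in range(iters):
        new = []
        for i, v in enumerate(verts):
            nbrs = neighbors[i]
            if not nbrs:                      # isolated vertex: keep it
                new.append(v[:])
                continue
            cen = [sum(verts[j][k] for j in nbrs) / len(nbrs)
                   for k in range(3)]
            new.append([v[k] + lam * (cen[k] - v[k]) for k in range(3)])
        verts = new
    return verts

# demo: a spike (z = 1) above a flat ring of neighbors (z = 0)
verts = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (-1.0, 0.0, 0.0),
         (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]
nbrs = [[1, 2, 3, 4], [0], [0], [0], [0]]
print(laplacian_smooth(verts, nbrs, lam=0.5, iters=1)[0][2])  # → 0.5
```

Note that plain Laplacian smoothing shrinks the model, which is why Taubin-style schemes alternating positive and negative smoothing factors are often preferred when dimensional fidelity matters for printing.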

Fig.3. Orthogonal sectioning of a 3D CT volume image through MPR. Single orthogonal plane views: a) axial or XY plane, dividing the body into Superior-Inferior parts; b) sagittal or XZ plane, dividing the body into Left-Right parts; c) coronal or YZ plane, dividing the body into Anterior-Posterior parts; d) orthogonal planes visualized in the cubic volume.
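The MPR sectioning shown in Fig.3 amounts to slicing the volume along each of its three indices. A minimal sketch, assuming a volume stored as nested lists vol[z][y][x] (which anatomical plane name each fixed index receives depends on the acquisition orientation):

```python
def orthogonal_slices(vol, z, y, x):
    """Extract the three orthogonal planes through voxel (z, y, x) from a
    volume stored as nested lists vol[z][y][x]."""
    axial = [row[:] for row in vol[z]]                      # fixed z
    coronal = [vol[k][y][:] for k in range(len(vol))]       # fixed y
    sagittal = [[vol[k][j][x] for j in range(len(vol[0]))]
                for k in range(len(vol))]                   # fixed x
    return axial, coronal, sagittal

# 2x2x2 volume with distinct voxel values 0..7
vol = [[[0, 1], [2, 3]],
       [[4, 5], [6, 7]]]
ax, co, sa = orthogonal_slices(vol, 0, 0, 0)
print(ax, co, sa)  # → [[0, 1], [2, 3]] [[0, 1], [4, 5]] [[0, 2], [4, 6]]
```

The anisotropic resolution mentioned in Section 4 shows up directly here: reformatted coronal and sagittal planes are built across slices, so their quality along z is limited by the slice thickness.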

One major challenge faced in creating physical models lies in the disconnection between the digital 3D surface models and the original 2D images. Currently available industry-specific image-processing software remains limited in its ability to generate digital 3D models that are directly applicable to rapid prototyping. As a result, true integration of the raw 2D image data into the generated digital 3D surface models is lost. The 3D post-processing (i.e., correction of errant points and elimination of various artefacts within the digital 3D surface model) therefore relies heavily on the expert clinical and anatomic knowledge of the graphic editor, especially because a wide array of structural heart anomalies that significantly deviate from conventional cardiovascular anatomy may be present.

6 Additive technologies and 3D Printing

The most common additive technologies used in medicine are selective laser
sintering, fused deposition modelling, multijet modelling/3D printing, and stereo-
lithography. Selective laser sintering (3-D Systems Inc., Rock Hill, SC) uses a
high-power laser to fuse small particles of plastic, metal, or ceramic powders into
a 3D object [35]. Selective laser sintering has the ability to utilize a variety of
thermoplastic powders and has a high geometric accuracy but is generally higher
in cost than other additive methods.
In fused deposition modeling (Stratasys Inc, Eden Prairie, Minn), a plastic
filament (typically acrylonitrile butadiene styrene polymer) is forced through a
heated extrusion nozzle that melts the filament and deposits a layer of material
that hardens immediately on extrusion [36]. A separate water-soluble material is
used for making temporary support structures while the manufacturing is in pro-
gress. The process is repeated layer by layer until the model is complete.
Multijet modeling or 3D printing (Z Corporation, Burlington, Mass) essentially
works like a normal ink-jet printer but in 3D space. In this process, layers of fine
powder (either plaster or resins) are selectively bonded by printing a water-based
adhesive from the ink-jet printhead in the shape of each cross section as deter-
mined by the computer-aided design file. Each layer quickly hardens, and the
process is repeated until the model is complete [37].
In stereolithography, models are built through layer-by-layer polymerization of a photosensitive resin. A computer-controlled laser generates an ultraviolet beam that draws on the surface of a pool of resin, stimulating the instantaneous local polymerization of the liquid resin in the outlined pattern. A movable platform lowers the newly formed layer, thereby exposing a new layer of photosensitive resin, and the process is repeated until the model is complete.
Depending on their intended application (i.e. education, catheter navigation, device sizing and testing, and so on), physical models may be printed in multiple materials using a variety of 3D printing technologies, each with its own collection of benefits and shortcomings. For example, multijet modelling technology can be used to generate full-colour models to highlight anomalous structures or specific regions of interest. Printing times are short (approximately 6-7 hours per model) and costs are contained. Although flexible models may be prototyped by multijet modelling technology, the properties of the material often fail to accurately mimic true tissue properties. PolyJet Matrix printing technology offers the ability to print physical models in materials that more closely resemble the properties of native tissue, thus representing the new direction in rapid prototyping technology with its ability to print in different materials simultaneously. This unique technology will allow most physical models to be printed in durable materials (e.g., plastic), whereas specified segments (e.g., interatrial septum, septal defects, vascular structures, and so on) are printed in less durable, but more lifelike, materials (e.g. rubber polymers) for more realistic manipulation.

7 Discussion and conclusions

Fig.4. Sample full-colour physical models printed with multijet modelling technology (left) and with PolyJet Matrix technology (right).

With the development of inexpensive 3D printers, 3D-printable multi-materials, and 3D medical imaging modalities, 3D printing medical applications for heart diseases, among others, have come into the spotlight. Thanks to the availability of transparent, full-coloured, and flexible multi-materials, 3D printed objects can be more realistic, mimicking the properties of the real body, i.e. not only hard tissue alone but also hard and soft tissue together. Several major limitations, such as those associated with the technology and with the time and cost of manufacturing 3D phantoms, remain to be overcome. Development and optimization of the entire procedure, from image acquisition to 3D printing fabrication, are required for personalized treatment, even in emergency situations. In addition, to produce an effective 3D printed object, multidisciplinary knowledge of the entire 3D printing process chain is needed; namely, image acquisition using a protocol suitable for 3D modelling, post-processing of the medical images to generate a 3D reconstructed model, 3D printing manufacturing with an appropriate 3D printing technique, and post-processing of the 3D printed object to adapt it for medical use.

References

1. Liverani, A., Leali, F., Pellicciari, M., Real-time 3D features reconstruction through monoc-
ular vision, International Journal on Interactive Design and Manufacturing, Volume 4, Issue
2, May 2010, Pages 103-112.

2. Furferi, R., Governi, L. Machine vision tool for real-time detection of defects on textile raw
fabrics (2008) Journal of the Textile Institute, 99 (1), pp. 57-66.
3. Renzi, C., Leali, F., Cavazzuti, M., Andrisano, A.O., A review on artificial intelligence appli-
cations to the optimal design of dedicated and reconfigurable manufacturing systems Interna-
tional Journal of Advanced Manufacturing Technology, Volume 72, Issue 1-4, April 2014,
Pages 403-418
4. Itagaki, Michael W. “Using 3D Printed Models for Planning and Guidance during Endovascu-
lar Intervention: A Technical Advance.” Diagnostic and Interventional Radiology 21.4
(2015): 338–341. PMC. Web. 4 Apr. 2016.
5. H. Zhang et al., “4-D cardiac MR image analysis: left and right ventricular morphology and
function,” IEEE Trans. Med. Imag. 29(2), 350–364 (2010).
6. Wu, Jia, Marc A. Simon, and John C. Brigham. "A comparative analysis of global shape anal-
ysis methods for the assessment of the human right ventricle." Computer Methods in Biome-
chanics and Biomedical Engineering: Imaging & Visualization ahead-of-print (2014): 1-17.
7. Punithakumar, Kumaradevan, et al. "Right ventricular segmentation in cardiac MRI with mov-
ing mesh correspondences." Computerized Medical Imaging and Graphics 43 (2015): 15-25.
8. Cappetti, N., Naddeo, A., Naddeo, F., Solitro, G.F., 2015, Finite elements/Taguchi method
based procedure for the identification of the geometrical parameters significantly affecting
the biomechanical behavior of a lumbar disc, Computer Methods in Biomechanics and Bio-
medical Engineering, article in press, DOI: 10.1080/10255842.2015.1128529
9. Rohrer, M., Bauer, H., Mintorovitch, J., Requardt, M., & Weinmann, H. J. (2005). Compari-
son of magnetic properties of MRI contrast media solutions at different magnetic field
strengths. Investigative radiology, 40(11), 715-724.
10. Kuppusamy, P., & Zweier, J. L. (1996). A forward-subtraction procedure for removing hyperfine artifacts in electron paramagnetic resonance imaging. Magnetic Resonance in Medicine, 35(3), 316-322.
11. Hill, D. L., Batchelor, P. G., Holden, M., & Hawkes, D. J. (2001). Medical image registra-
tion. Physics in medicine and biology, 46(3), R1.
12. Motwani, M. C., Gadiya, M. C., Motwani, R. C., & Harris, F. C. (2004, September). Survey
of image denoising techniques. In Proceedings of GSPX (pp. 27-30).
13. Draa, A., Benayad, Z., & Djenna, F. Z. (2015). An opposition-based firefly algorithm for
medical image contrast enhancement. International Journal of Information and Communica-
tion Technology, 7(4-5), 385-405.
14. Maini, Raman, and Himanshu Aggarwal. "A comprehensive review of image enhancement
techniques." arXiv preprint arXiv:1003.4053 (2010).
15. Glatard, Tristan, Johan Montagnat, and Isabelle E. Magnin. "Texture based medical image
indexing and retrieval: application to cardiac imaging." Proceedings of the 6th ACM SIGMM
international workshop on Multimedia information retrieval. ACM, 2004.
16. Skorton, D. J., Collins, S. M., Nichols, J. A. M. E. S., Pandian, N. G., Bean, J. A., & Kerber,
R. E. (1983). Quantitative texture analysis in two-dimensional echocardiography: application
to the diagnosis of experimental myocardial contusion. Circulation, 68(1), 217-223.
17. Pham, Dzung L., Chenyang Xu, and Jerry L. Prince. "Current methods in medical image
segmentation 1." Annual review of biomedical engineering 2.1 (2000): 315-337.
18. Withey, Daniel J., and Zoltan J. Koles. "A review of medical image segmentation: methods
and available software." International Journal of Bioelectromagnetism 10.3 (2008): 125-148.
19. Zhang, Y., Brady, M., & Smith, S. (2001). Segmentation of brain MR images through a hid-
den Markov random field model and the expectation-maximization algorithm. Medical Imag-
ing, IEEE Transactions on, 20(1), 45-57.
20. Nealen, A., Müller, M., Keiser, R., Boxerman, E., & Carlson, M. (2006, December). Physi-
cally based deformable models in computer graphics. In Computer graphics forum (Vol. 25,
No. 4, pp. 809-836). Blackwell Publishing Ltd.

21. Schenk, Andrea, Guido Prause, and Heinz-Otto Peitgen. "Efficient semiautomatic segmenta-
tion of 3D objects in medical images." Medical Image Computing and Computer-Assisted In-
tervention–MICCAI 2000. Springer Berlin Heidelberg, 2000.
22. Furferi, R., Governi, L., Volpe, Y. Modelling and simulation of an innovative fabric coating
process using artificial neural networks (2012) Textile Research Journal, 82 (12), pp. 1282-
1294.
23. Išgum, Ivana, et al. "Multi-atlas-based segmentation with local decision fusion—application
to cardiac and aortic segmentation in CT scans." Medical Imaging, IEEE Transactions on
28.7 (2009): 1000-1010.
24. Coupé, P., Manjón, J. V., Fonov, V., Pruessner, J., Robles, M., & Collins, D. L. (2011).
Patch-based segmentation using expert priors: Application to hippocampus and ventricle
segmentation. NeuroImage, 54(2), 940-954.
25. Lorensen, W. E., & Cline, H. E. (1987, August). Marching cubes: A high resolution 3D sur-
face construction algorithm. In ACM siggraph computer graphics (Vol. 21, No. 4, pp. 163-
169). ACM.
26. Han, Chia Y., David T. Porembka, and Kwun-Nan Lin. "Method for automatic contour ex-
traction of a cardiac image." U.S. Patent No. 5,457,754. 10 Oct. 1995.
27. Di Angelo, L., Di Stefano, P. & Giaccari, L. “A new mesh-growing algorithm for fast surface
reconstruction”. Computer – Aided Design, vol. 43 (6), 2011, p. 639-650.
28. Di Angelo, L., Di Stefano, P. & Giaccari, L. “A Fast Mesh-Growing Algorithm For Manifold
Surface Reconstruction”. Computer – Aided Des. and Applic., vol. 10 (2), 2013, p. 197-220.
29. Young, P. G., Beresford-West, T. B. H., Coward, S. R. L., Notarberardino, B., Walker, B., &
Abdul-Aziz, A. (2008). An efficient approach to converting three-dimensional image data
into highly accurate computational models. Philosophical Transactions of the Royal Society
of London A: Mathematical, Physical and Engineering Sciences, 366(1878), 3155-3173.
30. Lim, S. P., & Haron, H. (2014). Surface reconstruction techniques: a review. Artificial Intel-
ligence Review, 42(1), 59-78.
31. Furferi, R., Governi, L., Palai, M., Volpe, Y. From unordered point cloud to weighted B-
spline - A novel PCA-based method (2011) Applications of Mathematics and Computer En-
gineering - American Conference on Applied Mathematics, AMERICAN-MATH'11, 5th
WSEAS International Conference on Computer Engineering and Applications, CEA'11, pp.
146-151.
32. Governi, L., Furferi, R., Puggelli, L., Volpe, Y. Improving surface reconstruction in shape
from shading using easy-to-set boundary conditions (2013) International Journal of Computa-
tional Vision and Robotics, 3 (3), pp. 225-247.
33. Furferi, R., Governi, L., Palai, M., Volpe, Y. Multiple Incident Splines (MISs) algorithm for
topological reconstruction of 2D unordered point clouds (2011) International Journal of
Mathematics and Computers in Simulation, 5 (2), pp. 171-179.
34. Volpe, Y., Furferi, R., Governi, L., Tennirelli, G. Computer-based methodologies for semi-
automatic 3D model generation from paintings. (2014) International Journal of Computer
Aided Engineering and Technology, 6 (1), pp. 88-112.
35. Di Angelo, L., Di Stefano, P. “A new method for the automatic identification of the dimen-
sional features of vertebrae”. Comp. Meth. and Progr. in Biom., vol. 121 (1), 2015, pp. 36-48.
36. Vandenbroucke, B., & Kruth, J. P. (2007). Selective laser melting of biocompatible metals
for rapid manufacturing of medical parts. Rapid Prototyping Journal, 13(4), 196-203.
37. Mironov, V., Boland, T., Trusk, T., Forgacs, G., & Markwald, R. R. (2003). Organ printing:
computer-aided jet-based 3D tissue engineering. TRENDS in Biotechnology, 21(4), 157-161.
A new method to capture the jaw movement

Lander BARRENETXEA1, Eneko SOLABERRIETA1, Mikel ITURRATE1 and Jokin GOROZIKA1

1 Department of Graphic Design and Engineering Projects, Faculty of Engineering, University of the Basque Country UPV/EHU, Urkixo zumarkalea z/g, 48013 Bilbao, Spain
* Corresponding author. Tel.: +34-94-601-4184; fax: +34-94-601-4199. E-mail address: lander.barrenetxea@ehu.eus

Abstract In traditional dentistry, orthodontics and maxillo-facial surgery, articulators are mainly used to simulate dental occlusion. Dental implants and syndromes affecting functional occlusion require instrumentation for planning prior to surgery. There are various mechanical articulators on the market. However, most of them only simulate the rotation of the jaw about an axis running through the virtual condyles, whereas the real movement includes both translation and rotation and differs from one patient to another. Surgeons and dentists require a comprehensive simulation system as a support for their work. This article describes the work carried out to develop a method to capture mandibular movement. Taking into consideration the market proposals and in comparison with them, this system is intended to be as cheap and simple as possible.

Keywords: motion sensor, jaw movements, computer program, prosthesis manufacture, LEAP.

1 Introduction

Within a fully digitalized process to make dentures [1, 2], this study aims to develop a method to record the mandibular movement performed by a patient. This method should be cheaper and easier to use than existing applications. Our goal is to obtain a registration method with an accuracy better than 0.1 mm and a maximum price of 200 €, together with an open system architecture.

In this project, the steps to follow are:
• Development of the movement capturing software
• Design of the mountings for sensors and references
• Analysis of the accuracy of the obtained measurements.

© Springer International Publishing AG 2017 397


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_40

2 State of the Art

The following contactless mandibular movement capture techniques are currently available on the market:

• ARCUSdigma:
The ARCUSdigma system [3, 4] uses ultrasound transmission to measure and reproduce the jaw movements. Its operation is relatively simple: on the one hand, a bow with four microphones is fixed to the skull and, on the other hand, a support with three pingers is set on the jaw. The intensity of each frequency, as captured by each microphone, determines the corresponding relative distance. These twelve measurements allow the device to interpolate the relative position of the support.

• Freecorder BlueFox:
This contactless system [5, 6] tracks a series of encoded visual patterns. First, to measure the position of the skull, a bow with references is placed on the ears and supported on the bridge of the nose. Another light modular arch is attached to the jaw to capture its movement.
The modularity of the lower arch accelerates and eases the installation and the recording. Using special cameras, the patterns are captured 100 times per second, thus achieving very high resolutions (1/1000 mm).

• JMA Zebris:
The JMA Zebris system [7] has a customized jaw anchor that joins the lower arch by means of magnets. Another upper arch is placed on the skull and the bridge of the nose. Both arches carry electronic sensors that measure relative distances. The system determines the jaw's relative position by calculating the times of flight of ultrasonic pulses.

• Research:
At Kang Cheng University (Taiwan), Jing-Jing Fang and Tai-Hong Kuo, researchers at the Department of Mechanics, developed a method to record jaw movement [8, 9]. In this case, customized stents are generated from dental molds or directly from the teeth, to be used as mountings for trace plates. Cameras record the movement of those plates, and their position is automatically calculated by interpolating images.
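Systems such as ARCUSdigma and JMA Zebris recover position from sets of distance measurements. The geometric core of that step, multilateration, can be sketched in the plane with three beacons; this illustrative function (ours, not vendor code) linearizes the three circle equations and solves the resulting 2x2 system.

```python
import math

def trilaterate2d(p1, p2, p3, d1, d2, d3):
    """Find the point at known distances d1..d3 from beacons p1..p3 in the
    plane, by subtracting the first circle equation from the other two."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1        # beacons must not be collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# beacons at (0,0), (4,0), (0,4); true target at (1,1)
p = trilaterate2d((0.0, 0.0), (4.0, 0.0), (0.0, 4.0),
                  math.sqrt(2), math.sqrt(10), math.sqrt(10))
print(p)  # ≈ (1.0, 1.0)
```

The 3D commercial systems do the analogous computation with four or more receivers, and with twelve redundant measurements a least-squares fit replaces this exact solve.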

All the presented systems measure jaw movement indirectly, by means of tracking plates, modular arches or brackets attached to the teeth. To reproduce the jaw movement, dental molds or dentures must be 3D scanned twice: first separately and second with the added elements fixed. The first scan captures the teeth surface. The second one gives the relative distance from the references to the teeth. Therefore, having the references, the recorded movements and the relative distance, reproducing the mandibular arch is possible. When recording only relative positions, it is also necessary to fix and measure an initial position to establish the relative positions of skull and jaw. This measurement can be carried out at any time during motion capture. From this position, the dental arches can be placed in space. Each system has its own closed software package that allows processing the data. Sometimes it is necessary to take some initial measurement to use as a reference.

3 Methodology

3.1 Software development

In order to develop the software, the choice of the hardware is a preliminary step. In accordance with the low-price premise, an inexpensive commercial motion sensor was selected: the LEAP Motion [10].

This sensor is a small and light USB device that can be attached to mobile systems, arches, brackets, etc. It scans a nearby hemispherical environment, at distances between 7 cm and 1 m, by means of two cameras and three infrared LEDs. Three hundred readings per second are transmitted in real time to the computer. This peripheral can be programmed in many languages (C++, C#, Unity, Objective-C, Java, Python…) and operates under different operating systems (Windows, OS X, Linux), thus meeting the open-system requirement. Since C++ is one of the most widespread programming languages, it was selected for this project, thus facilitating the project's future development. Besides, it can communicate with the hardware without requiring any virtual platform, yielding high-performance programs.

The LEAP Motion device, designed to capture the movements of fingers and
hands, comes with a "tool" configuration to register physical pointers. The
software development kit (SDK) offers a series of public functions and properties
to determine which of the elements captured by the device will be used. The
following functions have been selected:
• TipPosition: pointer position
• Direction: tool’s direction vector
• Float Length: tool’s estimated length (in mm)
• Float Width: estimated thickness (diameter) of the tool (in mm)
• Int count(): number of visible items

This software aims to capture the movement of three cylindrical pointers
attached to the mandibular arch. To facilitate this task, and after analyzing
the device's operation, the cylinders were designed to end in conical surfaces. Besides knowing the
position, it is also necessary to know the time of each of the shots and this time
has to be the same for the three pointers. If one reference is not recorded at a given
time, the data from the other two are purged. In order to quicken the process, the
capture rate is reduced to 30 frames per second (out of the 300 possible) and this
parameter is easily adjustable. The axis vector of each cylinder, as well as the
diameters are also captured. The algorithm to capture the position references was
developed according to these variables. It is possible to use fewer parameters, but
these selected parameters permit the filtering of registered positions if necessary.

Fig. 1. Algorithm flowchart.


A new method to capture the jaw movement 401

As the flowchart shows, once the libraries are inserted, the sensor is initialized.
It starts searching for “Pointables” and then determines how many of them are
visible. If the number is equal to 3 (the number of references), the counter of
identified “Pointables” is reset. The data of each “Pointable” are
successively extracted and then added to a text file until the three elements are
processed. When the three “Pointables” present in the "t1" time lapse have been
processed, their center of gravity and the vector resulting from adding the three
director vectors are calculated and added to the text file. Once this cycle of "t1"
time finishes, a new cycle of capture begins. All captured or calculated data are
stored cyclically in a text file for later filtering and processing.

3.2 Design of physical elements

The auxiliary equipment consists of two parts: the sensor support, and the
reference tool that will be recorded. The LEAP allows capturing two systems
simultaneously: it has been designed to capture the motion of ten fingers and
group them into two hands. However, several of the systems available on the
market simplify the calculations by fixing the sensor to the skull. In this way,
errors are minimized and only the relative movement between dentures must be
measured from an initial position.

• Sensor’s support:
This piece is responsible for fixing the LEAP to the skull. The optimal
distance of the device, its orientation relative to the mandibular arch, and the
fixture that holds the LEAP have all been analyzed. The characteristics of the
sensor's cameras impose a minimum distance of 7 cm. Furthermore, a fork was
inserted in order to allow an angular adjustment for
different physiognomies. After performing some tests, it was found that, when
placed face up, the sensor was more prone to "noise" due to light
pollution. It was therefore decided that the support would be fixed to the forehead with ribbons
and facing down. Interferences due to the body can be easily removed with a black
cloth.

• Reference tool:
The LEAP sensor reads the support fixed to the mandibular arch. To design this
support and to optimize the capturing process, it is necessary to determine what
the sensor reads and how it performs these readings.

The outer finish should not be reflective (noise, false readings) or too dark (no
capture). Besides, translucent or transparent materials cannot be used because the
light emitted by the LEDs scatters and gives errors. Light, matte colors are best
captured. The physical elements were built in white plastic using rapid
prototyping machines [8]. This choice facilitates the redrawing of parts based on
previous results.

The LEAP device was designed to preferably capture cylindrical fingers and
pointers. The analyses in [11] show no significant variation in the robustness of
the captures when the diameter varies between 3 and 10 mm. An average diameter
value of 7 mm was selected. This value ensures rigidity without adding excessive
weight to an element that should be attached to the mandibular arch.

Initially it was decided to place a trihedral formed by cylinders of equal length
on the reference tool. This arrangement resulted in errors because the software
confused the rods with one another. The tool was then modified by allocating different
distances to each of the cylinders, as well as different angles between them. This
introduces another filtering element that strengthens the system.

Fig. 2. Sensor’s support and reference tool.

3.3 Precision analysis

The nominal accuracy of the LEAP is 0.01 mm. However, as with all optical and
mobile methods, it varies depending on environmental conditions, and the extent
of this variation must be determined.

The system to determine the accuracy of the method is very simple. The
designed tool consists of three cylinders and the distance between their ends is
known. These data are compared with the distances between ends obtained by the
captured points. To obtain these data, the LEAP is coupled to a dummy and the
reference tool is fixed to a dental mold. This arrangement allows carrying out
calibrations and all kinds of repetitive motion. Between one test and another, the
program was reset and the sensor was recalibrated.

Based on the text files, the results were imported into Excel and filtered to
eliminate false or repeated readings. The distances between points in each of the
time sections, as well as their means and standard deviations, were
calculated.

Finally, the references were three-dimensionally scanned in different positions
and the results obtained were compared with the LEAP readings. A structured
light GOM ATOS scanner was used for the three-dimensional scanning [12].

Maximum errors occurred in movements parallel to the LEAP's line of sight: the
focusing distance changes and the device must correct it in real time. There are two
types of errors:
- Incorrect determination of the cylinders' end-points along the axis. This error is
constant within each session and proportional for each of the cylinders. Between
sessions, similar and parallel triangles are generated by joining the end-points.
The maximum distance obtained from the theoretical triangle was 0.02369 mm.
- Position errors. Comparing the LEAP with the GOM ATOS, the maximum error was
1.81 mm at the end of a cylinder and perpendicular to it. Applying Thales' theorem,
the error at the point closest to the teeth is 0.2245 mm.

4 Conclusions and future works

The proposed system has been able to capture the movement of the mandibular
arch. However, the accuracy achieved has only reached 0.2 mm at best.
A number of problems worsened the accuracy of the captures and contaminated
the data:

• Measurement variations: it was observed that, without changing the reference
tool, the distances between points varied between tests while the director
vectors remained constant. The error remains constant throughout each test. After
each calibration, the LEAP does not always place the point for each reference
exactly at the cylinder's end; it tends to shift slightly along the axis. An
analysis of the data shows that this variation is proportional to the variations
of the other two references. If a triangle is generated from the captured points,
proportional triangles are created, following Thales' theorem. This homogeneity
makes it possible to avoid the error: if each cylinder's readings converge on one
point and the original triangle is known, the distance between the theoretical
triangle and the captured one can be calculated and compensated.

• Changing sequence of points: the LEAP does not associate a fixed number with
each detected pointer; identification varies depending on the order in which they
are registered. This is a known bug that the developers hope to correct in
upcoming SDKs. Anticipating this error, we work with parameters such as
"Float length" and "Float width" to filter and sort the
results before calculating the distances.

• Interferences: sometimes the environment produces false readings, giving
more than three points. In these cases, the extra captured data allow filtering to
remove incorrect readings.
Although the movement made by the references was captured, the system still
needs improvement to achieve greater accuracy. We believe that the
measurement-variation error is responsible for the decrease in accuracy, given
the proportionality between them. Moreover, the point-filtering system, so far
manual, should be automated using the captured data.

Acknowledgments The authors of this paper want to thank the Eusko Jaurlaritza - Gobierno
Vasco SAIOTEK 2013 (SAI13/355) for financing this research project.

References

1. Solaberrieta, E., Mínguez, R., Barrenetxea, L., Otegi, J.R., Szentpétery, A. Comparison of
the accuracy of a 3-dimensional virtual method and the conventional method for transferring
the maxillary cast to a virtual articulator. Journal of Prosthetic Dentistry. Volume 113, Issue
3, 1 March 2015, Pages 191-197.
2. Solaberrieta, E., Otegi, J.R., Goicoechea, N., Brizuela, A., Pradies, G. Comparison of a
conventional and virtual occlusal record. Journal of Prosthetic Dentistry. Volume 114, Issue
1, 1 July 2015, Article number 1650, Pages 92-97.
3. ArcusDigma: (April 2016) http://www.kavousa.com/US/Other-Products/Laboratory-
Products/ARCUSdigma.aspx?sstr=1
4. Cardenas Martos, A et al. Registro de la dinámica témporomandibular mediante ultrasonidos
con ARCUSdigma de KaVo. Av Odontoestomatol [online]. 2003, vol.19, n.3 [citado 2016-
04-22], pp.131-139. ISSN 0213-1285.
5. Freecorder BlueFox: (April 2016) http://www.freecorder.de/
6. Freecorder BlueFox specs: (April 2016) http://www.drdougerickson.com/prosthodontic-
techonology-duluth/freecorder_min_en.pdf
7. JMA Zebris: (April 2016) http://www.zebris.de/english/zahnmedizin/zahnmedizin-
kiefergelenkanalyse.php?navanchor=10017
8. Fang, Jing-Jing; Kuo, Tai-Hong. Modelling of mandibular movement. Computers in Biology
and Medicine , November–December 2008. Volume 38 , Issue 11 , 1152 - 1162
9. Fang, Jing-Jing; Kuo, Tai-Hong. Tracked motion-based dental occlusion surface estimation
for crown restoration. Computer-Aided Design. Volume 41, Issue 4, April 2009, Pages 315–
323.
10. LEAP Motion: (April 2016) https://www.leapmotion.com/
11. Daniel Bachmann, Frank Weichert , Bartholomäus Rudak, Denis Fisseler. Analysis of the
Accuracy and Robustness of the Leap Motion Controller. Sensors 2013, 13(5), 6380-6393
12. GOM ATOS: (April 2016) http://www.gom.com/metrology-systems/system-overview/atos-
compact-scan.html
Computer Aided Engineering of Auxiliary
Elements for Enhanced Orthodontic Appliances

Roberto SAVIGNANO1*, Sandro BARONE1, Alessandro PAOLI1 and Armando Viviano RAZIONALE1
1
Department of Civil and Industrial Engineering, University of Pisa, Pisa, Italy.
* Corresponding author. Tel.: +39-050-221-8000 ; fax: +39-050-221-8065. E-mail address:
roberto.savignano@for.unipi.it

Abstract Orthodontic treatments based on removable thermoplastic aligners are
becoming quite common in clinical practice. However, there is no technical litera-
ture explaining how the loads are transferred from the thermoformed aligner to the
patient dentition. Moreover, the role of auxiliary elements used in combination
with the aligner, such as attachments and divots, still needs to be thoroughly ex-
plained. This paper is focused on the development of a Finite Element (FE) model
to be used in the design process of shape attributes of orthodontic aligners. Geo-
metrical models of a maxillary dental arch, including crown and root shapes, were
created by combining optical scanning and Cone Beam Computed Tomography
(CBCT). Finite Element Analysis (FEA) was used to compare five different align-
er’s configurations for the same tooth orthodontic tipping movement (rotation
around the tooth’s center of resistance). The different scenarios were analyzed by
comparing the moment along the mesio-distal direction of the tooth and the result-
ing moment-to-force ratio (M:F) delivered to the tooth on the plane of interest.
Results evidenced the influence of the aligner’s configuration on the effectiveness
of the planned orthodontic movement.

Keywords: Orthodontic tooth movement; orthodontic aligner; anatomical modelling; numerical analysis.

1 Introduction

Orthodontics is the branch of dentistry specialized in the correction of
malocclusions by using different kinds of appliances. Among them, removable thermoplastic
aligners (RTAs) are the latest innovation, even if until the last decade they repre-
sented only a small part of the overall orthodontic treatments due to the highly

© Springer International Publishing AG 2017 405


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_41
406 R. Savignano et al.

specialized and manual processes required [1]. The recent diffusion of CAD/CAE
methodologies allowed for an industrial approach for both design and manufactur-
ing of RTAs, thus increasing their use in common clinical practice. Removable
aligners, made of transparent material and then almost invisible, raised a growing
interest as aesthetic alternatives to conventional fixed devices, especially for adult
treatments. The force-moment system delivered to the target tooth is generated by
the difference between template and dentition geometry since each aligner is
shaped slightly differently from the actual target tooth position within the mouth. A set
of different aligners, sequentially worn by the patient, is required to achieve the
final desired outcome since each of them is designed to perform only a limited or-
thodontic movement. The shape of each aligner is designed by a technician
through CAD software tools starting from the original tooth position in the mouth,
obtained by a digitalization process, and knowing the desired target tooth place-
ment at the end of the treatment.
Even if orthodontic treatments based on RTAs are becoming quite common in
clinical practice, there is no technical literature describing how thermoformed
aligners deliver forces and moments to tooth surfaces. Moreover, RTA treatments
are usually associated with the use of auxiliary elements, such as attachments and/or
altered aligner geometries (divots), to improve the treatment effectiveness. How-
ever, current literature is mainly based on reporting clinical outcomes without
providing thorough scientific description of their efficacy. Some attempts to eval-
uate loads delivered by the aligner have been made by using multi-axis
force/torque transducers for different orthodontic in-vitro scenarios composed of
replicated polymeric dental arches [2, 3]. These approaches, however, require the
manufacturing of a different resin replica for each different RTA attribute to be
analyzed, thus burdening the RTA optimization in terms of both time and costs.
Moreover, material properties of the resin models are different from those of den-
tal structures and there is no distinction between the different anatomical tissues
(Bone-Ligaments-Tooth).
In the orthodontic research field, the finite element method (FEM) proves to be
an effective non-invasive tool to provide quantitative and detailed data on the
physiological tissue reactions occurring during treatments [4-6]. In particular, Fi-
nite Element Analysis (FEA) has been used in dentistry since the 1970s [7], as it
is capable of evaluating not only the force system delivered to the tooth, but also
stress and strains induced to the surrounding structures (periodontal ligaments and
bone).
This paper aims at analyzing the influence of auxiliary elements' features on
the force-moment system delivered to a central incisor by using a finite ele-
ment model. 3D anatomies of a maxillary dental arch, including crown and root
shapes, were modelled by combining optical scanning and Cone Beam Computed
Tomography (CBCT). FEA was used to compare five different aligner con-
figurations for the same tooth orthodontic tipping movement (rotation around the
tooth's center of resistance).
Computer Aided Engineering of Auxiliary … 407

2 Materials and methods

2.1 Geometrical modelling

Dental data, captured by independent imaging sensors, were fused to create multi-
body orthodontic models composed of teeth, oral soft tissues and alveolar bone
structure. The methodology is based on integrating CBCT scanning and surface
structured light scanning. An optical scanner was used to reconstruct tooth crowns
and soft tissues (visible surfaces) through the digitalization of plaster casts. Tooth
roots were obtained by segmenting CBCT data sets through the anatomy-
driven segmentation methodology described in [8]. The 3D individual dental tis-
sues obtained by the optical scanner and the CBCT sensor were fused within mul-
ti-body orthodontic models with minimum user interaction. A segment of six
frontal maxillary teeth was selected from the whole maxillary arch. The periodon-
tal ligament (PDL), which is the soft biological tissue located between the tooth
and the alveolar bone, has a variable thickness, with a mean value of 0.2 mm [9].
For this reason, in this paper, it was modelled as a uniform 0.2 mm thick layer be-
tween each tooth and jawbone. The RTA was supposed to have a 0.7 mm constant
thickness, originating from the 0.75 mm thick disk before the thermoforming pro-
cess [10], and was modelled by exploiting CAD tools in order to define a layer
completely congruent with the tooth crown surfaces [4]. The obtained 3D anatom-
ical geometries (Figure 1) were auto patched to create trimmed NURBS surfaces,
finally converted into "IGES" models.


Fig. 1. Geometrical representation of the modelled orthodontic anatomies.

The tooth axes were defined according to the Local Reference System proposed
in [11]. The z-axis is associated with the lowest moment of inertia of the
geometrical model and is obtained through Principal Component Analysis of the
polyhedral surface, considering the masses associated with the barycenters of the
triangles of the polyhedron, which are proportional to their areas. Two sections
of the tooth were created and analyzed to identify the positive direction of the
z-axis. The tooth was sliced by two different planes perpendicular to the z-axis
and 3 mm from the tooth extremities. The section showing the worst approximation
of a circle is considered to be the upper one (Γc). The mesiodistal (y-axis) and
the labiolingual (x-axis) axes are orthogonal to the z-axis and are obtained by
analyzing the principal components of inertia of the planar section Γc.
Attachments and divots were created through Boolean operations between
tooth, RTA and prismatic or spherical volumes respectively, as shown in Figure 2.
They were both located at the center of the tooth crown. The attachment geome-
tries were created on the tooth surface, having sizes on the x, y and z directions of
1×3×1.5 mm for the horizontal attachment and 1×1.5×3 mm for the vertical at-
tachment. The divot spherical geometries, having a radius of 1 mm, were created
on the external surface of the RTA. Therefore, an initial penetration of 0.3 mm was
added to the model.

Fig. 2. Divot and attachment creation workflow.

2.2 Finite element model

Data were imported into the finite element modeler (Ansys® 14). All bodies
were meshed with solid tetrahedral elements resulting in approximately 220000
nodes and 134000 elements. The mesh size varied slightly between different sce-
narios due to introduction of the auxiliary elements mesh. The mechanical re-
sponse of cortical bone, teeth, attachments and RTA was described by using a lin-
ear elastic constitutive model (Table 1). Dental tissue was modelled as a uniform
body, without taking into account the division into dentin, enamel and pulp [9]. In
the technical literature, different biomechanical models have been proposed to
simulate the PDL properties [12]. The linear elastic model has been demonstrated
to be appropriate for simulating the PDL behavior during the initial phase of the
orthodontic movement, when the maximum PDL strain is lower than 7.5% [13]. However,
this requirement was not satisfied by the orthodontic movement simulated in this
paper. For this reason, the volumetric finite strain viscoelastic model was imple-
mented as proposed by Wang et al. [14]. The removable appliances were simulat-
ed as made of a polyethylene terephthalate glycol-modified (PETG) thermoplastic
disc, whose mechanical properties were evaluated through a set of tensile tests
carried out under different experimental conditions. Auxiliary attachments, which
are made of dental composite material, were supposed to have the same tooth’s
material properties.

Table 1. Material properties used for the numerical simulations.

             E (MPa)   Poisson's ratio
Tooth        20000     0.3
Bone         13800     0.3
RTA          1400      0.3
Attachment   20000     0.3

The evaluation of the effectiveness of the loads delivered by an orthodontic
device to the dentition can be a challenging task: a complex load system can be
expected to act simultaneously in all three spatial planes. The relationship
between the 3D tooth movement and the delivered loads can be analyzed by
comparing moment-to-force ratios (M:F) on the plane of interest [15].
The force system is measured at the tooth center of resistance (CRES). The concept
of the center of resistance of a tooth is analogous to the concept of the center of
mass except for the fact that it is not related to a free body. It is rather related to a
body with constraints, as the tooth in the alveolar complex. If a force is applied on
the CRES the tooth shows a pure translation [16]. In the three-dimensional space,
each M:F is defined by combinations of the forces contained in the plane and mo-
ments perpendicular to it. Moreover, the absolute values of the desired moment or
force need to be taken into account. The M:F parameter provides a description of
the quality of the force system: a higher M:F value measured at the expected
Center of Rotation (CROT) means that the resulting CROT is closer to the expected
one, while the absolute values of M or F relate to the magnitude of the force
system [17]. Three simulations were run by applying a moment of 1.5 Nmm
parallel to each reference tooth axis in order to find the CRES [16].
Teeth and ligaments were relatively constrained by a bonded contact, which
only allows small sliding movements between joined nodes. The same constraint
was used to join bone and ligaments. The contact surface between teeth and RTA
was set as frictionless. The mesial and distal surfaces of the bone were fixed in all
directions. The creation of an initial penetration between the target tooth and the
aligner is necessary in order to generate the loading condition. For this reason, the
target tooth was rotated around the y-axis by 0.3°. The resulting movement is called
bucco-lingual tipping. The solver determined the equilibrium between the bodies,
thus removing the initial geometrical penetration. The final allowed penetration
was set at 0.01 mm, which was appropriate considering that the initial penetration
on the target tooth ranged from 0.09 mm to 0.36 mm (Figure 3).

Fig. 3. (a) CRES and rotation imposed to the target tooth in order to create the initial penetration
for an aligner with a single divot (b).

3 Results

Five different aligner configurations were considered for the numerical
simulations, as shown in Figure 4: an aligner without auxiliary elements
(standard), an aligner with a single or a double divot geometry, and an aligner
with a vertical or a horizontal attachment. The main parameters analyzed by FEA
were:
• maximum tooth displacement;
• force system delivered to the tooth, measured at the CRES.
Figure 4 shows the displacement maps of the target tooth obtained for each
scenario, while Figure 5 summarizes the force system delivered by the appliance
for all the aligner’s configurations. Table 2 reports the resulting force systems and
the M:F values.

Table 2. Force system measured at the CRES for each scenario.

                        My (Nmm)   My/Fx (mm)   My/Fz (mm)
Standard                24         12           -26.7
Divot                   71.3       9.8          -89.1
2 divots                77.7       9.6          -70.6
Vertical Attachment     37.4       14.4         -37.4
Horizontal Attachment   42.8       15.9         -42.8


Fig. 4. The five different aligner’s configurations used for the numerical simulations along with
the displacement maps for the target tooth relative to each scenario.

Fig. 5. Summary of the force system elicited by the aligner to the target tooth in the five different
configurations.

The quality of the force system, as attested by the M:F parameter, increases when
using an attachment, independently of its orientation (vertical or horizontal),
while it decreases when using a divot. The most interesting values are those
associated with the My/Fx parameter, since My/Fz presents high values even with
the standard aligner configuration. The distance between the expected CROT and the
actual CROT is defined by the relation D = k/(M:F), where k depends on the specific
tooth morphology and the force system; values greater than 26 (Table 2) can
all be considered adequate in order to obtain the expected movement [17]. Figure
6 reports an example of the inverse relationship between M:F and D for a generic
tooth having k = 10.

Fig. 6. Example of the inverse relationship between M:F and D for a generic tooth (k=10).

4 Discussion

This paper aims at demonstrating how CAD/CAE techniques could be usefully
applied to study orthodontic treatments performed by transparent removable
aligners. In particular, the design and optimization of auxiliary elements for a
bucco-lingual tipping of a maxillary central incisor has been analyzed. The ob-
tained results demonstrate that auxiliary elements can improve the treatment effec-
tiveness. Figure 4 shows that the use of a single divot causes the largest tooth
movement, with the tooth apex undergoing a 0.075 mm displacement. In particular,
a moment along the y-axis of 71.3 Nmm is obtained, about 2 times higher than the
one obtained by using an attachment and about 3 times higher than the one
obtained with a standard RTA (Table 2). This effect can be as-
cribed to the increased initial penetration, which results in a higher load delivered
to the target tooth. The configuration with a double divot produces a better result
with respect to the single divot geometry. The My value increases from 71.3 Nmm
to 77.7 Nmm. The configurations with an attachment provide a more accurate
movement in all the scenarios, as attested by the My/Fx values. The horizontally
disposed attachment provides a higher moment value with respect to the vertical
one, due to the greater initial contact area.
The attachment is placed by the dentist onto the patient's dentition through a
template designed by the orthodontist; its shape and position are therefore highly
precise. The divot geometry, instead, is manually created by the dentist with
tongs. For this reason, it is possible that the actual divot is not congruent with
the requirements prescribed by the technician. The success and diffusion of RTAs
within the orthodontic field mainly rely on the aesthetic advantage compared with
classic fixed orthodontic appliances. Even if the attachment color is usually similar to that
of the patient’s dentition, its size and location can undermine the aligner invisibil-
ity. Therefore, compared to the divot, the attachment is less desirable by a patient
looking for an almost invisible appliance.

5 Conclusions

CAD/CAE approaches can improve the knowledge about tooth-appliance interaction
in orthodontics, thus allowing an enhancement of the effectiveness of cus-
tomized orthodontic appliances. In particular, the use of auxiliary elements repre-
sents the most challenging issue of aligner-based treatments. In this regard, some
conclusions can be drawn:
• Auxiliary elements can improve both the amount and the quality of the load
delivered to the tooth.
• The use of a divot provides a higher load to the target tooth, but with a
lower accuracy.
• The use of two horizontally disposed divots generates an orthodontic
movement slightly better than a single divot.
• The use of attachments increases the movement accuracy, which is defined by
the M:F parameter.
• The horizontal attachment slightly outperforms the vertical one with regard
to the amount of My delivered to the tooth.
Further efforts should be concentrated on the analysis of multiple movements
for different teeth, with the aim of obtaining generic rules for the selection of
the most appropriate auxiliary element for each specific condition.

References

1.Kesling H.D. Coordinating the predetermined pattern and tooth positioner with conventional
treatment. American journal of orthodontics and oral surgery, 1946, 32, pp. 285-293.
2.Hahn W., Engelke B., Jung K., Dathe H., Fialka-Fricke J., Kubein-Meesenburg D., and Sadat-
Khonsari R. Initial forces and moments delivered by removable thermoplastic appliances dur-
ing rotation of an upper central incisor. Angle Orthodontist, 2010, 80(2), pp. 239-246.
3.Elkholy F., Panchaphongsaphak T., Kilic F., Schmidt F., and Lapatki B.G. Forces and mo-
ments delivered by PET-G aligners to an upper central incisor for labial and palatal transla-
tion. Journal of orofacial orthopedics = Fortschritte der Kieferorthopadie : Organ/official
journal Deutsche Gesellschaft fur Kieferorthopadie, 2015, 76(6), pp. 460-475.
4.Barone S., Paoli A., Razionale A.V., and Savignano R. Computer aided modelling to simulate
the biomechanical behaviour of customised orthodontic removable appliances. International
Journal on Interactive Design and Manufacturing (IJIDeM), 2014, pp. 1-14. doi:
10.1007/s12008-014-0246-z.

5.Martorelli M., Gerbino S., Giudice M., and Ausiello P. A comparison between customized
clear and removable orthodontic appliances manufactured using RP and CNC techniques.
Dental Materials, 2013, 29(2), pp. E1-E10.
6.Barone S., Paoli A., Razionale A.V., and Savignano R. Design of customised orthodontic
devices by digital imaging and CAD/FEM modelling. In BIOIMAGING 2016 - 3rd Interna-
tional Conference on Bioimaging, Proceedings; Part of 9th International Joint Conference on
Biomedical Engineering Systems and Technologies, BIOSTEC 2016, 2016, pp. 44-54.
7.Farah J.W., Craig R.G., and Sikarskie D.L. Photoelastic and finite element stress analysis of a
restored axisymmetric first molar. Journal of Biomechanics, 1973, 6(5), pp. 511-520.
8.Barone S., Paoli A., and Razionale A.V. CT segmentation of dental shapes by anatomy-driven
reformation imaging and B-spline modelling. International Journal for Numerical Methods in
Biomedical Engineering, 2016, 32(6), e02747, doi: 10.1002/cnm.2747.
9.Dorow C., Schneider J., and Sander F.G. Finite element simulation of in-vivo tooth mobility in
comparison with experimental results. Journal of Mechanics in Medicine and Biology, 2003,
03(01), pp. 79-94.
10.Ryokawa H., Miyazaki Y., Fujishima A., Miyazaki T., and Maki K. The mechanical proper-
ties of dental thermoplastic materials in a simulated intraoral environment. Orthodontic
Waves, 2006, 65(2), pp. 64-72.
11.Di Angelo L., Di Stefano P., Bernardi S., and Continenza M.A. A new computational method
for automatic dental measurement: The case of maxillary central incisor. Comput Biol Med,
2016, 70, pp. 202-209.
12.Fill T.S., Toogood R.W., Major P.W., and Carey J.P. Analytically determined mechanical
properties of, and models for the periodontal ligament: Critical review of literature. Journal of
Biomechanics, 2012, 45(1), pp. 9-16.
13.Poppe M., Bourauel C., and Jager A. Determination of the elasticity parameters of the human
periodontal ligament and the location of the center of resistance of single-rooted teeth a study
of autopsy specimens and their conversion into finite element models. Journal of orofacial or-
thopedics = Fortschritte der Kieferorthopadie : Organ/official journal Deutsche Gesellschaft
fur Kieferorthopadie, 2002, 63(5), pp. 358-370.
14.Su M.Z., Chang H.H., Chiang Y.C., Cheng J.H., Fuh L.J., Wang C.Y., and Lin C.P. Modeling
viscoelastic behavior of periodontal ligament with nonlinear finite element analysis. Journal
of Dental Sciences, 2013, 8(2), pp. 121-128.
15.Smith R.J. and Burstone C.J. Mechanics of tooth movement. American Journal of Orthodon-
tics and Dentofacial Orthopedics, 1984, 85(4), pp. 294-307.
16.Viecilli R.F., Budiman A., and Burstone C.J. Axes of resistance for tooth movement: does the
center of resistance exist in 3-dimensional space? American Journal of Orthodontics and
Dentofacial Orthopedics, 2013, 143(2), pp. 163-172.
17.Savignano R., Viecilli R.F., Paoli A., Razionale A.V., and Barone S. Nonlinear Dependancy
of Tooth Movement on Force System Directions. American Journal of Orthodontics and
Dentofacial Orthopedics, 2016, 149(6), pp. 838-846.
Finite Element Analysis of TMJ Disks Stress
Level due to Orthodontic Eruption Guidance
Appliances

Paolo NERI1*, Sandro BARONE1, Alessandro PAOLI1 and Armando RAZIONALE1

1 Department of Civil and Industrial Engineering – DICI, University of Pisa,
Largo L. Lazzarino 2, 56122 Pisa, Italy
* Corresponding author. Tel.: +39-050-221-8019; fax: +39-050-221-0604. E-mail address: paolo.neri@dici.unipi.it

Abstract In the present work, the effect of Eruption Guidance Appliances
(EGAs) on TemporoMandibular Joint (TMJ) disks stress level is studied. EGAs
are orthodontic appliances used for early orthodontic treatments in order to pre-
vent malocclusion problems. Commercially available EGAs are usually produced
by using standard sizes. For this reason, they are not able to meet all the specific
needs of each patient. In particular, EGAs are symmetric devices, while patient
arches generally present asymmetric conditions. Thus, uneven stress levels may
occur in the TMJ disks, reducing patient comfort and potentially damaging the
most stressed disk. On the other hand, a customized EGA could overcome these issues,
improving the treatment effectiveness. In this preliminary study, a Finite Element
(FE) model was developed to investigate the effects of a symmetric EGA when
applied to an asymmetric mouth. Different misalignment conditions were studied
to compare the TMJ disks stress levels and to analyze the limitations of a symmet-
ric EGA. The developed FE model can be used to design patient-specific EGAs,
which could be manufactured by exploiting non-conventional techniques such as
3D printing.

Keywords: Eruption Guidance Appliance (EGA); TMJ disorders; Patient-specific orthodontic appliance; TMJ disks stress; FE model.

1 Introduction

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_42

Mandible positioning with respect to the maxilla has a great influence on the
overall patient health [1]. When misalignments or other geometrical defects are
present, corrective actions must be taken. Eruption Guidance Appliances (EGAs)
represent a widely used type of orthodontic appliance, which gradually recovers
the mandible position to a healthy condition. Its effectiveness is widely
documented in the literature, especially if the treatment is performed during
childhood. A silicone rubber
appliance is usually produced by a molding process. However, to reduce
manufacturing costs, only standard sizes are available, corresponding to different
misalignment grades or malocclusion situations. This implies that patient-specific
issues (e.g. mandible/maxilla asymmetries or teeth deformities) cannot be taken
into account when choosing an EGA, leading to non-optimized solutions that
lower the appliance efficiency. For this reason, the design of a patient-specific
appliance could improve the treatment effectiveness. A better fit between EGA and
tooth geometries could be obtained, thus reducing the stress intensity at
TemporoMandibular Joint (TMJ) level. Clearly, the conventional molding process
does not allow the manufacturing of an economically sustainable customized ap-
pliance. 3D printing techniques could rather be used, thus allowing the design of
any shape fitting the specific patient needs. However, a preliminary step is re-
quired to verify the advantages of a customized EGA with respect to standard
symmetric appliances. Several papers estimating the forces acting on the
condyles due to the bite force are available in the literature, e.g. [2]. These
papers, however, are mainly based on highly simplified models. Moreover, such
analytical approaches rely on experimental measurements of the bite force,
without taking any orthodontic appliance into account. The Finite Element (FE)
method has been successfully applied to biomedical analyses [3], and some FE
simulations of TMJ behavior are also reported in the literature [4]. However, few
papers include the EGA behavior in the analysis [5].
In the present paper, a FE model was developed to study the effect of a sym-
metric EGA applied to a patient having different malocclusion problems. In par-
ticular, the stress produced on the TMJ disks was taken into account in the case of
II class malocclusion, i.e. the lower jaw in occlusion is positioned further back in
relation to the upper jaw with respect to the ideal antero-posterior occlusion rela-
tionship.
A healthy maxilla and mandible geometry was firstly analyzed, in order to have
a reference value for the TMJ disks stress levels. Then, different malocclusion
levels were simulated by geometrically misaligning the mandible with respect to
the maxilla. This approach made it possible to study the effects of different
misalignment conditions and to evaluate the stress intensification occurring on the
condyle disks when the symmetric appliance is used on an asymmetric mouth.

2 Finite element model description

The FE model was aimed at estimating the stress intensity values at condyle
disks level corresponding to different mandible misalignment conditions. All the
simulations were performed by using ANSYS Workbench software. The bodies
included in the model were temporal bones (articular fossa), condyles, mandibular
Finite Element Analysis of TMJ Disks Stress … 417

teeth, maxillary teeth, TMJ disks and the EGA. The material of the TMJ disks was
assumed to be linear, homogeneous and isotropic, according to [2], with a
Young’s modulus of 5 MPa, while EGA Young’s modulus was assumed to be 3
MPa. The Poisson’s ratio was set to 0.3 for both materials. Both EGA and TMJ
disks are composed by low stiffness materials, so the large displacement option
was considered for the analysis. This hypothesis requires a non-linear analysis,
with longer computational time, but it also allows the achievement of more realis-
tic results. On the other hand, human bones can be considered rigid bodies since
their Young’s modulus is in the range 1-30 GPa [6], which can be considered
much greater than the silicone rubber and TMJ disk Young’s modulus values. For this
reason, they were modeled as surface bodies instead of volume bodies, in order to
reduce their number of elements and thus the computational time. These bodies
only provide boundary and loading conditions to the more compliant parts, thus
the strain and stress solution in their domain is of low interest in the present work.
A preliminary sensitivity analysis has shown that the shell thickness does not
influence the results if a value greater than 1 mm is chosen. For this reason, a value
of 2 mm was set for all the bone bodies.
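The rigid-body idealization of the bones can be checked with a quick order-of-magnitude comparison of the moduli quoted above (a minimal sketch; the values are those stated in the text):

```python
# Young's moduli quoted in the text (converted to Pa).
E_bone_min = 1e9   # lower bound of the 1-30 GPa bone range [6]
E_disk = 5e6       # TMJ disk
E_ega = 3e6        # EGA silicone rubber

# Even the most compliant bone is two to three orders of magnitude stiffer
# than the disk and the EGA, supporting the rigid (surface body) idealization.
print(E_bone_min / E_disk, E_bone_min / E_ega)
```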

2.1 Geometry definition

Geometrical information about condyles and temporal bones was obtained by
segmenting Cone Beam Computed Tomography (CBCT) data with 3D Slicer, an
open-source software for medical image analysis [7]. TMJ disks were modeled by
filling the empty space between condyle and articular fossa with an ellipsoid. Ma-
terial penetration was removed by using Boolean operations on the studied geome-
tries, i.e. by subtracting bone geometries from the ellipsoid. Finally, fillets were
added to the obtained disk geometry, in order to avoid fictitious stress
intensification in numerical results. Figure 1(a) shows the final obtained disk
geometry and its location within the bone structure.
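The ellipsoid-minus-bone Boolean operation can be sketched on a voxel grid (a NumPy-only sketch; the ellipsoid semi-axes and the two spheres standing in for condyle and articular fossa are arbitrary placeholders, not the segmented anatomy):

```python
import numpy as np

# Voxel grid over the region of the candidate disk.
g = np.linspace(-1.5, 1.5, 64)
x, y, z = np.meshgrid(g, g, g, indexing="ij")

# Candidate disk: an ellipsoid filling the condyle-fossa gap (placeholder semi-axes).
ellipsoid = (x / 1.2) ** 2 + (y / 1.0) ** 2 + (z / 0.5) ** 2 <= 1.0

# Placeholder "bone" bodies: one sphere below (condyle) and one above (fossa).
condyle = x ** 2 + y ** 2 + (z + 0.8) ** 2 <= 0.6 ** 2
fossa = x ** 2 + y ** 2 + (z - 0.8) ** 2 <= 0.6 ** 2

# Boolean subtraction removes the material penetration, as in the CAD workflow.
disk = ellipsoid & ~condyle & ~fossa
print(disk.sum(), ellipsoid.sum())  # some ellipsoid voxels are cut away
```

In the actual workflow the same subtraction is performed on B-Rep solids in the CAD system, followed by filleting of the sharp edges left by the cut.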

Fig. 1. CAD models: (a) TMJ disk geometry and (b) EGA geometry, with labial shield, lingual shield and occlusal bite.
The virtual model design of the EGA was inspired by the standard available
physical model used for the correction of II class malocclusion. The model is
composed of three main geometric elements: occlusal bite, labial shield and lin-
gual shield (Figure 1(b)). The overall size is parameterized on the standard size of
the child’s arches.
In order to simplify the analysis, just one side of the model was created by us-
ing the acquired data and Boolean operations, while the other side was added by
symmetry. Mandibular and maxillary teeth anatomies were also available as sym-
metric geometries representing a healthy condition. This circumstance allowed
performing preliminary simulations without any misalignment at all, thus provid-
ing a reference value for TMJ disks stress levels. Furthermore, this symmetric ge-
ometry allowed the complete control of the desired load. The mandible misalign-
ment and asymmetry was then simulated by geometrically displacing the sub-
assembly composed of mandibular teeth, maxillary teeth and the EGA.

2.2 Connections and contact pairs

The interaction between the different bodies was simulated by using rigid joints
and contact pairs. In particular, the connection between the two condyles and the
mandibular teeth was ensured by using a fixed joint, connecting all their degrees
of freedom. This allowed the distribution of the load introduced by the biting force
(see below) on both teeth and condyles. Several contact pairs were then defined to
connect the simulated bodies. Frictionless contact pairs were defined between
TMJ disks and temporal bones assuring the load transmission along the normal di-
rection and leaving the tangential direction unconstrained. In this way, a dis-
placement of the disks in the articular fossa is allowed as a consequence of the
misalignment between mandible and maxilla teeth. No-separation contact pairs
were defined between the condyles and the TMJ disks, thus constraining both
normal and tangential directions. This choice prevents the relative displacement
between condyles and TMJ disks, thus reducing convergence problems. These
contact pairs can still be handled by a linear solver, reducing the computational effort.
Finally, several frictional contact pairs were defined between teeth (both man-
dibular and maxillary) and the EGA, thus increasing the computational time since
a non-linear solution process is required [8]. However, a better reproduction of the
real condition is obtained since the friction coefficient between silicone and teeth
is generally not negligible. A sensitivity analysis was performed, with a friction
coefficient value ranging from 0.1 to 0.3. A variation of 25% of the maximum
Von Mises stress in the disk was found. However, a constant value of 0.2 was
considered for all the performed simulations, since the present study is more
aimed at comparing different configurations rather than obtaining absolute results.
2.3 Boundary and loading conditions

Model boundary conditions were used to set body constraints and to impose the
desired misalignment configurations. A fixed constraint was applied to the tem-
poral bones for all the performed simulations. Maxillary teeth boundary conditions
were used instead to set the loading condition. Two different misaligned
placements of mandibular and maxillary teeth were considered in this work:
displacement along the Y direction and rotation about the Y-axis (Figure 2). The
reference system was defined with the Z-axis perpendicular to the occlusal plane and
the Y-axis parallel to the occlusal plane and approximately congruent with the
palato-buccal direction of the anterior teeth. Mandibular teeth were rigidly rotated
with respect to the ideal healthy condition in order to represent the rotational misa-
lignment. This circumstance determined an asymmetric contact during the solution
process causing an uneven contact pressure on the EGA. The whole sub-assembly
composed of mandibular teeth, maxillary teeth and EGA was rigidly moved by the
desired displacement along the Y direction with respect to condyles, temporal
bones and TMJ disks in the CAD model. This choice allowed representing the
translational misalignment while maintaining the correct relative positioning
between teeth and EGA. The effect of this misalignment was then introduced in the
simulation by imposing a fixed displacement along the Y direction to the maxil-
lary teeth only. In this way, the maxilla was forced to move to the original posi-
tion, thus applying a load to the mandibular teeth through the EGA. The fixed
joint applied to the mandibular teeth then allowed the load transmission to the
condyles, and consequently to the TMJ disks.

Fig. 2. Model view with applied load and boundary conditions (fixed support, rigid joint, Y misalignment, Y rotation and biting force).


Finally, the biting force was introduced in the model as a loading condition.
This was practically obtained by applying a remote force to both the condyles. The
remote force application point was chosen to reflect the location of the biting
muscles insertion on the mandible (Figure 2). The force value was gradually in-
creased from 0 N up to a maximum value of 30 N, which was then kept constant
in all the performed simulations in order to compare the obtained results.
Non-linear phenomena are introduced in the model by the large displacement
hypothesis and the frictional contact pairs. Thus, it was not possible to directly scale the
TMJ disks stress level with respect to the applied biting force, so that the compari-
son had to be performed at equivalent biting force.
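Why the response cannot simply be scaled can be illustrated with a one-degree-of-freedom sketch (the hardening-spring coefficients are arbitrary and unrelated to the FE model): with a cubic term, the displacement at 30 N is not 30 times the displacement at 1 N.

```python
def displacement(force, k1=1.0, k3=0.5, iters=50):
    """Solve the hardening-spring equation k1*x + k3*x**3 = force with Newton."""
    x = 0.0
    for _ in range(iters):
        residual = k1 * x + k3 * x ** 3 - force
        x -= residual / (k1 + 3.0 * k3 * x ** 2)
    return x

x1 = displacement(1.0)    # response at 1 N
x30 = displacement(30.0)  # response at 30 N

# A linear model would give x30 == 30 * x1; the cubic term breaks this scaling,
# which is why configurations must be compared at equal biting force.
print(x30, 30.0 * x1)
```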

3 Simulated misalignment configurations

Four different teeth configurations were tested in order to compare the results.
Firstly, the situation of an ideal, perfectly healthy mouth was considered, in order to
have a reference value for disks stress level. No misalignment was introduced in
this first analysis, obtaining a substantially symmetric solution, coherent with the
model symmetry. Then, a symmetric misalignment was tested by introducing 4
mm displacement between maxillary and mandibular teeth along the Y direction.
The third simulation was performed by applying an asymmetric misalignment
consisting of a 2° rotation of the mandible with respect to the maxilla around the
Y-axis. No displacement along the Y direction was added in this simulation. Final-
ly, the two misalignment conditions were combined in the fourth simulation. Ta-
ble 1 summarizes the four misalignment conditions considered in the performed
analyses. The misalignment values were chosen referring to an actual patient mal-
occlusion case, in order to represent a realistic situation.

Table 1. Misalignment conditions for the studied simulations.

Simulation N.    Y Displacement (mm)    Y Rotation (°)
1                0                      0
2                4                      0
3                0                      2
4                4                      2
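The rigid misalignment of the teeth/EGA sub-assembly can be sketched as a rotation about the Y-axis followed by a translation along Y (a NumPy sketch; the sample points are placeholders, while the 4 mm and 2° values are those of simulation 4):

```python
import numpy as np

def misalign(points, dy_mm=4.0, ry_deg=2.0):
    """Rigidly rotate about the Y-axis, then translate along Y (simulation 4)."""
    a = np.radians(ry_deg)
    # Rotation about Y: the X and Z components mix, Y is unchanged.
    R = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(a), 0.0, np.cos(a)]])
    return points @ R.T + np.array([0.0, dy_mm, 0.0])

# Placeholder points standing in for the teeth/EGA sub-assembly geometry.
pts = np.array([[10.0, 0.0, 0.0],
                [0.0, 5.0, 0.0]])
moved = misalign(pts)
print(np.round(moved, 3))
```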

It is worth noting that in a non-linear analysis the loading history is important
in determining the results. The biting force was kept constant (1 N) until the Y
displacement, when present, was fully recovered in the simulation, thus better
reproducing the actual loading history. The biting force was then gradually
increased up to the chosen maximum value of 30 N.
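The two-phase loading history can be sketched as a load-step table (the force and displacement values are those stated in the text; the number of substeps per phase is an arbitrary choice):

```python
def load_schedule(dy_mm=4.0, f_hold=1.0, f_max=30.0, n_disp=4, n_force=5):
    """Phase 1: recover the Y displacement at a constant 1 N biting force.
    Phase 2: ramp the biting force from 1 N to 30 N at zero residual offset."""
    steps = []
    for i in range(1, n_disp + 1):                 # displacement recovery
        steps.append(("recover", dy_mm * (1 - i / n_disp), f_hold))
    for i in range(1, n_force + 1):                # force ramp
        steps.append(("bite", 0.0, f_hold + (f_max - f_hold) * i / n_force))
    return steps

for phase, dy, force in load_schedule():
    print(f"{phase:7s}  residual dy = {dy:3.1f} mm  F = {force:5.1f} N")
```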
4 Results

The described model was developed for comparative purposes. The maximum
equivalent stress in the TMJ disks was computed using the Von Mises criterion.
The stress distributions corresponding to the biting force of 30 N are reported in
Figure 3 and summarized in Table 2. The section plane of each figure was chosen
to show the maximum value obtained in the corresponding misalignment configu-
ration.

Table 2. Results summary corresponding to a biting force of 30 N.

Simulation N.    Left disk (MPa)    Right disk (MPa)
1                0.37               0.37
2                0.82               0.83
3                0.47               0.38
4                0.62               0.91

The behavior of the maximum stress, with respect to the applied biting force, is
reported in Figure 4. In order to better compare results for the different misalign-
ment configurations, Figure 4 only shows the results from 3 N to 30 N, i.e. when
the Y displacement was recovered and the biting force was gradually increased.

Fig. 3. Von Mises stress levels (MPa) in the left and right disks for simulations 1-4: section plane through maximum stress values.
Figure 4 evidences that symmetric configurations cause even stress
distributions in the left and right disks (simulations 1 and 2). In particular, the
plots relative to the left and right disks of simulation 1 perfectly overlap. On the
other hand, when an asymmetric misalignment is introduced, the stress
distribution is not symmetric in the two disks (simulations 3 and 4). Comparison
with the reference values of simulation 1 shows that all the misalignment
configurations determine
higher stress levels in the disks. In particular, simulation 4, characterized by both
Y rotation and Y displacement between mandibular and maxillary teeth, results in
a difference greater than 30% between left and right disk stress values.
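The left/right asymmetry can be quantified directly from the values of Table 2, taking the relative difference with respect to the more stressed disk:

```python
# Maximum Von Mises stress (MPa) in the left and right disks at 30 N (Table 2).
results = {1: (0.37, 0.37), 2: (0.82, 0.83), 3: (0.47, 0.38), 4: (0.62, 0.91)}

for sim, (left, right) in results.items():
    diff_pct = 100.0 * abs(left - right) / max(left, right)
    print(f"simulation {sim}: {diff_pct:.1f}% left/right difference")
```

Only simulation 4, combining the Y displacement and the Y rotation, exceeds the 30% threshold mentioned in the text.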

Fig. 4. Results comparison: maximum Von Mises stress (MPa) in the left (L) and right (R) disks against biting force (N) for simulations 1-4.

5 Conclusions

The present work was aimed at developing a finite element model for TMJ disks
stress level analysis due to the use of Eruption Guidance Appliances. The maxi-
mum Von Mises stress in the disks was taken into account as a comparison pa-
rameter to study the effect of different misalignment configurations. The analysis
showed that when a symmetric EGA is applied to an asymmetric mouth, uneven
stress distributions in the disks occur, thus proving that a symmetric EGA leads to
asymmetric loading of the TMJ disks. This issue could produce some damage to
the most stressed disk. Moreover, the patient comfort could decrease, thus
reducing the amount of time spent by the patient wearing the appliance and
consequently lowering the treatment effectiveness.
This preliminary study showed that standard EGA designs, which are not
optimized for the specific patient anatomy, present critical issues when applied to
generic asymmetric mouths. Further developments could be aimed at designing
patient-specific EGAs to be produced with 3D printing or other non-conventional
techniques. The optimization process of customized appliances could be driven by
the developed FE model in order to evaluate the influence of the different geomet-
rical and anatomical parameters.

References

1. Wang X., Xu P., Potgieter J. and Diegel O. Review of the Biomechanics of TMJ. In 19th In-
ternational Conference on Mechatronics and Machine Vision in Practice, M2VIP, Auckland,
November 2012, pp.381-386.
2. Li G., Sakamoto M. and Chao E.Y.S. A comparison of different methods in predicting static
pressure distribution in articulating joints. Journal of Biomechanics, 1997, 30, 635-638.
3. Ingrassia T., Nalbone L., Nigrelli V., Tumino D. and Ricotta V. Finite element analysis of two
total knee joint prostheses. International Journal on Interactive Design and Manufacturing,
2013, 7, 91-101.
4. Citarella R., Armentani E., Caputo F. and Naddeo A. FEM and BEM Analysis of a Human
Mandible with Added Temporomandibular Joints. The Open Mechanical Engineering Jour-
nal, 2012, 6, 100-114.
5. Tilli J., Paoli A., Razionale A.V. and Barone S. A novel methodology for the creation of cus-
tomized eruption guidance appliances. In Proceedings of the ASME 2015 International De-
sign Engineering Technical Conferences & Computers and Information in Engineering Con-
ference, IDETC/CIE, Boston, August 2015, pp.1-8, doi:10.1115/DETC2015-47232.
6. Odin G., Savoldelli C., Boucharda P. and Tillier Y. Determination of Young’s modulus of
mandibular bone using inverse analysis. Medical Engineering & Physics, 2010, 32, 630-637.
7. Barone S., Paoli A. and Razionale A.V. Computer-aided modelling of three-dimensional max-
illofacial tissues through multi-modal imaging. In Proceedings of the Institution of Mechani-
cal Engineers, Part H: Journal of Engineering in Medicine, 2013, 227(2), 89-104
8. Barzi E., Gallo G. and Neri P. FEM Analysis of Nb-Sn Rutherford-Type Cables. IEEE
Transactions on Applied Superconductivity, 2012, 22, 1-5.
TPMS for interactive modelling of trabecular
scaffolds for Bone Tissue Engineering

Fantini M1, Curto M1 and De Crescenzio F1*

1 University of Bologna, Department of Industrial Engineering, Bologna, Italy
* Corresponding author. Tel.: +39 0543374447. E-mail address: francesca.decrescenzio@unibo.it

Abstract The aim of regenerative medicine is replacing missing or damaged
bone tissues with synthetic grafts based on porous interconnected scaffolds, which
allow adhesion, growth, and proliferation of the human cells. The optimal design
of such scaffolds, in the Bone Tissue Engineering field, should meet several geo-
metrical requirements. First, they have to be customized to replicate the skeletal
anatomy of the patient, and then they have to provide the proper trabecular struc-
ture to be successfully populated by the cells. Therefore, for modelling such scaf-
folds, specific design methods are needed to conceive extremely complex struc-
tures by controlling both macro and micro shapes. For this purpose, in the last
years, the Computer Aided Design of Triply Periodic Minimal Surfaces has
received considerable attention, owing to their presence in natural shapes and
structures. In this work, we propose a method that exploits Triply Periodic
Minimal Surfaces as unit cells for the development of customized trabecular
scaffolds. The aim is to identify the mathematical parameters of these surfaces
that yield the target requirements of the bone grafts. To this end, the method is
implemented through a Generative Design tool that allows interactive control of
both the porosity and the pore size of the scaffolds.

Keywords: Bone Tissue Engineering, Scaffold Design, Triply Periodic Minimal Surfaces, Generative Design.

1 Introduction

Missing or damaged bone tissues of the human body are usually replaced by bone
grafts, which are obtained in an auto-graft approach, or by synthetic grafts, which
are manufactured with biocompatible materials. The second option is obviously
the less invasive one and has been widely studied in order to provide bone substi-
tutes that are engineered to be successfully integrated with the existing tissues of
the patient. Actually, Bone Tissue Engineering (BTE) is the discipline for the
design and manufacturing of interconnected porous scaffolds, which allow the
regeneration of bone tissues, since the cells can gradually and progressively
populate the ducts of the lattice structure [1].

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_43
Basic geometrical requirements for the design of customized bone scaffolds are
the porosity and the pores size, together with the shape of the individual anatomy
and the specific defect site of the patient.
Computer Aided Design (CAD) and Solid Freeform Fabrication (SFF) technol-
ogies are providing valuable tools to conceive, generate, evaluate and manufacture
such scaffolds in a Computer Aided Tissue Engineering (CATE) approach [2, 3, 4,
Therefore, design methods are being explored to efficiently generate complex
surfaces for interconnected porous structures to be produced via Additive
Manufacturing (AM) [6, 7, 8, 9]. For what
concerns the design methods, a well-known approach is to create hierarchical
structures based on unit cells that are replicated in the 3D space in order to obtain
a lattice structure that, intersected with the boundary surface of given individual
anatomies, allows the generation of customized porous scaffolds.
Initially, the idea was to create unit cells libraries, either using image-based de-
sign approaches, or using CAD approaches based on Boundary Representation (B-
Rep) or Constructive Solid Geometry (CSG) [10]. Recently, a significant interest
has increased around hyperbolic functions and, specifically, in Triply Periodic
Minimal Surfaces (TPMS) [11]. Thanks to embedded properties of this class of
surfaces, researchers are focusing on their applicability in the biomedical field,
such as in other domains that can exploit the possibility of designing porous inter-
connected structures based on TPMS surfaces.
Therefore, new methods are needed to make the design of such structures really
interactive in order to give the designer the possibility to explore solutions that
meet specific geometrical requirements that, especially in the biomedical domain,
are completely different from case to case and from patient to patient. Instead of
having a classical CAD approach to model a specific scaffold, we formalized a
workflow that contains the rules to generate the scaffold that meet the porosity and
pores size requirements, together with the boundary surface of the specific defect
site of the patient.

2 TPMS

Minimal surfaces are defined as surfaces with zero mean curvature that minimize
the surface area for given boundary conditions (a closed curve lying on the sur-
face). With planar curves, these surfaces are planar. With three-dimensional
curves, these surfaces do not present discontinuities, thus resulting in extremely
smooth surfaces.
Using a combination of trigonometric functions, it is possible to generate a wide
range of periodic shapes. Many periodic minimal surfaces have been studied and
interesting properties have been proven, especially their three-dimensional
periodicity. Lord and Mackay report a survey about periodic minimal surfaces of
cubic symmetry [12] and define this class as the most complex and interesting class
of minimal surfaces. Indeed, a unit cell with cubic symmetry can be used as the
building block of an interconnected porous lattice that can be easily obtained as a
three-dimensional array of unit cells.
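The "three-dimensional array of unit cells" can be sketched by tiling a voxelized cell along the three axes (a NumPy sketch; a solid sphere stands in for the actual TPMS cell, since any cell with cubic symmetry tiles the space in the same way):

```python
import numpy as np

# Voxelized placeholder unit cell: a solid sphere in a cube (not a TPMS).
n = 16
g = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(g, g, g, indexing="ij")
unit_cell = x ** 2 + y ** 2 + z ** 2 <= 1.0

# The lattice is simply a three-dimensional array of the unit cell.
lattice = np.tile(unit_cell, (3, 3, 3))
print(lattice.shape)  # (48, 48, 48)
```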
There are different approaches for modelling minimal surfaces. One of these is
the implicit method, which defines the surface as the boundary of the solid given
by the points for which a given function f(x, y, z, t) = 0 is satisfied. For a sphere,
the function x² + y² + z² - 1 = 0 divides the space into two subspaces: the points
inside the sphere (x² + y² + z² < 1), the points on the surface (x² + y² + z² = 1)
and the points outside the sphere (x² + y² + z² > 1). In a cube of unit side
containing a sphere of unit diameter (the unit cell), the solid and the void points
can thus be easily identified.
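The solid/surface/void classification described above is straightforward to express in code (a minimal sketch using the unit-radius sphere of the example):

```python
def classify(f, x, y, z, tol=1e-9):
    """Classify a point against the implicit surface f(x, y, z) = 0."""
    v = f(x, y, z)
    if abs(v) <= tol:
        return "surface"
    return "solid" if v < 0.0 else "void"

# Implicit function of the unit-radius sphere used in the example above.
sphere = lambda x, y, z: x ** 2 + y ** 2 + z ** 2 - 1.0

print(classify(sphere, 0.0, 0.0, 0.0))  # solid
print(classify(sphere, 1.0, 0.0, 0.0))  # surface
print(classify(sphere, 2.0, 0.0, 0.0))  # void
```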
Among this class of surfaces, TPMS are those that are triply periodic. Since the
aim of this work is the design of porous interconnected scaffolds with a
trabecular structure, two different TPMS unit cells with cubic symmetry, which
allow creating TPMS-based lattices, have been selected and are shown in Fig. 1:
the Diamond surface (D) and the Gyroid surface (G).

D: sin X sin Y sin Z + sin X cos Y cos Z + cos X sin Y cos Z + cos X cos Y sin Z = t (Eq. 1)

G: cos X sin Y + cos Y sin Z + cos Z sin X = t (Eq. 2)

Fig. 1. Unit cells and interconnected porous lattice for the Diamond (left) and the Gyroid (right).

The Diamond surface was mathematically defined by Schwarz, in the 19th cen-
tury, as a representative of TPMS with cubic symmetry [13]. Its labyrinth graphs
are four-connected diamond networks, since every cell is connected to its four
neighbors in the geometry of a tetrahedron.
The Gyroid surface was discovered by Schoen in 1970 during a study on the
aerospace applications of minimal surfaces [14]. With triple junctions, this surface
divides the space in two distinct regions, both with their own helical character. It
contains no straight lines and the topological symmetry of these sub-volumes is
inversion.
3 CAD generation of TPMS unit cells

The TPMS unit cells were generated by means of K3dSurf. This free software tool
allows visualization and manipulation of mathematical surfaces in three, four, five
and six dimensions, also supporting parametric equations and isosurfaces. Moreo-
ver, it is possible to transform each mathematical surface into a 3D model with a
defined bounding box. Modelling parameters are the grid resolution, the x, y, z
domain and the offset value (t).
The grid resolution affects the smoothness of the 3D model but, as this value
increases, the size of the mesh also increases, requiring extra computing memory
and time. The maximum value allowed is 100x100x100 and, as a trade-off, the
grid resolution is set to 33x33x33 (the same on each axis).
To build a lattice as a three-dimensional array of a reference symmetric struc-
ture, the TPMS unit cell needs a bounded symmetric domain. Selecting different
x, y, z domains for the same mathematical surface, the resulting 3D models are
characterized by the same size of the bounding box (a 650 mm sided cube), but
differ from each other in the number of pores, in the dimension of the pores and in
the size of the mesh. Among different boundary conditions, the [-4π ÷ 4π] domain
is chosen on each axis as a good compromise between the pores size and the
sharpness of the 3D model due to a coarse tessellation of the surface.
The isosurface function corresponding to each TPMS can be edited by setting
different offset values (t) in the implicit equation of the TPMS surface (Eq. 1 and
Eq. 2). This characterizes the mathematical surface, resulting in a unit cell with
different values of porosity. Therefore, a proper t value can be set for each
desired porosity, according to the kind of bone that must be replaced.
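The link between the offset value t and the porosity can be sketched by sampling the Gyroid function of Eq. 2 on a voxel grid and counting the points on one side of the offset (a simplified solid/void split, not the K3dSurf meshing workflow):

```python
import numpy as np

# Sample the Gyroid of Eq. 2 on the [-4π, 4π] domain used in the text.
n = 80
g = np.linspace(-4 * np.pi, 4 * np.pi, n)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
G = np.cos(X) * np.sin(Y) + np.cos(Y) * np.sin(Z) + np.cos(Z) * np.sin(X)

# The void fraction of the sampled points is a simple porosity estimate for a
# given t; at t = 0 the Gyroid splits the space into two equal sub-volumes.
for t in (0.0, 0.4, 0.8):
    print(f"t = {t:.1f}  porosity = {np.mean(G > t):.2f}")
```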
Finally, the mathematical surfaces modelled in K3dSurf can be exported in .obj
format as TPMS unit cells with cubic symmetry (Fig. 2) and then can be imported
as mesh models in a CAD environment. Such mesh models can be used as the
building block of the three-dimensional array to obtain an interconnected porous
lattice. Therefore, TPMS unit cells represent the input component for the Genera-
tive Design (GD) process described in the next section.

Fig. 2. Unit cells generated via K3dSurf, setting grid resolution to 33x33x33, x, y, z domain to
[-4π ÷ 4π] and null offset value (t), for the Diamond (left) and the Gyroid (right).
4 Generative Design process

The concept of GD is based on the idea of producing digital shapes that follow rules written in a source code. First, the designer's idea has to be formalized in the code, and then the computer interprets the code, generating the shape. The designer can modify the code and the parameters after evaluating the output. This approach is spreading widely since Computer Aided Industrial Design (CAID) provides scripting capabilities and intuitive tools to create scripts through graphical interfaces. One of these is Grasshopper, the Rhinoceros 5 plug-in, conceived to create scripts in a tabs-and-canvas interface where the flow to generate shapes, calculate parameters and evaluate properties can be implemented. Therefore, for designers and architects, CAID evolved into GD, allowing the generation of an infinite number of shapes that follow specific rules.
In the design of scaffolds for BTE, such rules have to be identified by studying the problem of substituting bone defects with synthetic grafts that mimic the patient's bone. Tissue information is commonly obtained by means of non-invasive imaging methods, such as Computed Tomography (CT) or Magnetic Resonance Imaging (MRI).
The first piece of information, essential for the surgical planning, evaluates the global bone properties at the macrostructural level by assessing the bone porosity (P%). The second one concerns the microstructural level, and in particular the pores size.
In generating a lattice useful to realize a patient-specific scaffold, the GD process needs two geometrical input components: the TPMS unit cell, which features the trabecular pattern, and the patient bone geometry that has to be replaced. In this case, both are 3D meshes: the first is an .obj file coming from K3DSurf; the second one, generally, is an .stl file coming from the DICOM data set of the defect site of the patient.
As a sample case study for this work, we considered the replacement graft for a patient affected by a severe atrophy of the right mandibular ramus (Fig. 3).

Fig. 3. The boundary surface of the scaffold designed for a patient affected by a severe atrophy of the right mandibular ramus.

430 M. Fantini et al.

To design the scaffold, requirements in terms of desired percentage porosity (P%) and pores size must be satisfied. First of all, different TPMS unit cells are imported, as .obj files, into the Rhinoceros environment in order to evaluate the percentage porosity (P%) related to the offset value (t) set in the implicit equation of the surface via K3DSurf. Thereafter, the bounding box of each TPMS unit cell is evaluated. The relative percentage porosity (P%) can then be determined by the relationship between the volume of the scaffold and the volume of the bounding box.

P% = (V_BoundingBox - V_TPMSScaffold) / V_BoundingBox * 100 (Eq. 3)

A GD flow has been formalized in order to automatically compute the percentage porosity (P%) of any unit cell generated through the TPMS equations by varying the offset value (t), so as to obtain the required porosity of the bone. Moreover, the nominal pores size of each unit cell has been computed according to the offset value (t). Results are reported and discussed in the next section.
In order to mimic the patient bone, both at the macrostructural and the microstructural level, the pores size has to be constrained in the GD flow, as depicted in Fig. 4.

Fig. 4. Customized porous scaffold generation flow.

Therefore, the input data are the target pores size, the TPMS unit mesh (with the required porosity and the nominal pores size) and the mesh representing the patient bone geometry, i.e. the needed bone graft shape. The TPMS unit cell is generated via K3DSurf (a 650 mm sided cube) with a measurable nominal pores size. The target pores size is required to compute the scale factor to be applied to the TPMS unit cell, based on the ratio between the desired and the nominal pores size of the TPMS unit cell. Thus, a scaled TPMS unit cell with the appropriate
pores size is generated. Then, in order to cover the patient bone geometry, the
scaled TPMS unit cell is replicated in a three-dimensional array so that the total volume is larger than the volume of the bone graft to be produced. Thus, the number of array elements in the x, y, z directions is computed based on the bounding box of the needed bone graft. Finally, a Boolean intersection between the TPMS lattice mesh and the patient bone geometry allows obtaining the watertight mesh of the customized scaffold. Concerning the computational burden on a standard laptop, the scaffold of the example (bounding box: 48.05x25.91x24.66 mm) can be obtained based on a Gyroid TPMS with different pore sizes (mm) and corresponding computational times (s): 4.0 - 29.1; 2.7 - 76.6; 1.3 - 1592.7; 1.0 - 6559.9.
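The scaling and replication steps above can be sketched as follows (a minimal sketch; `scaffold_array_plan` is an illustrative helper, not the actual Grasshopper definition):

```python
import math

def scaffold_array_plan(target_pore_mm, nominal_pore_mm, cell_size_mm,
                        graft_bbox_mm):
    """Scale the TPMS unit cell so that its nominal pore matches the
    target pore size, then count how many scaled cells are needed along
    each axis to exceed the graft bounding box."""
    scale = target_pore_mm / nominal_pore_mm      # desired / nominal ratio
    scaled_cell = cell_size_mm * scale            # edge of the scaled cube
    # ceil() guarantees the replicated lattice covers the whole graft
    counts = tuple(math.ceil(d / scaled_cell) for d in graft_bbox_mm)
    return scale, scaled_cell, counts

# Gyroid example from the text: 650 mm cell with a 141.69 mm nominal
# pore, a 0.6 mm target pore and the mandibular graft bounding box.
scale, cell_mm, counts = scaffold_array_plan(0.6, 141.69, 650.0,
                                             (48.05, 25.91, 24.66))
```

Rounding the counts up makes the replicated lattice slightly larger than the graft, which is what the subsequent Boolean intersection requires.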

5 Results and discussion

As reported in the literature, bone scaffolds play a fundamental role in the regeneration of new bone tissues. In addition, scaffolds act as carriers for morphological protein distribution, encouraging the osteoconductive activity [15]. Finally, osteogenesis comes after scaffold cell seeding, causing new bone formation. During osteogenesis, scaffolds should mimic bone morphology, structures and functions with the aim of optimizing the integration with the surrounding tissue. Therefore, the requirements of interest, such as percentage porosity (P%) and pores size, have been extensively studied.
From the literature, the trabecular bone has a porosity variable in the range [50% ÷ 90%], while the compact bone has a lower porosity [<10%] [16], due to the Haversian canals. For what concerns the pores size, the minimum value required to regenerate mineralized bone is generally considered to be approximately 100 μm [17]. Other investigations have indicated that the appropriate pores size for attachment, differentiation, ingrowth of osteoblasts and vascularization is approximately 200-500 μm [18] or 300-400 μm [19] in porous bone substitute applications. More recently, it has been observed that for metallic (Titanium) scaffolds realized by means of Selective Laser Sintering (SLS), the size of the pores is one of the critical factors and should be between 100 and 600 μm [20]. Other studies [21], related to bone ingrowth, have shown that bone growth depends not only on the architecture and pore appearance, but also on a pores size variable in the range [300 ÷ 800 μm]. It is also reported that for in vivo BTE, minimum pore sizes of 300 μm are needed for capillary formation, whereas up to a pores size of 800 μm there is no statistical difference in scaffold bone ingrowth and bone formation. Generally, for bone scaffolds, the pores size should be maintained in a given range [150 ÷ 600 μm] [22].
According to the data reported above, we have set the following parameters:
• Porosity ≈ 80%
• Pores size ranging between [150 ÷ 600 μm]

The pores size is evaluated as the diameter of the sphere that can be inscribed inside each cavity (void) of the TPMS unit cell, as shown in Fig. 5.

Fig. 5. Evaluation of the pores size as the diameter of the sphere inscribed inside each cavity (void).
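The inscribed-sphere evaluation can be approximated on a voxel grid. The sketch below is coarse and purely illustrative (helper names are assumptions): it takes the void voxel where the implicit function is deepest below the offset t and measures its Euclidean distance to the nearest solid voxel centre, so the result is only a rough estimate of the nominal pore diameter:

```python
import math

def gyroid(x, y, z):
    return (math.cos(x) * math.sin(y) + math.cos(y) * math.sin(z)
            + math.cos(z) * math.sin(x))

def pore_diameter_estimate(f, t, cell_mm=650.0, n=32):
    """Locate the void voxel where f is deepest below the offset t and
    return twice its distance to the nearest solid voxel, converted
    from domain units (radians) to millimetres."""
    lo, hi = -4 * math.pi, 4 * math.pi
    step = (hi - lo) / n
    mm_per_rad = cell_mm / (hi - lo)
    solid, deepest, fmin = [], None, float("inf")
    for i in range(n):
        x = lo + (i + 0.5) * step
        for j in range(n):
            y = lo + (j + 0.5) * step
            for k in range(n):
                z = lo + (k + 0.5) * step
                v = f(x, y, z)
                if v >= t:                 # solid phase
                    solid.append((x, y, z))
                elif v < fmin:             # deepest point of the void
                    fmin, deepest = v, (x, y, z)
    radius = min(math.dist(deepest, q) for q in solid)
    return 2.0 * radius * mm_per_rad

d_mm = pore_diameter_estimate(gyroid, 0.85)  # Gyroid cell at t = 0.85
```

A proper implementation would fit the largest sphere against the triangulated surface, as in Fig. 5; the voxel shortcut only conveys the idea.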

Keeping the same grid resolution (33x33x33) and the same x, y, z domain [-4π ÷ 4π], the percentage porosity (P%) and the pores size of the TPMS unit cells were computed by varying the offset value (t). These values are reported in Fig. 6, showing the linear correlation between such parameters and the offset value (t).

Fig. 6. Diagrams with percentage porosity (P%) and pores size, in relation to the offset value (t),
for different TPMS unit cells

It can be observed that, for what concerns the Diamond and the Gyroid TPMS, in order to generate lattices with the target porosity of 80%, each trigonometric equation can be rewritten as follows:

D: sin X sin Y sin Z + sin X cos Y cos Z + cos X sin Y cos Z + cos X cos Y sin Z = 0.64 (Eq. 5)

G: cos X sin Y + cos Y sin Z + cos Z sin X = 0.85 (Eq. 6)

In Fig. 7 the Diamond and Gyroid TPMS unit cells are shown with the sphere representing the nominal pores size. For the Diamond TPMS unit cell (t80% = 0.64), the nominal pores size, i.e. the diameter of the corresponding sphere, is 118.02 mm; for the Gyroid TPMS unit cell (t80% = 0.85), the nominal pores size, i.e. the diameter of the corresponding sphere, is 141.69 mm. Now, knowing the right offset value t80% and the corresponding nominal value of the pores size, it is possible to obtain, according to the desired pores size of the scaffold (ranging from [150 μm ÷ 600 μm]), the TPMS scale factor used in the GD flow.
In this way, the trabecular structure is modelled as isotropic. However, a gradient in pore size and porosity could be introduced by adding to the TPMS equation, for example, a linear term in z instead of a constant offset value (t).
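As a sketch of this idea (with illustrative coefficients; `graded_gyroid` is not taken from the paper), replacing the constant t with t(z) = t0 + a·z makes the lower part of the cell denser than the upper part:

```python
import math

def graded_gyroid(x, y, z, t0=0.85, a=0.02):
    """Gyroid field minus a z-linear offset t(z) = t0 + a*z; negative
    values are pore space, non-negative values are solid."""
    return (math.cos(x) * math.sin(y) + math.cos(y) * math.sin(z)
            + math.cos(z) * math.sin(x)) - (t0 + a * z)

def slab_porosity(z_lo, z_hi, n=24):
    """Void fraction (in %) of the slab z_lo..z_hi, sampled on an
    n x n x n grid with x, y spanning [-4*pi, 4*pi]."""
    lo, hi = -4 * math.pi, 4 * math.pi
    void = 0
    for i in range(n):
        x = lo + (i + 0.5) * (hi - lo) / n
        for j in range(n):
            y = lo + (j + 0.5) * (hi - lo) / n
            for k in range(n):
                z = z_lo + (k + 0.5) * (z_hi - z_lo) / n
                if graded_gyroid(x, y, z) < 0:
                    void += 1
    return 100.0 * void / n ** 3

p_bottom = slab_porosity(-4 * math.pi, 0.0)  # smaller t(z): more solid
p_top = slab_porosity(0.0, 4 * math.pi)      # larger t(z): more porous
```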

Fig. 7. TPMS cubic unit cell with the sphere representing the nominal pores size

6 Conclusions

This work describes a GD flow to generate customized scaffolds on a specific patient anatomy, which allows interactively controlling the internal morphology of the lattices in terms of porosity and pores size. This method produces a watertight mesh of the trabecular scaffold that can be manufactured by means of AM technologies.
This method can also be applied to further application fields, other than BTE, and, in general, to the design of customized porous lattices with a given 3D boundary surface.
Attempts to export TPMS scaffolds towards a 3D printer were successfully performed, while structural analysis by means of CAE tools is still to be investigated.
The limits that bound the applicability of this method are those of any porous structure, namely the minimal cross-section of the conduits and the maximum volume of the customized scaffold for a given section. Extreme conditions can affect the computational time and the manufacturability if the scaffold does not meet the requirements or constraints of the manufacturing process.

References

1 Salgado AJ, Coutinho OP, Reis RL. Bone Tissue Engineering: State of the Art and Future
Trends. Macromol Biosci. 2004;4(8):743-765.
2 Sun W, Starly B, Nam J, Darling A. Bio-CAD modeling and its applications in computer-
aided tissue engineering. Computer-Aided Design. 2005;37(11):1097-1114.
3 Bucklen B, Wettergreen M, Yuksel E, Liebschner M. Bone-derived CAD library for as-
sembly of scaffolds in computer-aided tissue engineering. Virtual Phys Prototyp.
2008;3(1):13-23.
4 Chua CK, Leong KF, Cheah CM, Chua SW. Development of a Tissue Engineering Scaffold
Structure Library for Rapid Prototyping. Part 1: Investigation and Classification. Int J Adv
Manuf Technol. 2003:21(4);291-301.
5 Peltola SM, Melchels FP, Grijpma DW, Kellomäki M. A review of rapid prototyping tech-
niques for tissue engineering purposes. Ann Med. 2008;40(4):268-280.
6 Yang S, Leong KF, Du Z, Chua CK. The Design Of Scaffolds For Use In Tissue Engineer-
ing. Part I Traditional Factors. Tissue Eng. 2001;7(6):679-89.
7 Xiao D, Yang Y, Su X, Wang D, Sun J. An integrated approach of topology optimized
design and selective laser melting process for titanium implants material. Bio-Medical Ma-
terials and Engineering. 2013;23(5);433-445.
8 Williams JM, Adewunmi A, Schek RM, Flanagan CL, Krebsbach PH, Feinberg SE, Hollis-
ter SJ, Das S. Bone tissue engineering using polycaprolactone scaffolds fabricated via se-
lective laser sintering. Biomaterials. 2005;26(23):4817-4827.
9 Fantini M, Curto M, De Crescenzio F. A method to design biomimetic scaffolds for bone
tissue engineering based on Voronoi lattices. Virtual Phys Prototyp. 2016;11(2): 77-90.
10 Hollister SJ. Porous scaffold design for tissue engineering. Nat Mater. 2005;4(7):518-524.
11 Yoo DJ. Computer-aided Porous Scaffold Design for Tissue Engineering Using Triply Pe-
riodic Minimal Surfaces. Int J Precis Eng Man. 2011;12(1):61-67.
12 Lord EA, Mackay AL. Periodic minimal surfaces of cubic symmetry. Curr. Sci. 2003;85
(3):346-362.
13 Schwarz HA. Gesammelte Mathematische Abhandlungen vol.1, Springer, Berlin, (1890).
14 Schoen AH. Infinite periodic minimal surfaces without self-intersections. NASA Techn.
note no. D-5541, (1970).
15 Karageorgiou V, Kaplan D. Porosity of 3D biomaterial scaffolds and osteogenesis. Bio-
materials. 2005;26:5474-5491
16 Hollister SJ, Kikuchi N. Homogenization Theory and Digital Imaging: A Basis for Study-
ing the Mechanics and Design Principles of Bone Tissue. Biotechnology and Bioengineer-
ing. 1994;43(7):586-596.
17 Hulbert SF, Young FA, Mathews RS, Klawitter JJ, Talbert CD, Stelling FH. Potential ce-
ramic materials as permanently implantable skeletal prostheses. J. Biomed. Mater. Res.
1970;4(3):433-456.
18 Clemow AJ, Weinstein AM, Klawitter JJ, Koeneman J, Anderson J. Interface mechanics of
porous titanium implants. J Biomed Mater Res. 1981;15(1):73-82.
19 Tsuruga E, Takita H, Itoh H, Wakisaka Y, Kuboki Y. Pore size of porous hydroxyapatite as
the cell-substratum controls BMP-induced osteogenesis. J Biochem. 1997;121(2):317-24.

20 Challis VJ, Roberts AP, Grotowski JF, Zhang LC, Sercombe TB. Prototypes for bone im-
plant scaffolds designed via topology optimization and manufactured by solid freeform fab-
rication. 2010;12(11):1106-1110.
21 Melchels FP, Bertoldi K, Gabbrielli R, Velders AH, Feijen J, Grijpma DW. Mathematically
defined tissue engineering scaffold architectures prepared by stereolithography. Biomateri-
als. 2010;31(27):6909-6916.
22 Cornell CN. Osteoconductive materials and their role as substitutes for autogenous bone
grafts. Orthopedic Clinics of North America. 1999;30(4): 591-598.
Mechanical and Geometrical Properties
Assessment of Thermoplastic Materials for
Biomedical Application

Sandro BARONE1, Alessandro PAOLI1, Paolo NERI1, Armando Viviano RAZIONALE1* and Michele GIANNESE1

1 DICI – Department of Civil and Industrial Engineering, University of Pisa
* Corresponding author. Tel.: +39-050-221-8012; fax: +39-050-221-8065. E-mail address: a.razionale@ing.unipi.it

Abstract Clear thermoplastic aligners are nowadays widely used in orthodontics for the correction of malocclusion or teeth misalignment defects. The treatment is virtually designed with a planning software that allows for the definition of a sequence of small movement steps from the initial tooth position to the final desired one. Every single step is transformed into a physical device, the aligner, by the use of a 3D printed model on which a thin foil of plastic material is thermoformed. Manufactured aligners could have inherent limitations such as dimensional instability, low strength, and poor wear resistance. These issues could be associated with material characteristics and/or with the manufacturing processes. The present work aims at the characterization of the manufactured orthodontic devices. Firstly, the mechanical properties of different materials have been assessed through a set of tensile tests under different experimental conditions. The tests have the purpose of analyzing the effect that the forming process and the normal use of the aligner may have on the mechanical properties of the material. The manufacturing process could also introduce unexpected limitations in the resulting aligners. This would be a critical element to control in order to establish the resulting forces on teeth. Several studies show that the resulting forces could be greatly influenced by the aligner thickness. A method to easily measure the actual thickness of the manufactured aligner is proposed. The analysis of a number of real cases shows that the thickness is far from uniform and can vary strongly along the surface of the tooth.

Keywords: 3D Human Modeling; Virtual Design; Clear Aligner; Thermoforming Process; Mechanical Properties Assessment; Optical 3D Scanner.

© Springer International Publishing AG 2017 437


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_44
438 S. Barone et al.

1 Introduction

Orthodontic treatment with clear aligners is becoming more and more popular as an orthodontic approach, also thanks to the diffusion of CAD/CAM technologies in biomedical applications. Compared with traditional fixed appliances with metallic braces, clear aligners are aesthetically appealing and more comfortable to wear. These treatments are preferred by adult patients, who do not want to treat their malocclusions in the traditional way. However, it is sometimes difficult to resolve complicated cases, such as severe crowding, deep bites or round teeth rotations, with aligners. Often there is the need for case refinement, or even to turn back to braces, due to the low efficacy and predictability of aligners [1-2].
The treatment process is based on the use of a digital impression, which could be obtained with a 3D scan of a physical impression of the patient's mouth or by a direct scan of the desired geometry with an intra-oral scanner. The treatment is virtually designed with a planning software that allows for the definition of a sequence of small movements in order to move the tooth from the initial position to the final desired one. Every single step is transformed into a physical device, the aligner, by the use of a 3D printed model on which a thin foil of plastic material is thermoformed. However, manufactured aligners could have inherent limitations such as dimensional instability, low strength, and poor wear resistance. These issues could be associated with material characteristics and/or with the manufacturing processes.
The mechanical performance of the aligner's thermoplastic material plays a critical role in developing continuous orthodontic forces able to yield the desired results. Orthodontic thermoplastic materials should have particular characteristics, including transparency, low hardness, good elasticity and resilience, and resistance to aging.
The present work aims at the characterization of the manufactured orthodontic devices. Firstly, the mechanical properties of three different materials commonly used in orthodontic practice have been assessed through a set of tensile tests carried out under different conditions. The tests have the purpose of analyzing the effect that the forming process, and subsequently the normal use of the aligner, may have on the properties of the material. For this reason, the samples were subjected to an aging treatment in a particular solution that reproduces the biochemical behavior of human saliva.
On the other hand, the manufacturing process itself could introduce unexpected limitations in the resulting aligner [3-4]. This would be a critical element to control within the aligner in order to explore the resulting forces on teeth. Moreover, force application in the tooth-aligner system is not a fully understood mechanism. Several studies show that it could be greatly influenced by the aligner thickness [5-7]. A method to easily measure the actual thickness of the manufactured aligner is proposed. The method is based on the use of an optical scanner to acquire the physical model used for thermoforming the device with and without the aligner.
Mechanical and Geometrical Properties … 439

The difference between the two acquisitions depends only on the thickness of the aligner. The analysis of a number of real cases shows that the thickness is far from uniform and can vary strongly along the surface of the tooth.
The results obtained in the present work have allowed characterizing the aligner in terms of mechanical and geometrical properties with a high level of confidence. This allows for a more accurate definition of the problem and a better design of the treatment planning. The results could also be used to realize a reliable finite element method (FEM) model for treatment optimization purposes [8-9].

2 Materials and methods

2.1 Mechanical properties

Aligners in the oral cavity are subjected to an aggressive environment that could lead to a high degradation of their mechanical properties, with a negative influence on treatment efficacy. For this reason, tensile tests were extended to different laboratory conditions to simulate the actual operating conditions. In particular, an aging stage was designed to reproduce the influence of saliva corrosion on the aligner. An artificial mixture was used to reproduce the biochemical behavior of human saliva [10-11].

2.1.1 Tensile test

The test campaign was designed on the EN ISO 527-1:2012 [12] technical indications, which establish general conditions and procedures for tensile tests on plastic materials.
For each of the three materials, tests in original conditions (as from supplier), after the thermoforming stage, and after the thermoforming and aging stages were carried out. All materials were supplied in circular plates (diameter 125 mm) with thicknesses of 0.75 and 1 mm. Both the shape and the size of the specimens were chosen on this thickness value as suggested by the standard.
The thermoforming process was performed according to the manufacturer's recommendations. The thermoformed specimens were obtained by using the same technological process adopted to manufacture the aligner. A 3D printed disk of 80 mm diameter and 12 mm height was used as the model form for the thermoforming machine (Ministar S from Scheu Dental). The parameters used were 70°C for the infrared lamp and 4 bar pressure for 30 seconds. The flat surface on the top of the disk
provides a flat area of thermoformed material, which can be used to obtain the desired specimens. The final specimens were then obtained by high-pressure waterjet cutting and manually refined. Each specimen was then measured with a micrometer (CLM1-15QM, Mitutoyo Co.) to establish the cross-section area and preliminarily weighed to evaluate the water quantity absorbed after the aging stage (Table 2). Some samples were subjected to the aging treatment. The aging stage consisted in a bath in the prepared compound (Table 1) for 7 days at an environmental temperature of 37°C. This time corresponds to half the normal use time of each aligner during the actual treatment. It is justified because the water absorption of plastic materials mainly occurs in the first 72-168 hours [13]. At the end of the aging stage, the material was placed in a special environment (silica gel bell) in order to reach the equilibrium condition, and its weight variation was evaluated to establish the absorbed rate of water.
The tensile tests were performed on the Instron Universal Testing Machine 5500R at a temperature of 23°C. A total of 10 specimens for each material were tested under each condition (figure 1). Two different test campaigns were designed to determine the elastic modulus and the tensile yield stress. The testing speed was defined as 0.1 mm/min to obtain data for the elastic modulus and 5 mm/min for the stress–strain curves, as suggested by ISO 527-1. Mean and standard deviation values were determined for each item. Comparisons among materials under each test condition were finally performed.
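The summary step above (a mean and a sample standard deviation per material and condition, as reported in Tables 3 and 4) can be illustrated with made-up modulus values, which are NOT the paper's raw data:

```python
import statistics

# Hypothetical elastic moduli (MPa) of 10 specimens of one material
# under one test condition; illustrative values only.
moduli = [1531, 1490, 1575, 1520, 1555, 1488, 1570, 1510, 1545, 1526]

mean_e = statistics.mean(moduli)    # reported as E [MPa]
sigma_e = statistics.stdev(moduli)  # sample standard deviation, sigma
```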

Table 1. Chemical composition of artificial saliva pH=6.5.

Compound Content [g/l]

NaCl 0.6
KCl 0.72
CaCl2·2H2O 0.22
KH2PO4 0.68
Na2HPO4·12H2O 0.856
KSCN 0.06
NaHCO3 1.5
Citric Acid 0.03

Table 2. Weight variation of material specimens for water absorption.

Material  Before aging (g)  After aging (g)  Variation %

Mat 1     0.1334            0.1339           0.375%
Mat 2     0.1255            0.1261           0.438%
Mat 3     0.0974            0.0981           0.719%
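The tabulated variation follows directly from the two weight columns (assuming variation% = (m_after - m_before) / m_before * 100). A quick check reproduces the Mat 1 and Mat 3 entries, while the Mat 2 weights give approximately 0.48%, slightly above the tabulated 0.438% (possibly a rounding artefact in the source):

```python
# Weight variation for water absorption, from the columns of Table 2:
# variation% = (m_after - m_before) / m_before * 100
weights = {"Mat 1": (0.1334, 0.1339),
           "Mat 2": (0.1255, 0.1261),
           "Mat 3": (0.0974, 0.0981)}

variation = {name: (after - before) / before * 100.0
             for name, (before, after) in weights.items()}
```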

Fig. 1. Sample of the stress-strain curve obtained for a set of specimens of the material “Mat 1”
in original supplier condition.

2.2 Geometrical properties

The manufacturing process could introduce unexpected variations in the geometry of the resulting aligner. The measurement of the actual aligner's shape would be a critical element in order to assess the force system delivered by the appliance. In this work, a method to easily measure the actual thickness of the manufactured aligner is proposed. The method is based on the use of an optical scanner to acquire the physical model used to thermoform the device with and without the aligner worn on it. The difference between the two acquisitions depends only on the thickness of the aligner.

2.2.1 3D optical measurement

In this work, the focus was on the measurement of the thickness of the aligners to assess its value along the teeth surfaces. For this purpose, several aligners were manufactured with the same technological approach used for the usual aligner production. Reference models of an upper and a lower human arch were 3D printed by a Stratasys Eden500V polyjet machine in a high quality printing setup (slice thickness of 16 μm, accuracy of 200 μm). The material used was the VeroDent MED670 as construction material and SUP705 as supporting material. The supporting material was finally removed with a high-pressure water jet in order to obtain the final models. The models were used to thermoform the aligner samples, with the same parameters used for the thermoformed samples of the tensile test (§ 2.1.1). Finally, the aligners were manually cut and refined with specific milling tools.
An optical 3D scanner, specifically developed for dental models [14-15], was
used to measure the real shape of the aligner.

The acquisition methodology is based on an active stereo vision approach, which uses a binary coded lighting (fringe projection) to recover 3D points of target surfaces. The hardware setup is composed of a DLP projector (OPTOMA EX330e, resolution XGA 1024x768 pixels) and an 8-bit monochrome charge-coupled device (CCD) digital camera (The Imaging Source DMK 41BF02, resolution 1280x960 pixels) equipped with a lens having a focal length of 16 mm (PENTAX, C31634KP 2/3 C-mount). Camera and projector are used as active devices of a stereo triangulation process, and the stereo rig has been configured for a working distance of 300 mm and a working volume of 100×80×80 mm, with a resulting lateral resolution of 0.1 mm and an accuracy of 10 μm. A calibration procedure is required to calculate the intrinsic and extrinsic parameters of the optical devices with respect to an absolute reference system. The optical devices have been integrated with a double motorized turntable (figure 2a). The acquisition system allows for a complete automatic measurement of dental model arches.

Fig. 2. Model placed on the acquisition plate of the 3D scanner (a), detail of the 3D printed model with the aligner (b) and the result of the acquisition (c).

The methodology developed for the measurement of the aligner's geometry consists in the acquisition of the model used for the thermoforming process with and without the aligner fitted on it. The model and the aligner surfaces were uniformly covered with an optic spray (Occlu Plus, Hager & Werken) in order to obtain a uniform, non-transparent color (figure 2b). The software Raindrop Geomagic Qualify was used to compare the results. The two acquisitions were aligned in a common reference system by a best-fit matching algorithm. Differences were evaluated in terms of 3D distances between corresponding points of the surfaces and 2D distances between some main slices of the two digital models.
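The comparison step can be sketched with a brute-force nearest-neighbour distance between the two point clouds. This is a toy stand-in for Geomagic Qualify's 3D deviation (function names and the flat-patch example are illustrative):

```python
import math

def nearest_distance(p, cloud):
    # Brute-force nearest-neighbour distance from point p to a cloud
    return min(math.dist(p, q) for q in cloud)

def deviation_map(bare_scan, aligner_scan):
    """Per-point 3D deviation between the bare model scan and the scan
    with the aligner worn on it; the deviation approximates the local
    aligner thickness."""
    return [nearest_distance(p, aligner_scan) for p in bare_scan]

# Toy example: a flat 5 x 5 patch 'thickened' by a uniform 0.75 mm foil
bare = [(float(x), float(y), 0.0) for x in range(5) for y in range(5)]
worn = [(x, y, z + 0.75) for (x, y, z) in bare]
thickness = deviation_map(bare, worn)
```

A production comparison would use surface normals and a k-d tree rather than brute force, but the per-point deviation idea is the same.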

4 Results and discussion

The mechanical properties of the thermoplastic materials used for the production of clear aligners may vary greatly from the nominal values of the elastic modulus and the tensile yield stress. The variation can be influenced by several factors, such as manufacturing technologies and use conditions. This study focused on the assessment of the materials' mechanical properties by reproducing some of the actual use conditions, in order to quantify the rate of change. Moreover, an assessment of the geometrical shape of the manufactured aligners has been carried out.

4.1 Tensile tests

Tables 3 and 4 and the graphical plots in figure 3 show the results for the elastic moduli and tensile yield stresses of the materials analyzed under the different conditions.

Table 3. Results from tensile test for elastic modulus E.

Material  Condition               E [MPa]  σ

Mat 1     Supplier specification  2200     -
Mat 1     Before thermoforming    1531     41
Mat 1     Thermoformed            1693     51
Mat 1     Aged                    1368     35
Mat 2     Supplier specification  2050     -
Mat 2     Before thermoforming    1556     48
Mat 2     Thermoformed            1447     42
Mat 2     Aged                    1519     62
Mat 3     Supplier specification  -        -
Mat 3     Before thermoforming    1478     88
Mat 3     Thermoformed            1730     77
Mat 3     Aged                    1466     72

Results show that the thermoforming process generally does not lead to a great decrease in the elastic properties of the thermoplastic materials, except for one of the tested materials (Mat 3), which shows a 30% decrease in tensile yield stress after thermoforming. For the other materials, a maximum properties variation of 15% with respect to the supplier specifications can be observed. A general reduction of the elastic modulus has been measured with respect to the values declared in the technical sheet (except for Mat 3, for which no technical data were found), while very similar values were found for the tensile yield stress. In some cases, a little increase in values has been obtained after the thermoforming process.

Table 4. Results from tensile test for tensile yield stress.

Table 4. Results from tensile test for tensile yield stress.

Material  Condition               Tensile yield stress [MPa]  σ

Mat 1     Supplier specification  53                          -
Mat 1     Before thermoforming    49.29                       0.45
Mat 1     Thermoformed            53.52                       4.84
Mat 1     Aged                    49.49                       1.76
Mat 2     Supplier specification  50                          -
Mat 2     Before thermoforming    52.10                       1.49
Mat 2     Thermoformed            48.75                       2.57
Mat 2     Aged                    50.62                       2.88
Mat 3     Supplier specification  -                           -
Mat 3     Before thermoforming    62.37                       0.90
Mat 3     Thermoformed            41.92                       2.94
Mat 3     Aged                    44.61                       1.82

Fig. 3. Distribution of elastic modulus and tensile yield stress in the tested conditions.

The intraoral environment could also lead to changes in material mechanical properties. The main influence comes from the rate of absorbed water and the aggressive characteristics of human saliva. These effects have been taken into account by the tensile tests on aged material. The results, in this case, show that the elastic modulus generally decreases with respect to both the supplier specifications and the simply thermoformed material. It is also worth noting that water absorption could affect the size of the aligner and influence its fit on the patient's arch, with a potential loss of effectiveness due to unpredictable orthodontic forces on the teeth.

4.2 Geometrical measurement

The correct use of thermoplastic aligners in orthodontic practice requires the knowledge of the real geometry of the actual aligner, particularly its thickness along the surfaces. The change of shape is surely influenced by the hygroscopic property of the material, which needs to be assessed. Indeed, the standard production technology used for the fabrication of the aligners has the main influence on the resulting shape of the object. In this work, a method to measure the resulting size of the aligners was proposed in order to relate the real object shape with the technological approach adopted to produce the aligner.
The acquisitions show that the thickness of the artefacts is far from uniform and can vary significantly along the surface. Figure 4 shows the measured shapes of the aligners.

Fig. 4. 3D compare map and 2D measurement on a slice of an upper and a lower arch.

5 Conclusion

Mechanical properties of the materials used for the production of clear aligners
are of utmost importance to evaluate the effectiveness of an orthodontic treatment
with these devices. Technical data supplied along with materials cannot always be
used as a reference, but need to be assessed taking into account the different con-
ditions in which they are used. Results obtained in the present work suggest that
the magnitude of the effects of the different conditions on the mechanical proper-
ties of thermoplastic materials can differ between different materials. Therefore,
materials for orthodontic appliances should be selected after a detailed characteri-
zation of their mechanical properties in the simulated intraoral environment. On the other hand, the force delivery process in the tooth-aligner system is not a fully understood mechanism. Several studies show that it could be greatly influenced by the aligner thickness. In this work, a methodology to measure the geometrical properties of clear aligners is proposed. The results obtained for real cases show that the thickness is far from uniform and can vary strongly along the surface of the tooth. Future work will be dedicated to validating the proposed method by applying it to the optimization of real orthodontic cases.

References

1. Simon M., Keilig L., Schwarze J., Jung B.A., Bourauel C. Forces and moments generated by
removable thermoplastic aligners: incisor torque, premolar derotation, and molar
distalization, Am. J. Orthod. Dentofacial Orthop., 2014,145:728-736.
2. Miller K.B., McGorray S.P., Womack R., et al. A comparison of treatment impacts between
Invisalign aligner and fixed appliance therapy during the first week of treatment, Am. J.
Orthod. Dentofacial Orthop., 2007, 131:302.e1-302.e9.
3. Zhang N., Bai Y., Ding X., Zhang Y. Preparation and characterization of thermoplastic mate-
rials for invisible orthodontics, Dent. Mater. J., 2011,30:954–959.
4. Ma Y.S., Fang D.Y., Zhang N., Ding X.J., Zhang K.Y., Bai Y.X. Mechanical Properties of Or-
thodontic Thermoplastics PETG, The Chinese journal of dental research, 2016, 19 (1), pp.
43-48.
5. Hahn W., Dathe H., Fialka-Fricke J., et al. Influence of thermoplastic appliance thickness on
the magnitude of force delivered to a maxillary central incisor during tipping, Am. J. Orthod.
Dentofacial Orthop, 2009,136:12.e1–12.e7.
6. Martorelli M., Gerbino S., Giudice M., Ausiello P. A comparison between customized clear
and removable orthodontic appliances manufactured using RP and CNC techniques, Dental
Materials, 2013, 29(2), pp. 1-10
7. Hahn W., Engelke B., Jung K., et al. Initial forces and moments delivered by removable ther-
moplastic appliances during rotation of an upper central incisor, Angle Orthod., 2010,
80:239–246.
8. Savignano R., Viecilli R.F., Paoli A., Razionale A.V., Barone S. Nonlinear dependency of
tooth movement on force system directions, American Journal of Orthodontics and
Dentofacial Orthopedics, 2016, 149 (6), pp. 838-846.
9. Barone S., Paoli A., Razionale A.V., Savignano R. Design of customised orthodontic devices
by digital imaging and CAD/FEM modelling. In BIOIMAGING 2016 - 3rd International
Conference on Bioimaging, Proceedings, Part of 9th International Joint Conference on Bio-
medical Engineering Systems and Technologies, BIOSTEC 2016, pp. 44-549.
10. Duffó G. S., Castillo E. Q. Development of an artificial saliva solution for studying the cor-
rosion behavior of dental alloys, Corrosion 60.6, 2004, 594-602.
11. Porcayo-Calderon J., et al. Corrosion Performance of Fe-Cr-Ni Alloys in Artificial Saliva
and Mouthwash Solution, Bioinorganic chemistry and applications, 2015
12. ISO 527-1:2012(en) Plastics – Determination of tensile properties. International Organization
for Standardization, Geneva, Switzerland, 2012
13. Ryokawa H., et al. The mechanical properties of dental thermoplastic materials in a simulat-
ed intraoral environment. Orthodontic Waves 65.2, 2006, 64-72.
14. Barone S., Paoli A., Razionale A.V. Computer-aided modelling of three-dimensional maxil-
lofacial tissues through multi-modal imaging, Proceedings of the Institution of Mechanical
Engineers, Part H: Journal of Engineering in Medicine, 2013, 227 (2), pp. 89-104.
15. Barone S., Paoli A., Razionale, A.V. Multiple alignments of range maps by active stereo im-
aging and global marker framing, Optics and Lasers in Engineering, 2013, 51 (2), pp. 116-
127.
The design of a knee prosthesis by Finite
Element Analysis

Saúl Íñiguez-Macedo1, Fátima Somovilla-Gómez1, Rubén Lostado-Lorza1*,
Marina Corral-Bobadilla1, María Ángeles Martínez-Calvo1, Félix Sanz-Adán1

1 University of La Rioja, Mechanical Engineering Department, Logroño, 26004 La Rioja, Spain.
* Corresponding author. Tel.: +0034 941299727; fax: +0034 941299727. E-mail address:
ruben.lostado@unirioja.es

Abstract The purpose of this paper is to study two types of knee prosthesis by means of the Finite Element Method (FEM). The Finite Element (FE) models were generated in several steps. A 3D geometric model of a healthy knee joint was created using 3D scanned data from an anatomical knee model. This healthy model comprises a portion of the long bones (femur, tibia and fibula), as well as the lateral and medial menisci, cartilage and ligaments. The digital model that was obtained was repaired and converted to an engineering drawing format using CATIA© software. Based on this format, two types of artificial knee prostheses were designed and assembled. Mentat Marc© software was used to build the healthy and artificial knee FE models, which were then subjected to different loads. The anthropometry of the human body studied and the combination of loads to apply to the knee were obtained with the 3D Static Strength Prediction software (3DSSPP©). The Von Mises stresses, as well as the relative displacements of the components of the healthy and artificial knee FE models, were obtained from Mentat Marc©. The Von Mises stresses for both the cortical and the trabecular bone of the artificial and healthy knee FE models were analyzed and compared. The stresses obtained from the two knee prostheses studied with the artificial FE models were very similar to those obtained from the healthy FE model.

Keywords: Finite Element Model (FEM), Knee joint, Total Knee Replacement,
Ligaments.

© Springer International Publishing AG 2017 447


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_45
448 S. Íñiguez-Macedo et al.

1 Introduction

Every year, thousands of patients suffer from severely diseased knee joints, as in rheumatoid arthritis or osteoarthritis. Total Knee Replacement (TKR) can help to relieve pain and to restore the functioning of the knee joint, allowing a return to a more active life [1]. Unicompartmental knee arthroplasty (UKA) is an effective treatment for localized osteoarthritis of the knee joint, and has demonstrated exceptional success rates in a number of studies [2]. The knee is one of the most complex joints in the human body. It is distinguished by its complex geometry and multibody articulations. Optimal joint stability and compliance during functional activities are provided by anatomical structures, such as ligaments, menisci, and articular cartilage. However, abnormalities due to age, injury, disease, and other factors can affect the biomechanical functioning of the knee joint [3]. This paper analyzes the behavior of two types of artificial knee prosthesis using the Finite Element Method (FEM). The use of FE analysis has become increasingly popular, as it allows for detailed analysis of the behavior of the joint/tissue under complex, clinically relevant loading conditions. During the past three decades, a large number of FE knee models of varying degrees of complexity, accuracy, and functionality have been reported in the literature [4, 5]. This study presents a healthy Finite Element (FE) model that comprises a portion of the long bones (femur, tibia and fibula), as well as the lateral and medial menisci, cartilage and ligaments. The healthy FE model was generated in several steps. Initially, a 3D scan of an anatomical knee model produced a first digital model. This digital model was repaired and converted into an engineering drawing format using CATIA© software. A combination of tetrahedral, hexahedral, and line finite elements was used with Mentat Marc© software to build the healthy knee FE model from the engineering drawing format previously generated. The combination of loads to apply to the knee was obtained using the 3D Static Strength Prediction software™ (3DSSPP) and the anthropometry of the patient studied. A nonlinear FE analysis, in which the mechanical contacts among the long bones, menisci and cartilage, as well as the nonlinear behavior of the ligaments, were taken into consideration, was run to obtain the Von Mises stresses and the displacements of the different components of the healthy knee. Using the healthy knee model that was generated and CATIA© software, two types of artificial knee prostheses were assembled to generate the artificial knee model in the new engineering drawing format. The model into which the two artificial knee prostheses were fitted consisted of the group of long bones, from which the menisci and cartilage were removed. Similar to the process developed for the assembly of the healthy knee FE model, the two proposed artificial knee models were built using Mentat Marc© software. So that the behavior of the healthy and artificial knee FE models could be compared, an identical group of loads, obtained from the 3DSSPP© software, was applied to both models (healthy and artificial).
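The equivalent stress used throughout the paper is the standard Von Mises measure. A short sketch of its computation from the six Cauchy stress components follows; the 3.12 MPa value simply reuses the healthy-femur stress reported in Section 3 as a uniaxial sanity check.

```python
import numpy as np

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Equivalent (Von Mises) stress from the six Cauchy stress components."""
    return np.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                   + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# For uniaxial tension the equivalent stress reduces to the applied stress:
print(von_mises(3.12, 0.0, 0.0, 0.0, 0.0, 0.0))  # -> 3.12 (MPa)
```

FE packages such as Mentat Marc© report this quantity per integration point; the function above is only a reference implementation of the formula.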

2 Material and Methods

2.1 Knee Anatomy

The knee is one of the largest and most complex joints in the body. The knee joins
the thigh bone (femur) to the shin bone (tibia). The smaller bone that runs along-
side the tibia (fibula) and the kneecap (patella) are the other bones that form the
knee joint. Tendons connect the knee bones to the leg muscles that move the knee
joint. Ligaments join the knee bones and provide stability to the knee.

2.2 Total Knee Replacement (TKR)

Knee replacement is surgery for people who have severe knee damage. The
most common cause of chronic knee pain is arthritis. Knee replacement can re-
lieve pain and enable one to be more active. When someone has a total knee re-
placement, the surgeon removes the damaged cartilage and bone from the surface
of the knee joint and replaces them with a prosthesis that is made of metal and
plastic.

Fig. 1. Total Knee Replacement (TKR): (a) Severe osteoarthritis. (b) The arthritic cartilage and
underlying bone have been removed and resurfaced by implants on the femur and tibia [6]

2.3 Healthy and Artificial Models: 3D Scanning

The process to generate the healthy Finite Element (FE) model was developed in several steps. First, the *.stl files that were generated by 3D scanning of an anatomical knee model (Figure 2a) with the Sense 3D Scanner software were imported into CATIA® CAD software. This digital model was repaired, and all of the imported parts were then assembled and converted to an engineering drawing format. The cartilages that were not reconstructed in the segmentation process were then modelled in order to connect the bones and fill the cartilaginous space. The finished CAD model was imported and assembled in the non-linear FEA package Mentat Marc© (Figure 2b). A combination of tetrahedral, hexahedral, and line finite elements was used to build the healthy knee FE model from the engineering drawing format that was generated previously. All FE models, of both the healthy and the artificial knee, had a linear element formulation. Also, all existing contact pairs, for both the healthy and the artificial knee, used a segment-to-segment contact model.
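The *.stl exchange format mentioned here is simple enough to inspect directly. The sketch below is a hypothetical round-trip of the binary STL layout (an 80-byte header, a uint32 facet count, then 50 bytes per facet: normal and three vertices as float32 plus a 2-byte attribute); it is illustrative only and unrelated to the actual scan files.

```python
import struct
import io

# Write one dummy facet in the binary STL layout, then read the count back.
buf = io.BytesIO()
buf.write(b"\x00" * 80)                       # 80-byte header
buf.write(struct.pack("<I", 1))               # uint32 triangle count
facet = [0.0, 0.0, 1.0,                       # facet normal
         0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0]  # three vertices
buf.write(struct.pack("<12f", *facet))        # 12 float32 = 48 bytes
buf.write(struct.pack("<H", 0))               # 2-byte attribute count

buf.seek(80)
(n_triangles,) = struct.unpack("<I", buf.read(4))
print(n_triangles)  # -> 1
```

Reading the facet count this way is a quick consistency check before handing a scanned mesh to a CAD or FEA package.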

Fig. 2. (a) Anatomical Knee model; (b) Mentat Marc© FE model

2.4 Finite Element Model (FEM): Healthy and Artificial Models

Using the generated healthy knee model and CATIA© software, two types of artificial knee prostheses were assembled in order to generate the artificial knee model in the new engineering drawing format. The materials used for the two prostheses were, respectively, a titanium alloy and a CrCoMo alloy. Figure 3a shows the titanium alloy prosthesis modeled with CATIA© software. Figure 3b shows the FE model on which a knee arthroplasty (TKR) has been performed.

Fig. 3. (a) Titanium Alloy Prosthesis. (b) Total Knee Replacement

2.5 Material Properties

The material properties of each of the components described in this article, as used in the FE modeling of the knee arthroplasty, were selected from the literature [7]. The most relevant properties are summarized in Table 1.

Table 1. Material Properties.

Material                 Young's Mod. [MPa]   Poisson   Nº Elements   Nº Nodes
Femur                    12000                0.2       271780        33455
Tibia                    12000                0.2       209722        22440
Fibula                   12000                0.2       85882         9612
Trabecular Bone          100                  0.3       113476        13968
Meniscus [8]             250                  0.45      65197         7911
Ligaments [9, 10, 11]    390                  0.4       121803        14375
Titanium Alloy           107000               0.34      25609         7237
CrCoMo Alloy             200000               0.3       25609         7237
HDPE                     1000                 0.46      58465         7010

2.6 Loads

Using the 3D Static Strength Prediction software (3DSSPP©) [12] and the anthropometry of the patient studied, the combination of loads to apply to the knee was obtained. The magnitudes of the loads are calculated from the range of movements involved in the biomechanics of the knee. The 3DSSPP© software provided the loads to apply to the knee of a 30-year-old man who weighs 120 kg and is 1.90 m tall. In this case, the resulting load to apply in the FE model is about 100 kg. Figure 4a shows the three-dimensional anthropometric model that was obtained with the 3DSSPP© software when climbing stairs is considered. Figure 4b shows the forces that act on the model.
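The paper expresses the applied load in kilograms. Assuming the 100 kg figure is a mass-equivalent to be converted with g = 9.81 m/s² before being applied as a nodal force (an interpretation, not stated explicitly in the source), the conversion is straightforward:

```python
G = 9.81  # gravitational acceleration, m/s^2

body_mass = 120.0      # kg, subject studied
knee_load_kg = 100.0   # kg-equivalent knee load from 3DSSPP for stair climbing

# Convert the mass-equivalent load to a nodal force in newtons.
force_N = knee_load_kg * G
print(f"Applied nodal load: {force_N:.0f} N "
      f"({100 * knee_load_kg / body_mass:.0f} % of body mass)")
```

This gives roughly 981 N, i.e. about 83 % of the subject's body weight acting through the joint in the posture considered.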

2.7 Boundary conditions

Once the loads based on the anthropometry and posture of the human body studied had been obtained with the 3DSSPP© software, they were applied to both the healthy and the artificial FE models. In this case, a fixed constraint (embedment) at the lower ends of the tibia and the fibula was applied to a group of nodes, while the load (100 kg) was applied to a group of nodes at the upper end of the femur. Figures 4c and 4d show, respectively, the boundary conditions for the FE model, considering both the load and the restriction of movement.

Fig. 4. (a) Anthropometry of the body studied. (b) Human load arrangement. Boundary conditions applied to the FE models: (c) load on the nodes of the femur and (d) embedment on the nodes of the tibia and fibula

All FE models were run on an Intel Xeon server with 8 cores and 32 GB of RAM, working in parallel. The computational time for each of the FE models analyzed was approximately five hours.
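The boundary-condition treatment (fixing one group of nodes and loading another) can be illustrated on a toy problem. The sketch below is not the knee model: it assembles a 1D bar of four axial elements, fixes one end as an embedment, applies a 981 N tip load, and solves the reduced system obtained by deleting the constrained degree of freedom. All values are illustrative.

```python
import numpy as np

n, E, A, L = 4, 12000.0, 10.0, 100.0   # elements; MPa (= N/mm^2); mm^2; mm
k = E * A / (L / n)                    # axial element stiffness, N/mm

# Assemble the global stiffness matrix from identical 2-node bar elements.
K = np.zeros((n + 1, n + 1))
for e in range(n):
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

f = np.zeros(n + 1)
f[-1] = 981.0                          # nodal load at the free end, N

# Embedment at node 0: delete its row/column and solve the reduced system.
free = np.arange(1, n + 1)
u = np.zeros(n + 1)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
print(f"tip displacement: {u[-1]:.4f} mm")   # analytic F*L/(E*A) = 0.8175 mm
```

The knee models follow the same pattern, only with 3D elements, contact, and many more degrees of freedom.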

3 Results

Figure 5a shows the relative displacements of the different parts that make up the healthy knee FE model. Figure 5b shows the relative displacements of the titanium alloy artificial knee model. The two figures show that the relative displacements of the healthy and the artificial models were very similar. Likewise, the displacements of the CrCoMo alloy artificial knee model were very similar to those of the healthy model. For all of the artificial FE knee models studied, it was observed that the resultant forces acting on each of the ligaments of the knee were similar to those obtained in the healthy FE model.

Fig. 5. (a) Relative Displacement of the Healthy FE model. (b) Relative Displacement of the Ar-
tificial FE Model

Figure 6 shows the Von Mises stresses in the femoral head of the healthy knee model (Figure 6a) and of the artificial knee model (Figure 6b). It can be seen that the stress in the femoral head of the healthy FE model has a value of 3.12 MPa, whereas the artificial FE model has a highly localized value of 10.42 MPa. This difference between the stresses of the artificial and healthy models is due mainly to the mechanical contact between the bone and the titanium prosthesis.

Fig. 6. (a) Von Mises stresses on the femoral head of the healthy knee model. (b) Von Mises stresses on the femoral head of the artificial knee model, where the femur is in contact with the prosthesis.

Figure 7 also shows the stresses obtained in the ligaments of both the healthy and the artificial FE models studied. The maximum stress for the medial collateral ligament was 1.495 MPa (Figure 7a), while the ultimate tensile strength of this ligament is around 39 MPa [9]. In a similar fashion, the maximum stress for the anterior cruciate ligament is 0.3519 MPa (Figure 7b) and 0.279 MPa for the posterior cruciate ligament (Figure 7c), while the ultimate tensile strengths of these ligaments are, respectively, 13 and 30 MPa [9]. This suggests that the resultant forces acting on each of the ligaments of the knee did not exceed their tensile strengths.
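The safety margins implied by these numbers can be checked directly; the values below are exactly those quoted above (peak FE stress versus ultimate tensile strength from [9], both in MPa):

```python
# Peak ligament stresses from the FE results vs. ultimate tensile strengths [9].
ligaments = {
    "medial collateral": (1.495, 39.0),
    "anterior cruciate": (0.3519, 13.0),
    "posterior cruciate": (0.279, 30.0),
}

# Safety factor = ultimate tensile strength / peak computed stress.
factors = {name: uts / peak for name, (peak, uts) in ligaments.items()}
for name, sf in factors.items():
    print(f"{name} ligament: safety factor {sf:.1f}")
```

All three factors are well above 1, consistent with the conclusion that no ligament approaches its tensile strength under the simulated load.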

Fig. 7. (a) Maximum stress for the medial collateral ligament, (b) maximum stress for the anterior cruciate ligament and (c) maximum stress for the posterior cruciate ligament

4 Discussion and Conclusions

The Von Mises stresses for both the cortical and trabecular bones of the artificial and healthy knee FE models were analyzed and compared. The stresses for the two knee prostheses studied with the artificial FE models were very similar to the stresses in the healthy FE model. In addition, the maximum Von Mises stresses were registered in the contact zone of the titanium alloy prosthesis. The stress in the femoral head of the healthy FE model had a value of 3.12 MPa, whereas the artificial FE model had a highly localized value of 10.42 MPa. This difference between the stresses of the artificial and healthy models is due mainly to the localized mechanical contact between the bone and the titanium prosthesis. Furthermore, for all of the artificial FE knee models studied, the resultant forces acting on each of the ligaments of the knee did not exceed their tensile strengths. This study demonstrates that the FEM may be used in combination with 3D design software as a set of efficient tools for the design of human prostheses.

References

1. Hopkins, Andrew R., et al. Finite element analysis of unicompartmental knee arthroplasty.
Medical engineering & physics, 2010, vol. 32, no 1, p. 14-21.
2. Argenson, Jean-Noël A.; Chevrol-Benkeddache, Yamina; Aubaniac, Jean-Manuel. Modern unicompartmental knee arthroplasty with cement. J Bone Joint Surg Am, 2002, vol. 84, no 12, p. 2235-2239.
3. Kaul, Vikas, et al. Finite Element Model of the Knee for Investigation of Injury Mechanisms:
Development and Validation.
4. Adouni, M., Shirazi-Adl, A., and Shirazi, R., 2012, “Computational Biodynamics of Human
Knee Joint in Gait: From Muscle Forces to Cartilage Stresses,” J. Biomech., 45(12), pp.
2149–2156
5. Baldwin, M. A., Clary, C. W., Fitzpatrick, C. K., Deacy, J. S., Maletsky, L. P., and
Rullkoetter, P. J., 2012, “Dynamic Finite Element Knee Simulation for Evaluation of Knee
Replacement Mechanics,” J. Biomech., 45(3), pp. 474–483.
6. Total Knee Replacement: http://orthoinfo.aaos.org
7. Carr, Brandi C.; Goswami, Tarun. Knee implants–Review of models and biomechanics. Ma-
terials & Design, 2009, vol. 30, no 2, p. 398-413.
8. Cowin, Stephen C. The mechanical properties of cancellous bone. CRC Press, Boca Raton,
FL, 1989.
9. Woo, S. L. Y., et al. Functional Tissue Engineering of Ligament and Tendon Injuries. Book
Ch. no 9. Translational Approaches In Tissue Engineering And Regenerative Medicine. 2007.
10. Beillas, P., et al. A new method to investigate in vivo knee behavior using a finite element
model of the lower limb. Journal of biomechanics, 2004, vol. 37, no 7, p. 1019-1030.
11. Vairis, Achilles, et al. Evaluation of a posterior cruciate ligament deficient human knee joint
finite element model. QScience Connect, 2014.
12. 3D Static Strength Prediction Program Version. User's Manual. The University of Michigan
Center for Ergonomics. 2016.
Design and Rapid Manufacturing of a
customized foot orthosis: a first methodological
study

Fantini M1, De Crescenzio F1*, Brognara L2 and Baldini N2

1 University of Bologna, Department of Industrial Engineering, Bologna, Italy
2 University of Bologna, Biomedical and Neuromotor Sciences, Bologna, Italy
* Corresponding author. Tel.: +39 0543374447. E-mail address: francesca.decrescenzio@unibo.it

Abstract A feasibility study was performed in order to demonstrate the benefits


of designing and manufacturing a customized foot orthosis by means of digital
technologies, such as Reverse Engineering (RE), Generative Design (GD) and
Additive Manufacturing (AM). The aim of this work was to define the complete
design-manufacturing process, starting from the 3D scanning of the human foot
anatomy to the direct fabricating of the customized foot orthosis. Moreover, this
first methodological study tries to combine a user-friendly semi-automatic model-
ling approach with the use of low-cost devices for the 3D laser scanning and the
3D printing processes. Finally, the result of this approach, based on digital tech-
nologies, was also compared with that achieved by means of conventional manual
techniques.

Keywords: Reverse Engineering, Generative Design, Computer Aided Design,


Additive Manufacturing, Foot orthosis.

1 Introduction

In general, according to the ISO 8549-1:1989 definition, an orthosis is "an externally applied device used to modify the structural and functional characteristics of the neuromuscular and skeletal system". In particular, within the medical field, foot orthotics is the specialty concerned with the design, manufacturing and application of foot orthoses, which are functional devices conceived to correct and optimize foot function. Nowadays, customized foot orthoses are recognized as the standard for the treatment of foot and lower limb pathologies.
© Springer International Publishing AG 2017 457

B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_46

In clinical practice, the traditional methods for manufacturing this kind of device are completely manual and are mainly based on plaster casting and hand fabrication [1, 2]. For example, a typical workflow can be described by the following
phases. Firstly, in order to produce an effective and comfortable orthosis, which fit
properly and accurately with the patient, a plaster-based impression of the foot in
the neutral position is taken to obtain a consistent cast. Later, a positive replica of
the foot is developed by filling the negative impression cast, typically, using plas-
ter. The next step is the manual modification and smoothing of this replica with
additional plaster material to replicate the soft tissues adaptation on load bearing
and to address the requirements of patient specific problems (Fig. 1). Subse-
quently, a rectangular sheet of Low Temperature Thermoplastic (LTT) material,
with thickness around 2 to 3 mm, is heated in the oven until the plastic reaches its
softening point and becomes pliable. Then, the heated plastic sheet is draped over
the corrected positive cast in a vacuum former and the vacuum is applied in order
to obtain the impression of the foot on the sheet. After cooling, the formed LTT
sheet is removed from the cast and manually cut to obtain the final shape of the
rigid shell for the foot orthosis. Finally, the rigid shell is completed by adding the
hell and by applying a soft top cover, before delivering to the patient the custom-
ized functional foot orthosis (Fig. 2).

Fig. 1. Plaster cast of the foot in the neutral position (left) and the cast modified with additional plaster (right).

Fig. 2. Rigid shell for the foot orthosis with the heel (left) and final functional foot orthosis (right).

Therefore, this conventional approach, widely used among practitioners, is completely based on manual activities and craft-based processes that depend on the skills and expertise of individual orthotists and podiatrists, who need considerable training and practice in order to reach optimal results. Moreover, this approach is also unpleasant for patients during the cast impression, and the process frequently needs to be reiterated if the orthosis has a poor fit on the foot, thus proving time-consuming and material-wasting.
On the other hand, novel approaches for designing and manufacturing customized foot-wrist orthoses by means of digital technologies have recently been reported as an alternative method to overcome these limitations [3, 4, 5, 6, 7, 8]. These are generally based on Reverse Engineering (RE), Computer Aided Design (CAD) and Additive Manufacturing (AM), with the following three main activities:
1) positioning the patient in a way that is suitable for 3D laser scanning and creating a full point cloud of the foot;
2) processing the data to generate the 3D model of the desired foot orthosis according to the clinical needs;
3) manufacturing the customized functional foot orthosis using a 3D printer.
However, as concerns the use of digital technologies for producing customized orthoses, some disadvantages that limit the spread of this approach have also been reported [9]. First, the investment required for the RE and AM devices (3D laser scanner and 3D printer) can be considerable. Then, the 3D modelling process is very hard for practitioners who do not have enough knowledge and skill in CAD applications, so the training required to make them able to autonomously complete a proper design project could be prohibitive in time and cost. Fortunately, in recent years there has been a proliferation of manufacturers of 3D laser scanners and 3D printers, resulting in a significant cost reduction for these devices. Furthermore, in the design processes, tools and methods are quickly evolving from Computer Aided Design (CAD) into Generative Design (GD), which allows the user to accomplish complex design tasks by a semi-automatic modelling process and to customize the resulting models by interactively modifying certain parameters.
For these reasons, this first methodological study aims to combine the use of low-cost devices for 3D laser scanning and 3D printing with a semi-automatic modelling approach, to assess the feasibility of a user-friendly and cost-effective solution that improves the traditional design-manufacturing process of customized functional foot orthoses. The study was therefore carried out through interdisciplinary cooperation between the staff of Design and Methods in Industrial Engineering and the staff of Podology. More specifically, a Generative Design (GD) workflow was expressly developed to enable practitioners without CAD skills to easily design and interactively customize foot orthoses. Additionally, the low-cost devices for reverse engineering and additive manufacturing that had been acquired by the Podology Lab were tested and compared with the high-cost ones of the Department of Industrial Engineering.

2 Methods

The whole methodology can be divided into three main processes. The first process concerns the digitization of the patient's foot by means of 3D laser scanner devices to produce a digital model. Then, the Generative Design process, which was expressly developed for this purpose, allows a customized foot orthosis to be generated interactively, with several features adjustable, and the watertight mesh to be exported in STL format. Finally, the last process involves the Additive Manufacturing of the physical prototype.

2.1 Digitizing process

The digitizing process was carried out by means of two capturing laser scanner
devices (Fig. 3).
Initially, a method for scanning the foot of a patient-volunteer was established that minimizes the time needed to obtain the point cloud and the errors, while aiming at the repeatability of the measurement. First, the foot of the patient was kept fixed and stable with the help of the practitioner; then, the scan was focused only on the lower part of the sole, which is used for the design of the custom foot orthosis. To validate the result, the point cloud obtained by scanning the posterior plantar surface of the foot in the neutral position (direct approach) was compared with that obtained by scanning the plaster cast of the same foot taken in the neutral position (indirect approach).

Fig. 3. Vivid 9i laser scanner from Konica Minolta (left) and Sense 3D scanner from 3D Systems
(right).

For this validation, the digitizing process was carried out by means of the Vivid 9i laser scanner (Konica Minolta, Tokyo, Japan). This is a tripod-mounted non-contact 3D digitizer (approximately 55,000 €) that provides high-speed and high-accuracy 3D measurements, based on the principle of laser triangulation. It is provided with the Polygon Editing Tool software to control the scanner and acquire the data.
Afterwards, the models coming from the direct and indirect approaches were uploaded into the open source software MeshLab (Visual Computing Lab, ISTI-CNR) [10], version 1.3.3, and the Iterative Closest Point (ICP) algorithm was applied to automatically align the two meshes. Then, to measure the difference between the two meshes, the Hausdorff distance filter was applied. To better visualize the error, the computed distance values were also displayed using a quality colour filter. It was observed that, over almost all of the lower part of the sole, the error is under 1.5 mm (Fig. 4). The differences resulting from the comparison of the point clouds obtained with the direct and indirect approaches are mainly due to a geometrical discrepancy between the foot plaster model and the real patient foot during the scanning. Differences that can be ascribed to the non-stationarity of the patient during the acquisition process were limited by cutting away the upper part of the sole with the toes, which is more susceptible to involuntary movements of the patient.
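MeshLab's Hausdorff distance filter essentially computes, for each vertex of one mesh, the distance to the closest point of the other mesh, then reports the maximum and mean. A minimal point-cloud version of the same idea can be sketched as follows; the two clouds are synthetic stand-ins for the aligned scans, offset by a known 0.3 mm.

```python
import numpy as np
from scipy.spatial import cKDTree

# Synthetic stand-ins for two aligned scans: a plane and the same plane with
# a small, known vertical offset (units in mm).
rng = np.random.default_rng(1)
a = np.column_stack([rng.uniform(0, 50, (1000, 2)), np.zeros(1000)])
b = np.column_stack([rng.uniform(0, 50, (4000, 2)), np.full(4000, 0.3)])

# Directed Hausdorff-style comparison A -> B: for each point of A, distance
# to its nearest neighbour in B; the maximum is the directed Hausdorff value.
d = cKDTree(b).query(a)[0]
print(f"max {d.max():.3f} mm, mean {d.mean():.3f} mm")
```

On real meshes the distance is measured point-to-surface rather than point-to-point, which is what makes MeshLab's filter (and the sub-1.5 mm figure above) more accurate than this sketch.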

Fig. 4. Digital models of the foot after indirect (left) and direct (centre) laser scanning with Vivid
9i laser scanner from Konica Minolta; Hausdorff Distance between the two meshes (right).

Additionally, a second capturing device was used: the Sense 3D scanner (3D Systems, Rock Hill, South Carolina, USA). This is a low-cost handheld device (approximately 400 €) that projects a pattern onto the surroundings using an infrared laser. The comparison was carried out by scanning the foot plaster cast, instead of the real foot itself, to avoid errors due to the different positions of the foot potentially held by the patient-volunteer.
After laser scanning, the digital models of the foot plaster cast obtained by the Vivid 9i laser scanner (575,408 vertices; 287,706 faces) and by the Sense 3D scanner (12,044 vertices; 24,049 faces) were compared in MeshLab. Once the two meshes had been aligned by applying the ICP algorithm, the Hausdorff distance between the two meshes was computed and visualized by applying a quality colour filter. An error under 1.5 mm was observed in the plantar surface of the foot (Fig. 5).

Fig. 5. Digital models of the foot after laser scanning with Vivid 9i laser scanner from Konica
Minolta (left) and with Sense 3D scanner from 3D Systems (centre); Hausdorff distance between
the two meshes (right).

2.2 Generative Design process

In recent years, from the perspective of designers, tools and methods have been
quickly evolving from Computer Aided Design (CAD) towards Generative Design
(GD) in different application fields, such as architecture, jewellery and industrial
design.
The most important aspect of GD is that it allows the generation of an infinite
number of shapes that follow specific rules: GD is not about designing a
shape, but about designing the process that builds the shape. This approach
allows the user to accomplish complex design tasks through a semi-automatic modelling
process, and to customize the resulting geometrical models by interactively modifying
certain parameters.
In practice, while Rhinoceros 3D (McNeel, Seattle, WA, USA) is a CAD environment,
widely used for industrial design, that allows freeform modelling at
any level of size and complexity, Grasshopper is a graphical algorithm editor
tightly integrated with the Rhinoceros 3D modelling tools. It is conceived to
create parametric 3D geometries by dragging components onto a tabs-canvas
interface and then to visualize the modelling results within the Rhinoceros 3D CAD
environment.
Design and Rapid Manufacturing of a customized foot orthosis ... 463

Therefore, as concerns the design process, instead of adopting a classical
CAD modelling approach, we formalized a GD workflow that contains the rules to
generate a foot orthosis customized to the specific patient anatomy, according
to the clinical needs (Fig. 6a). Moreover, this method allows interactive
modification of the geometrical features of the foot orthosis (shell thickness, heel size,
etc.) by simply moving the sliders of the control panel (Fig. 6b), and produces, in
a semi-automatic way, the watertight mesh ready for the subsequent AM process.
In the GD workflow, the input data are the mesh of the foot and three reference
points, corresponding to the first and the fifth metatarsophalangeal joints and to
the sustentaculum tali, that must be marked by the practitioner on the plantar
surface of the mesh. This is the only activity required of the practitioner, since these
three points allow the automatic orientation of the mesh in the CAD environment,
from which the automatic design process starts. The outline of the mesh is
used to represent the contour of the foot, while the first and the fifth
metatarsophalangeal joints are also used to mark the line that indicates the anterior border of the
shell of the orthosis (Fig. 7b). Subsequently, two orthogonal series of lines are
projected onto the plantar surface of the mesh and then used to create a blended
surface according to the morphology of the foot (Fig. 7c). Then, this blended
surface is thickened and trimmed to create the shape of the shell of the orthosis
(Fig. 7d). The customization process continues by allowing the practitioner to
adjust several features of the foot orthosis, such as the heel size, chamfer and cut, and
the shell slant bevel (Fig. 7e). Finally, this method allows obtaining the watertight
mesh of the customized foot orthosis (in STL format) that can be directly
manufactured by means of AM technologies (Fig. 7f).
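The automatic orientation step can be illustrated with a small sketch: the three marked landmarks define a plane and a medio-lateral direction, from which a right-handed reference frame for the mesh can be derived. This is an assumed reconstruction of the idea, not the authors' actual Grasshopper definition, and the landmark coordinates are hypothetical:

```python
import numpy as np

def foot_frame(mtp1, mtp5, sustentaculum):
    """Build a right-handed orthonormal frame from the three landmarks:
    first/fifth metatarsophalangeal joints and sustentaculum tali.
    Rows of the returned 3x3 matrix are the frame axes."""
    mtp1, mtp5, st = map(np.asarray, (mtp1, mtp5, sustentaculum))
    x = mtp5 - mtp1                       # medio-lateral direction
    x = x / np.linalg.norm(x)
    n = np.cross(x, st - mtp1)            # normal of the landmark plane
    z = n / np.linalg.norm(n)             # axis normal to the landmark plane
    y = np.cross(z, x)                    # completes the right-handed frame
    return np.vstack([x, y, z])

# Hypothetical landmark coordinates (mm): applying R to the mesh vertices
# aligns the landmark plane with a constant-z plane in the CAD environment.
R = foot_frame([0, 0, 0], [80, 0, 0], [30, -60, 5])
```

Once the mesh is expressed in such a frame, the subsequent construction steps (outline, projected lines, blended surface) can proceed without further user input.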

Fig. 6. Starting part of the GD workflow with the input data (a) and sliders of the control panel
for interactively modifying the foot orthosis geometrical features (b).

Fig. 7. Some steps of the modelling process: input data (mesh of the foot and three reference
points) (a), outline of the foot and border of the shell (b), blended surface on the mesh (c), basic
shape of the shell (d), final model of the shell (e) and watertight mesh of the customized foot
orthosis in STL format (f).

2.3 Additive Manufacturing process

As concerns the AM process, the customized foot orthosis was manufactured
by means of two different 3D printers (Fig. 8), both based on Fused Deposition
Modelling (FDM). This technology allows building parts by heating and
extruding a thermoplastic filament. In general, the whole process is divided into
three steps:
1. Pre-processing: the 3D printing preparation software orients and slices the
mesh (in STL format), defines any necessary support material and calculates
the extrusion path of the thermoplastic filament.
2. Production: the 3D printer heats the thermoplastic filament to a semi-molten
state and deposits it layer by layer along the extrusion path. Where needed,
the 3D printer deposits also a removable material (soluble or not) that acts as
support for the building part.
3. Post-processing: the user breaks away any support material or, if soluble,
dissolves it in a hot soapy water bath, to obtain the part ready to use.
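The slicing carried out in the pre-processing step reduces, at its core, to simple arithmetic: the slicer samples the mesh at one z-height per layer. A sketch of that idea follows (the 12 mm shell height and 0.2 mm layer thickness are illustrative values, not those actually used for the orthosis):

```python
import math

def slice_levels(z_min, z_max, layer_height):
    """z-heights at which a slicer intersects the mesh, one per layer,
    taken at mid-layer height."""
    n_layers = math.ceil((z_max - z_min) / layer_height)
    return [z_min + (i + 0.5) * layer_height for i in range(n_layers)]

levels = slice_levels(0.0, 12.0, 0.2)  # a 12 mm tall shell, 0.2 mm layers
print(len(levels), levels[0])  # 60 0.1
```

At each of these z-levels the slicer computes the mesh cross-section and converts it into extrusion paths, which is why a watertight STL is required as input.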

Fig. 8. Stratasys Fortus 250mc (left) and Wasp Delta 40 70 (right).

First, the mesh of the customized foot orthosis was manufactured in ABSplus
(Acrylonitrile Butadiene Styrene) by means of the Fortus 250mc (Stratasys Inc., Eden
Prairie, MN, USA). Beforehand, pre-processing was carried out with Insight, the
Stratasys job processing and management software (Stratasys Inc., Eden Prairie,

MN, USA). This system (approximately 40,000 €) is provided with two extruding
nozzles and works with a soluble support material for hands-free removal in
post-processing, which allows the final shell to be obtained easily.
In addition to this, the same mesh was also manufactured using a PLA (Polylactic
Acid) filament (1.75 mm diameter) by means of the Wasp Delta 40 70 (CSP
s.r.l., Massa Lombarda, Italy), a low-cost 3D printer (approximately 5,500 €). The
open source software Cura, version 15.04.5 (Ultimaker, Netherlands), was
used for the pre-processing. Since this system has just one extruding nozzle, the
support material is the same as the building material and was manually removed
during post-processing. However, due to the simple shape of the customized foot
orthosis, no particular effort was needed to obtain the final shell.
No remarkable difference can be observed with respect to the previous 3D
printed model (Fig. 9).

Fig. 9. Rigid shell for customized foot orthosis manufactured using ABSplus filament by means
of Stratasys Fortus 250mc (left) and PLA filament by means of Wasp Delta 40 70 (right).

3 Discussion

This methodological study describes a novel approach for designing and
manufacturing customized foot orthoses, and some points deserve discussion.
First, as concerns the digitizing process, the point cloud resulting from
laser scanning by means of the low-cost system (Sense 3D scanner) appears
accurate enough for the present practical purposes. However, the acquisition
process should be completed in a short time, since the patient has to be relaxed and his
foot firmly fixed in the neutral position. Therefore, some practice sessions would
be very useful for the operators before facing a real patient.
Then, with respect to the Generative Design process, the proposed workflow in
Grasshopper is intuitive and allows easy, interactive customization of the final
foot orthosis. Moreover, this workflow could be modified and improved in order
to semi-automatically design specific devices satisfying the demands of patients
with specific pathologies, for example with an accentuated valgus or varus deformity.
Finally, regarding the Additive Manufacturing process, the low-cost 3D printer
(Wasp Delta 40 70) is capable of providing adequate results for the shell of the foot
orthosis. Moreover, this system appears more versatile by virtue of its ability to
print a wide range of different filaments. Therefore, since the market of 3D

printing filaments is rapidly growing, further tests with different materials (both
flexible and rigid) can be performed to find the most suitable one. In addition, both
the patient-volunteer and the practitioners were asked for feedback, and positive
responses were collected.
Besides the advantages due to the better fit of the orthosis to the plantar surface of
the foot, a number of advantages from the Design for Manufacturing and Design for
Environment points of view can also be highlighted. To mention just a few: this practice is
less invasive and more comfortable for the patient, it is a cleaner process for the
practitioner to deal with, and it dramatically reduces the waste material. Moreover,
the digital model can be used in further developments for the integration of
electronic components for smart technology testing.

4 Conclusions

This first methodological study has validated, in terms of feasibility, the use
of a GD modelling approach, in combination with low-cost devices for 3D laser
scanning and 3D printing, as a real alternative to conventional processes for
creating customized foot orthoses.
The study was carried out through interdisciplinary cooperation between
the staff of Design and Methods in Industrial Engineering and the staff of
Podology, also in order to transfer skills and knowledge to all the practitioners involved.
Some feasibility tests involving the medical staff indicated that a customized
foot orthosis can be designed in a very intuitive way by a non-experienced user
in less than 20 minutes.
Moreover, the low-cost devices for reverse engineering (Sense 3D scanner) and
additive manufacturing (Wasp Delta 40 70) that have been acquired by the
Podology Lab also proved suitable for this kind of application.

References

1. Phillips J.W. The Functional Foot Orthoses, 1990 (Churchill Livingstone, New York).
2. Michaud T.M. Foot orthoses and other forms of conservative foot care, 1993 (Williams &
Wilkins, Baltimore).
3. Jain M. L., Dhande S. G. and Vyas N. S. Virtual modeling of an ankle foot orthosis for correc-
tion of foot abnormality. Robotics and Computer-Integrated Manufacturing, 2011, 27(2),
257-260.
4. Mavroidis C., Ranky R. G., Sivak M. L., Patritti B. L., DiPisa J., Caddle A., Gilhooly K.,
Govoni L., Sivak S., Lancia M., Drillio R., Bonato P. Patient specific ankle-foot orthoses us-
ing rapid prototyping. Journal of NeuroEngineering and Rehabilitation, 2011, 8(1), 2-11.
5. Telfer S., Pallari J., Munguia J., Dalgarno K., McGeough M. and Woodburn J. Embracing ad-
ditive manufacture: implications for foot and ankle orthosis design. BMC Musculoskeletal
Disorders, 2012, 13(84), 2-9.

6. Alam M., Choudhury I. A. and Azuddin M. Development of Patient Specific Ankle Foot
Orthosis through 3D Reconstruction. In 3rd International Conference on Environment Energy
and Biotechnology, Singapore, 2014, pp.84-88.
7. Palousek D., Rosicky J., Koutny D., Stoklásek P. and Navrat T. Pilot study of the wrist orthosis
design process. Rapid Prototyping Journal, 2014, 20(1), 27-32.
8. Dombroski C. E., Balsdon M. E. and Froats A. The use of a low cost 3D scanning and printing
tool in the manufacture of custom-made foot orthoses: a preliminary study. BMC Research
Notes, 2014, 7(1), 443.
9. Cazon A., Aizpurua J., Paterson A., Bibb R. and Campbell R.I. Customised design and manu-
facture of protective face masks combining a practitioner-friendly modelling approach and
low-cost devices for digitising and additive manufacturing. Virtual and Physical Prototyping,
2014, 9(4), 251-261.
10. Cignoni P., Callieri M., Corsini M., Dellepiane M., Ganovelli F. and Ranzuglia G.
MeshLab: an Open-Source Mesh Processing Tool. In Sixth Eurographics Italian Chapter
Conference, Salerno, 2008, pp. 129-136.
Influence of the metaphysis positioning in a new
reverse shoulder prosthesis

T. Ingrassia a)*, L. Nalbone b), V. Nigrelli a), D. Pisciotta a), V. Ricotta a)

a) Università degli Studi di Palermo, Dipartimento di Ingegneria Chimica, Gestionale,


Informatica, Meccanica, Viale delle Scienze – 90128 Palermo, Italy,
b) Ambulatorio di Ortopedia e Traumatologia, Azienda Ospedaliera Universitaria Policlinico
Paolo Giaccone di Palermo, 90100 Palermo, Italy
* Corresponding author. Tel.:+39 091 23897263; E-mail address:
tommaso.ingrassia@unipa.it

Abstract The aim of this work is to investigate the behaviour of a new reverse
shoulder prosthesis, characterized by a humeral metaphysis with a variable offset,
designed to increase the range of movement and to reduce impingement. In
particular, by means of virtual prototypes of the prosthesis, different offset values
of the humeral metaphysis have been analysed in order to find the best positioning
able to maximize the range of movements of the shoulder joint. The abduction
force of the deltoid, at different offset values, has been also estimated. The study
has been organized as follows. In the first step, the point clouds of the surfaces of
the different components of the prosthesis have been acquired by a 3D scanner.
This kind of scanner converts camera images into three-dimensional
models by analysing the moiré fringes. In the second step, the acquired point
clouds have been post-processed and converted into CAD models. In the third
step, all the 3D reconstructed models have been imported and assembled through a
CAD system. After, a collision analysis has been performed to detect the
maximum angular positions of the arm at different metaphysis offset values. In the
last step, FEM models of shoulder joint with the new prosthesis have been created.
Different analyses have been performed to estimate how the deltoid abduction
force varies depending on the offset of the humeral tray. The study made it possible
to understand how the offset of the metaphysis affects the performance of the
shoulder. The obtained results can be effectively used to give surgeons useful
guidelines for the installation of this kind of implant.

Keywords: Reverse Engineering, CAD, Reverse Shoulder Prosthesis, Range of Movements

© Springer International Publishing AG 2017 469


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_47
470 T. Ingrassia et al.

1 Introduction

Many pathologies of the shoulder joint are nowadays increasingly treated
using partial or total prostheses. This has brought growing attention to the
study of shoulder prostheses from both the clinical and the engineering
point of view.
In this context, the reverse shoulder prosthesis represents, nowadays, the most
common solution for patients with disabling arthrosis and/or severe injury of the
rotator cuff. In these cases, the reverse shoulder prosthesis makes it possible to
compensate for the muscle-tendon deficit by taking advantage of large surface muscles,
such as the deltoid, to enable the abduction of the upper limb.
Paul Grammont’s original reverse prosthesis, introduced in 1985,
revolutionized the traditional shoulder arthroplasty thanks to its novel design. This
system focused on four key principles [1], necessary to provide a stable construct
and to allow the deltoid to cover a serious injury of the rotator cuff:
• the centre of rotation must be fixed, lowered and medialized;
• the prosthesis must be inherently stable;
• the lever arm of the deltoid must be effective from the start of the movement;
• the glenosphere must be large and the humeral cup small, to create a semi-constrained articulation.
The main components of a reverse shoulder prosthesis are (Fig. 1): a humeral
stem and a metaphysis, a polyethylene insert, a metaglene and a glenosphere.

Fig. 1. Components of a reverse shoulder prosthesis.

The main difference between a conventional (or anatomical) shoulder
prosthesis and a reverse one is related to the way the joint geometry is
reproduced. The reverse shoulder prosthesis (Fig. 1), in fact, inverts the anatomical
shoulder configuration by fixing a glenosphere to the scapula and a polyethylene
concave insert (cup) to the humerus.
In this way, the anatomical gleno-humeral joint is inverted by creating a
concavity in the humerus and a convexity in the scapula. This inversion lowers
and medializes the position of the centre of rotation [2,3], so increasing,
during arm abduction, the efficiency of the deltoid [4,5] which, when a severe
Influence of the metaphysis positioning in a new … 471

rotator cuff arthropathy occurs, is the only muscle able to ensure the stability of
the shoulder joint and the movements of the upper limb [6,7].
Many researchers have studied, in recent years, the characteristics and the
performance of different reverse prostheses, also introducing innovative
solutions. The most common studies concern the Range Of Movement
(ROM) [8], scapular notching [9], loosening of the glenoid prosthetic component
[10] and the intrinsic instability [11-13] of the prosthetic system. Recently, innovative
reverse shoulder prostheses have been introduced. These new systems have off-axis
pins that allow different offsets of the humeral tray (or metaphysis) to be obtained.
In the literature, there is little information about the influence of the humeral
metaphysis positioning on reverse shoulder prosthesis performance.
For these reasons, in this work the effect of the humeral tray position has been
analysed to understand how it affects the range of movement and the abduction
force.
To this aim, digital models of the shoulder bones and the prosthetic
components have been created and virtual simulations [14-17] have been
performed.

2 Case study: a new reverse shoulder prosthesis

The innovative reverse shoulder prosthesis studied in this work is the Aequalis
Ascend™ Flex by Tornier (Fig. 2, left).

Fig. 2. Reverse shoulder prosthesis Aequalis Ascend™ Flex (on the left); limit positions and
back side of the humeral tray (on the right).

The innovative characteristic of the Aequalis Ascend™ Flex is its
humeral tray which, thanks to an off-axis pin (Fig. 2, right), can provide a variable
offset. Before assembling the prosthesis components, in fact, the chosen position
of the metaphysis can be fixed by means of a graduated scale (Fig. 2, right).

2.1 3D acquisition and CAD modelling

The shapes of the prosthetic components have been acquired with the COMET 5,
a 3D scanning system. This system is composed of an 11-megapixel camera, a laser
source, a workstation and the COMET Plus software, which manages all the data
from the scanning phase to the export of the point clouds.
The laser source projects fringe patterns onto the object to be acquired, and the
digital camera captures the related image. The projected fringe patterns are
deformed coherently with the shape of the external surfaces of the object. The
analysis of the fringes allows, according to the moiré principle [19], the
acquired images to be translated into three-dimensional point clouds. The system has a measuring
volume that can vary from 80 to 1,000 mm³, an accuracy threshold of about 5 μm
and a very short acquisition time (about 1 s).
The developed acquisition procedure is briefly summarized here. At first, the
surfaces of the prosthesis components have been opacified with a matt white
spray in order to minimize spurious reflective phenomena. After that, regular fringe
patterns have been projected onto the object surfaces by means of the laser source.
Multiple images have been acquired by rotating the objects around a vertical axis.
All the images have been processed in order to obtain a point-by-point description
of the scanned surfaces.
The point clouds have been post-processed and interpolated into NURBS
surfaces. In the final step of the process, the acquired surfaces have been
converted into CAD solid models (Fig. 3).

Fig. 3. a) Point cloud, b) NURBS surfaces, c) CAD model of the acquired humeral stem

2.2 Virtual assembling

To perform the virtual biomechanical study of the shoulder joint, the digitized
prosthetic components have been assembled with the CAD models of the shoulder
bones.
The assembly of the CAD models of the bones and the prosthesis has been
made by following, step by step, the surgical guidelines given by Tornier for performing
a total reverse shoulder arthroplasty. All the parts have been assembled using a 3D
parametric CAD software. The final assembly model (Fig. 4) has been fully
parametrized so that the positioning of all the components can be modified in
a very simple and fast way during the virtual simulations.

Fig. 4. Assembly of the shoulder joint components (on the left) and rendering (on the right).

3 Kinematic study of the new reverse shoulder prosthesis

To understand how the metaphysis positioning influences the range of
movement of the shoulder joint, several collision analyses have been performed in
a virtual environment. In particular, the assembled model of the shoulder and the
prosthesis has been studied by simulating the arm movements in a 3D CAD
software. The (allowable) extreme positions of the arm have been detected as soon
as two components of the shoulder joint collided with each other (Fig. 5).

Fig. 5. Collision observed during the measurement of the maximum abduction angle.

Four different configurations were studied (Fig. 6): a) lateral offset (grade 6), b)
posterior offset (grade 9), c) medial offset (grade 12), d) anterior offset (grade 3).

Fig. 6. Lateral (a), posterior (b), medial (c) and anterior (d) offset

For all the analysed configurations of the metaphysis, the ROM of the shoulder
joint has been investigated by finding the limit positions of the arm during
different movements: abduction, adduction, internal and external rotation.
The limit positions have been identified by measuring the maximum angle
values, defined as follows: for abduction and adduction, the angle between the
humerus axis and, respectively, the sagittal and transversal planes; for internal and
external rotation, the absolute value of the angle between the sagittal plane and
the projection of the metaphysis axis onto the transverse plane.
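The angle measurement just described reduces to the angle between the (unit) bone-axis vector and a reference plane. A minimal sketch follows, with a hypothetical humerus direction and the sagittal-plane normal assumed along x:

```python
import numpy as np

def angle_to_plane(axis, plane_normal):
    """Angle, in degrees, between a bone axis and a reference plane:
    90 deg minus the angle between the axis and the plane normal."""
    axis = axis / np.linalg.norm(axis)
    n = plane_normal / np.linalg.norm(plane_normal)
    return np.degrees(np.arcsin(abs(axis @ n)))

# Humerus axis abducted 62 deg away from the sagittal plane (normal = x)
humerus = np.array([np.sin(np.radians(62.0)), np.cos(np.radians(62.0)), 0.0])
print(round(angle_to_plane(humerus, np.array([1.0, 0.0, 0.0])), 2))  # 62.0
```

For the internal/external rotations the same function applies after first projecting the metaphysis axis onto the transverse plane, as described above.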
The obtained results are summarized in Table 1.

Table 1. Maximum angle values measured for each metaphysis offset.

Offset (grade) Abduction Adduction Internal rotation External rotation


Medial (12) 62.08° 86.38° 25.77° 68.97°
Anterior (3) 67.26° 85.00° 25.96° 68.02°
Lateral (6) 73.23° 87.05° 25.08° 68.36°
Posterior (9) 69.02° 87.30° 24.99° 67.95°

It can be observed that the internal and external rotations are substantially not
influenced by the metaphysis positioning. As regards the adduction movement, it
can be slightly improved by choosing a posterior offset. The most considerable
improvement is related to abduction: changing the metaphysis offset from
medial to lateral, in fact, increases the maximum angle value from
62.08° to 73.23°, an increment of about 18%.

4 Numerical study of the new reverse shoulder prosthesis

To understand how the humeral tray position affects the force required to
abduct the arm, non-linear FEM numerical simulations [20-22] have been
executed. For each analysed humeral tray offset, comparative analyses have been
performed by imposing a 10 N vertical load on the elbow and measuring the
force needed to keep the arm abducted. The FEM model of the shoulder joint
has been created by importing the CAD assembly into Ansys Workbench and
meshing it with about 60,000 eight-node solid elements (Fig. 7).

Fig. 7. FEM model of the shoulder joint


Afterwards, the contacts (between all the assembled components) and the boundary
conditions have been imposed. In particular, the scapula has been fully
constrained and a frictional contact, based on the augmented Lagrange algorithm
[23], has been imposed between the glenosphere and the polyethylene insert. All
the remaining contact pairs have been considered as bonded. As suggested in the
literature [6, 7], it has been assumed that the deltoid is the only muscle used
during the abduction of the arm. It has been modelled as an inextensible spring
whose ends correspond to the anchor points of the deltoid. The positions of the
virtual anchor points on the scapula and humerus sides have been located
using common kinematic models from the literature [24]. The considered abduction
angle is 80°.
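As an order-of-magnitude check on results of this kind, the abduction force can be estimated with a simple static moment balance about the joint centre, F_deltoid · r_deltoid = W · r_load. The moment arms below are hypothetical round numbers, not values measured from the FEM model:

```python
def deltoid_force(load_n, load_arm_mm, deltoid_arm_mm):
    """Static moment balance about the centre of rotation:
    the deltoid moment must equal the external load moment."""
    return load_n * load_arm_mm / deltoid_arm_mm

# 10 N at the elbow with assumed moment arms: 300 mm for the load,
# 140 mm for the deltoid line of action at 80 deg of abduction
print(round(deltoid_force(10.0, 300.0, 140.0), 2))  # 21.43
```

The FEM model refines this estimate by accounting for the actual contact geometry and the offset-dependent deltoid line of action.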
The obtained results are summarized in Table 2.

Table 2. Deltoid abduction force for each metaphysis offset.

Offset (grade) Abduction Force (N) Deltoid length (mm)


Medial (12) 21.34 138.07
Anterior (3) 21.78 140.73
Lateral (6) 22.46 143.90
Posterior (9) 22.08 140.43

It can be noticed that the abduction force varies only slightly depending on the
metaphysis offset. The extreme values have been found for the medial (21.34 N)
and the lateral (22.46 N) offsets. Moreover, it can be observed that the humeral tray
position affects the equivalent length of the deltoid: from medial to lateral offset,
in fact, the deltoid length increases by about 6 mm. This information could be very
useful especially when elderly people, whose tendons and muscles are not very
elastic, have to be treated with a reverse shoulder arthroplasty.

5 Conclusions

In this work, the study of a new reverse shoulder prosthesis, characterized by a
metaphysis with a variable offset, has been presented, in order to detect how its
positioning can modify the ROM of the shoulder and affect the force needed to
abduct the arm. The geometries of the prosthesis components have been acquired by
3D scanning techniques. Digital models of the shoulder bones and the prosthetic
parts, obtained by interpolation of the point-by-point raw acquisition data, have
been imported into a CAD software and parametrically assembled. Virtual
simulations have been set up to measure the limit positions of the arm, during
abduction, adduction, internal and external rotation, depending on the offset
values of the metaphysis. Interesting information has also been obtained, by FEM
numerical simulations, as regards the force that the deltoid should apply to abduct the arm.
The study made it possible to highlight how the offset of the metaphysis affects
the performance of a shoulder with a reverse prosthesis. It has been found that the
internal and external rotations are not influenced by the metaphysis offset which,
instead, strongly affects the maximum abduction angle. It also emerged that a
higher force is needed to abduct the arm when the humeral tray is positioned with
a lateral offset. Moreover, this kind of offset requires a larger elongation of the
muscles and tendons. This should be taken into account when patients with a severe
injury of the rotator cuff must be treated.
The obtained information can be useful and may constitute important
guidelines for surgeons during the installation of this kind of prosthesis. The
proposed procedure, moreover, could be automated to perform customized
analyses in order to find the optimal humeral tray position depending on the
particular shape and dimensions of the shoulder.

References

1. Berliner, J. L., et al., Biomechanics of reverse total shoulder arthroplasty. J Shoulder


Elbow Surg (2015) 24, 150-160
2. Grammont, P.M., Baulot, E.: Delta shoulder prosthesis for rotator cuff rupture.
Orthopedics, 16, 65–68 (1993)
3. Hoenecke, H., et al., Reverse total shoulder arthroplasty component center of rotation
affects muscle function, J Shoulder Elbow Surg (2014) 23, 1128-1135
4. Walker, D.R., Struk, A.M., Matsuki, K., How do deltoid muscle moment arms change
after reverse total shoulder arthroplasty?, J Shoulder Elbow Surg, 2015, 1-8.
5. Henninger, H.B., Barg, A., Anderson, A.E., Bachus, K.N., Effect of deltoid tension
and humeral version in reverse total shoulder arthroplasty: a biomechanical study, J
Shoulder Elbow Surg (2012) 21, 483-490.
6. Giles, J. W., et al., Implant Design Variations in Reverse Total Shoulder Arthroplasty
Influence the Required Deltoid Force and Resultant Joint Load. Clin Orthop Relat
Res (2015) 473:3615–3626. DOI 10.1007/s11999-015-4526-0.
7. Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D., Numerical study of the
components positioning influence on the stability of a reverse shoulder prosthesis
(2014) International Journal on Interactive Design and Manufacturing, 8 (3), pp. 187-
197
8. Frankle, M.A., Cuff, D., Levy, J.C., Gutiérrez, S.: Evaluation of abduction range of
motion and avoidance of inferior scapular impingement in a reverse shoulder model. J.
Shoulder Elbow Surg. 17(4), 608–615 (2008).
9. Simovitch, R.W., Zumstein, M.A., Lohri, E., Helmy, N., Gerber, C.: Predictors of
scapular notching in patients managed with the Delta III reverse total shoulder
replacement. J. Bone Joint Surg. 89, 588–600 (2007).
10. Franklin, J.L., Barrett, W.P., Jackins, S.E., Matsen, F.A.: Glenoid loosening in total
shoulder arthroplasty. Association with rotator cuff deficiency. J. Arthroplasty 3, 39–
46 (1988).
11. Nalbone, L., et al., Optimal positioning of the humeral component in the reverse
shoulder prosthesis, 2014, Musculoskeletal Surgery, 98 (2), pp. 135-142
12. Gutiérrez, S., Levy, J.C., Lee, W.E.: Center of rotation affects abduction range of
motion of reverse shoulder arthroplasty. Clin. Orthop. Relat. Res. 458, 78–82 (2007).
13. Frankle, M.A., Luo, Z.P., Gutiérrez, S., Levy, J.C.: Arc of motion and socket depth in
reverse shoulder implants. Clin. Biomech. 24(6), 473–479 (2009).
14. Ingrassia, T., Nigrelli, V., Design optimization and analysis of a new rear underrun
protective device for truck. Proceedings of the 8th International Symposium on Tools
and Methods of Competitive Engineering, TMCE 2010, 2, 713-725.
15. Cerniglia, D., Montinaro, N., Nigrelli, V., Detection of disbonds in multi-layer
structures by laser-based ultrasonic technique, 2008, Journal of Adhesion, 84 (10), pp.
811-829
16. Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D., A multi-technique simultaneous
approach for the design of a sailing yacht, 2015, International Journal on Interactive
Design and Manufacturing, DOI: 10.1007/s12008-015-0267-2
17. Cappello, F., Ingrassia, T., Mancuso, A., Nigrelli, V., Methodical redesign of a
semitrailer, 2005, WIT Transactions on the Built Environment, 80, pp. 359-369
18. Chen, F., Brown, G.M., Song, M.: Overview of three-dimensional shape measurement
using optical methods, Opt. Eng. 39(1), 10–22 (2000).
19. http://www.geomagic.com/en/products/studio/overview.
20. Fragapane, S., Giallanza, A., Cannizzaro, L., Pasta, A., Marannano, G., Experimental
and numerical analysis of aluminum-aluminum bolted joints subject to an indentation
process, International Journal of Fatigue (2015), 80, pp. 332-340

21. Ingrassia T., Nigrelli V., Buttitta R., A comparison of simplex and simulated annealing for
optimization of a new rear underrun protective device. Engineering with Computers, 2013,
29, 345-358.
22. Marannano, G., Mariotti, G.V., Structural optimization and experimental analysis of
composite material panels for naval use, Meccanica (2008) 43 (2), 251-262.
23. Cerniglia, D., Ingrassia, T., D'Acquisto, L., Saporito, M., Tumino, D., Contact between the
components of a knee prosthesis: Numerical and experimental study, 2012, Frattura ed
Integrita Strutturale, 22, pp. 56-68
24. Pennestrì, E., et al., Virtual musculo-skeletal model for the biomechanical analysis of
the upper limb, Journal of Biomechanics 40 (2007): 1350-1361.
Digital human models for gait analysis:
experimental validation of static force analysis
tools under dynamic conditions

T. Caporaso 1*, G. Di Gironimo 1, A. Tarallo 1, G. De Martino 2, M. Di Ludovico 2 and A. Lanzotti 1*

1 University of Naples Federico II – Fraunhofer JL IDEAS, DII, P.le Tecchio 80, 80125 Napoli, Italy
2 University of Naples Federico II, DIST, Via Claudio 21, 80125 Napoli, Italy
* Corresponding author - E-mail address: teodorico.caporaso@unina.it

Abstract This work explores the use of an industry-oriented digital human
modelling tool for the estimation of the musculoskeletal loads corresponding to a
simulated human activity. The error in using a static analysis tool for measuring
articulation loads under non-static conditions is assessed with reference to an
accurate dynamic model and data from real experiments. Results show that, for slow
movements, static analysis tools provide a good approximation of the actual loads
affecting the human musculoskeletal system during walking.

Keywords: Gait analysis; Virtual simulation; Biomechanics; Kinematics; Dynamics

1 Introduction

Gait analysis is the systematic study of human locomotion that provides quantification
of body movements and biomechanics [1]. It uses measurements from motion capture
systems (MoCap) and other devices (e.g. force platforms, inertial sensors, surface
electromyography systems) to evaluate the human walk cycle. As a result, kinematic
and kinetic parameters of human gait and their relationships with neuromuscular
functions can be analysed. The study of human locomotion has been widely used for
various purposes, such as diagnosis and treatment of neuromuscular diseases (e.g.
cerebral palsy, apoplexy, multiple sclerosis, Parkinson's disease), sports rehabilitation
and performance, design and evaluation of orthoses, and even security [2, 3].
Virtual reality is another powerful tool to investigate human locomotion. It indeed
allows for reproducing and simulating human patterns through the so-called digital

© Springer International Publishing AG 2017 479


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_48
human modelling (DHM), which in brief is the technique of simulating the body
movements of people of different height, weight, age and sex. For instance, the most
common human motion patterns (e.g. fit, reach, grasp, balance, move, lift and walk)
can be simulated, along with the articulation movements and the loads affecting the
musculoskeletal system.
DHM is already used for very different applications, such as vehicle ergonomics,
cycle time estimation of manual productions and follow-up of medical treatments
in individuals with pathological gait [4]. The use of digital humans in such differ-
ent fields has led to the birth of several software applications that pay more and
more attention to biomechanics. Santos [5] and Anybody [6] are two software
packages that integrate editable musculoskeletal models and allow dynamic
simulations of a wide variety of movements [7]. However, these software
tools are not commonly used in industry. Other software applications (such as
Human solutions Ramsis, Siemens Jack and Dassault Systèmes Human builder)
are indeed preferred because they are mainly aimed at the study of human-product
and human-machine interactions and also provide data exchange with the most
widespread industrial CAD tools. For the same reason, these applications gener-
ally do not implement physical-based dynamics, but just a kinematic engine,
which can be used to estimate the joint torques for static postures.
The present study aims at exploring how static force models can be effectively
used to estimate the musculoskeletal loads corresponding to a simulated human
activity (dynamic conditions). In particular, Siemens Jack, one of the most widely
used DHM tools in industry [8, 9], was selected for our purposes. Other software
tools could have been considered as well, and they will indeed be the subject of
further studies.
Jack can perform static analyses with the Force Solver tool. Ground reaction
forces are derived from the foot support configuration and the applied loads (e.g.
self-weight and possible further external loads); then the torques for each joint and
the affected muscles are computed. The error in using such a static analysis tool for
measuring musculoskeletal loads under non-static conditions is here assessed against
an accurate dynamic model based on the velocity data from a real motion sequence
and the actual ground reaction forces measured through a force platform. The
authors developed a proper motion analysis protocol based on the Newton-Euler
dynamics equations and a musculoskeletal model reported in the published literature.
The results from this protocol have been taken as a reference. Then, the same
MoCap data set (without ground force reactions) was processed with Jack and the
two sets of results were eventually compared.

2 Materials and methods

The development of the proper motion analysis protocol and of the Jack-based
approach is described in this section.
2.1 Experimental set-up

The experiments were conducted at the Laboratory of Advanced Ergonomics
Measures of the University of Naples Federico II (Fraunhofer Joint Lab IDEAS –
MISEF). The laboratory is equipped with a reflective optical motion capture system
(MoCap) and a force platform system, both provided by BTS SpA. The first one
(BTS SMART-DX 4000) consists of ten infrared digital cameras. Their sampling
frequency was fixed to 340 Hz (to achieve the maximum image resolution of
2048×1088 pixels). The second one is a BTS P-6000 system endowed with eight
force platforms. Its sampling frequency was set to 680 Hz (the maximum value).
The software used for system calibration, tracking, processing and data analysis
was also provided by BTS. The measurement volume resulting from calibration is
3.29×2.47×9.08 m with a mean error of 0.26 mm (Figure 1.a). The y axis is the
vertical direction, while the x and z axes are the medio-lateral and the anteroposterior
directions respectively. Siemens Jack v8.2 was used for the virtual simulations.

Figure 1 - a) Lab layout: calibration volume (A); force platforms (B); infrared digital
cameras (C). b) Lateral view of a gait test

The experiments involved a male volunteer, 176 cm tall, with a body mass of 69 kg,
who is representative of the fiftieth percentile of the random variables 'height of the
Italian population' and 'weight of the Italian population' for a fixed age [10]. He is
also of normal weight according to his body mass index (BMI = 22.27 kg/m2 [11]).
The well-known motion analysis protocol by Davis [12] was selected for this study.
Compared to protocols providing the same accuracy, the Davis protocol limits the
discomfort caused by optical markers attached to the skin, and makes the
identification of anatomical reference points easier.
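The BMI figure quoted above is straightforward to verify; the sketch below recomputes it from the volunteer's reported height and mass (values taken from the text, WHO band from [11]).

```python
def bmi(mass_kg: float, height_m: float) -> float:
    """Body mass index: mass divided by the square of height (kg/m^2)."""
    return mass_kg / height_m ** 2

# Volunteer from the text: 176 cm tall, 69 kg body mass.
value = bmi(69.0, 1.76)  # ~22.275 kg/m^2, reported as 22.27 (normal weight)
```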

Moreover, Davis's protocol has been proven to provide valid and reliable results and
is widely used for gait analysis. The protocol is based on eleven anthropometrical
measurements: twenty-two markers must be used for the static acquisition, whilst
just twenty-one are needed for the gait analysis. However, to accurately reproduce
the subject's movements in virtual reality, the authors increased the number of sites
being observed. In particular, eleven functional markers were added. As a result,
twenty-six anthropometrical measures were collected (Figure 2). At the beginning
of the experiment, the participant was requested to maintain an orthostatic position
for about 10 seconds. This acquisition (the so-called "standing") was used to set a
reference posture for his joint angles. Then, he was asked to walk on the force
platforms inside the MoCap measurement volume (Figure 1.b) so as to acquire ten
full strides. At the same time, ground force data were collected.

Figure 2 - Markers-set used for the experiments.

2.2 Data processing

Discrete digital data from the tracking system were interpolated with a third-order
polynomial and then treated with a second-order Butterworth low-pass filter [13]
to reduce digital artifacts and noise (i.e. skin motion). A cut-off frequency of 6 Hz
(i.e. six times the stride frequency) was set. On the other hand, a threshold of 5 N
was applied to the ground force data to remove some known noise (e.g.
environmental, electrical, electronic, computer, physiological, etc.). Other spikes in
the digital data were smoothed with a moving-average triangular window filter.
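As a sketch of the processing chain just described (the authors' exact implementation is not given), the fragment below applies a zero-phase second-order Butterworth low-pass at 6 Hz to marker data sampled at 340 Hz, zeroes force readings below the 5 N threshold, and smooths residual spikes with a triangular moving-average window; SciPy's `butter`/`filtfilt` are assumed for the filter design.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS_MOCAP = 340.0  # marker sampling rate (Hz)
CUTOFF = 6.0      # low-pass cut-off (Hz), ~6x the stride frequency

def smooth_trajectory(x: np.ndarray) -> np.ndarray:
    """Second-order Butterworth low-pass, applied forward and backward
    (filtfilt) so the filtered signal has no phase lag."""
    b, a = butter(2, CUTOFF / (FS_MOCAP / 2.0))  # normalised cut-off
    return filtfilt(b, a, x)

def clean_grf(fz: np.ndarray, threshold: float = 5.0) -> np.ndarray:
    """Zero out platform readings below the 5 N noise threshold."""
    fz = fz.copy()
    fz[np.abs(fz) < threshold] = 0.0
    return fz

def triangular_smooth(x: np.ndarray, width: int = 5) -> np.ndarray:
    """Moving average with a triangular window to damp residual spikes."""
    w = np.bartlett(width + 2)[1:-1]  # drop the zero end points
    w /= w.sum()
    return np.convolve(x, w, mode="same")
```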

A proper data processing protocol based on a musculoskeletal model was developed.
This protocol combines kinematic data, ground force data, anthropometrical
measurements and a specific anatomical table by Zatsiorsky et al. [14], with the
correction of De Leva [15], to derive the joint angles and the internal loads affecting
the musculoskeletal system. The estimation strategy for internal-body forces and
torques is discussed more extensively in section 2.3. Given the heel-strike (HS)
and toe-off (TO) events based on the signals from the ground force platform, further
parameters such as stride cadence, stance time, swing time, double support
time and stride length were computed. Then the MoCap data of the standing and
walking acquisitions were imported into Jack through an open data exchange format
(i.e. C3D). It is worth noticing that the data exchange also implied re-sampling of
the signal from 340 Hz to 30 Hz (as required by Jack).
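To make these quantities concrete, the hypothetical helpers below derive the stride-level temporal parameters from a pair of heel-strike times and the intervening toe-off, and re-sample a signal from 340 Hz to 30 Hz by linear interpolation (the text does not specify the authors' resampling method).

```python
import numpy as np

def gait_temporal_params(hs1, to, hs2):
    """Temporal parameters of one stride from the heel-strike (HS) and
    toe-off (TO) event times, in seconds."""
    stride_time = hs2 - hs1
    stance = (to - hs1) / stride_time * 100.0   # stance phase, % of stride
    swing = 100.0 - stance                      # swing phase, % of stride
    cadence = 1.0 / stride_time                 # strides (cycles) per second
    return stride_time, stance, swing, cadence

def resample(signal, fs_in=340.0, fs_out=30.0):
    """Linear re-sampling, e.g. from the 340 Hz MoCap rate to the 30 Hz
    rate required by the C3D import into Jack."""
    t_in = np.arange(len(signal)) / fs_in
    t_out = np.arange(0.0, t_in[-1], 1.0 / fs_out)
    return np.interp(t_out, t_in, signal)

st, stance, swing, cad = gait_temporal_params(0.0, 0.684, 1.12)
# stride time 1.12 s, stance ~61.1 %, cadence ~0.89 strides/s (cf. Table 1)
```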

Figure 3 - a) front view of the real marker set; b) the optimal marker set imported in
virtual reality; c) the manikin's front view with markers

A digital human with the same anthropometric characteristics as the volunteer was
created in Jack. The reference posture for the virtual manikin was set on the basis
of the standing posture data. As mentioned, new tracking sites were added to those
already provided by Davis's protocol to obtain a faithful reproduction of the acquired
motion sequence. The complete marker set selected for the experiments is shown
in Figure 3.

Figure 4 - Lateral view of walking test in Jack

Finally, the MoCap data were imported into Jack and the Force Solver tool was used
to estimate the joint angles and related torques. Jack uses a 25-site marker set,
therefore the 8 markers used just for the motion protocol calculations were not
imported into it (i.e. the sites named in Figure 2 as r_elbow2, l_elbow2, r_wrist2,
l_wrist2, r_bar1, r_bar2, l_bar1 and l_bar2). To have consistent values for ground reaction
forces, the walking mode was set as the force distribution strategy for the swing
phase. Moreover, the temporal gait events (TO and HS) were also identified through
a frame-by-frame video analysis (Figure 4).

2.3 Human Model: Estimation of internal forces and torques

The human body is here modelled as a system of rigid segments connected together
by proper joints (Link Segment Model, LSM) that represent the articulations. For
instance, the lower limb is schematized as a set of three rigid segments (representing
thigh, shank and foot respectively) connected together through frictionless hinges
(Figure 5).

Figure 5 - Lower limb: a) Anatomical model; b) Link segment model; c) free body

As is known, the dynamics of a rigid body is driven by the Newton-Euler equations:

f = m dv/dt
τ = I dω/dt + ω × (I ω)    (1)

where f and τ are the resultant force and the associated torque acting on the center
of mass (CoM) of each rigid segment respectively, and v, ω and I are the velocity
and angular speed of the CoM and the moment of inertia of each rigid segment.
Therefore, if we are given v(t) and ω(t) from a motion sequence and the ground
reaction forces from a force platform, we can use these equations to derive the
overall force system acting on each segment (inverse dynamics) and thus the cor-
responding stress in any point of the kinematic chain. In particular, inverse dy-
namics can be computed efficiently by exploiting the recursive structure of an ar-
ticulated rigid body system. A well-known recursive algorithm allows indeed
computation of inverse dynamics in linear time proportional to the number of links
in the articulated system [16]. The authors have implemented this algorithm in
their motion analysis protocol to accurately estimate the joint torques in the kine-
matic chain, based on the data from the MoCap system and the external loads
measured with the force platform. However, in a state of static mechanical equilib-
rium, the problem becomes even easier, because the joint torques can be computed
all at once through the inversion of the Jacobian matrix related to the particular
posture considered [16]. This is particularly important, because slow movements
(e.g. walking) can be viewed as a sequence of quasi-static postures. To estimate
the error due to such an approximation (i.e. neglecting inertial effects), a static
solution algorithm has also been implemented in the protocol.
As mentioned, most industry-oriented DHM software packages (like Jack) do not
implement true inverse dynamics, since their simulation environments often lack a
full physics engine; nevertheless, they allow for estimating the joint torques related
to a static posture, as in the case of the Jack Force Solver tool. The latter is based
on the 3D Static Strength Prediction Program (3DSSPP) developed by the
University of Michigan [17]. The 3DSSPP algorithm neglects any inertial effect
(static approach) and uses an "up-to-down" (top-down) approach for the solution.
This means that the analysis starts from the hand segment and continues down the
body until the ground reaction forces (GRF) are assessed at the end of the force
balance. Thus, the GRF is the output of the analysis rather than the input.
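The size of the quasi-static approximation error can be illustrated on a toy single-segment pendulum (not the authors' model; masses, lengths and accelerations below are hypothetical): the Newton-Euler torque about the joint is the static gravity moment plus an inertial term I·α, and a static solver simply drops the latter, so the relative error shrinks as the angular acceleration of the movement does.

```python
import math

def joint_torque(m, L, theta, alpha=0.0, g=9.81):
    """Torque about the pin of a uniform rod (mass m, length L) inclined
    at theta from the vertical: inertial term I*alpha plus the gravity
    moment about the pin. alpha = 0 gives the static solution."""
    I = m * L * L / 3.0                       # slender-rod inertia about the pin
    return I * alpha + m * g * (L / 2.0) * math.sin(theta)

m, L, theta = 3.0, 0.4, 0.2                   # hypothetical shank-like values
tau_static = joint_torque(m, L, theta)                # inertia neglected
tau_dynamic = joint_torque(m, L, theta, alpha=1.0)    # slow movement
rel_error = abs(tau_dynamic - tau_static) / tau_dynamic
```

With α = 1 rad/s² the inertial term accounts for roughly 12% of the torque in this toy case; halving α halves that share, which is why slow movement phases are well approximated statically.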

3 Results and discussions

As a first step, the developed motion analysis protocol has been validated. The
results for the torques measured with both the dynamic and the static model are
shown in Figure 6.

Figure 6 - Ankle (a) and knee (b) torques: static vs dynamic model results. Vertical dotted
lines indicate the end of the stance phase.
As expected, no significant differences can be appreciated for the ankle, whereas
the curves for the knee diverge significantly. At the ankle, inertial effects are
actually negligible, while they become more important for the correct determination
of the torques at the knee. Then, the spatial-temporal parameters and joint angles
were estimated. The mean values for the two analyses (with Jack and with our
dynamic protocol respectively), with the absolute and relative deviations, are listed
in Table 1. As mentioned, the results from the developed motion analysis protocol
are considered as the "true" reference. The results show that the static model of Jack
provides a quite good estimation of temporal gait parameters such as the stance and
swing phases (deviation below 1%). The deviations for the double support phase,
mean velocity and stride length are instead slightly higher (values ranging between
6% and 7%).

Table 1 Mean value of gait spatial-temporal parameters and Jack results


Parameter Jack result Real value Abs. Error Rel. Error (%)
Stride Time (s) 1.10 1.12 -0.02 1.79
Stance Phase (%) 60.9 61.1 0.20 0.33
Swing Phase (%) 39.1 38.9 0.20 0.51
Double Support Phase (%) 12.0 11.3 0.70 6.19
Stride Length (m) 1.25 1.35 -0.10 7.41
Cadence Stride (cycle/s) 0.91 0.89 0.02 2.25
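The deviation columns of Tables 1 and 2 follow the usual definitions (signed absolute error, unsigned percentage relative error against the reference value); the short check below reproduces the stride-time and stride-length rows.

```python
def deviations(jack_value, real_value):
    """Signed absolute error and percentage relative error, with the
    dynamic-protocol result taken as the reference value."""
    abs_err = jack_value - real_value
    rel_err = abs(abs_err) / abs(real_value) * 100.0
    return abs_err, rel_err

abs_t, rel_t = deviations(1.10, 1.12)  # stride time row: -0.02 s, ~1.79 %
abs_l, rel_l = deviations(1.25, 1.35)  # stride length row: -0.10 m, ~7.41 %
```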

Figure 7 - Mean joint angle of ankle (a) and knee (b) in the sagittal plane during the stride.
Vertical black lines indicate the end of the stance phase.

Table 2 - Mean joint angles for virtual analysis and real analysis
Parameter Jack result Real value Abs. Error Rel. Error (%)
Max Flex/Ext Knee [deg] -12.1 -3.3 -8.85 269
Min Flex/Ext Knee [deg] 54 64.9 -10.9 16.7
Range Max/Min Knee [deg] 66.1 68.2 -2.02 2.96
Max Flex/Ext Ankle [deg] -15.2 -9.6 -5.6 58
Min Flex/Ext Ankle [deg] 13.1 16.1 -3.0 18.7
Range Max/Min Ankle [deg] 28.3 25.7 2.56 9.96

As shown in Figure 7, the estimation of the joint angles provided by Jack is also in
good agreement with our model. However, the range of motion is generally just
slightly underestimated, except in the case of the knee, where the measured
deviation is significant (Table 2).
The lower accuracy in the evaluation of stride length and double support can be
ascribed to the errors at the ankles.

Figure 8 - Mean joint torques (static model) at ankle (a) and knee (b) level measured in the
sagittal plane, normalised to stride time (from HS to HS of the same foot). Red vertical
lines indicate the end of the stance phase.

As expected, our static model provides results very close to Jack's (Figure 8). The
estimation becomes inaccurate just around the beginning of the swing phase, due to
an incorrect temporal identification (Jack seems to offset the time window of the
swing phase).

4 Conclusions

In this work the authors assessed the static force prediction tool of Siemens Jack
under dynamic conditions (walking). Results show that, generally, when the speed
of the movement is low enough, the Force Solver tool gives a reasonable estimation
of the actual musculoskeletal loads, while the values predicted for the ankle loads
are slightly less accurate. However, this work is only a first step in the evaluation
of tools for virtual gait analysis. Future works will involve a larger sample of
individuals with new marker sets to better assess the movements of the foot
articulation. Walking tests at different speeds will also be conducted to evaluate the
influence of velocity on the results. Moreover, the torque estimation will be
improved with a proper definition of the swing and stance phases inside Jack. Other
commercial DHM tools are currently under study.

Acknowledgments The authors would like to thank Annachiara Schettino and Roberta Antonia
Ruggiero for their precious help during the development of the work.
References

1. Ghoussayni, S., Stevens, C., Durham, S., Ewins, D. Assessment and validation of a simple
automated method for the detection of gait events and intervals, 2004, Gait Posture, (20), pp.
266–272.
2. Castelli A. , Paolini G. , Cereatti A., Bertoli M., Della Croce U. Application of a markerless
gait analysis methodology in children with cerebral palsy: Preliminary results, September
2015, Gait & Posture, 42 (2), pp. S4–S5
3. Sigurnjak S., Twigg P., Bowring N. Development of a Virtual Gait Analysis Laboratory,
2008, Measurement and Control
4. Chaffin, Don B. Digital Human Modeling for Vehicle and Workplace Design, 2001, Society
of Automotive Engineers, Inc.
5. T. Marler, S. Beck, U. Verma, R. Johnson, V. Roemig, B. Dariush. A Digital Human Model
for Performance-Based Design, 2015, Lecture Notes in Computer Science, (8529) pp. 136-
147
6. Damsgaard, M., Rasmussen, J., Christensen, S.T., Surma, E., Zee, M.D. Analysis of
musculoskeletal systems in the AnyBody Modeling System, 2006, Simul. Model. Pract.
Theory, (14), pp. 1100–1111.
7. A.I. Purdue, A.I.J. Forrester, M. Taylor, M.J. Stokes, E.A. Hansen, J. Rasmussen. Efficient
human force transmission tailored for the individual cyclist, June 2010, 8th Conference of
the International Sports Engineering Association (ISEA), Procedia Engineering, 2(2), pp.
2543-2548
8. Di Gironimo G., Di Martino C., Lanzotti A., Marzano A., Russo G. Improving MTM-UAS
to predetermine automotive maintenance times, 2012, International Journal on Interactive
Design and Manufacturing. 6(4), pp. 265-273.
9. Di Gironimo G., Mozzillo R., Tarallo A. From virtual reality to web-based multimedia
maintenance manuals. 2013, International Journal Interactive Design Manufacturing, 7:183–
190.
10. Cacciari E, Milani S, Balsamo A and SIEDP Directive Council 2002-03. Italian cross-
sectional growth charts for height, weight and BMI (6-20 yr), 2002, Eur J Clin Nutr, 56:
171-80
11. BMI classification of the World Health Organization, web access on 6 May 2016:
http://apps.who.int/bmi/index.jsp?introPage=intro_3.html
12. Davis, R. B., Ounpuu, S., Tyburski, D., Gage, J. R. A gait analysis data collection and
reduction technique, 1991, Human Movement Science, 10(5), pp. 575-587
13. Kirtley, C. Clinical gait analysis: theory and practice, 2006, Elsevier Health Sciences.
14. Zatsiorsky, V. and Seluyanov, V. Estimation of the mass and inertia characteristics of the
human body by means of the best predictive regression equations, 1985, Biomechanics IX-
B, pp. 233-239.
15. De Leva, P. Adjustments to Zatsiorsky-Seluyanov's segment inertia parameters, 1996, Jour-
nal of biomechanics, 29(9), pp. 1223-1230.
16. Di Gironimo G., Pelliccia L., Siciliano B., Tarallo A. Biomechanically-based motion control
for a digital human, 2012, International Journal on Interactive Design and Manufacturing,
6(1), pp. 1-13.
17. Chiang, J., Stephens, A. and Potvin, J. Retooling Jack's static strength prediction tool, 2006,
SAE Technical Paper No. 2006-01-2350.
Using the Finite Element Method to Determine
the Influence of Age, Height and Weight on the
Vertebrae and Ligaments of the Human Spine
Fátima Somovilla-Gómez1*, Rubén Lostado-Lorza1, Saúl Íñiguez-Macedo1,
Marina Corral-Bobadilla1, María Ángeles Martínez-Calvo1, Daniel Tobalina-Baldeon1
1
University of La Rioja, Mechanical Engineering Department, Logroño, 26004, La Rioja, Spain
* Corresponding author. Tel.: +0034 941299727; fax: +0034 941299727. E-mail address:
fatima.somovilla@unirioja.es

Abstract: This study uses the Finite Element Method (FEM) to analyze the
influence of age, height and weight on the vertebrae and ligaments of the human
functional spinal unit (FSU). Two different artificial segments and the influence of
the patient's age, sex and height were considered. The FSU analyzed herein was
based on standard human dimensions. It was first fully parameterized in an
engineering modelling format using CATIA© software. A combination of different
finite elements (FE) was developed with Abaqus© software to model a healthy
human FSU and the two different sizes of artificial segments. The healthy and
artificial FSU finite element models (FE models) were subjected to compressive
loads of differing values. Spinal compression forces, posture data and male/female
anthropometry were obtained using 3DSSPP© software. Heights ranging from
1.70 to 1.90 meters, ages between 30 and 80 years, and body weights between 75
and 90 kg were considered for both men and women. The artificial models were
based on the Charité prosthesis, which consists of two titanium alloy endplates and
an ultra-high-molecular-weight polyethylene (UHMWPE) core. An analysis was
performed in which the contacts between the vertebrae and the intervertebral disc,
as well as the behavior of the seven ligaments, were taken into consideration. The
Von Mises stresses for both the cortical and trabecular bone of the upper and
lower vertebrae, and the longitudinal stresses in the seven ligaments that connect
the FSU, were analyzed. The stresses obtained for the two geometries studied by
means of the artificial FE models were very similar to those obtained from the
healthy FE models.

Keywords: Finite Element Model (FEM); prosthesis; age; height; weight.

© Springer International Publishing AG 2017 489


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_49
1 Introduction

Lower back pain is a widespread problem that is associated with deterioration
of the intervertebral disc. It requires medical treatment, as it considerably
constrains normal daily activities. This results in economic losses in the millions
for both industry and healthcare. This health problem is the second leading cause
of disability worldwide [1, 2]. This study seeks to compare the biomechanical
alterations of a Functional Spinal Unit (FSU) in the lumbar area, specifically the
L4-L5 vertebrae, between a healthy model and a model with an artificial disc
prosthesis under conditions of extreme compressive load. In addition to
withstanding compression, the prosthesis should exhibit a range of movement
within physiological and functional limits. The behavior of finite element models
that behave similarly to a human intervertebral disc was analyzed. The stresses in
the contact areas of the cortical and trabecular bones were examined, as well as the
ligament stresses. Finite element models were built for the healthy model and for
two different sizes of the artificial prosthesis model. The artificial FE model
utilized a Charité prosthesis, which consists of two titanium alloy endplates and an
ultra-high-molecular-weight polyethylene (UHMWPE) core [3].

2 Functional Spinal Unit (FSU) and the Intervertebral Disc

The spinal column consists of overlapping functional spinal units (FSU), as
shown in Figure 1. The basic structural component of the spinal column is the
Functional Spinal Unit, formed by two adjacent vertebrae separated by an
intervertebral disc, along with the surrounding ligaments [4]. An intervertebral
disc is thus located between each pair of vertebrae. The disc consists of two main
parts. One part is a gelatinous substance called the nucleus pulposus, which acts as
a cushion. The other part is the annulus fibrosus, a cartilage ring that surrounds the
nucleus pulposus and remains intact when force is applied to the spinal column [5].
Fig. 1. (a) Functional Spinal Unit (FSU) and (b) Behavior of the ligaments in this
study [6].

The ring's collagen fibers are arranged in concentric layers. These fibers
anchor the disc to the endplate and constitute a layer of cartilage of approximately
0.5 mm in thickness that covers the surface of the vertebral body up to the bony
rim that surrounds it. The intervertebral discs endow the spinal column with
flexibility and act as a cushion during daily physical activities such as walking,
running and jumping. Ligaments play an extremely important role in the spinal
column's biomechanics and stability. By assuming a limited range of physiological
movements, many numerical models have concluded that ligaments exhibit linear
behavior. This approach, however, may lead to significant errors in the results; for
example, it could result in an error in the radius of curvature of the stress-strain
curve, which is very important within the physiological range. Nevertheless, the
great majority of studies have adopted a model of linear stress-strain behavior [6],
such as that depicted in Figure 1. The following seven ligaments were
incorporated in the finite element model and designed as axial connectors: the
posterior longitudinal ligament (PLL), anterior longitudinal ligament (ALL),
transversal ligament (TL), flavum ligament (FL), capsular ligament (CL),
supraspinous ligament (SLL) and interspinous ligament (ISL) [7].

3 Arthroplasty: Total Disc Replacement (TDR).

Artificial disc replacement (ADR), or total disc replacement (TDR), is a type
of arthroplasty. TDR is a surgical technique that involves replacing a damaged or
injured intervertebral disc with a specialized prosthesis made of metal and
polyethylene. The objective of this procedure is to restore the functionality of the
functional spinal unit (FSU) and to maintain its mobility. The prosthesis
traditionally consists of two metal plates that are anchored to the superior and
inferior vertebrae. A polyethylene nucleus in the center serves as the joint. The
damaged or injured intervertebral disc can be partially or completely removed.
This study examined the case of partial replacement, so as to improve the stability
of the combination of vertebrae, prosthesis and ligaments [8].

4 Influence of age and gender on the loss of bone density in the


cortical and trabecular bone.

An increase in skeletal porosity is associated with a reduction in bone mass.
This, in turn, compromises the skeleton's biomechanical integrity, making porosity
the primary risk factor for bone fracture. An extremely common location of
fractures is the vertebrae. Other influential factors include obesity: it has been
shown that obese women have a higher bone density than women of average body
weight. Meanwhile, anorexia causes lower bone density, whereas physical activity
has a positive effect on bone mass. It is estimated that a woman loses about 35% of
her cortical bone and 50% of her trabecular bone during her lifetime, whereas men
lose about two thirds as much during their lifetimes. During a person's first years
of life, adolescence and early adulthood, bone mass increases until it reaches a
maximum and stabilizes. This apex is most likely reached during the third decade
of life and may be followed by a period of relative stability until the bone mass
begins to decline during the fourth and fifth decades. Although the loss of bone
mass does not follow a well-established pattern, evidence suggests that trabecular
bone loss precedes that of the cortical bone and accelerates among women during
menopause. Various studies have confirmed that trabecular bone density varies
according to age [9].

5 Material Properties. Cortical and Trabecular Bone.

The cortical bone is quite thin, measuring only about 2 mm in thickness. Cortical
bone thickness has no specific relationship to gender, although it does decrease
with age. In this work, the thickness was considered constant in the FE models for
ages between 30 and 80 years. Although the majority of studies highlight the
importance of the trabecular bone for age-related vertebral fragility, more recent
research suggests that the cortical layer plays a significant role, especially for older
individuals who have a decreased volume of trabecular bone [10], [11]. A property
of trabecular bone material that has received relatively little attention is Poisson's
ratio. It is difficult to measure this parameter experimentally for a material such as
trabecular bone. The Poisson values for trabecular bone range between 0.2 and
0.5. The present study has assumed constant values for the Young's modulus (E)
of the cortical bone, as well as for its Poisson's ratio [12]. The mechanical
characteristics of the trabecular bone have been selected in accordance with the
literature and the existing experimental
research. The density of the trabecular bone depends on age and was calculated for
the different cases [13]. Table 1 displays a summary of the mechanical
characteristics of the trabecular and cortical bone that were used in the finite
element models in this study.

Table 1. Mechanical characteristics of the trabecular and cortical bone according to age.

MALE / FEMALE
Age (years) 30 80 30 80
Height (m) 1.75 1.75 1.90 1.90
Weight (kg) 70 70 95 95
Young's Modulus trabec. (MPa) 386.31/428.06 168.08/154.95 386.31/428.06 168.08/154.95
Poisson trabec. 0.2 0.2 0.2 0.2
Density trabec. (mg/cm3) 142.88/157.55 64.68/59.85 142.88/157.55 64.68/59.85
Yield Stress trabec. (MPa) 2.5/2.86 0.8/0.74 2.5/2.86 0.8/0.74
Young's Modulus cort. (MPa) [14] 12000 12000 12000 12000
Poisson cort. [14] 0.3 0.3 0.3 0.3
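Table 1 gives the trabecular properties only at the two endpoint ages; intermediate cases would need an interpolation rule, which the text does not specify. A simple linear interpolation between the tabulated male Young's moduli might look like this (purely illustrative):

```python
def trabecular_E(age, e_30=386.31, e_80=168.08):
    """Linearly interpolated trabecular Young's modulus in MPa (male
    values from Table 1). The endpoints are the paper's; the linear
    rule between them is an assumption made here."""
    if not 30 <= age <= 80:
        raise ValueError("model covers ages 30-80 only")
    return e_30 + (age - 30) / 50.0 * (e_80 - e_30)
```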

The material properties of the remaining components of the FSU mentioned in
this section were selected from the existing literature on FEM modeling of the
spinal column. The most relevant data are summarized in Table 2.

Table 2. Summary: FSU Material Properties.

Materials FSU Young's Modulus (MPa) Poisson
Endplate 24 0.4 [15]
Posterior bone 3000 0.3 [16]
Annulus Fibrosus 4.2 0.45 [17]
Outer Fiber 550 0.3 [18]
Inner Fiber 360 0.3 [19]
Nucleus Pulposus 0.1 0.499 [20]
UHMWPE 200000 0.3 [21]
(*) The characteristics of the trabecular bone depend on age, as noted in the previous section.

6 Finite Element Models (FEM)

CATIA© was the software used to design the prosthesis. Different geometries
were also created. ABAQUS© was the calculation program that was
utilized to simulate the intervertebral disc prosthesis. It is an efficient and highly
flexible tool that enables one to interactively create finite element models and
visualize the results of the analysis. The finite element method has been applied to
a Healthy Model (with a healthy intervertebral disc) and to a Mobile Model (with
a prosthetic intervertebral disc). A combination of tetrahedral, hexahedral and line
finite elements (FE) was used to model both the healthy and the artificial FE
models. All FE models had a linear formulation, and a segment-to-segment contact
model was considered for all existing contact pairs in both the healthy and the
artificial models. The final models used in this study appear below.

Fig. 2. (a) FEM healthy intervertebral disc. (b) FEM artificial intervertebral disc

Fig. 3. (a) and (b) Healthy Model. (c) Mobile or artificial Model.

8 Simulation Parameters: Loads and Boundary Conditions

It has been proven in prior research that the compression load is the most
significant load in the biomechanics of the spinal column and that it is
transmitted primarily through the endplates and the intervertebral disc [22].
Therefore, a compression load was applied in the various simulations. The
magnitude of the applied loads was calculated according to the ranges of
movement provided by the biomechanics of the lumbar spine. The 3DSSPP©
software was used to calculate the loads; it provided data for males and females
of 30 and 80 years of age and of different heights and weights. The loads vary
between 1.2 and 1.67 MPa for men and women. Table 3 provides a summary of
the loads applied in the simulation. The boundary conditions were applied to the
lower vertebra of the FSU: an embedding condition was applied to its entire
surface to completely constrain all degrees of freedom of translation and
rotation. The compression load was applied to the upper vertebra. Fig. 4a shows
the boundary conditions that were applied to the FSU, Fig. 4b shows the pressure
that was applied to the top of the upper vertebra, and Fig. 4c shows the
embedment of the lower vertebra.
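The conversion from a compressive force, as predicted by a biomechanical tool such as 3DSSPP, to the pressure applied on the model is elementary. The sketch below is illustrative only; the force and loaded-area values are hypothetical placeholders, not the authors' data.

```python
# Illustrative only (not from the paper): converting a compressive spinal force
# into the uniform pressure applied on top of the upper vertebra of the FE model.
# The force and loaded area below are hypothetical placeholder values.

def compression_pressure_mpa(force_n, loaded_area_mm2):
    """Uniform pressure in MPa (= N/mm^2) equivalent to a compressive force."""
    return force_n / loaded_area_mm2

# e.g. 1800 N of spinal compression spread over a 1500 mm^2 endplate area:
print(compression_pressure_mpa(1800.0, 1500.0))  # 1.2
```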

Table 3. Summary of loads applied in the simulation (male/female values).

                     MALE/FEMALE
Height (m)           1.75        1.75        1.90        1.90
Weight (kg)          70          70          95          95
Age (years)          30          80          30          80
Compression (MPa)    1.2/1.25    1.2/1.25    1.67/1.59   1.67/1.59

Fig. 4. (a) Boundary conditions. (b) Load pressure, top vertebrae. (c) Embedment, lower vertebrae

9 Results

Figure 5 shows the Von Mises stresses for (a) the healthy model, (b) the cortical
and trabecular bone, and (c) the mobile model, for a male who is 1.70 m in
height, 75 kg in weight and 80 years of age.

Fig. 5. Von Mises stresses in the (a) Healthy Model, (b) Cortical bone, and (c) Mobile
model

Figure 5a shows the Von Mises stresses that appear in the healthy FSU FE
model. This figure shows that the value of this stress is approximately 84 MPa. In
addition, it can be seen in Figure 5b that the Von Mises stresses for both the
cortical and trabecular bones do not exceed 36 MPa. According to several
researchers [10-14], the maximum stresses on the cortical and cancellous bone
are approximately 38 and 90 MPa, respectively. In this case, the maximum
stresses were not exceeded for any of the bone types analyzed. Figure 5c shows
the Von Mises stress in the artificial FE model (mobile model). This figure shows
that the maximum stress is found on the endplates and that its value is
543.5 MPa, which is much less than the yield stress of the material (655 MPa).
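The comparisons above rely on the von Mises equivalent stress. As a reminder of the criterion (standard formula, not code from the study), it can be evaluated from the six Cauchy stress components; the endplate check against the 655 MPa yield stress then reduces to a simple ratio.

```python
import math

def von_mises(s11, s22, s33, s12=0.0, s23=0.0, s13=0.0):
    """Equivalent (von Mises) stress from the six Cauchy stress components (MPa)."""
    return math.sqrt(0.5 * ((s11 - s22) ** 2 + (s22 - s33) ** 2 + (s33 - s11) ** 2)
                     + 3.0 * (s12 ** 2 + s23 ** 2 + s13 ** 2))

# Sanity check: a pure uniaxial stress returns itself.
print(von_mises(543.5, 0.0, 0.0))          # 543.5
# Margin against yield for the endplate value reported above (655 MPa yield):
print(655.0 / von_mises(543.5, 0.0, 0.0))  # safety factor of about 1.2
```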
Table 4. Summary of Von Mises stresses obtained (MPa; male/female values).

Healthy Model
(Height-Weight-Age)   Cortical Bone   Trabecular   Fibers
170-75-30             32.4/33.1       2.34/2.88    36.52/32.88
170-75-80             36.13/34.2      2.93/2.16    16.64/32.84
190-95-30             101.3/100.2     4.04/4.8     45.97/45.97
190-95-80             105.7/106.8     3.03/2.19    45.9/48.9

Prosthesis D23
(Height-Weight-Age)   Cortical Bone   Trabecular   Fibers
170-75-30             64.07/63.05     2.037/2.25   7.03/7.02
170-75-80             62.8/63.8       2/1.83       7.03/7.02
190-95-30             57.2/58.5       3.21/3.53    6.08/6.03
190-95-80             89.11/89.11     2.85/2.71    9.8/9.5

(Height-Weight-Age)   Titanium Alloy  Core (UHMWPE)
170-75-30             543.5/544.5     24.88/24.5
170-75-80             543.5/544.8     24.88/24.72
190-95-30             499.5/501       30.92/31.2
190-95-80             757.6/756       34.62/35.1

10 Conclusions

The results indicate that the maximum stress in the healthy model occurred in the
cortical bone of the vertebrae: 105.7 MPa and 106.8 MPa for a male and a
female, respectively, of 1.90 m in height, 95 kg in weight and 80 years of age.
The stress on the trabecular bone appeared to increase with a person's height and
weight. For example, the maximum stress for a male/female of 1.70 m and 75 kg
was, respectively, 2.93 and 2.88 MPa, and for a male/female of 1.90 m and 95 kg
it was, respectively, 3.21 and 3.53 MPa. Age also exerts a clear influence, which
was evident in its relationship with the bone's mechanical characteristics and,
specifically, with the Young's modulus (see Table 1). For the mobile model that
was analyzed, a trend similar to that of the healthy model was observed. In this
case, for a prosthesis with a 23 mm core, the maximum stress obtained was 35.1
MPa for a female of 1.90 m, 95 kg and 80 years of age. The influence of a
person's weight can be seen in the mobile model, where the stress moves the
models closer to the point of fracture for the prosthesis and the bone, especially
in the cases of greatest height and increased age. In no case of the healthy model
were the ligaments observed to exceed the fracture load. It should be noted that
the calculated values were overestimated to improve the static equilibrium of the
FSU. The significant difference between the calculated force and the fracture
load of the ligaments reveals that the intervertebral connectors in the healthy
model were capable of withstanding greater loads. Due to the significant increase
in mobility in the mobile model, the force generated by the intervertebral
ligaments for this configuration is considerably greater.

References

1. Whatley, B. R.; Wen, X. Intervertebral disc (IVD): structure, degeneration,
repair and regeneration. Materials Science and Engineering: C, 2012, vol. 32, no 2,
p. 61-77.
2. Epifanio, V. A., Adrián, E. B. Diseño de prótesis para disco intervertebral. Memorias
del XV Congreso Internacional Anual de la SOMIM, 2009.
3. Trincat, S., et al. Two-level lumbar total disc replacement: Functional outcomes and
segmental motion after 4 years. Orthopaedics & Traumatology: Surgery & Research,
2015, vol. 101, no 1, p. 17-21.
4. Lan, Chin-Chun, et al. Finite element analysis of biomechanical behavior of whole
thoraco-lumbar spine with ligamentous effect. The Changhua Journal of Medicine,
2013, vol. 11, p. 26-41.
5. Gutiérrez, Ramiro Arturo González. Biomechanical study of intervertebral disc
degeneration. 2012. Tesis Doctoral. Universitat Politècnica de Catalunya.
6. Weiss, J. A., Gardiner, J. C., Ellis, B. J., Lujan, T. J., & Phatak, N. S. (2005). Three-
dimensional finite element modeling of ligaments: technical aspects.Medical
engineering & physics, 27(10), 845-861.
7. Rohlmann, Antonius, et al. Analysis of the influence of disc degeneration on the
mechanical behaviour of a lumbar motion segment using the finite element
method. Journal of biomechanics, 2006, vol. 39, no 13, p. 2484-2490.
8. Beristain Lima, S. Diseño de una prótesis articulada para disco intervertebral.
2010. Tesis Doctoral.
9. Hoffler, C. E., et al. Age, gender, and bone lamellae elastic moduli. Journal of
Orthopaedic Research, 2000, vol. 18, no 3, p. 432-437.
10. Chen, Huayue, et al. Age-related changes in trabecular and cortical bone
microstructure. International journal of endocrinology, 2013, vol. 2013.
11. Christiansen, Blaine A., et al. Mechanical contributions of the cortical and trabecular
compartments contribute to differences in age-related changes in vertebral body
strength in men and women assessed by QCT-based finite element analysis. Journal of
Bone and Mineral Research, 2011, vol. 26, no 5, p. 974-983.
12. Keaveny, Tony M.; Hayes, Wilson C. A 20-year perspective on the mechanical
properties of trabecular bone. Journal of biomechanical engineering, 1993, vol. 115,
no 4B, p. 534-542.
13. Ebbesen, Ebbe N., et al. Age- and Gender-Related Differences in Vertebral Bone
Mass, Density, and Strength. Journal of Bone and Mineral Research, 1999, vol. 14, no
8, p. 1394-1403.
14. Guan, Y., Yoganandan, N., Moore, J., Pintar, F. A., Zhang, J., Maiman, D. J., & Laud,
P. (2007). Moment–rotation responses of the human lumbosacral spinal
column. Journal of biomechanics, 40(9), 1975-1980.
15. Lu, Y. M., Hutton, W. C., & Gharpuray, V. M. (1996). Do bending, twisting, and
diurnal fluid changes in the disc affect the propensity to prolapse? A viscoelastic finite
element model. Spine, 21(22), 2570-2579.
16. Denozière, G., & Ku, D. N. (2006). Biomechanical comparison between fusion of two
vertebrae and implantation of an artificial intervertebral disc. Journal of
biomechanics, 39(4), 766-775.
17. Pitzen, T., Geisler, F., Matthis, D., Müller-Storz, H., Barbier, D., Steudel, W. I., &
Feldges, A. (2002). A finite element model for predicting the biomechanical behaviour
of the human lumbar spine. Control Engineering Practice, 10(1), 83-90.
18. Shirazi-adl, S. A., Shrivastava, S. C., & Ahmed, A. M. (1984). Stress analysis of the
lumbar disc-body unit in compression a three-dimensional nonlinear finite element
study. Spine, 9(2), 120-134.
19. Yao, J., Turteltaub, S. R., & Ducheyne, P. (2006). A three-dimensional nonlinear
finite element analysis of the mechanical behavior of tissue engineered intervertebral
discs under complex loads. Biomaterials, 27(3), 377-387.
20. Eberlein, R., Holzapfel, G. A., & Schulze-Bauer, C. A. (2001). An
anisotropic model for annulus tissue and enhanced finite element analyses of intact
lumbar disc bodies. Computer Methods in Biomechanics and Biomedical
Engineering, 4(3), 209-229.
21. Vacas, F. G., Juanco, F. E., de la Blanca, A. P., Novoa, M. P., & Pozo, S. P. (2014).
The flexion–extension response of a novel lumbar intervertebral disc prosthesis: A
finite element study. Mechanism and Machine Theory, 73, 273-281.
22. White, Augustus A., et al. Clinical biomechanics of the spine. Philadelphia:
Lippincott, 1990.
Part IV
Nautical, Aeronautics and Aerospace
Design and Modelling

It is well known that the technological gap between the aerospace field and the
nautical one has been significantly reduced in recent years. However, for
economic reasons, this gap reduction has mainly concerned the competition field
(e.g. America's Cup, Volvo Ocean Race, Rolex circuit), while the commercial
sector has obtained only a limited benefit. Recently, thanks to the decreasing
cost of Computer Aided Engineering tools, aerospace know-how has been
applied to other industrial fields such as, for instance, the nautical one.
Differently from an airplane or a submarine, a yacht moves through water and
air at the same time, giving rise to interface problems (free surface, fluid-structure
interaction, etc.) which are still complex to solve, even if the papers guide the
reader toward the state of the art of the related topics.
The papers presented in this chapter are focused on problem solving, in most
cases concerning real problems, with a substantial degree of technological
innovation. They cover a wide range of topics mainly focused on yacht design.

Alain Daidié - INSA

Antonio Mancuso - Univ. Palermo


Numerical modelling of the cold expansion
process in mechanical stacked assemblies

Victor ACHARD1*, Alain DAIDIE1, Manuel PAREDES1 and Clément CHIROL2

1 Université de Toulouse, ICA (UMR CNRS 5312), 3 Rue Caroline Aigle, 31400, Toulouse,
FRANCE
2 Airbus Operations S.A.S, 316 route de Bayonne, 31060, Toulouse Cedex 9, FRANCE
* Corresponding author. E-mail address: victor.achard@insa-toulouse.fr

Abstract The cold expansion process is a technology that is widely used to en-
hance the fatigue resistance of aircraft metallic parts. The issue analysed in this
paper concerns the case when the expansion is carried out through an assembly
composed of several sheets. The numerical work conducted was intended to un-
derstand the phenomenology of the process within a stiff assembly. In particular, it
aimed to analyse the deformation of the sheets and the residual stress fields gener-
ated by the process. For this purpose, an axisymmetric finite element model of the
split sleeve process was developed, simulating a single expansion performed
through a stack of two titanium holes (Ti-6Al-4V). The sheets to be expanded
were positioned between two steel plates to simulate the assembly. The model
could predict the shape and intensities of the fields within the expanded sections
and their global outer shapes. We particularly focused on the phenomena prevail-
ing between sheets. The simulation showed that deformations at the interface were
greatly reduced when the stack was stiffened axially. Moreover, high circumferen-
tial and axial residual stresses were generated in the sheets. Results were com-
pared with a single hole expanded without axial stiffening.

Keywords: Aerospace alloys; Cold expansion process; Mechanical joints;
Axisymmetric FEM; Assembly technology.

1 Introduction

In the aeronautical field, the design of ever more efficient and reliable structures
remains a technical challenge. In mechanical assemblies, hole edges are the seats
of high stress concentrations and are major risk sites for crack initiation under cy-
clic loading [1]. To fight against this damage, manufacturing technologies such as

© Springer International Publishing AG 2017 501


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_50
cold expansion have become widespread. Basically, cold expansion is performed
by inserting a tapered mandrel into an initial hole with a high interference fit. The
aim is to generate deep, intense compressive residual stress fields induced by
hardening and elastic feedback of the surrounding material [2] [3]. Industrially,
processes that use a split sleeve inserted between the mandrel and the hole are the
most common. Due to production or maintenance requirements, it often appears
mandatory for the expansion to be carried out through an assembly composed of
several sheets, where the stack may be mono or multi-material (Figure 1).

Fig. 1. “Split sleeve” cold expansion process applied in a stacking, from [4]

The issue analysed in this paper concerns the cold expansion of a joint composed
of aerospace titanium alloys as the behaviour of these materials when subjected to
cold expansion is relatively unknown, much more so within mechanical assem-
blies. Within joints, it appears important to master both the residual fields that en-
hance resistance to fatigue and the residual deformation of the sheets to be joined
[5] [3]. In the literature, we identified the out-of-plane deformation generated at
the edge of the hole after cold expansion, which is often called a “volcano” be-
cause of its shape. According to Boni et al., the reaming generally performed after
cold expansion is unable to remove it completely [6]. During the final assembly of
sheets containing cold expanded holes, this defect can prevent good contact be-
tween the surfaces of the joints. Consequently, the load transferred through fric-
tion in the contact areas of the bolted section may be strongly reduced and fret-
ting-fatigue issues may be aggravated [7]. When expansion is performed in hard
metals, the expansion ratios required to reach significant fatigue benefits may be
higher than in aluminium alloys and involve even greater “volcano” defects. This
phenomenon was well measured and forecasted using numerical tools in a previ-
ous study carried out on a titanium alloy [8]. Also, because of the high stiffness
and mechanical strength of these materials, even a slight flatness defect on the
contact surfaces may strongly impact the fatigue performance of the joint. So it
appears very interesting to look for efficient solutions to reduce the axial defor-
mation of the holes using the split sleeve process. During experimental tests, sig-
nificant differences were observed in terms of deformation when performing cold
expansion through stacks of titanium sheets, whether axial preload of the joints
was considered or not. Mann et al. also observed these reduced surface defects at
the interface of plates that were expanded at the same time [4]. We can also as-
sume a priori that the application of clamping during the cold expansion may gen-
erate strong disparities in the residual fields generated in comparison with uncon-
fined expanded sheets. The objective is to find an efficient methodology that
allows the “volcano” defect to be reduced and ensures that beneficial compressive
fields are generated. For this, we need to analyse the potential distribution of de-
formation fields and residual stresses generated when expansion is performed
through a stiffened stack. We chose to study the expansion of two titanium sheets,
clamped in an axially stiffened assembly, i.e. between two steel plates. In this pa-
per, we will explain the modelling strategy chosen, present the impact of clamping
on the “volcano” effect and give an overview of the various stress fields generated
after expansion of the hole in a stack.

2 Modelling strategy for stacking

After cold expansion, high stress gradients are generated at the edge of the hole.
Moreover, the fields are triaxial (circumferential, radial and axial) and thus not
obvious to identify. An excellent tool to provide an understanding of the complex
phenomena involved in cold expansion is finite element simulation. We identified
various strategies in the literature to simulate the “split sleeve” cold expansion
process but noticed that only 2D axisymmetric and 3D modelling, such as those
proposed by Maximov et al [9] and Yuan et al. [10], were able to include the axial
dimension of the plates and the representative kinematics of the tooling. In previ-
ous works, we have sought to develop new axisymmetric models dedicated to the
split sleeve process in hard metals that can give an accurate account of the operat-
ing conditions of the process. Thus, this modelling strategy has already proved its
soundness in two studies where expansion was simulated in a titanium hole [8]
[11]. The models are able to handle very high expansion ratios (up to 8% tested),
high plastic flows of the parts and various hardening laws. They also consider the
deformations of the tools, the plasticity of the sleeve and all the contact surfaces
between the parts. Finally, the reaming process is simulated in order to describe
the redistribution of the stress and strain fields after the expansion step.

The model presented in this paper will use the same characteristics to simulate the
split sleeve process in a stack and therefore provide information regarding the
phenomenology of expansion through various parts. In order to decrease the size
of the model, we will first consider two sheets that share one common interface.
Geometries, loading environment and boundary conditions for each step of the
process must be as close as possible to those sustained by the various “real” parts.
Given the large contact areas, the number of parts and the high plasticity rates
required for the simulation, the size of the model is dramatically increased, as is
the computing time needed to solve it.

For this process “characterization” model, an axisymmetric strategy seems more


suitable than a three-dimensional one. In the 2D axisymmetric model chosen, each
element required for the discretization is an axisymmetric body. The properties
and associated loading (radial and axial) are also axisymmetric. The finite element
model shown in Figure 2 was built using the commercial software Abaqus 6.12. The first
step was to build the representative axisymmetric sections defined according to
their polar coordinates. The radial component was represented by the 11 direction
in the Abaqus coordinate system and the axial component by 22. Consequently,
the 33 direction expressed the circumferential components. In the model, two tita-
nium sheets were expanded using a single tapered mandrel and without changing
the sleeve between the two holes. To simulate the axial stiffening of the joint, two
steel plates, called stiffeners, were positioned one on either side of the titanium
stack. No additional pressure was added between the plates and only a locking of
the axial direction of the outer faces of the stiffeners was selected, as shown in
Figure 2. The initial gap between the various plates was 10^-3 mm. This
configuration can be assimilated to a rigid tool ensuring strong holding of the stack. Geome-
tries were chosen to simulate the expansion of a 9.525 mm diameter hole (after
reaming), considering a 6% initial expansion ratio and a width-to-diameter ratio of
3. Each sheet was 5 mm thick and the stiffeners were 6 mm thick. The dimensions
of the tooling (mandrel, sleeve and jaw) were based on industrial tools. The mate-
rial used for the simulation was a Ti-6Al-4V alloy for the two sheets to be ex-
panded, with a linear kinematic hardening law. High strength steel was used for
the sleeve, with an isotropic hardening law, and linear elastic steel was chosen for
the mandrel and the stiffeners.
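As a sketch of how the 6% figure relates to the tool geometry, a common definition of the applied expansion ratio in split sleeve cold expansion is shown below. This is our own illustration, not taken from the paper; the mandrel major diameter, sleeve wall thickness and starting-hole diameter are hypothetical values, chosen only so that the ratio lands near 6%.

```python
# Illustrative sketch (not from the paper): a common definition of the applied
# (initial) expansion ratio for the split sleeve process. All tool dimensions
# and the starting-hole diameter below are hypothetical placeholders.

def expansion_ratio(d_mandrel_max, sleeve_thickness, d_hole):
    """Applied expansion: (mandrel major dia + 2 * sleeve wall - hole dia) / hole dia."""
    return (d_mandrel_max + 2.0 * sleeve_thickness - d_hole) / d_hole

d_hole = 9.30  # hypothetical starting hole diameter (mm); reamed to 9.525 mm afterwards
ratio = expansion_ratio(d_mandrel_max=9.46, sleeve_thickness=0.20, d_hole=d_hole)
print(f"applied expansion = {100.0 * ratio:.1f} %")  # about 6.0 %
```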

The coefficient of friction between the sleeve and the sheets was taken to be 0.1.
A value of 0.15 was chosen between the jaw and the sleeve. Between the titanium
sheets, the coefficient was 0.36. In all the other contacts, friction was neglected.
Meshing of the sheet and the sleeve was performed using 8-node biquadratic ax-
isymmetric quadrilateral elements, including full integration in the high plasticity
rate areas. Meshing of the mandrel, the jaw and the stiffeners used 4-node bilinear
axisymmetric quadrilateral elements. The minimum size of the elements was
8·10^-3 mm close to the hole edge. The total number of elements was 65218, including
9261 that were dedicated to the contacts. The mandrel passed through the two
holes in succession. To achieve proper sequencing, 4 steps and 24 displacement
conditions were required in the model. First of all, the mandrel was pulled through
the stack using a displacement condition at its bottom (as in the real process). The
mandrel passed through the first sheet then crossed the interface into the second
sheet and finally exited the stack. When the mandrel had been released from the
holes, the two sheets were removed from the joint and reamed. The reaming was
simulated by deactivation of a layer of elements, followed by a step to reach the
final static equilibrium of the sheets. In the industrial process, the two sheets are
not removed from the assembly to perform the reaming but, here, we wanted to
check the shape and size of the volcano without axial pre-stressing. The stress
state within the joint immediately after cold expansion and reaming was also as-
sessed with this model. The results of the simulations were obtained using the
ABAQUS Standard implicit solver. Given the size of the contact surfaces and the
high plasticity rates, 33 hours and 40 minutes of computation time was necessary,
using 24 CPUs.

Fig. 2. Axisymmetric modelling of the expansion of a clamped stacking

3 Cold expansion of the titanium stack

First of all, we measured the height of the “volcano” defect to see if the axial stiff-
ening could be a good option for reducing it. The heights were measured by plot-
ting the axial displacement of the nodes of the external faces of the sheets, i.e.
where the mandrel entered (entrance face) and where it exited (exit faces). The re-
sults presented in Figure 3 compare two cases. First, the heights are compared
immediately after cold expansion between sheets 1 and 2 (stack) and with a single
unclamped hole. We note that the “stiffening” generates burrs at the entrance face
of sheet 1 and at the exit of sheet 2. Aside from this burr, we can see that stiffen-
ing dramatically reduced the “volcano” height (by a factor of 10). However, flat-
ness of the surfaces is not perfect and defects with heights of 0.01 to 0.02 mm re-
main after expansion. The results were also analysed after the axial locking had
been released and after reaming. Here, Figure 3 shows that the defects are even
more reduced and are now always below 0.01 mm.

Fig. 3. Evolution of the “volcano” height after expansion and after expansion and reaming

Releasing and reaming the plate gave us the final stress state generated in the two
titanium sheets. We will now observe whether the expansion of a stack involved
different final stress states from those found with a single expanded hole. To ana-
lyse their distribution and intensity and to emphasize the differences, we have
chosen to present the results in the full section of the sheets. Figure 4 shows the
circumferential (S33), radial (S22) and axial (S11) stress fields in the section of
the sheets of the stack (left column) and in a single hole (right column). We have
plotted only the compressive fields. We can first observe that, in each sheet of the
stack, strong circumferential compressive stresses (S33) are generated in the edge
of the hole. This circumferential component is responsible for the moderation of
stress concentration at the hole edge under fatigue loading. Although the reaming
greatly reduced the heterogeneity of the stress close to the hole edge, differences
are still visible in the thickness. In the midsection, the compressive peak (-1150
MPa) is a little more intense in the sheets from the stack than in the single hole (-
1050 MPa) and the compressive area is larger. At the entrance faces, although the
stress peak is equivalent (-900 MPa), the extent of the compressive stresses is
slightly reduced and we note a significant localized “weak” point at the entrance
face of the first sheet (-200 MPa). Finally, at the exit faces, the compressive fields
are greatly increased in the first sheet, where the stress peak is wide and very in-
tense (-1250 MPa), but they are reduced in the second sheet. The stress peak
measured is -935 MPa but a weak point is observed where the stress falls to less
than -450 MPa. In order to gain a better understanding of the differences between
the two cases, we can analyse the axial and radial residual stress fields. In fact, it
is clear that the main differences in terms of shape and intensity of the stress are
expressed in the axial component (S22). The final axial stress state is highly com-
pressive in the case of the stack whereas comparatively negligible stresses exist in
the single hole. On the other hand, the stiffening has significantly affected the ra-
dial stress, which is no longer symmetrical with respect to the midsection. The cir-
cumferential final state may have been strongly impacted by these radial and axial
stress states. The fatigue behaviour of the future joint under this triaxial compres-
sive state may be different from that of the single hole after cold expansion.

Fig. 4. Full section distribution of the triaxial stress components after expansion of clamped
plates (left) and of a single hole (right)
4 Conclusion

This paper has discussed the consequence of the cold expansion process carried
out through an assembly composed of two titanium sheets. In particular, we were
interested in reducing the “volcano” defect in the interfaces. Dedicated numerical
axisymmetric modelling of the split sleeve process was established to analyse the
residual deformations and stresses of the sheets. From the numerical results ob-
tained, we noticed that the expansion carried out in a stacked assembly involves a
strong reduction of the “volcano” heights. After reaming, this defect was lower
than 0.01 mm. In addition, circumferential stress fields generated after split sleeve
cold expansion in stacked sheets are strongly compressive. The residual stress
field distribution is very different from that with a single hole. Notably, strong ax-
ial residual stress remains in the hole. We can deduce that the existence of axial
stiffening during expansion has a strong impact on the phenomenology of the pro-
cess. After finding that the volcano may be reduced, the next step will be to vali-
date the impact of clamping by experimental fatigue tests on bolted high-load-
transfer specimens.

References

1. Lemaignan C. La rupture des matériaux, EDP Sciences, 2003.


2. Reid L. Beneficial Residual Stresses at Bolt Holes by Cold Expansion. Rail Quality and
Maintenance for Modern Railway Operation, Springer Netherlands, 1993, pp. 337-347.
3. Leon A. Benefits of split mandrel coldworking. International journal of fatigue, 20, 1998.
4. Mann J., Sparrow J. and Beaver P. Fatigue characteristics of joints with holes cold-expanded
in a multi-layer stack. International journal of fatigue, Bd. 11, 4, pp. 214-220, 1989.
5. Ofsthun M. When fatigue quality enhancers do not enhance fatigue quality. International jour-
nal of fatigue, 2003, 25, pp. 1223-1228.
6. Boni L., Lanciotti A. and Polese C. Some contraindications of hole expansion in riveted joints.
Engineering Failure Analysis, 2014, 46, pp. 140-156.
7. Benhaddou T., Chirol C., Daidie A., Guillot J., Stephan P. and Tuery J.B. Pre-tensioning ef-
fect on fatigue life of bolted shear joints. Aerospace Science and Technology, 2014, 36, pp.
36-43.
8. Achard V., Daidie A., Paredes M. and Chirol C. Cold expansion process on hard alloy holes -
experimental and numerical evaluation. Mechanics and Industry, 2016, 17.
9. Maximov J. T., Duncheva G. V. and Kuzmanov T. V. Modelling of hardening behaviour of
cold expanded holes in medium-carbon steel. Journal of Constructional Steel Research, 2008,
Nr. 64, p. 261–267.
10. Yuan X., Yue Z., Wen S., Li L. and Feng T. Numerical and experimental investigation of the
cold expansion process with split sleeve in titanium alloy TC4, 2015, 77, p. 78–85.
11. Achard V., Daidie A., Paredes M. and Chirol C. Optimization of the Cold Expansion Proc-
ess for Titanium Holes. Advanced engineering materials, 2016, DOI 10.1002/adem.201500626.
A preliminary method for the numerical
prediction of the behavior of air bubbles in the
design of Air Cavity Ships

Filippo CUCINOTTA1*, Vincenzo NIGRELLI2, Felice SFRAVARA1

1 Department of Engineering, University of Messina
2 DICGIM, University of Palermo
* Corresponding author. Tel.: +39-090-3977292. E-mail address: filippo.cucinotta@unime.it

Abstract Air-cavity ships (ACS) are advanced marine vehicles that use air injec-
tion under hull to improve the vessel’s hydrodynamic characteristics. Although the
concept of drag reduction by supplying gas under the ship’s bottom was proposed
in the 19th century by Froude and Laval, at this time there are not many systemat-
ic studies on this subject. This paper is a preliminary work with the purpose of be-
ing a basic tool for the design of the ACS with computational fluid dynamic meth-
ods. The study aims to conduct a series of computational tests to compare the
numerical models of bubble with experimental data. The first step of this study
was to investigate the behavior of free bubble in water, considering as parameters
the critical mass of air, the rising speed and aspect ratio of the bubble. Then it is
evaluated the interaction bubble-flat plate in order to obtain a reliable prediction of
the behavior of air bubbles under the hull.

Keywords: Air Cavity Ship, CFD, hull ventilation, nautical design, high-speed
craft design.

1 Introduction

In planing hulls, ventilation is an extremely complex phenomenon of great importance: it makes it possible to reduce the friction component of resistance. It derives from the establishment of air channels under the hull, which serve the dual purpose of reducing frictional resistance, due to the lower viscosity of air compared with water, and of reducing the aft low-pressure region, thus decreasing the form drag component. Ventilation is exploited both in a conventional manner, i.e. through geometries that facilitate the generation of these air channels (examples of this are
the hulls with spray rails and steps), and by forced insufflation of air compressed through special nozzles (e.g. ACS). Several theoretical and experimental studies on ACS have been conducted by authors such as Matveev [1] and Foeth [2].

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_51
Moreover, the study of other biphasic phenomena involving air-water mixing, such as spray, which in planing hulls often accounts for about a 20% share of resistance, follows the same laws [3] and [4].
Past studies were based on experimental experience, trying to reproduce the phenomenon at model scale (in tank testing) and then extrapolate the results to full scale. This procedure, widely used in the naval field, has proved unsuitable for investigating a phenomenon whose implications are manifold. In fact, the parameters governing the biphasic phenomenon, under the hypothesis of negligible thermal effects, are the free-stream speed, a linear dimension of the body, the acceleration of gravity, density, pressure, surface tension and viscosity [5].
As is evident, in a model test in Froude similarity it is impossible to also satisfy similarity of the other parameters, such as the Euler, Weber and Reynolds numbers. Generally, Euler and Weber similarity is neglected in model tank tests because ventilation phenomena are not important there. In this case, however, the geometry of the cavity, the air flow and the pressure of the bubble are the parameters that affect the generation and development of a stable cavity. The cushion of air under the hull also interacts with the wave generated by the boat and with its wake.
Thus, for ventilated hulls it is extremely important to use very large scale factors or alternative methods. Computational Fluid Dynamics (CFD) methods are a good alternative for testing virtual models at full scale; Ingrassia et al. reported an example [6] in which CFD is an important tool for resistance prediction that can be used in a design optimization loop.

2 CFD model of ventilation

To take the viscous phenomena into account, which is necessary to evaluate the frictional resistance, RANSE (Reynolds-Averaged Navier-Stokes equations) methods have been used with a k-epsilon turbulence model.
Surface tension phenomena, those in which We << 1, such as bubbles and sprays, need a mesh fine enough to describe the geometry of the interface. The free surface is studied with the Brackbill model, which adds a source term to the momentum equation, considering the surface tension constant along the interface [7]. This term involves a pressure drop across the surface. It depends upon the surface tension coefficient and the surface curvature, as measured by two radii in orthogonal directions. Since the surface tension forces are proportional to the radius of curvature of the interface, it is important to create a mesh whose size is between 5% and 10% of the local radius of curvature of the interface, otherwise it is difficult to achieve convergence and accurate results (see Fig. 1). It is therefore essential to have a sufficiently small mesh in all areas of the hull involved in spray and ventilation.
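The 5-10% rule above translates directly into a target cell size once the local interface curvature is known; a minimal sketch (function and variable names are ours, not from the paper):

```python
def cell_size_range(radius_of_curvature, lo=0.05, hi=0.10):
    """Recommended cell-size interval -- 5-10% of the local interface
    radius of curvature -- for resolving surface-tension effects."""
    return lo * radius_of_curvature, hi * radius_of_curvature

# A 3 mm radius of curvature calls for cells of roughly 0.15-0.30 mm.
smallest, largest = cell_size_range(3e-3)
```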

Figure 1. Mesh near the interface.

Bubble dynamics is a rather complex phenomenon. When the bubble is at rest, it tends to assume a spherical shape due to surface tension forces. But when the bubble is moving, the shape is altered by the flow field. In particular, we can detect the simultaneous presence of the force due to hydrostatic pressure, the inertial force due to buoyancy, the surface tension force, the viscous drag force and the centrifugal force due to the internal circulation of gas. These forces significantly alter the geometry of the bubbles as a function of the Weber and Reynolds numbers. Hill's model describes the internal circulation of gas [8], which generates a centrifugal force acting in opposition to the surface tension force. These two forces alternately prevail over one another, generating most of the dynamic phenomena seen during the ascent of the bubbles (distortion, asymmetry, pulsation). Because of these forces, bubbles below 6 mm in diameter tend to maintain a spherical shape, while those above 14 mm in diameter take the so-called cap shape (Fig. 2). Bubbles above 200 mm instead break up due to the centrifugal forces.

Figure 2. Gas velocity vectors inside bubble (a) and vorticity inside and below the bubble (b).
It’s possible to note the cap shape.

The resultant of these forces causes distortions in the bubble's shape, which assumes, during motion, a pulsating ellipsoidal form. This shape can be described by the aspect ratio E and the distortion factor γ, defined in [9] as:

E = (b + βb) / (2a) ;   γ = 2 / (1 + β)      (1)

in which a is the major axis and the minor axes are b and βb, as in Fig. 3.
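Equation (1) is straightforward to evaluate; the sketch below (our own illustration, with the axis lengths as inputs) computes the two shape factors:

```python
def shape_factors(a, b, beta):
    """Aspect ratio E and distortion factor gamma (Eq. 1) of an
    ellipsoidal bubble with major axis a and minor axes b, beta*b."""
    E = (b + beta * b) / (2.0 * a)
    gamma = 2.0 / (1.0 + beta)
    return E, gamma

# A symmetric ellipsoid (beta = 1) is undistorted: gamma = 1, E = b/a.
E, gamma = shape_factors(a=5.0, b=3.0, beta=1.0)
```

For beta = 1 the two minor axes coincide and the distortion factor reduces to 1, as expected.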

Figure 3. Ellipsoidal shape of bubble.

CFD results are compared in Table 1 with experimental results obtained by Davies
and Taylor [10].

Table 1. Shape parameters and rise velocity (R.v.) of bubbles. R.v. is compared with the Davies & Taylor formulation.

∅ (mm)   E      γ      R.v. CFD (m/s)   R.v. D&T (m/s)   Velocity difference
6        0.49   1.02   0.117            0.114            2.5 %
10       0.44   1.26   0.140            0.148            5.2 %
14       0.38   1.42   0.160            0.170            5.8 %
Here ∅ is the spherical bubble diameter. The agreement between experimental and CFD results is very good (under 6% in velocity magnitude). The contact angle is obtained by properly setting the wall adhesion model. It is really important to create a structured mesh with a maximum size of 5% of the bubble's radius for free bubbles, and 5% of the curvature radius of the contact angle for a bubble on a wall.
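The Davies & Taylor column of Table 1 can be reproduced with the classical cap-bubble formula U = (2/3)·√(g·r); taking r as half the spherical bubble diameter is our assumption here, and it matches the tabulated values closely for the smaller bubbles:

```python
import math

def davies_taylor_rise_velocity(d, g=9.81):
    """Davies & Taylor rise velocity U = (2/3)*sqrt(g*r) for a large
    bubble; r is taken here as half the spherical diameter d
    (an assumption of this sketch, not stated in the paper)."""
    return (2.0 / 3.0) * math.sqrt(g * d / 2.0)

def pct_difference(v_cfd, v_ref):
    """Velocity difference in percent, as in the last column of Table 1."""
    return 100.0 * abs(v_cfd - v_ref) / v_ref

u6 = davies_taylor_rise_velocity(6e-3)    # ~0.114 m/s, as in Table 1
diff6 = pct_difference(0.117, u6)         # CFD value for the 6 mm bubble
```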

3 Flat plate's model

The fundamental requirements for the meshing are mainly two:

1. It should be structured hexahedral.
2. It should be easily editable.

The first requirement stems from the needs of Brackbill's surface tension model, which also imposes a maximum cell size in order to obtain convergence of the solution in simulations with boundaries. The second stems from the need to carry out a very extensive campaign of simulations, varying some control parameters. The system used is based on a calculation journal, i.e. pages of text containing command lines in sequential order for the processing software. The journal is written as a function of the parameters of interest, using the software's own scripting language.
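The journal idea can be sketched as follows; the command syntax below is purely illustrative (the paper does not reproduce its journal language) and only the structure, parameterized command lines written in sequential order, reflects the text:

```python
import os
import tempfile

def write_journal(path, trim_deg, cv, mesh_size_mm=2.5):
    """Write a parametric calculation journal: sequential command
    lines generated from the parameters of interest. The command
    names are hypothetical, not those of any specific package."""
    lines = [
        f"mesh create plate size {mesh_size_mm}",
        f"model set trim {trim_deg}",
        f"solver set speed-coefficient {cv}",
        "solver run",
    ]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines

path = os.path.join(tempfile.gettempdir(), "run_case.jou")
journal = write_journal(path, trim_deg=4.25, cv=3.19)
```

One such file per (τ, CV) pair lets the whole campaign run unattended.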
From a practical point of view, the most critical parameters for the structured mesh are the EquiSize Skew and the Aspect Ratio. Through experience with the simulations, their limits were established as EquiSize Skew < 0.2 and Aspect Ratio < 18. Beyond these limits, in two-phase turbulent flow, it is difficult to achieve convergence of the calculation.
The flat plates tested had a square base of side 1 m × 1 m and a thickness of 0.1 m. The control volume also has a square base, of side 10 m, and a height of 3 m. The size of the mesh is designed so as to achieve higher resolution in the areas of interest. In correspondence with the bottom area, it was possible to obtain cells with a maximum size of 2.5 mm. In order to avoid an excessive number of cells, hanging nodes were used (Fig. 4).

Figure 4. Mesh on symmetry plane (a). Particular of the mesh under the plate (b).

The growth of cells along the three directions x, y and z is obtained through the use of size functions, created to allow the adjustment of the size of each single cell. This growth was designed so as not to lead to an excessive increase of the Aspect Ratio parameter at the periphery of the control volume. For this reason, the growth factor has been set to a maximum value of 1.15, i.e. an increase of up to 15% from one cell to the next.
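A geometric size function with a 1.15 cap amounts to the following (our sketch):

```python
def graded_sizes(first_size, growth=1.15, n=10):
    """Cell sizes growing geometrically away from the refined zone,
    with at most a 15% increase from one cell to the next."""
    return [first_size * growth ** i for i in range(n)]

sizes = graded_sizes(2.5e-3, n=5)   # starting from the 2.5 mm cells
```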
The computational model used is the transient VOF model with k-epsilon turbu-
lence. The free surface was treated with a function of geometric reconstruction. In
the geometric reconstruction approach, the standard interpolation schemes are
used to obtain the face fluxes whenever a cell is completely filled with one phase
or another. The geometric reconstruction scheme is used only for the cells near the
interface between two phases. This scheme represents the interface between fluids using a piecewise-linear approach and, in this way, makes it possible to obtain a very accurate free surface, with very little diffusion of water into the air, limited to the thickness of one cell. The scheme used is that of Youngs [11]. It assumes that the interface between two fluids has a linear slope within each cell, and uses this linear shape for the calculation of the advection of fluid through the cell faces. The first step in this reconstruction scheme is calculating the position of the linear interface relative to the center of each partially-filled cell, based on information about the volume fraction and its derivatives in the cell. The second step is calculating the amount of fluid advected through each face, using the computed linear interface representation and information about the normal and tangential velocity distribution on the face. The third step is calculating the volume fraction in each cell using the balance of fluxes calculated during the previous step.
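The three steps reduce, in one dimension, to locating the interface from the volume fractions, moving it with the flow, and rebuilding the fractions: a toy analogue (our own, far simpler than the actual 3D piecewise-linear scheme) that shows why the interface stays sharp within one cell:

```python
def reconstruct_interface(f, dx):
    """Step 1 (1D analogue): with water filling the column from the
    left, the interface sits at x = sum(f)*dx."""
    return sum(f) * dx

def advect(f, dx, u, dt):
    """Steps 2-3 (1D analogue): move the reconstructed interface with
    the flow, then rebuild each cell's volume fraction geometrically."""
    x_int = reconstruct_interface(f, dx) + u * dt
    out = []
    for i in range(len(f)):
        left = i * dx
        filled = min(max(x_int - left, 0.0), dx)   # wetted length in cell i
        out.append(filled / dx)
    return out

f = [1.0, 1.0, 0.5, 0.0]                 # interface at x = 2.5*dx
f = advect(f, dx=1.0, u=1.0, dt=0.25)    # only one partially-filled cell remains
```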
The nozzle is positioned in the middle of the plate, along its entire width. The
boundary condition imposed is a velocity inlet. The air flow is the same in all simulations, with an inlet velocity of 1.2 m/s. Many tests were carried out varying the vector orientation. The best results were obtained with a vector oriented at 33°. For larger angles, the flow tends not to adhere to the solid wall, creating turbulence and failing to cover the whole surface correctly (see Fig. 5 a).
Too small an angle is not a good condition either, as it would require a cavity to accommodate the nozzles. In this model, instead, the nozzles are flush, so that the surface is fully lubricated by an air layer without altering the flow with geometrical discontinuities. The simulations are carried out by setting an initial flow, with speed depending on the Froude number of interest, in the absence of ventilation. Once the flow field
becomes stable, with resistance values oscillating around a mean value with a small deviation, the journal activates the ventilation on the bottom of the plate. In this step transients arise, mainly due to the sudden change in the pressure and velocity fields. At the end of the transient, the phenomenon tends to evolve towards a new steady-state condition. As can be seen in Fig. 5 b, the trend of the CD has two flat areas, corresponding to the period without ventilation and the period with ventilation.

Figure 5. Flat plate scheme (a). CD vs. flow time; the two phases with ("AIR") and without ("NO AIR") ventilation are visible (b).

The system is based on the following cycle of operations:

1. Choice of simulation parameters (τ, CV)


2. Structuring the mesh and setting solver by journal
3. Numerical analysis
4. PostProcessing

4 Results

Model validation has enabled a series of tests on a ventilated flat plate. This is because basic knowledge of the phenomena is essential for the proper design of an ACS. The campaign was conducted by varying the flow velocity and the angle of attack; the flat plates were not free to trim, heel or heave.
The results were plotted using dimensionless parameters, namely a speed, a drag and a lift coefficient defined as:

Cv = v / √(gb) ;   CD = D / (½ ρ b² v²) ;   CL = L / (½ ρ b² v²)      (2)

in which b is the plate breadth, v the velocity, D the drag and L the lift. Curves were plotted as a function of the trim angle of the plate (τ).
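Equation (2) in code form (our sketch; the water density value is an assumption, the paper does not state it):

```python
import math

def coefficients(v, D, L, b, rho=1025.0, g=9.81):
    """Speed, drag and lift coefficients of Eq. (2); b is the plate
    breadth, rho the water density (assumed value)."""
    Cv = v / math.sqrt(g * b)
    CD = D / (0.5 * rho * b**2 * v**2)
    CL = L / (0.5 * rho * b**2 * v**2)
    return Cv, CD, CL

# A 1 m plate towed at 10 m/s sits near Cv = 3.19, one of the Table 2 rows.
Cv, CD, CL = coefficients(v=10.0, D=800.0, L=1500.0, b=1.0)
```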
Figure 7. CD vs. CV varying trim angle τ (a). CL vs. CV varying trim angle τ (b). Continuous and
dashed lines are, respectively, with and without ventilation.

Table 2. Drag and lift coefficient difference between ventilated and unventilated plates.

            CD                            CL
Cv      τ = 3.5   τ = 4.25   τ = 5.0     τ = 3.5   τ = 4.25   τ = 5.0
2.55    -37.5%    -27.8%     -28.6%       4.8%     -2.9%       3.7%
3.19    -36.9%    -23.3%     -27.3%       6.4%     -4.6%       0.7%
3.83    -21.7%    -21.4%     -13.3%      -1.0%     -2.5%       3.1%
4.47    -12.5%    -19.9%     -13.5%       0.5%     -0.4%       4.8%

Table 2 summarizes the percentage gain obtained on the total drag and lift for the ventilated plate compared to the unventilated one.

5 Conclusions

CFD makes it possible to overcome the problems of scale, becoming an indispensable tool in all applications where natural or forced ventilation is important. In fact, the CFD simulations are carried out at full scale. The error for free rising bubbles never exceeded the threshold of 6% compared with the Davies & Taylor theoretical approach.
CFD is an instrument suitable for the study of highly complex flows, provided one has the necessary theoretical knowledge for the optimal use of the available empirical models describing the phenomena, and in order to construct a mesh suitable for obtaining convergence of the differential equations.
Automating the process through journal files has proven essential for the competitiveness of the method, thanks to the drastic reduction of computation time.
The curves obtained may be a good preliminary design tool for the study of ACS. Air lubrication reduces the flat plate resistance by up to 40%. The advantage decreases significantly at high CV due to the increase of wave phenomena. Air lubrication does not significantly change the lift, which varies by no more than 7%.

6 References

[1] Matveev, K.I., (1999), Modeling of vertical plane motion of an air cavity ship in waves, Fifth
International Conference on Fast Sea Transportation, FAST, Seattle, USA.
[2] Foeth E. J., (2008), Decreasing frictional resistance by air lubrication, 20th International
Hiswa Symposium on Yacht Design and Yacht Construction
[3] Larsson L., Raven H. C. (2010). The principles of naval architecture series: Ship resistance
and flow. J. Randolph Pauling, Editor.
[4] Mäkiharju S. A., Perlin M. and Ceccio S. L., (2012). On the energy economics of air lubri-
cation drag reduction. Inter J Nav Archit Oc Engng.
[5] Mäkiharju S. A., Elbing B. R., Wiggins A., Schinasi S., Vanden-Broeck J., Perlin M.,
Dowling D. R., Ceccio S. L., (2012). On the scaling of air entrainment from a ventilated par-
tial cavity. J. Fluid Mech.
[6] Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D., A multi-technique simultaneous ap-
proach for the design of a sailing yacht, (2015) International Journal on Interactive Design
and Manufacturing, DOI: 10.1007/s12008-015-0267-2
[7] Brackbill J. U., Kothe D. B., Zemach C. (1992) A continuum method for modeling surface
tension. Journal of computational physics 100, 335-354.
[8] Lamb, H., (1932). Hydrodynamics. sixth ed. Cambridge University Press, Cambridge.
[9] Tomiyama, A., Kataoka, I., Zun, I., Sakaguchi, T., (1998). Drag coefficients of single bubbles
under normal and micro gravity conditions. JSME Int. J. Ser. B. 41 (2), 472–479.
[10] Davies, R.M., Taylor, G.I., (1950). The mechanics of large bubbles rising through extended
liquid in tubes. Proc. R. Soc. Ser. A 200, 375–390.
[11] Youngs D. L. (1982), Time-Dependent Multi-Material Flow with Large Fluid Distortion. In
K. W. Morton and M. J. Baines, editors, Numerical Methods for Fluid Dynamics. Academic
Press, 1982.
Stiffness and slip laws for threaded fasteners
subjected to a transversal load

Rémi THANWERDAS1,2*, Emmanuel RODRIGUEZ1,2 and Alain DAIDIE2


1 Icam site de Toulouse, 75, avenue de Grande-Bretagne, CS 97615, 31076 TOULOUSE Cedex 3, France
2 Université de Toulouse, INSA/ICA (ICA, CNRS UMR 5312), 3 rue Caroline Aigle, 31400 TOULOUSE, France
* Corresponding author. Tel.: +33-534-505-016 ; E-mail address: remi.thanwerdas@icam.fr

Abstract This article focuses on improving the design methods for simplified
models of screwed connections (spring, beam or bar elements), especially for the
space sector. A detailed 3D model of a generic screwed connection has been
developed using industrial finite elements (FE) software. An approach based on
multiscale numerical designs of experiments (DOE) was used to obtain
metamodels of stiffness and slip depending on geometric, material and contact
parameters. In order to improve the integration of friction, an existing contact
model was adapted and used to supply a modified Coulomb model. Metamodels
from these numerical works were compared and correlated with experimental
double shear tests on a specimen loaded by an imposed transverse displacement.

Keywords: Threaded fasteners, Shear stiffness, Microslip, Design of experiments

1 Introduction
In the space industry, finite element models (FEMs) are systematically used
during the iterative design process of structures. The screwed connections are
modeled in a very simplified way (e.g. springs, rods). After numerical analyses,
forces in the connections are injected into calculation margins to validate or
invalidate the design. Currently, there are margins of "zero-gap" and "zero-slip" at
the interface of the clamped parts [1]. They ensure that the preload level in the
connection is sufficient to keep the parts in contact with no relative movement
during the external loading.
This study is part of an evolution process currently taking place in design methods
and calculation margins. It concerns screwed connections subjected to in-plane
shear. In practice, the numerical stiffness associated with shear is arbitrarily set in the 10^8 N/m range for all simplified connections (screwed and bolted). Thus, the forces arising are potentially very high, which may cause oversizing problems.
Besides, no slip between parts is accepted, whereas, in fact, some slip tolerance may be considered, depending on the criticality of the connections. This study is about providing stiffness and slip models that take these relaxed requirements into account, based on a more realistic behavior of screwed connections.

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_52
Semi-empirical formulations already used in the industry (e.g. Swift, Huth [3],
Boeing) can give the shear flexibility for certain types of connections. However,
these formulations do not take the stiffness of the contact interfaces into account
(e.g. clamped parts and screw/washer), although the behavior of these areas
strongly influences the overall stiffness of the connection before screw/hole
interference, i.e. during the slip phase [3, 4]. The microslips (and sometimes
macroslips) which can then occur at these interfaces are not considered either.
However, they are the source of many nonlinear phenomena in the connections,
such as damping or self-loosening [5, 6]. In order to model slip correctly, it is necessary to consider the microscopic behavior of the contact interfaces. A fretting model, the Eriten-Polycarpou-Bergman (EPB) model [7], was therefore selected from the literature, adapted and used in this study.
Double shear tests are presented in Part 2 in order to introduce the work done.
Then, a numerical study presented in Part 3 aims to provide metamodels of
stiffness and slip from a design of experiments approach. Finally, test/model
comparisons are presented in Part 4 and some prospects are proposed in Part 5 at
the end of this work.

2 Shear tests
Double shear tests were carried out (see Fig.1, a) to study the behavior of
screwed connections subjected to shear. They used two aluminum 2017A
specimens assembled with two stainless steel 316L M6x12 ISO 4762 screws
(Re0.2,screw = 205 MPa).
Screws, washers and test specimens were cleaned in successive baths with acetone
and ultrasound. A 3250 N preload (equivalent to 80% of Re0.2,screw) was introduced
in every screw by a tightening torque (torque wrench Facom E350, 2% accuracy). The torque was estimated through 10 preliminary tightenings with a force washer sensor under the screw head. The mean was C = 6.98 N·m, with 24% dispersion, which is acceptable for this type of measurement and will likely be improved.
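As a quick cross-check (ours, not a computation from the paper), the measured mean torque and preload imply a nut factor K ≈ 0.36 through the common short formula C = K·F·d:

```python
def nut_factor(torque, preload, d_nominal):
    """Nut factor K implied by the short torque-preload formula
    C = K * F * d (a standard approximation, used here only as a
    sanity check on the measured values)."""
    return torque / (preload * d_nominal)

K = nut_factor(torque=6.98, preload=3250.0, d_nominal=6e-3)   # ~0.36
```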
The test was displacement driven at 5 μm/s. The tangential force (Instron cell 5kN,
accuracy 0.5 %) and slip between specimens (capacitive sensor: Fogale MCC10,
accuracy +/- 1 μm) were measured during the test and used to draw the
corresponding results curve (see Fig.1, b). The outputs considered were the initial
stiffness, k0, and the residual slip, gp,res. Several configurations were tested, using
two roughnesses Ra = 0.39 μm and Ra = 1.51 μm (rugosimeter PAV-CV-PGK,
probe MFW-250 Contour, gaussian filter, standard cutoff), obtained by manual
polishing, and two amplitudes of machine displacement um = 0.3 mm and um = 0.5
mm. Each configuration was repeated 5 times.
[Fig. 1 annotations: machine displacement um → machine force; capacitive sensor → slip; aluminum specimens 1 and 2; slip interfaces; M6x12 316L screw (x2), ISO 4762; M6 316L steel washer (x2), ISO 7091; results-curve axes: slip between specimens (mm) vs. force, with outputs k0 and gp,res]

Fig. 1. Double shear test set-up (a) and typical results curve (b)

All test results are presented in Part 4 and served as the reference in the
confrontation with the metamodels of stiffness and slip from the numerical study
presented in Part 3.

3 Numerical study of a generic screwed connection

3.1 Contact model


An existing contact model (EPB, [7]) was adapted to supply an equivalent
penalty Coulomb friction model [8] defined by two parameters: the coefficient of
friction μ and the critical slip γ (see Fig.2). This modified Coulomb model was
used to define the tangential contact between the clamped parts in the numerical
study of Part 3.

1st modification

The basic EPB model assumes an evenly distributed normal load on the contact
surface. However, in a screwed connection, the contact pressure distribution p at
the clamped parts interface depends on the contact radius r [2, 9]. Fernlund’s
model [9] was used to predict this distribution. It is defined by relation (1).
pFernlund (r) = Ar4 + Br3 + Cr2 + Dr + E (1)
Variables A-E depend on the initial preload P0, the diameter of the screw head dtv
and the angle φ of the compression cone of the assembly. The latter can be
calculated from the VDI 2230 [10]. The basic EPB model is then reformulated
using pressure instead of force. The normal approach d between the contact
surfaces then depends on the radius r of the contact zone.
pFernlund(r) = η ∫_d^∞ Pasp(z − d(r)) Φ(z) dz      (2)

with η the density of asperities on the contact surface, Pasp the normal force on a single asperity, z the asperity height and Φ(z) the probability density of asperity heights.
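Equation (2) can be evaluated numerically once Pasp and Φ are specified. The sketch below uses a Gaussian height density and, as a simplification, a purely elastic Hertzian asperity law Pasp(w) = (4/3)·E*·√R·w^1.5 (the actual EPB model is elastic-plastic), with illustrative parameter values:

```python
import math

def contact_pressure(d, eta, R, E_star, sigma, n=2000, zmax=6.0):
    """Midpoint-rule evaluation of p = eta * int_d^inf Pasp(z - d) Phi(z) dz
    (Eq. 2), truncating the Gaussian Phi at zmax standard deviations.
    Units: lengths in m, E_star in Pa, eta in asperities/m^2."""
    upper = zmax * sigma
    if d >= upper:
        return 0.0
    dz = (upper - d) / n
    total = 0.0
    for i in range(n):
        z = d + (i + 0.5) * dz
        w = z - d                                   # asperity compression
        phi = math.exp(-0.5 * (z / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        total += (4.0 / 3.0) * E_star * math.sqrt(R) * w ** 1.5 * phi * dz
    return eta * total

# Pressure grows as the normal approach d shrinks (surfaces get closer).
p_far = contact_pressure(d=2e-6, eta=5e10, R=5e-6, E_star=1e11, sigma=1e-6)
p_near = contact_pressure(d=1e-6, eta=5e10, R=5e-6, E_star=1e11, sigma=1e-6)
```

Inverting this relation for d(r) against the Fernlund pressure of relation (1) is then a one-dimensional root-finding problem.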

Fig. 2. Definition of the adapted EPB model and of the parameters μ and γ of the modified Coulomb model

Relation (3) then connects the shear stress τ and the tangential displacement δ.
τ(r) = η ∫_d^∞ Qasp(z − d(r), δ) Φ(z) dz      (3)

with Qasp the tangential force on a single asperity. The Pasp and Qasp forces are
determined by relationships taking the geometry and the elastic-plastic behavior of
the asperities into account [7]. These relationships are not developed here. The
total tangential force Ft is obtained by integrating relation (3) over the whole
contact surface.
Ft = ∫_0^{2π} ∫_0^{rc} τ(r) r dr dθ      (4)

2nd Modification

The basic EPB model needs three parameters to define the roughness of one
surface. These parameters are: the RMS roughness Rq, the mean radius of
curvature of asperities R and the density of asperities η. They were linked to Ra,
the arithmetic mean of the absolute asperity heights, through empirical relationships [7, 11, 12], as detailed below. Knowledge of Ra alone is then sufficient to determine Rq, R and η.

Fig. 3. Definition of surface parameters Ra, Rq, Rz, Sm and mRMS

For a gaussian distribution of asperity heights, the RMS roughness Rq is connected to Ra by relation (5) [11].

Rq ≈ 1.25 Ra      (5)

The mean radius of curvature of asperities R is connected to the average period of the profile Sm and to Ra by relation (6) [12].

R = 0.05 Sm² / Ra      (6)

For a periodic triangular approximation of the real gaussian profile (see Fig. 3), the RMS slope of the asperities mRMS, the mean period of the profile Sm and the maximum height of the profile Rz are linked by relation (7).

mRMS = 2 Rq / (Sm / 2)      (7)
The RMS slope mRMS of the asperities is finally connected to Ra by relations (8) [11].

mRMS = 0.183 Ra^0.743   for Ra ≤ 1.6 μm
mRMS = 0.208 Ra^0.4     for Ra > 1.6 μm      (8)
For common surface finishes, the roughness parameter β = Rq·R·η (unitless) is between 0.02 and 0.06 [7]. As no direct relationship between η and Ra could be found, an average value β = 0.04 was chosen. η is then given by relation (9).

η = 0.04 / (R · Rq)      (9)
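Chaining relations (5)-(9) gives all the surface inputs from Ra alone. The sketch below follows the reconstructed forms of relations (6) and (7) above, inverting (7) to obtain Sm from mRMS, so its intermediate values should be read as illustrative:

```python
def roughness_parameters(Ra):
    """Derive Rq, m_RMS, Sm, R and eta from Ra via relations (5)-(9).
    Lengths in micrometres; eta in asperities per um^2."""
    Rq = 1.25 * Ra                                  # relation (5)
    if Ra <= 1.6:                                   # relations (8)
        m_rms = 0.183 * Ra ** 0.743
    else:
        m_rms = 0.208 * Ra ** 0.4
    Sm = 4.0 * Rq / m_rms                           # relation (7), inverted
    R = 0.05 * Sm ** 2 / Ra                         # relation (6)
    eta = 0.04 / (R * Rq)                           # relation (9), beta = 0.04
    return Rq, m_rms, Sm, R, eta

Rq, m_rms, Sm, R, eta = roughness_parameters(0.39)  # Ra of the polished specimens
```

By construction, the roughness parameter β = Rq·R·η comes out at the chosen 0.04.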

The coefficient of friction μ and the critical slip γ are then identified on the modified EPB model curve using the initial slope and the maximal tangential load of this curve (see Fig. 2).
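That identification is mechanical once the curve is available: μ comes from the plateau load over the preload, γ from where the initial slope would reach that plateau. A sketch on synthetic curve data (values invented for illustration):

```python
def identify_coulomb(delta, Ft, preload):
    """Identify the modified Coulomb parameters from a tangential
    force-displacement curve: mu from the maximal tangential load,
    gamma from the intersection of the initial slope with that load."""
    f_max = max(Ft)
    slope = (Ft[1] - Ft[0]) / (delta[1] - delta[0])   # initial slope
    mu = f_max / preload
    gamma = f_max / slope                             # critical slip
    return mu, gamma

delta = [0.0, 0.1, 0.5, 1.0, 2.0]          # um
Ft = [0.0, 10.0, 35.0, 48.0, 50.0]         # N, saturating toward 50 N
mu, gamma = identify_coulomb(delta, Ft, preload=200.0)   # mu = 0.25, gamma = 0.5 um
```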

3.2 Parametric study


In order to identify stiffness and slip models, a 3D static nonlinear parametric model of a generic screwed connection was developed. It was made with commercial FEM software (Abaqus 6.13) and was composed of four parts: one ISO 4762 screw, one ISO 7091 washer, one top part and one bottom part. The parts were circular to respect the shape of the compression cone of the assembly (see Fig. 4). The material was the same for both parts.

Fig. 4. Definition of the numerical model of the generic screwed connection

The mesh was composed of hexahedral volume elements with full integration (C3D8) in the contact areas and reduced integration (C3D8R) in the rest of the model. The contact between the parts was defined by a modified Coulomb model, which includes the following two parameters: the friction coefficient μ and the critical slip γ. These parameters were identified on the curve of the adapted EPB model (see Fig. 2). They vary with Ra, the materials of the parts (see Table 2) and the initial preload P0, and were calculated for each test configuration.
The Coulomb friction coefficient for the screw head/washer contact was defined
for a dry contact (μvr = 0.25) and γvr = 0.1γp was chosen arbitrarily so that the screw/washer contact was stiffer than the contact between the clamped parts. The
behavior of the threads is not taken into account because the assumption is made
that it does not affect the global loads and displacements studied here.
The calculation is divided into two steps, as detailed in Table 1: firstly, tightening
at P0, and secondly transverse displacement u. The preload P0 is equivalent to
80% of Re0.2,screw, a value commonly used in the industry. The dynamic effects are
not included in this model.

Step     Type              Blocking                   Loads                          Time/ΔT (s)
Step 1   Static, general   RP2: Tx,Ty,Tz,Rx,Ry,Rz     P0 on screw internal surface   1 / 0.2
                           RP3: Tx,Ty,Tz,Rx,Ry,Rz
Step 2   Static, general   RP2: Ty,Tz,Rx,Ry,Rz        u (load/unload) on RP2         1 / 0.001-0.05
                           RP3: Tx,Ty,Tz,Rx,Ry,Rz

Table 1. Loading conditions of the numerical model

The outputs considered are the initial stiffness, k0, and the residual slip, gp,res. The
tangential force is recovered from Reference Point 2 (RP2). The slip between parts
is the difference between the average displacement of the lower surface of the top
part and the average displacement of the upper surface of the bottom part. The
stiffness k0 is calculated for a fixed initial displacement increment of 0.1 μm.

Fig. 5. Typical loading/unloading curve (step 2) and outputs identification

The input parameters used for the study were the screw diameter d, the Young's moduli of the parts (Ep) and of the screw (Ev), the friction at the interface of the clamped parts through the roughness parameter Ra, and the amplitude of the imposed displacement u. These parameters varied over the three levels defined in Table 2.

The parameters were used for a first response surface design of experiments that
included 27 numerical test configurations. The results were analyzed using
Minitab Statistical Software v17. A polynomial model of degree 2 with order 2
interactions was identified for each output.
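The fitted form is an ordinary degree-2 polynomial with pairwise interactions. As a minimal stand-in for the Minitab fit (two factors instead of five, and an exact solve through six design points rather than least squares), such a surface can be recovered by solving the linear system of basis functions:

```python
def fit_quadratic(xs, ys):
    """Fit y = c0 + c1*a + c2*b + c3*a^2 + c4*b^2 + c5*a*b exactly
    through six design points, by Gaussian elimination with partial
    pivoting on the basis-function matrix."""
    n = 6
    M = [[1.0, a, b, a * a, b * b, a * b, y] for (a, b), y in zip(xs, ys)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            k = M[r][col] / M[col][col]
            M[r] = [v - k * w for v, w in zip(M[r], M[col])]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        c[r] = (M[r][n] - sum(M[r][j] * c[j] for j in range(r + 1, n))) / M[r][r]
    return c

# Recover a known surface y = 2 + 3a - b + 0.5a^2 from six points.
pts = [(-1, -1), (-1, 1), (1, -1), (1, 1), (0, 0), (2, 1)]
surface = lambda a, b: 2 + 3 * a - b + 0.5 * a * a
c = fit_quadratic(pts, [surface(a, b) for a, b in pts])
```

A real response-surface DOE over-determines the system and solves it in the least-squares sense instead.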

Input parameter levels:

Level          d (mm)   Ep (GPa)   Ev (GPa)   Ra (μm)   u (μm)
Low (-1)         2        71.7       114        0.2        1
Average (0)      6        132        153        1.7       50
High (1)        10        193        193        3.2      100

Corresponding material properties:

Material               E (GPa)   ν        Re (MPa)   H = 3Re [13] (MPa)
Aluminum 7075           71.7     0.330     503        1509
Titanium TA6V           114      0.342     880        2640
Average material 1      132      0.330     354        1062
Average material 2      153      0.336     543        1628
Stainless steel 316L    193      0.330     205         615

Table 2. Input parameter levels and corresponding material properties

The models from the first design of experiments gave relatively high residuals (more than 20% on most test points). This was expected considering the large variation ranges of the input parameters. Models of higher accuracy (degree 3, with interactions of degree 3, 4 and 5) gave almost zero residuals on the test points, but were much more sensitive in checks on random test points, with problems such as negative residual slips. Lower-degree models were less accurate but caused fewer anomalies of this type. For extra precision, an analysis of variance (ANOVA) was used to highlight the predominant influences of each input parameter. The same designs of experiments were reused with variation ranges divided by two for the most influential input parameters.

Fig. 6. Mean effects of the input parameters on the initial stiffness k0, evolution of the first design of experiments for the output k0 (a) and response surface k0 = f(d, Ra) (b)

For example, the diameter of the screw d and Ra were the two parameters with the
most influence on the initial stiffness k0. Four further designs of experiments were
thus set up by varying the ranges of these parameters while maintaining the other
parameters (Ep, Ev, u) within their initial ranges. The models were validated if the residuals were below 10%. The formulations finally obtained for the initial stiffness k0 and the residual slip gp,res are given in Table 3.
The Table 3 columns are valid over the following domains (d in mm, Ra in μm, u in μm):

k0 (N/mm):    2 ≤ d ≤ 6, 0.2 ≤ Ra ≤ 1.7 | 2 ≤ d ≤ 6, 1.7 < Ra ≤ 3.2 | 6 < d ≤ 10, 0.2 ≤ Ra ≤ 1.7 | 6 < d ≤ 10, 1.7 < Ra ≤ 3.2
gp,res (μm):  1 ≤ u ≤ 50.5 | 50.5 < u ≤ 100

Constant -3.74E+05 -1.68E+05 -7.72E+05 -4.13E+05 -2.44E-02 4.60E-02


d 4.66E+04 4.79E+04 7.28E+03 4.31E+04 2.74E-03 -6.16E-04
Ep 1.38E+00 7.30E-01 3.63E+00 2.17E+00 1.19E-07 1.84E-08
Ev 2.89E+00 1.61E+00 7.51E+00 4.97E+00 6.14E-08 -5.14E-07
Ra 3.95E+04 3.25E+04 1.64E+05 9.10E+04 4.68E-03 8.88E-03
u 0 0 0 0 4.35E-02 9.29E-02
d² 5.47E+03 2.94E+03 5.13E+03 3.36E+03 -1.79E-05 2.46E-04
Ep² -1.52E-06 -8.40E-07 -4.35E-06 -2.65E-06 -7.13E-14 -1.11E-13
Ev² -6.58E-06 -4.10E-06 -1.74E-05 -1.24E-05 2.07E-13 1.74E-12
Ra² 4.20E+04 8.09E+03 9.14E+04 2.51E+04 -1.07E-04 -3.82E-03
u² 0 0 0 0 5.48E+00 3.51E+00
d.Ep 3.17E-01 8.66E-02 4.97E-01 1.69E-01 4.75E-10 -2.94E-09
d.Ev -1.96E-01 -1.50E-01 -1.75E-01 -2.05E-01 -1.39E-08 -7.02E-09
d.Ra -3.57E+04 -9.67E+03 -3.91E+04 -3.91E+04 -2.23E-04 -1.78E-04
d.u 0 0 0 0 1.86E-03 -1.64E-02
Ep.Ev -5.06E-06 -2.60E-06 -1.35E-05 -1.35E-05 -3.77E-13 2.51E-13
Ep.Ra -6.88E-01 -8.84E-02 1.97E+00 -1.97E+00 -2.64E-08 7.35E-09
Ep.u 0 0 0 0 9.92E-07 -1.01E-07
Ev.Ra 1.36E-01 1.16E-01 7.88E-03 2.88E-01 1.99E-09 -1.46E-08
Ev.u 0 0 0 0 1.77E-06 1.56E-06
Ra.u 0 0 0 0 -3.34E-02 7.01E-02

Table 3. Coefficients of the polynomial models of initial stiffness k0 and residual slip gp,res
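To illustrate how such a polynomial metamodel is evaluated, the sketch below codes the quadratic form (constant, linear, pure quadratic and two-factor interaction terms) with the coefficients of the first k0 column of Table 3 (2 ≤ d ≤ 6, 0.2 ≤ Ra ≤ 1.7); the input values in the example are hypothetical, not taken from the paper:

```python
# Evaluation of the k0 metamodel of Table 3, first column. Units follow the
# paper: k0 in N/mm, with d [mm], Ep and Ev [MPa], Ra [um] and u [mm] as inputs
# (u does not enter the k0 model: all its coefficients are zero).

def k0_metamodel(d, Ep, Ev, Ra, u):
    """Initial shear stiffness k0 [N/mm] from the polynomial metamodel."""
    return (-3.74e5
            + 4.66e4 * d + 1.38 * Ep + 2.89 * Ev + 3.95e4 * Ra + 0.0 * u
            + 5.47e3 * d**2 - 1.52e-6 * Ep**2 - 6.58e-6 * Ev**2
            + 4.20e4 * Ra**2 + 0.0 * u**2
            + 3.17e-1 * d * Ep - 1.96e-1 * d * Ev - 3.57e4 * d * Ra
            - 5.06e-6 * Ep * Ev - 6.88e-1 * Ep * Ra + 1.36e-1 * Ev * Ra)

# Hypothetical inputs: M5 screw between steel plates (Ep = Ev = 210 000 MPa)
print(k0_metamodel(d=5.0, Ep=210e3, Ev=210e3, Ra=0.4, u=0.3))
```

With these hypothetical inputs the result is of the same order of magnitude as the stiffnesses reported in Table 4.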

4 Results and discussion


In Table 4, the stiffness and the residual slip obtained with the metamodels
from Part 3 are compared with the results of the tests described in Part 2. The
reference test configuration corresponds to the following parameters: P0 = 3250 N,
Ra = 0.39 μm, um = 0.3 mm.

               k0 (N/mm)                                       gp,res (μm)
Configuration  Test      Model     Error        Huth (for      Test   Model  Error
                                   test/model   comparison)                  test/model
Reference      2.50E+05  2.56E+05  2.42%        6.19E+04       6.34   4.77   24.78%
Ra = 1.51 μm   2.01E+05  2.12E+05  5.09%        6.19E+04       6.90   8.37   21.29%
um = 0.5 mm    2.37E+05  2.56E+05  7.99%        6.19E+04       76.2   67.8   11.02%

Table 4. Test results and test/model comparisons

526 R. Thanwerdas et al.

The stiffness model gave very good correlation with the test results for the two
values of Ra and the displacement amplitudes tested. The stiffness obtained with
the industrial Huth formulation was 76% lower than the stiffness calculated by the
model. This confirms that taking the slip phase into account has a non-negligible
impact on the shear stiffness of screwed connections. Moreover, the residual slip
model showed good correlation with the test results, with errors below 25% for
low values of about 6.5 μm and close to 10% at a value of about 70 μm. The
residual slip model will likely have to be refined for very small residual slips.
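For context, the Huth formulation used above as the industrial baseline is an empirical fastener-flexibility formula. The sketch below uses the commonly cited single-shear form with coefficients a = 2/3 and b = 3 for bolted metallic joints; the joint data are hypothetical, and the exact form and coefficients should be checked against the original reference before use:

```python
# Sketch of Huth's empirical fastener-flexibility formula, often used as an
# industrial baseline for the shear stiffness of bolted joints. Form and
# coefficients follow the commonly cited version for single-shear bolted
# metallic joints (a = 2/3, b = 3, n = 1); plate/fastener data are hypothetical.

def huth_stiffness(t1, t2, d, E1, E2, E3, a=2/3, b=3.0, n=1):
    """Return shear stiffness k = 1/C [N/mm] from Huth's flexibility C."""
    C = ((t1 + t2) / (2 * d)) ** a * (b / n) * (
        1 / (t1 * E1) + 1 / (n * t2 * E2)
        + 1 / (2 * t1 * E3) + 1 / (2 * n * t2 * E3))
    return 1.0 / C

# Hypothetical single-shear joint: two 2 mm steel plates, 5 mm bolt
print(huth_stiffness(t1=2.0, t2=2.0, d=5.0, E1=210e3, E2=210e3, E3=210e3))
```

For joints of this size the formula yields stiffnesses in the 1E+04 to 1E+05 N/mm range, i.e. the order of magnitude of the Huth column in Table 4.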

5 Conclusions and perspectives


This study has established an approach to improve the current design methods
for screwed connections, especially for the space sector. In the case of in-plane
shear, this approach provides stiffness and residual slip metamodels depending on
the geometry, materials and contact characteristics of the screwed connection.
Tests have validated these models on some configurations.
Subsequently, the approach developed in this study will be consolidated by further
test campaigns. It will also be extended to other phenomena, such as loosening and
damping, and to other types of loads (e.g. bending, torsion, tension-compression).
Ultimately, this work will lead to evolution in the design of bolted connections
and modeling in current industrial FEMs. Modifications in associated calculation
margins may also be considered.

References
1. ESA Requirements and Standards Division (2010). ECSS-E-HB-32-23A, Threaded fasteners
handbook. ESTEC, P.O. box 299, 2200 AG Noordwijk, The Netherlands.
2. Gant, F. (2011). Stratégie de modélisation et de simulation des assemblages de structures
aéronautiques en contexte incertain. Thesis, LMT Cachan, France, 1-137.
3. Groper, M. (1985). Microslip and macroslip in bolted joints. Experimental Mechanics, 25(2),
171-174.
4. Gaul, L., Nitsche, R. (2001). The role of friction in mechanical joints. Applied Mechanics
Reviews, 54(2), 93-106.
5. Ibrahim, R. A., Pettit, C. L. (2005). Uncertainties and dynamic problems of bolted joints and
other fasteners. Journal of Sound and Vibration, 279(3), 857-936.
6. Jiang, Y., Zhang, M., Park, T. W., & Lee, C. H. (2004). An experimental study of self-
loosening of bolted joints. Journal of Mechanical Design, 126(5), 925-931.
7. Eriten, M., Polycarpou, A. A., Bergman, L. A. (2011). Physics-based modeling for fretting
behavior of nominally flat rough surfaces. International Journal of Solids and Structures, 48(10),
1436-1450.
8. Hibbitt, Karlsson and Sorensen (1992). Abaqus: Theory manual.
9. Fernlund (1961). A method to calculate the pressure between bolted or riveted plates, Tech.
Rep. 17, Chalmers University of Technology, Gothenburg, Sweden, 1-124.
10. VDI 2230 Blatt 1 (2014), Systematische Berechnung hochbeanspruchter
Schraubenverbindungen Zylindrische Einschraubenverbindungen, VDI-Richtlinien, ICS
21.060.10, Düsseldorf, Germany, 1-182.
11. Hasselström, A. K., & Nilsson, U. E. (2012). Thermal contact conductance in bolted joints.
Diploma Work 85/2012, Chalmers Univ. of Technology, Gothenburg, Sweden, 1-91.
12. Whitehouse, D. (1994). Handbook of surface metrology. Institute of Physics Publishing,
Bristol, UK, 1-987, ISBN 0-7503-0039-6.
13. Dupeux, M. (2015). Aide-mémoire Science des matériaux, 3e édition, Dunod, 1-400,
EAN13: 9782100745593.
Refitting of an eco-friendly sailing yacht:
numerical prediction and experimental
validation

A. MANCUSO1, G. PITARRESI1, G.B. TRINCA1 and D. TUMINO2*


1
Università degli Studi di Palermo, DICGIM, Viale delle Scienze, Palermo 90128, Italy
2
Università degli Studi di Enna Kore, Facoltà di Ingegneria e Architettura, Cittadella
Universitaria, Enna 94100, Italy
* Corresponding author. Tel.: +39-935-536-491; E-mail address: davide.tumino@unikore.it

Abstract A 4.60 m sailing yacht, made with a flax fiber composite and wood, has
been refitted with the aim of hull weight reduction and performance improvement
during regattas. The first objective was achieved by lightening the internal hull
reinforcements, the second by reducing the maximum beam in order to minimize the
longitudinal moment of inertia. The refitting was first simulated via CAD-FEM
interaction to establish the feasibility of the procedure and to
verify the structural integrity. The resulting hull was then instrumented with strain
gauges and tested under typical rigging and sailing conditions. Results obtained by
the numerical modeling and measured from experiments were compared.

Keywords: Parametric design, Refitting, Sailing yacht.

1 Introduction

The design of a high-performance racing yacht is a very complex activity. The
goal is the best compromise between lightness and stiffness. This objective can
be achieved only if a structured design approach is followed [1]. In this
perspective, an accurate estimation of the internal loads (rigging) and external
loads (aero/hydro) becomes fundamental. Internal loads can be easily obtained
from static equilibrium conditions, once the type of rigging is established (with
or without spreaders, fractional, etc.). An accurate prediction of the
aero/hydrodynamic forces is instead more complex, usually requiring
time-consuming numerical CFD simulations [2]. Some works approaching the problem
with numerical modelling include [3, 4], where the authors take into account the
slamming effect, which leads to relevant loads, and [5], where the uncertainty of
several variables (composite fibre orientation, mechanical properties, etc.) is
also considered.

© Springer International Publishing AG 2017 527

B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_53

Some experimental approaches have also been attempted, such as in [6] where the structural
integrity of an IACC yacht is monitored via fiber-optic sensors. The studied yachts
are generally designed for a short life, so long-term effects on structural
performance are usually neglected in the design approach. Although several papers
can be found in the literature concerning the design of mid-to-large racing yachts,
to the authors' knowledge, very little information is available regarding small
boats such as dinghies.
The present study has focused on the re-engineering of a 15' SKIFF-type sailing
yacht. The Finite Element Method (FEM) is used to model the hull and deck, made
of plywood and a green sandwich composite with flax-reinforced epoxy skin
laminates. The paper focuses on the analysis performed to model a system of
simplified loads, based on rigging and navigation conditions. The model is then
validated by means of electrical resistance strain gauges installed at different
locations of the hull and frame structure, which provide a local measure of the
deformation state to be compared with the FEM prediction.

2 Methods

A simultaneous engineering design approach, similar to [7-9], has been applied,
based on full integration between numerical simulations and experimental data.
The analyzed dinghy has been refitted with the aim of a weight reduction, obtained
by cutting and removing parts of the internal frame and hull. The refitting was in
particular designed to achieve performance improvements during regattas by
reducing the maximum beam, in order to minimize the longitudinal moment of
inertia. The final shape, obtained after an accurate FEM investigation, is shown in
fig. 1. The weight reduction with respect to the original shape was about 19%
(from 88 kg to 71 kg).
Electrical Resistance strain gauges (ER) have been applied in order to obtain
local measurements of strain components to be compared with the corresponding
numerical predictions. In particular, four three-grid rectangular rosettes (HBM
type RY81-6/350) and four single grid (HBM type LY11-6/350) ER gauges have
been installed on specific locations chosen from preliminary FEM analyses. These
locations are shown in Fig. 1: the rosettes are bonded on the upper lamina (in-
board side) of the sandwich hull material, while single grids are bonded on the
plywood frame structure. A three-letter nomenclature is used: the first
letter (S, R) stands for Single grid or Rosette; the second letter (K, W, A, S)
stands for Keel, Web frame, Ahead, Stern, respectively; the last letter (S, P) stands for
Starboard side and Port side. ERs SKS, SWS, RAS, RSS are approximately sym-
metric about the central beam keel to respectively SKP, SWP, RAP, RSP.
Care was taken to orient the grids of the rosettes at the same angles with respect
to the local fiber direction. It is noted, though, that misalignment errors
[10] would not affect the values of the calculated principal strains, so the comparison
with numerical predictions in Section 3.3 is performed on the maximum principal
strains. All grids have been wired with four wires and protected with a
polyurethane paint (HBM type PU120) and a silicone sealing layer (HBM type SG250).

Fig. 1. CAD model of the dinghy after refitting, and location of strain gauges with nomenclature.

One rosette and one single grid ER were also prepared for use as dummy gaug-
es for temperature compensation on equivalent pieces of sandwich and plywood.
For the evaluation of the rigging loading in the lab, a multichannel HBM
UPM100 data logger was used to connect all ERs and synchronously acquire all
signals at a sampling rate of about 1 Hz. Each grid was connected using a
quarter-bridge, four-wire scheme. The dummy gauges were connected and monitored
as separate quarter-bridge channels. All active and dummy gauges were
preliminarily monitored for 20 minutes with the boat unloaded, and all ERs
showed no significant thermal drift, with the signals oscillating within ±1 μm/m.

3 Numerical simulations

Numerical simulations have been performed starting from the complete CAD
model prepared in CREO Parametric (from PTC). The software package ANSYS
R.15 was used to set up different aspects of the simulation. Some practical
assumptions have been adopted to determine the loading conditions: analytical
equilibrium equations are used together with numerical simulations in the Mechanical
APDL environment. The complex bio-sandwich material used for the hull [11, 12]
is modelled by means of the ACP PrePost. Structural analyses are performed in
Workbench where different load cases are simulated.

3.1 Estimation of loads

The main procedural assumption made for the simulations is to replace the rig
with an equivalent system of forces exerted by the rig on the boat, i.e. at the con-
nections of forestay and shrouds with the deck and at the mast foot. Determination
of the loads on the rig considers two different conditions: the preload (rigging) ap-
plied on the system and the combination of aerodynamic forces from the main sail
and weight of the crew.
Preload on the mast, shrouds and forestay can be calculated by solving the
equilibrium equations in the Cartesian reference frame; this is a self-balanced
system of forces necessary to compensate the load variations on the rig during
navigation. After preload, the mast is subject to a pure compressive load, while
the shrouds and forestay are subject to pure tension loads. Two values of preload
on the mast have been established, and the resulting rigging loads are then
calculated and reported in Table 1.
The smaller preload (Rigging 1) was applied when the deck was not yet joined to
the hull. The higher preload (Rigging 2) was applied when the boat was complete,
with the deck connected. The sign of the values reported in Table 1
refers to the prevalent action of the load: negative is for loads pushing the deck
downward and positive for loads pulling the deck upward. The rigging forces are
readily applied in the model at the connection points between the rig and the deck.
The second load system is the one that comes from the equilibrium between
aerodynamic, fluid dynamic and weight forces [13]. Figure 2 shows forces used
for the equilibrium in planes yz and xz. In figure 2 (left) the equilibrium of the
moments around the x axis is considered with respect to the centerboard, while in
figure 2 (middle) the equilibrium of moments around the y axis is considered with
respect to the center of buoyancy.

Fig. 2. Equilibrium of moments around x (left) and around y (middle). Model in APDL used to
calculate loads on the deck due to wind and crew weight actions (right).

Resulting equations are as follows:

P · bP,y = Wy · bWy,z      (1)

P · bP,x = Wx · bWx,z      (2)

where, for eq. (2), the moment due to the hydrodynamic force on the hull and
on the centerboard has been neglected due to the small distance between the re-
sulting axis and the center of buoyancy. The weight of the crew is assumed P =
1500 N, while distances bP,x and bP,y are geometrically determined. Distance bWy,z
(or similarly bWx,z) comes from the evaluation of the center of pressure on the main
sail, see [13] for details. Components of the aerodynamic force W are then calcu-
lated as Wx = 230 N and Wy = 960 N.
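The use of eqs. (1) and (2) can be sketched as below; the lever-arm values are hypothetical placeholders (in the paper they are geometrically determined from the boat and rig geometry), chosen here so that the relations reproduce the reported force components:

```python
# Moment equilibrium (1) and (2): the crew weight P balances the sail force
# components Wy and Wx through the lever arms. Lever-arm values are
# hypothetical placeholders, not the paper's geometric data.

P = 1500.0       # crew weight [N] (value from the paper)
b_P_y = 3.2      # hypothetical lever arm of P about the x axis [m]
b_Wy_z = 5.0     # hypothetical height of the sail centre of pressure [m]
b_P_x = 0.46     # hypothetical lever arm of P about the y axis [m]
b_Wx_z = 3.0     # hypothetical lever arm of Wx [m]

Wy = P * b_P_y / b_Wy_z   # from eq. (1): P * bP,y = Wy * bWy,z
Wx = P * b_P_x / b_Wx_z   # from eq. (2): P * bP,x = Wx * bWx,z
print(Wx, Wy)             # the paper reports Wx = 230 N, Wy = 960 N
```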
The aerodynamic force is applied at the center of pressure of the main sail, and
is equilibrated by the crew and by the hydrodynamics of the hull and centerboard.
The aerodynamic force and the crew weight can be transferred to the rig by
analytically solving the equilibrium equations or by modeling the rig with FEM. In
this study the Mechanical APDL module is used to model the rig with beam and
link elements, as shown in figure 2 (right). The reactions calculated in this
analysis (Table 1, last row) can then be applied to the connection points of the
rig to the deck, and the resultant forces are obtained by algebraically adding the
navigation forces to the preloads.
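This superposition of preload and navigation reactions can be sketched with the Table 1 values (negative = pushing the deck down, positive = pulling it up):

```python
# Resultant rig loads [N]: navigation reactions algebraically added to the
# Rigging 2 preload (values from Table 1).

preload = {"mast": -3630, "upwind shroud": 1600,
           "downwind shroud": 1600, "forestay": 469}
navigation = {"mast": -1991, "upwind shroud": -809,
              "downwind shroud": 32, "forestay": 1235}

total = {k: preload[k] + navigation[k] for k in preload}
print(total)  # e.g. the shrouds stay in tension while the mast load grows
```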
The weight of the boat is 900 N; it is considered as a volume-distributed load
and, added to the crew weight, equals the displacement of the hull, Δ = 2400 N. It
is assumed that the boat runs flat during navigation, hence no pitch and roll angles
are considered. No hydrodynamic effect is considered on the hull; a hydrostatic
pressure distribution is applied to the hull in order to equilibrate the total weight
of boat and crew, regardless of any modification of the floating line caused by the
speed of navigation. In Section 3.4 an extreme case will also be simulated, in
which the boat is supported by two waves at stern and bow, corresponding to the
physical condition of incipient pre-planing navigation.
Finally, another force to be considered on the boat is given by the hydrodynamic
actions on the centerboard. In this paper, instead of using numerical techniques
such as inertia relief [14, 15], this force is evaluated as the reaction, applied on
the trunk of the centerboard, that equilibrates the above-mentioned external loads
and keeps a flat trim on the sea.

Table 1. Loads [N] on the rig.

Configuration             Mast    Upwind shroud  Downwind shroud  Forestay

Rigging 1 (without deck)  -3000   1323           1323             387
Rigging 2 (with deck)     -3630   1600           1600             469
Navigation                -1991   -809           32               1235

3.2 FEM model

Once internal and external loads are determined, the CAD model of the boat is
implemented in the Workbench environment. Due to the complexity of the elastic
behavior of the composite sandwich used for the hull, the material has been de-
fined using the ACP PrePost. A mechanical characterization of the flax composite
skin and cork core has been performed in previous works [11, 12]. The elastic
constants of the unidirectional ply and the stacking sequence of the plies are de-
fined in the ACP in order to obtain an oriented layered section of the hull material.
The sequence used is [0/45/-45/90/cork/90/-45/45/0]. Direction 0° is aligned with
the longitudinal x axis. All other components of the boat, i.e. web frames, keel,
trunk and deck, are made of marine plywood.
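To illustrate how a stacking sequence like [0/45/-45/90/cork/90/-45/45/0] maps onto a layered-section stiffness, here is a classical lamination theory sketch (in-plane A-matrix only); the ply and core elastic constants and thicknesses are hypothetical placeholders, not the characterization data of [11, 12]:

```python
import math
import numpy as np

# In-plane stiffness matrix A of a layered section via classical lamination
# theory: A = sum over plies of thickness * Qbar(theta).

def q_matrix(E1, E2, G12, nu12):
    """Reduced stiffness of an orthotropic ply in its material axes [MPa]."""
    nu21 = nu12 * E2 / E1
    den = 1.0 - nu12 * nu21
    return np.array([[E1 / den, nu12 * E2 / den, 0.0],
                     [nu12 * E2 / den, E2 / den, 0.0],
                     [0.0, 0.0, G12]])

def q_bar(Q, theta_deg):
    """Rotate Q to the laminate axes: Qbar = T^-1 Q T^-T (T = stress transform)."""
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    T = np.array([[c * c, s * s, 2 * c * s],
                  [s * s, c * c, -2 * c * s],
                  [-c * s, c * s, c * c - s * s]])
    Tinv = np.linalg.inv(T)
    return Tinv @ Q @ Tinv.T

flax = q_matrix(E1=25e3, E2=3e3, G12=1.5e3, nu12=0.3)  # hypothetical flax/epoxy
cork = q_matrix(E1=20.0, E2=20.0, G12=8.0, nu12=0.25)  # hypothetical cork core

plies = [(flax, a, 0.4) for a in (0, 45, -45, 90)]     # skin plies, 0.4 mm each
plies += [(cork, 0, 4.0)]                              # core, 4 mm
plies += [(flax, a, 0.4) for a in (90, -45, 45, 0)]    # symmetric skin

A = sum(t * q_bar(Q, a) for Q, a, t in plies)          # [N/mm]
print(A)  # the quasi-isotropic skin makes A11 and A22 (nearly) equal
```

Tools such as the ACP PrePost perform this kind of homogenization internally when a layered shell section is assigned.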
The element type used for the FE model is the four-noded SHELL181, with
membrane plus flexural behavior. For the components made of marine plywood the
element has a specified thickness, while for the sandwich hull it is associated
with the layered section. All connections between components are assumed to be
perfectly bonded. The resulting mesh consists of 120172 regular quadrilateral
elements. The average element side length is approximately 10 mm, but mesh
refinements are defined in the specific areas where the resistance gauges are
located. The orthotropy of the material defined in the ACP module is associated
with a linear elastic constitutive model implemented in the APDL solver.
According to the considerations in Section 3.1, displacements are constrained
along x at the stern to equilibrate Wx, and along y at the trunk level to
equilibrate the heeling moment given by Wy and P.

Fig. 3. (left) Loads applied to the FEM model; (right) FEM model of the boat and a detail of the
mesh.

3.3 Validation with experiments

Experimental data for the validation of the FEM model are provided by
measuring the strains from the ERs (see Section 2) under the action of rigging
loads. The boat has been rigged in two different configurations (with and without
the deck), and at different preload levels. Rigging loads up to 1600 N (measured
on the shrouds by a load cell and reported in Table 1) were applied, and the ER
data were observed to grow linearly within this range of loads.
User coordinate systems are created at the locations of the single-grid ERs and
rosettes (see fig. 3). The strain values predicted by the FEM model are calculated
along the directions corresponding to the ER orientations. The three strains
corresponding to the three grids of each rosette were taken from the deformations
of the upper in-board lamina of the hull sandwich. These strains were further
combined to derive the principal strains to be compared with the equivalent
experimental values. The particular self-equilibrated set of rigging loads mainly
influences the portion of the boat between the shrouds and the forestay, leaving
the aft portion substantially unloaded, see fig. 3 (right).
Table 2 reports experimental and numerical strains for the two configurations,
with and without deck. A good agreement can generally be noted, especially for
the single-grid ERs. It must be remarked that all ERs are placed in areas of high
strain gradient, so small errors in the localization of the measuring point can
easily result in significant departures between experimental and numerical results.
In the worst case, the maximum difference does not exceed 18% and, because of the
uncertainties due to the localization of the ERs, the symmetry of the structures,
fiber alignment, etc., this level of error is considered satisfactory.
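For reference, the reduction of the three grid readings of a rectangular (0°/45°/90°) rosette to the maximum principal strain can be sketched as below; the input values are hypothetical, not Table 2 data:

```python
import math

# Maximum principal strain from a rectangular strain-gauge rosette: the mean
# of the 0 and 90 deg grids locates the centre of Mohr's circle, and the
# 45 deg grid supplies the shear contribution that sets its radius.

def max_principal_strain(e0, e45, e90):
    em = 0.5 * (e0 + e90)                       # centre of Mohr's circle
    r = math.hypot(0.5 * (e0 - e90), e45 - em)  # radius of Mohr's circle
    return em + r

print(max_principal_strain(e0=310.0, e45=150.0, e90=-80.0))  # [um/m]
```

As noted above, this principal-strain value is insensitive to a rigid rotation of the whole rosette, which is why rosette misalignment does not affect the comparison.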

Fig. 3. Maps used to calculate the strain at SWP (left) and at RAP (right). Local triads are
positioned and aligned as the ERs.

Table 2. Strains [μm/m] obtained with experiments and FEM.

Configuration Method SKS SKP SWS SWP RAS RAP RSS RSP
Experiments 508 434 2474 2100 319 276 245 194
Rigging 1 FEM 465 396 2190 1997 327 325 245 228
error % 8.5 8.8 11.5 4.9 -2.6 -17.8 -0.3 -17.5
Experiments 311 256 1276 1144 - - - -
Rigging 2 FEM 310 270 1310 1220 - - - -
error % 0.3 -5.5 -2.7 -6.6 - - - -
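The percentage differences in Table 2 are consistent with an (experiment − FEM)/experiment definition, as this quick check on the Rigging 1 single-grid rows shows:

```python
# Error definition check for the Rigging 1 single-grid gauges of Table 2:
# error % = (experiment - FEM) / experiment * 100.
exp = {"SKS": 508.0, "SKP": 434.0, "SWS": 2474.0, "SWP": 2100.0}
fem = {"SKS": 465.0, "SKP": 396.0, "SWS": 2190.0, "SWP": 1997.0}
err = {k: 100.0 * (exp[k] - fem[k]) / exp[k] for k in exp}
print({k: round(v, 1) for k, v in err.items()})  # 8.5, 8.8, 11.5, 4.9 as in Table 2
```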

3.4 Load cases

Section 3.3 has demonstrated the reliability of the FEM model in reproducing the
deformations of the boat under simple rigging preload. It is now interesting to
simulate real navigation conditions, which are difficult to reproduce in the
laboratory and would require measurements during tests at sea. For this purpose,
four operative conditions are simulated (see also Fig. 4):
• C1: rigging only, with a load on the shrouds of 1600 N;
• C2: floating on flat sea, with loads due to rigging and hydrostatic pressure, and
the crew sitting at the center of the deck;
• C3: navigation on flat sea, with loads due to rigging, aerodynamics and
hydrostatic pressure, and the crew on trapeze;
• C4: navigation on rough sea, with loads due to rigging and aerodynamic
pressure, and the crew on trapeze.

Fig. 4. Load configurations used in FEM simulations.

Results of the FEM simulations are summarized in Fig. 5. These are presented in
terms of the strains that would be measured by the installed ERs. In configuration
C4 the boat is supposed to be constrained only at bow and stern (i.e. standing on
two wave peaks) and no uniform hydrostatic pressure is applied on the hull. Some
considerations arise, as follows.
Strains on the keel (SKS and SKP) are symmetric in all conditions. Instead,
strains at SWS and SWP differ significantly when navigation conditions are
applied: the SWS strain on the downwind side is more than twice the SWP strain
on the upwind side. It is thus a general rule for this kind of boat that the
downwind shroud overloads its connection point to the deck in order to equilibrate
the weight of the crew on trapeze. Configurations C1 and C2 are very similar.
Configuration C4 generally increases the strain level with respect to C3,
especially on the keel.
Regarding the hull strains (represented by the maximum principal strain at the
rosette locations), a symmetric behavior is obtained for the areas ahead of the
mast (RAS and RAP). The locations behind the mast, RSS and RSP, are more
sensitive to the navigation loading conditions; in particular, the downwind side
(RSP) shows a higher increase of strain than the upwind side (RSS). In general,
hull maximum strains under C4 reach higher levels than under C3, in particular
for the locations ahead of the mast.
In general, it is noted that during navigation the strain levels on the plywood
framing structure and on the hull can be double those due to rigging alone.

Fig. 5. Calculation of strains from simulations for different operative conditions.

4 Conclusions

The present work has described a FEM model of a complete sailing dinghy.
The structure is composed of a hull made of a sandwich with a cork core and
flax-reinforced epoxy skins, plus a deck and an internal frame of plywood. The
numerical model implements material constitutive behaviors developed from
previous experimental mechanical characterizations. In order to verify the model,
electrical resistance single and three-grid rosette strain gauges have been installed at
specific locations of the hull and framing structures. Experimental strains have
been measured under a symmetric rigging loading, and compared with equivalent
strains from the FEM model. This comparison provided fairly small differences
(below 18%), reckoned acceptable for the level of complexity of the analyzed
structure, thus providing good confidence in the prediction capabilities of the de-
veloped FEM model.

Different loading configurations have also been simulated and studied
numerically, representing complex scenarios such as navigation on flat or rough sea.
The values of strains obtained with FEM on various boat locations are consistent
with the expected boat behavior. Future work will attempt to use the installed
strain gauges to measure strains during real navigation conditions, in order to pro-
vide further confirmation of the effectiveness of the FEM model also in complex
navigation conditions.

Acknowledgments The authors are grateful to ANSYS and HBM for their support of the
scientific activities of the project. A particular thanks also goes to the Zyz Sailing Team students
who participated in the manufacturing and racing activities.

References

1. Ingrassia T., Mancuso A., Nigrelli V., Tumino D. A multi-technique simultaneous approach
for the design of a sailing yacht. International Journal on Interactive Design and Manufactur-
ing, Article in Press. DOI: 10.1007/s12008-015-0267-2.
2. Alaimo A., Esposito A., Messineo A., Orlando C., Tumino D. 3D CFD analysis of a vertical
axis wind turbine. Energies, 2015, 8(4), 3013-3033.
3. Allen T., Battley M., Casari P., Kerling B., Stenius I., Westlund J. Structural Responses of
high Performance Sailing Yachts to Slamming Loads. 11th International Conference on Fast
Sea Transportation FAST, 2011, Honolulu, Hawaii, USA.
4. D. Kelly, C. Reidsema, A. Bassandeh, G. Pearce, M. Lee. On interpreting load paths and
identifying a load bearing topology from finite element analysis. Finite Elements in Analysis
and Design, 2011, 47, 867–876.
5. Lee M.C.W., Payne R.M., Kelly D.W., Thomson R.S. Determination of robustness for a stiff-
ened composite structure using stochastic analysis. Composite Structures, 2008, 86, 78–84.
6. Murayama H., Wada D., Igawa H. Structural Health Monitoring by Using Fiber-Optic Dis-
tributed Strain Sensors With High Spatial Resolution. Photonic Sensors, 2013, 3(4), 355–376.
7. Cerniglia D., Ingrassia T., D'Acquisto L., Saporito M., Tumino D. Contact between the com-
ponents of a knee prosthesis: Numerical and experimental study. Frattura ed Integrità
Strutturale, 2012, 22, 56-68.
8. Ingrassia T., Nigrelli V., Buttitta R. A comparison of simplex and simulated annealing for optimi-
zation of a new rear underrun protective device. Engineering with Computers, 2013, 29, 345-358.
9. Ingrassia, T., Nigrelli, V., Design optimization and analysis of a new rear underrun protec-
tive device for truck. Proceedings of the 8th International Symposium on Tools and Methods
of Competitive Engineering, TMCE 2010, 2, 713-725.
10. Ajovalasit A., Cipolla N., Mancuso A. Strain Measurement on Composites: Errors due to
Rosette Misalignment. Strain, 2002, 38, 150-156.
11. Mancuso A., Pitarresi G., Tumino, D. Mechanical Behaviour of a Green Sandwich Made of
Flax Reinforced Polymer Facings and Cork Core. Procedia Engineering, 2015, 109, 144-153.
12. Pitarresi G., Tumino D., Mancuso A. Thermo-mechanical behaviour of flax-fibre reinforced
epoxy laminates for industrial applications. Materials, 2015, 8(11), 7371-7388.
13. Larsson L., Eliasson R.E. Principles of yacht design, 1996 (Adlard Coles Nautical, London).
14. Alaimo A., Milazzo A., Tumino, D. Modal and structural fem analysis of a 50 ft. pleasure
yacht. Applied Mechanics and Materials, 2012, 215-216, 692-697.
15. Barnett, A.R., Widrick, T.W., Ludwiczak, D.R. Closed-Form Static Analysis With Inertia
Relief and Displacement-Dependent Loads Using a MSC/NASTRAN DMAP Alter. NASA TM
106836, 1995.
Geometric Parameterization Strategies for
shape Optimization Using RBF Mesh Morphing

Ubaldo Cella1,2*, Corrado Groth1 and Marco Evangelos Biancolini1


1
University of Rome “Tor Vergata”, Enterprise Engineering dept. “Mario Lucertini”, Roma,
Italy
2
Design Methods aerospace consulting (www.designmethods.aero), Messina, Italy
* Corresponding author. Tel.: +39-339-3970006;. E-mail address:
ubaldo.cella@designmethods.aero

Abstract Mesh morphing is one of the most promising approaches for problems in
which numerical analyses, based on discretised domains, involve shape
parameterization. Some of the benefits associated with its adoption are the
reduction of computational meshing costs and the prevention of remeshing noise,
while guaranteeing at the same time a continuous shape parameterization and the
consistency of the mesh topology. Radial Basis Functions are recognized as one of
the best mathematical tools to drive the mesh morphing (smoothing) task. This
paper introduces the RBF Morph tool and lists a set of applications in which RBF
shape parameterization is used to face problems ranging from aerodynamic
optimization to Fluid Structure Interaction analyses.

Keywords: Radial Basis Function, mesh morphing, numerical optimization,
geometric parameterization.

1 Introduction

The adoption of multidisciplinary numerical optimization (MDO) has become,
in the last decade, the standard choice to face most design problems in the
aerospace field. Such methods can be used to study cases ranging from
aerodynamic or structural design to dynamic FSI (Fluid Structure Interaction)
structural response optimization. When a domain discretization is involved, a
critical aspect of MDO-based numerical tools, which affects both the efficiency
and the quality of the solution, is the strategy used to implement the geometric
parameterization. Most of the methods commonly adopted can be divided into
two categories:
CAD based and mesh morphing based.

© Springer International Publishing AG 2017 537

B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_54

The first permits exploiting the features of modern parametric CAD systems,
providing the possibility to manage complex models, great control of the quality
of the geometry and large flexibility in vari-
ables and constraints definition. The drawback is the necessity to regenerate the
computational domain for every new candidate to be investigated, which
introduces uncertainty in the robustness of the procedure and in the accuracy of
mesh-dependent analysis methods. The remeshing requirement, furthermore, limits
the application of automatic CAD-based analysis procedures to problems of
moderate size (in terms of the number of cells in the computational domain) or to
relatively simple geometries suitable to be modelled by structured grids. The mesh
morphing approach consists in implementing the geometric parameterization
directly on the computational domain, using algorithms able to smoothly propagate
the model displacement to the surrounding volume.
Several advantages are related to the RBF mesh morphing approach: the
robustness of the procedure is preserved, any kind of mesh topology is supported
without the need to regenerate it, and the smoothing process is highly
parallelizable and can be integrated in any solver. The latter feature offers the
very valuable capability to update the computational domain "on the fly" as the
computation progresses. The main disadvantages are the requirement of a "back to
CAD" procedure, some limitation in the model displacement amplitude, due to the
distortion occurring after morphing, and the high computational cost related to the
solution of the RBF system which, if large computational domains are involved,
imposes implementation on HPC environments.
The first commercial mesh morphing software based on Radial Basis Functions
was RBF Morph. Its development began in 2008 as a consultancy activity for a top
Formula 1 team and continued with applications to typical aerospace engineering
problems. Today the software can be fully integrated in several CFD (commercial
and open source) and FEM solvers and has been successfully used to face many
engineering problems that require geometric parameterization (shape
optimization, 6DOF analyses, ice accretion, static and dynamic FSI analyses with
both 2-way and modal approaches). It was demonstrated that the aforementioned
disadvantages of mesh morphing shape parameterization can be successfully
bypassed, or practically limited, by an efficient implementation of the RBF
algorithm. The RBF Morph core technology is at the base of three European
research projects, funded within the 7th FP, in which the University of Rome
"Tor Vergata" is involved. In this paper, a description of its working principles
and a list of case studies in which the shape parameterization was implemented
using RBF mesh morphing are presented.

2 Radial Basis Functions

Geometric Parameterization Strategies ... 539

Radial Basis Functions (RBF) are powerful mathematical functions able to interpolate functions defined only at discrete points (the source points), returning the exact values at those points. The interpolation quality and its behaviour depend on the chosen RBF. Typical radial functions are reported in Table 1.
Table 1. Typical RBFs.

RBF                            $\phi(r)$
Spline type ($R_n$)            $|r|^n$, $n$ odd
Thin plate spline ($TPS_n$)    $|r|^n \log|r|$, $n$ even
Multiquadric                   $\sqrt{1 + r^2}$
Inverse multiquadric           $1/\sqrt{1 + r^2}$
Inverse quadratic              $1/(1 + r^2)$
Gaussian                       $e^{-r^2}$

A linear system (of order equal to the number of source points introduced) needs to be solved to calculate the coefficients [1]. Once the unknown coefficients are calculated, the motion of an arbitrary point inside or outside the domain (interpolation/extrapolation) is expressed as the summation of the radial contributions of the source points (if the point falls inside the influence domain). An interpolation function composed of a radial basis and a polynomial is defined as follows:

$s(x) = \sum_{i=1}^{N} \gamma_i \,\phi\left(\left\| x - x_i \right\|\right) + h(x)$ (1)

The minimal degree of the polynomial $h$ depends on the choice of the basis function. A unique interpolant exists if the basis function is a conditionally positive definite function. If the basis functions are conditionally positive definite of order $m = 2$, a linear polynomial can be used:

$h(x) = \beta_0 + \beta_1 x + \beta_2 y + \beta_3 z$ (2)

The values of the coefficients $\gamma$ of the RBF and the coefficients $\beta$ of the linear polynomial can be obtained by solving the system

$\begin{pmatrix} M & P \\ P^{T} & 0 \end{pmatrix} \begin{pmatrix} \gamma \\ \beta \end{pmatrix} = \begin{pmatrix} g \\ 0 \end{pmatrix}$ (3)

where $g$ contains the known values at the source points and $M$ is the interpolation matrix, defined by calculating all the radial interactions between source points

$M_{ij} = \phi\left(\left\| x_{k_i} - x_{k_j} \right\|\right), \qquad 1 \le i,j \le N$ (4)

and $P$ is the constraint matrix

$P = \begin{pmatrix} 1 & x_{k_1} & y_{k_1} & z_{k_1} \\ 1 & x_{k_2} & y_{k_2} & z_{k_2} \\ \vdots & \vdots & \vdots & \vdots \\ 1 & x_{k_N} & y_{k_N} & z_{k_N} \end{pmatrix}$ (5)

The radial basis approach is a meshless method. Only grid points are moved, regardless of the elements they are connected to, and it is suitable for parallel implementation. In fact, once the solution is known and shared in the memory of each calculation node of the cluster, each partition can smooth its own nodes without regard to what happens outside, because the smoother is a global point function and the continuity at partition interfaces is implicitly guaranteed.
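As an illustration of Eqs. (1)-(5), the following minimal NumPy sketch fits the RBF and polynomial coefficients for one scalar field (e.g. a displacement component) known at the source points and evaluates the interpolant anywhere; the cubic spline-type basis $\phi(r)=|r|^3$ and all function names are our own assumptions, not RBF Morph code:

```python
import numpy as np

def rbf_fit(centers, g, phi=lambda r: r**3):
    """Solve the block system (Eq. 3) for the RBF weights and the linear polynomial."""
    n = centers.shape[0]
    # Interpolation matrix M_ij = phi(||x_ki - x_kj||)  (Eq. 4)
    m = phi(np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1))
    # Constraint matrix P with rows [1, x, y, z]  (Eq. 5)
    p = np.hstack([np.ones((n, 1)), centers])
    a = np.block([[m, p], [p.T, np.zeros((4, 4))]])
    rhs = np.concatenate([g, np.zeros(4)])
    coef = np.linalg.solve(a, rhs)
    return coef[:n], coef[n:]  # gamma (radial weights), beta (polynomial)

def rbf_eval(x, centers, gamma, beta, phi=lambda r: r**3):
    """s(x) = sum_i gamma_i phi(||x - x_i||) + h(x), with h linear  (Eqs. 1-2)."""
    r = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return phi(r) @ gamma + beta[0] + x @ beta[1:]
```

In a morphing context one such fit is performed per displacement component, and `rbf_eval` is applied to all volume nodes; by construction the interpolant returns the prescribed values exactly at the source points.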

3 RBF Morph description

RBF Morph is a numerical suite for morphing and shape optimization that
combines a very accurate control of the geometrical parameters with an extremely
fast mesh deformation capability. The tool was born as an add-on of the ANSYS
Fluent CFD code and is fully integrated in the solving process [2]. Today RBF
Morph is also available as a standalone library to be coupled with any code. It was
successfully embedded in the solving process with OpenFOAM, CFD++, elsA,
StarCCM+ and the FEM solvers NASTRAN and ANSYS Mechanical.
The industrial implementation of RBF mesh morphing poses two challenges: the numerical complexity related to the solution of the RBF problem for a large number of centres, and the definition of suitable paradigms to effectively control shapes using RBF. The RBF Morph software deals with both, as it comes with a fast RBF solver capable of fitting large datasets (hundreds of thousands of RBF points can be fitted in a few minutes) and with a suite of modelling tools that allow the user to set up each shape modification in an expressive and flexible way. RBF Morph allows the user to extract and control points from surfaces and edges, to put points on primitive shapes (boxes, spheres and cylinders) or to specify them directly by individual coordinates and displacements. Primitive shapes can be combined in a Boolean fashion, allowing to limit the action of the morpher itself. The shape information coming from an individual RBF setup is generated interactively with the help of the GUI and is used subsequently in batch commands that allow many shape modifications to be combined in a non-linear fashion (non-linearity occurs when rotation axes are present in the RBF setup). The displacement of the prescribed set of source points can be amplified according to parameters that constitute the parametric space of the shape model.
The most important features of RBF mesh morphing are: it provides a mesh-independent solution; the morphing action is highly parallelizable; very large models (hundreds of millions of cells) can be morphed in a few minutes; and every mesh element type (tetrahedral, hexahedral, polyhedral, prismatic, non-conformal interfaces, etc.) is supported. Fig. 1 reports an example of an RBF mesh morphing action applied to the analysis of a motorbike windshield.

Fig. 1. Source points of an RBF problem and result of the mesh morphing action.

Mesh morphing with RBF Morph is executed in three steps:


1. definition and setup of the problem;
2. solution of the RBF system (fitting);
3. morphing of surface and volume meshes (smoothing).
The smoothing action is performed by first applying the prescribed displacement to the grid surfaces and then smoothly propagating the deformation to the surrounding domain volume. A back2CAD feature is implemented in order to generate a CAD model of the modified geometry. The principle is to apply the RBF setup to the source CAD model in STEP format (CAD morphing). The method is not based on a total CAD regeneration but rather on a synchronization of the CAD surfaces and the mesh. The method is not exact, but in our experience the discrepancies between the mesh and the generated CAD are in general very small.

4 Application of RBF

Several engineering problems can be efficiently faced by mesh morphing. Examples of applications in several fields can be found in [3], [4], [5] and [6]. Aerodynamic optimization is easily implemented thanks to very flexible shape parameterization capabilities. In [7] a car shape optimization coupled to an adjoint method, using the OpenFOAM CFD solver, is reported (Fig. 2).

Fig. 2. Mesh morphing for car shape optimization.



An RBF shape optimization applied to the aerodynamic performance improvement of a glider in manoeuvre is reported in [8] (Fig. 3). Other examples of aerodynamic optimization problems faced with RBF Morph in aeronautics are reported in [9] and [10].

Fig. 3. Wing/fuselage interference optimization of a glider

Fluid Structure Interaction (FSI) problems are very efficiently approached by coupling the structural to the fluid dynamic solution through mesh morphing. The classical approach, also called CFD-CSM (Computational Fluid Dynamics - Computational Structural Mechanics) or 2-way, consists in iterating between the CFD and FEM solvers according to the scheme reported in Fig. 4 [11]. In such a procedure the pressure from the CFD solution has to be mapped onto the FEM grid as loads. The interpolation between the two non-conformal domains is also performed applying RBF.
Fig. 4. 2-way FSI procedure (flow chart: undeformed geometry, CFD computation, loads mapping with RBF Morph, FEM computation; if the deformed shape has changed, the CFD mesh is updated with RBF Morph and the loop is repeated, otherwise the procedure ends).
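The iteration scheme of Fig. 4 can be sketched as follows; every callable here is a hypothetical stand-in for the CFD solver, the RBF-based load mapping, the FEM solver and the mesh morphing step (not an actual solver API), and the scalar `shape` abbreviates a full displacement field:

```python
def two_way_fsi(shape, run_cfd, map_loads, run_fem, morph_mesh,
                tol=1e-6, max_iter=50):
    """Iterate CFD <-> FEM until the deformed shape stops changing."""
    for _ in range(max_iter):
        pressure = run_cfd(shape)         # CFD computation on the current mesh
        loads = map_loads(pressure)       # RBF-based mapping onto the FEM grid
        new_shape = run_fem(loads)        # FEM computation -> deformed shape
        if abs(new_shape - shape) < tol:  # "shape changed?" test of Fig. 4
            return new_shape              # converged: END
        shape = morph_mesh(new_shape)     # CFD mesh updating by morphing
    return shape
```

With contractive toy stand-ins the loop converges to the aeroelastic equilibrium shape, mirroring the fixed-point character of the 2-way coupling.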

RBF mesh morphing is suitable to face steady [12] and unsteady [13] FSI analyses by a modal approach. The principle consists in setting up a database of RBF solutions for a number of structural natural mode shapes, which are amplified, according to the modal coordinates computed and updated during the CFD iterations, and combined to replicate the structural deformation under loads. With this approach the CFD environment becomes an intrinsically aeroelastic analysis that does not involve any further structural computation. The flow chart of the modal FSI analysis setup is reported in Fig. 5, together with examples of the first four mode shapes of a wing and the formulations used to render the mesh parametric.
Another very challenging task for mesh morphers is the ice accretion problem on aircraft. RBF Morph demonstrated the capability to robustly replicate very complex ice shapes while maintaining acceptable mesh quality [14].
The computational cost of the RBF morphing action is known to be a critical aspect. Thanks to its high parallelizability, very large problems can be faced in HPC environments. To the authors' knowledge, the largest model managed with RBF Morph has 700 million cells and was morphed in 45 minutes using 768 CPUs. Table 2 reports three samples of the solver performance, detailing the elapsed time of the fitting and smoothing actions.
Fig. 5. Modal approach for FSI analysis (flow chart: from the undeformed geometry, a structural modal analysis $[M]\ddot{q} + [K]q = Q$ provides the undamped vibration modes; a morphed CFD mesh database is built, one per mode; during the CFD iterations of the flexible model, the modal coordinates drive the parametric mesh update until convergence). The parametric mesh formulation is

$X_{CFD} = X_{CFD_0} + \sum_{m=1}^{n_{modes}} \eta_m \, \delta X_m$
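The parametric mesh update at the heart of the modal approach is a linear combination of precomputed morphed fields; a minimal sketch (array shapes and names are illustrative):

```python
import numpy as np

def update_mesh(x0, modal_fields, eta):
    """X_CFD = X_CFD0 + sum_m eta_m * dX_m.

    x0: (n_nodes, 3) baseline grid coordinates; modal_fields: one (n_nodes, 3)
    RBF-morphed displacement field per structural mode; eta: modal coordinates
    updated during the CFD iterations."""
    x = x0.copy()
    for eta_m, dx_m in zip(eta, modal_fields):
        x += eta_m * dx_m
    return x
```

Since the per-mode fields are computed once, each in-solver update reduces to this cheap linear combination, which is what makes the approach attractive for unsteady analyses.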

Table 2. RBF solver performance samples.

Mesh cells     Source points    CPUs    Fitting time (serial)    Smoothing time (parallel)
14 million     60,000           4       53 s                     3.5 min
50 million     30,000           140     25 s                     1.5 min
100 million    200,000          256     25 min                   5 min

5 Conclusions

A geometric parameterization method, suitable to be adopted in numerical analyses that require the discretization of volume or surface domains, was presented. The method is based on a mesh morphing approach using Radial Basis Functions. RBF Morph was the first commercial mesh morphing software based on RBFs. The qualities and the performance of the tool were demonstrated by reporting a set of engineering applications ranging from shape optimizations to FSI analyses. The high parallelizability of the RBF solver furthermore provides the capability to manage very large mesh morphing problems. The workflow can be easily and efficiently automated and coupled to any flow or structural solver. The quality of the morphing action was demonstrated also on very challenging problems such as ice accretion on aircraft. In comparison to a CAD-and-remesh-driven approach, RBF mesh morphing has the advantages of reducing the setup time, being applicable to any type of grid, preventing remeshing noise and maintaining high robustness of the process.

Acknowledgments This work was partially supported by the RBF4AERO Project, funded in
part by the European Union 7th Framework Programme (FP7-AAT, 2007–2013) under Grant
Agreement no. 605396 (www.rbf4aero.eu). The load mapping procedure applied in the 2-way
FSI analysis presented in this paper constituted the starting base of activity of another EU 7th FP
project led by the University of Rome “Tor Vergata” and funded within the aeronautic pro-
gramme JTI-CS-GRA (Joint Technology Initiatives - Clean Sky - Green Regional Aircraft). The
project, called RIBES and funded under Grant Agreement no. 632556
(http://cordis.europa.eu/project/rcn/192637_en.html), aims to increase the load field transfer ac-
curacy between non-conformal domains.

References

1. De Boer A., van der Schoot M. S. and Bijl H., “Mesh deformation based on radial basis func-
tion interpolation”, Computers & Structures, 2007, 85(11–14), pp. 784 - 795.
2. Biancolini, M. E., Automotive Simulation World Congress 2014. Tokyo, Japan.
3. Biancolini, M. E., “Mesh morphing and smoothing by means of radial basis functions (RBF): a practical example using Fluent and RBF Morph”, in: Handbook of Research on Computational Science and Engineering: Theory and Practice, 2012, IGI Global, ISBN13: 9781613501160.
4. Biancolini M.E., Biancolini C., Costa E., Gattamelata D., Valentini P.P., “Industrial Applica-
tion of the Meshless Morpher RBF Morph to a Motorbike Windshield Optimisation”, Euro-
pean Automotive Simulation Conference (EASC), 2009, Munich (Germany).
5. Ponzini R, Biancolini M E., Rizzo G. and Morbiducci U., “Radial Basis Functions for the in-
terpolation of hemodynamics flow pattern: A quantitative analysis”, In: Computational Mod-
elling of Objects Represented in Images III: Fundamentals, Methods and Applications, Rome
(Italy), 5 – 7 September 2012, ISBN: 9780415621342.
6. Biancolini M E., Viola I. M.. and Riotta M., “Sails trim optimisation using CFD and RBF
mesh morphing”, Computers & Fluids, April 2014, Vol. 93, pp 46 – 60,
doi:10.1016/j.compfluid.2014.01.007.
7. E.M. Papoutsis-Kiachagias, S. Porziani, C. Groth, M.E. Biancolini, E. Costa and K.C. Gian-
nakoglou, “Aerodynamic Optimization of Car Shapes using the Continuous Adjoint Method
and an RBF Morpher”, 11th International Conference EUROGEN 2015, 14 - 16 September,
Glasgow (UK), doi: 10.13140/RG.2.1.1615.2165.
8. Costa E., Biancolini M. E., Groth C., Cella U., Veble G., Andrejasic M., “RBF–based aerody-
namic optimization of an industrial glider”, 30th International CAE Technologies, 27 - 28 Oc-
tober 2014, Verona (Italy).
9. Biancolini M. E., Cella U., Mancini M., Travostino G., “Shaping up – Mesh morphing reduces
the time required to optimize an aircraft wing”, ANSYS Advantage, 2013, Vol. VII, Issue 1.
10. Biancolini M. E. and Gozzi M., “Aircraft design optimization by means of Radial Basis
Functions mesh morphing”, ANSYS regional conference, Italy, June 2013.

11. Cella U. and Biancolini M. E., “Aeroelastic Analysis of Aircraft Wind Tunnel Model Cou-
pling Structural and Fluid Dynamic Computational Codes”, AIAA Journal of Aircraft, Vol
49, n. 2, March - April 2012.
12. Giovanni Paolo Reina, “Theoretical Investigation of Wing Aeroelastic Response after Store
Separation”, Master’s Thesis, University of Naples “Federico II”, December 2013.
13. Biancolini M. E., Cella U., Groth C. and Genta M. “Static Aeroelastic Analysis of an Aircraft
Wind-Tunnel Model by Means of Modal RBF Mesh Updating”, in course of publication in the
Journal of Aerospace Engineering.
14. Biancolini M, Groth C., “An Efficient Approach to Simulating Ice Accretion on 2D and 3D
Airfoil”, In: Advanced Aero Concepts, Design and Operations Applied Aerodynamics Con-
ference, Bristol, 2014.
Sail Plan Parametric CAD Model for an A-Class
Catamaran Numerical Optimization Procedure
Using Open Source Tools

Ubaldo Cella1*, Filippo Cucinotta2 and Felice Sfravara2


1 Design Methods aerospace consulting (www.designmethods.aero), Messina, Italy
2 University of Messina, Engineering Dept., Messina, Italy
* Corresponding author. Tel.: +39-339-3970006. E-mail address: ubaldo.cella@designmethods.aero

Abstract A geometric tool for a catamaran sail plan and appendages optimization procedure is described. The method integrates a parametric CAD model, an automatic computational domain generator and a Velocity Prediction Program (VPP) based on a combination of sail RANS computations and analytical models. The boat performance is obtained, in an iterative process, by solving the system of forces and moment equilibrium equations. Hull and appendage forces are modelled by analytical formulations. The closure of the equilibrium system is provided by the CFD solution of the sail plan. The procedure permits finding the combination of appendages configuration, rudder setting, sail planform, shape and trim that maximizes the VMG (Velocity Made Good). A significant effort was devoted to the selection and evaluation of open-source tools to be adopted in the implementation of the method. The geometric parametric model, which is the core of the procedure, was the object of particular attention. The FreeCAD geometric modeller was selected for this task. The candidate sail shapes are automatically generated, within the optimization procedure, by Python scripts that drive FreeCAD to update the geometry according to the combination of variables. A very flexible model, able to offer a very wide space of variables, was implemented. This paper describes the implemented geometric model and the environment in which it is included.

Keywords: Parametric CAD, Open-Source, Numerical Optimization, Sail design

© Springer International Publishing AG 2017 547


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_55

1 Introduction

Sail design is a very complex aerodynamic problem [1]. An example of a general design process is reported in [2]. The high costs of research and of numerical tools led, in past years, to facing the problem with an experimental "trial and error" strategy supported by the experience of sailors and sail makers, with the exception of big competition environments with high budgets, where sophisticated design methods and numerical studies support the "trial and error" approach. The historical evolution of the America's Cup Class demonstrated, however, that the qualitative leap in the design process is inevitably related to the capability to transfer know-how and numerical methodologies from more technologically advanced fields such as aerospace research. This is becoming even truer with the increasing analogies between modern catamarans and airplanes. There are many examples of studies that concern the use of numerical tools to investigate sailing performance in the design phase. In [3] a numerical approach for the prediction of sailing performance, validated with experimental tests, is applied. In [4] and [5] the sail performance in upwind and downwind sailing conditions is estimated by CFD. In [6] an optimization procedure for the keel design is described.
The difference with respect to the past is the recent availability of engineering open-source tools whose growing level of maturity [7] offers less sponsored sailing teams the possibility to focus their economic efforts on research and on the improvement of technological capabilities. The work presented in this paper exploits this new scenario to deal with the problem of developing efficient, performing and cost-effective sail aerodynamic design methodologies. To this aim, a multidisciplinary numerical shape optimization procedure was developed. The core of the tool is the sail geometric model, which must be parametric, has to be able to generate the largest possible set of solutions and must be implemented in a batch procedure.
The method is applied to the optimization of an A-Class multihull sail plan. The A-Class catamaran is a very high-tech boat and is considered the fastest single-handed racing dinghy in the world. It has been the object of rising interest, from the engineering point of view, due to its many similarities (apart from the size) with the more famous modern America's Cup Class multihulls. It represents, in fact, a very valuable laboratory in which to test novel solutions at reasonable cost.
A preliminary version of this design method was developed using commercial software. A new version, fully based on open-source tools, is under test and is introduced here. The whole procedure is managed by a combination of scripts written in Scilab, Python and Fortran. The VPP module (the VPP was introduced by Kerwin in [8]; subsequent developments are reported by Philpott in [9] and [10]) was developed using the Scilab computing environment. The parametric geometry is generated by FreeCAD. The evaluation of OpenFOAM was recently completed within the SHAPE EU funded programme and its integration is in progress. The description of the method and a representative image of the typical numerical analysis were selected by ANSYS as a "best-in-class" winner of the 2016 Hall of Fame competition in the corporate category.

Sail Plan Parametric CAD Model ... 549

2 Boat performance prediction model

A boat model, fully based on analytical formulations, is proposed. The objective is to provide a very fast and versatile tool capable of parameterizing several aspects of the boat components: chord, draft, twist, setting, airfoil and planform of the appendages, as well as a range of hull parameters. The model is developed in the form of a function able to interact, in an iterative process within the system of equilibrium equations, with the sail RANS aerodynamic solution to constitute a Velocity Prediction Program (VPP).
Fig. 1. Forces acting on the boat and reference frame.

The forces equilibrium equations of the complete boat along the X, Y and Z directions, referred to a frame with the X axis aligned with the sailing direction and the Z axis perpendicular to the water plane (Fig. 1), are:

$\begin{cases} D_{TOT} = \sum D_{foils} + \sum D_{rudders} + D_{hull} + F_{aero} \\ F_h \cos\varphi = \sum L_{foils} \cos(\varphi \pm \delta_{foils}) + \sum L_{rudders} \cos(\varphi \pm \delta_{rudders}) + L_{hull} \\ W_M + W_{BE} + F_h \sin\varphi = W_{BO} + \sum L_{foils} \sin(\varphi \pm \delta_{foils}) + \sum L_{rudders} \sin(\varphi \pm \delta_{rudders}) \end{cases}$ (1)

where $D$ refers to the hydrodynamic forces acting along the X axis while $L$ refers to forces lying in the YZ plane. $F_{aero}$ is the aerodynamic force of the surfaces exposed to the wind. $W_M$, $W_{BE}$ and $W_{BO}$ are respectively the crew weight, the boat empty weight and the operative displacement.

The moment equilibrium around the X axis and the centre of buoyancy of the downwind hull gives:

$W_{BE}\,\frac{d}{2}\cos\varphi + W_M\, l_M \cos\varphi = F_h h_h + \sum L_{foils} h_{foils} + \sum L_{rudders} h_{rudders}$ (2)

The left-hand side of the equation represents the maximum possible righting moment, occurring when the helmsman is at the trapeze. It was decided not to involve the yaw and pitching moment equilibrium at this stage.
To estimate the hull side force $L_{hull}$ (the force lying in a plane parallel to the water plane and normal to the sailing direction), the bare hull is modelled as a lifting body:

$L_{hull} = \frac{1}{2}\rho_w V^2 S_H \frac{\partial C_{L_H}}{\partial \beta}\,\beta$ (3)

in which $\rho_w$ is the water density, $V$ is the boat speed and $\beta$ is the leeway angle. The lift curve slope $\partial C_{L_H}/\partial \beta$ is estimated by an analytical formulation tuned against a matrix of CFD solutions at several hull speeds. The value of $S_H$, which is the side projection on the symmetry plane of the submerged part of the hull, is estimated by CAD.
The hull total resistance coefficient is modelled as a combination of a friction and a residuary component [11]:

$C_T = (1+k)C_f + C_w$

The skin friction coefficient $C_f$ is estimated according to the ITTC-57 friction line expression. The residuary component $C_w$ is simplified by an exponential formulation tuned against CFD solutions. Foils are modelled as wings. The polars are estimated applying preliminary design criteria from the aerospace literature [12]. The possibility to access an experimental airfoil database in the UIUC .drg format [13], or to analyse "on the fly" an opportunely parameterised section geometry by a coupled panel/boundary layer code [14], was also implemented.
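For illustration, the ITTC-57 friction line mentioned above is $C_f = 0.075/(\log_{10} Re - 2)^2$; a minimal sketch of the resistance coefficient build-up (the form factor $k$ and residuary coefficient $C_w$ passed in are placeholders for the CFD-tuned formulations of the text):

```python
import math

def ittc57_cf(reynolds):
    """ITTC-57 model-ship correlation line: Cf = 0.075 / (log10(Re) - 2)^2."""
    return 0.075 / (math.log10(reynolds) - 2.0) ** 2

def hull_ct(reynolds, k, cw):
    """Total resistance coefficient C_T = (1 + k) * Cf + Cw."""
    return (1.0 + k) * ittc57_cf(reynolds) + cw
```

For example, at $Re = 10^7$ the friction line gives $C_f = 0.075/25 = 0.003$.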
Substituting the foil force formulations in the equilibrium system of equations (1) and in the moment equilibrium equation (2), and including the hull side force equation (3), we obtain (assuming the velocity and the sail centre of effort to be given as input) a system of five equations in five unknowns ($D_{TOT}$, $F_h$, $W_{BO}$, $L_{hull}$ and $\beta$). The solution of the system of equations is implemented as a script function (written in Scilab) that produces as output the boat total resistance $D_{TOT}$ and the sail heeling force $F_h$ at a given speed, centre of effort height and set of parameters characterizing the boat configuration. The closure of the performance solution problem is accomplished by coupling the sail RANS solution in an iterative process in which the values of boat speed and sailing direction are varied until the sail thrust $F_t^{CFD}$ and heeling force $F_h^{CFD}$, computed by CFD, are equal respectively to the total boat drag $D_{TOT}$ and the sail heeling force $F_h$ estimated by the boat analytical model:

$\begin{cases} D_{TOT} = F_t^{CFD} \\ F_h = F_h^{CFD} \end{cases}$

The values of speed and sailing course that satisfy the above equalities are used to provide the boat performance in terms of Velocity Made Good ($VMG = V \cos\beta_T$, where $\beta_T$ is the true wind angle).
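The closure described above can be sketched as a one-dimensional root search on the boat speed; everything below is an illustrative stand-in (the actual tool couples a Scilab boat model with RANS solutions and varies the sailing direction as well):

```python
def solve_vpp(boat_drag, sail_thrust, v_lo, v_hi, tol=1e-6):
    """Bisection on boat speed V until sail_thrust(V) = boat_drag(V),
    assuming the residual thrust - drag decreases with speed."""
    v = 0.5 * (v_lo + v_hi)
    for _ in range(100):
        v = 0.5 * (v_lo + v_hi)
        residual = sail_thrust(v) - boat_drag(v)
        if abs(residual) < tol:
            break
        if residual > 0:
            v_lo = v  # thrust exceeds drag: equilibrium is at higher speed
        else:
            v_hi = v  # drag exceeds thrust: equilibrium is at lower speed
    return v
```

With a drag of $0.5V^2$ and a constant thrust of 8, for instance, the routine converges to the equilibrium speed $V = 4$.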

3 Optimization environment

An optimization environment in which the optimal sail plan, trim and appendage configuration is searched for was developed. The method integrates, in an automatic process, a sail parametric CAD model, a computational domain generation module, the RANS analysis and the VPP model, as schematized in Fig. 2. In view of managing a large number of variables, a decision-making algorithm based on the Simplex approach [15] is currently adopted.
The performance of the design tool is strictly related to the capability of the geometric module to propose the widest possible range of candidates and to offer a large set of design variables. The parametric model that fulfils such requirements is described below.

Fig. 2. Scheme of the optimization procedure.



3.1 Parametric CAD model

The selected strategy to parameterise the computational domain is based on a parametric CAD geometry update and on CFD mesh regeneration. The software chosen to support this procedure is FreeCAD, a general purpose open-source parametric 3D CAD modeller. FreeCAD makes heavy use of open-source libraries such as OpenCascade, Coin3D and Qt. The program itself can also be used as a library by other programs and can be totally controlled by Python scripts in command line mode, without the GUI. The set of adopted geometric parameters is made available to the optimization algorithm from a Python script and used as design variables. The script is also in charge of exporting the model in the Stereolithography (STL) format, which is the format required by the selected open-source CFD mesh generator.
The modelled sail plan consists of a single mast/mainsail configuration. The CAD parameters were selected with the aim to investigate the largest possible range of geometries. Traditional sail plans, wing masts or rigid sails with a small portion of flexible sail can be generated. The sail is built as a loft surface through a foot curve, an arbitrarily positioned intermediate curve and a head curve, which are used as control sections. The luff curve is used as a guide. In a similar manner, the mast is generated from three section geometries at the same sail stations. The planform is controlled by the reference surface, aspect ratio, taper ratio and by other parameters that give the possibility to investigate any kind of shape. The examples in Fig. 3 give a sense of the flexibility of the parametric model. Table 1 reports the list of implemented parameters.

Fig. 3. Examples of sail planforms that can be generated by the parametric CAD module.

Sail sections are modelled by cubic Bezier curves. The first point of the control polygon is connected to the mast luff; the last one coincides with the leech of the sail. The four coordinates of the two intermediate control points are parameters of the geometry (polylines in Fig. 4). The mast sections are generated by spline curves controlled by three parameters: the tangent tension at the leading edge, the tangent tension at the luff point and the angle between the latter tangent and the mast chord. The curve is mirrored with respect to the chord to guarantee the symmetry of the mast section. The tangent continuity at the leading edge is assured by positioning the second control point on a line orthogonal to the chord.
The mast spanner angle and the three sail section angles are setting parameters. The input reference surface area is kept unchanged: after the geometry creation, the final sail area is measured and the loft surface is cut in order to restore the required value.

Fig. 4. Examples of mast/sail sections that can be generated by the parametric CAD module.
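A sail section built on a cubic Bezier curve can be evaluated directly from its four control points; a minimal sketch (the control point values sampled at the end are purely illustrative, with P0 on the luff and P3 on the leech as described above):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate the Bernstein form of a cubic Bezier at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Sample a hypothetical section from luff (t = 0) to leech (t = 1):
section = [cubic_bezier((0.0, 0.0), (0.3, 0.12), (0.7, 0.10), (1.0, 0.0), i / 20)
           for i in range(21)]
```

Exposing the four coordinates of P1 and P2 as design variables, as the paper does, lets the optimizer shape the section camber while the endpoints remain attached to mast and leech.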

At every optimization iteration the CAD model is updated according to the selected combination of parameters, a new computational domain is generated and the RANS computation is restarted on the new geometry. The evaluation of a single design point of a non-separated aerodynamic solution (which in general involves four or five iterations between the CFD analysis and the boat analytical model) is generally performed in around 15 minutes running in parallel on a workstation with 20 CPUs.
Table 1. Sail plan geometric parameters.

Sail reference surface Mast spanner angle


Aspect Ratio Mast luff curve camber
Taper ratio Mast sections chord (3 var.)
Mid-section position (% of mast) Mast sections leading edge tangents tensions (3 var.)
Mid-section sail chord Mast sections trailing edge tangents tensions (3 var.)
Sail angle respect boat symmetry Mast sections trailing edge angles (3 var.)
Sail twist Sail sections control points coordinates (12 var.)
Twist between foot and mid-section Sail foot and head chords vertical angle (2 var.)

4 Conclusions

A numerical optimization environment for catamaran sail plans and appendages, coupling a VPP based on analytical models with a sail RANS computation, was developed. The analytical formulations used to model the hull and appendage forces were implemented as independent functions and coupled to the sail aerodynamic solution to solve the equilibrium system of equations of the boat in an iterative procedure. This procedure constitutes the VPP module that estimates the performance of the selected geometric configuration in terms of boat VMG. The sail parameterization strategy is based on the generation of a parametric CAD model and on computational domain remeshing. The geometric module was developed using the open-source FreeCAD modeller. A very flexible and robust model was built implementing a large number of parameters, giving the optimization procedure the possibility to cover a very wide design space and offering the possibility to select a large number of design variables. The module is integrated in the procedure by a Python script that drives the update of the geometry and exports it in the format suitable for the analysis tool.

Acknowledgments The authors wish to thank Marco Evangelos Biancolini from the university
of Rome “Tor Vergata” for having supported the development of the baseline procedure with the
required commercial software. Special thanks are also reserved to Agostino De Marco from uni-
versity of Naples “Federico II” for having supported the development of the analytical models
used in the VPP.

References

1. Claughton, A.R., Shenoi, R.A., Wellicome, J.F.: Sailing Yacht Design: Theory. Addison Wesley Longman (1998).
2. Fallow, J.B.: America’s Cup sail design. J. Wind Eng. Ind. Aerodyn. 63, 183–192 (1996).
3. Yoo, J., Kim, H.T.: Computational and experimental study on performance of sails of a yacht.
Ocean Eng. 33, 1322–1342 (2006).
4. Viola, I.M.: Downwind sail aerodynamics: A CFD investigation with high grid resolution.
Ocean Eng. 36, 974–984 (2009).
5. Ciortan, C., Guedes Soares, C.: Computational study of sail performance in upwind condition.
Ocean Eng. 34, 2198–2206 (2007).
6. Cirello, A., Mancuso, A.: A numerical approach to the keel design of a sailing yacht. Ocean
Eng. 35, 1439–1447 (2008).
7. Deshpande, A., Riehle, D.: The Total Growth of Open Source. In: Open Source Development,
Communities and Quality. pp. 197–209. Springer US, Boston, MA (2008).
8. J.E. Kerwin: A velocity prediction program for ocean racing yachts (revised February 1978).
(1983).
9. Philpot, A.B.: Developments in VPP Capabilities. In: Yacht Vision. , Auckland, New Zealand
(1994).
10. Philpott, A.B., Sullivan, R.M., Jackson, P.S.: Yacht velocity prediction using mathematical
programming. Eur. J. Oper. Res. 67, 13–24 (1993).
11. Insel M. and Molland A.: An investigation into resistance components of high speed dis-
placement catamarans. Transaction of the Royal Institute of Naval Architects, (134):1 – 20,
(1992).
12. Abbott, H., Von Doenhoff, A.E.: Theory of Wing Sections: Including a Summary of Airfoil
Data. Dover Publications, New York (1959).
13. University of Illinois: UIUC wind tunnel data on the web, http://m-
selig.ae.illinois.edu/pd.html.
14. Drela, M.: XFOIL: An Analysis and Design System for Low Reynolds Number Airfoils. Pre-
sented at the Conference on Low Reynolds Number Airfoil Aerodynamics (1989).
15. Nelder J. A. and Mead R.: Simplex method for function minimization. The Computer Jour-
nal, 7:308 – 313 (1965).
A reverse engineering approach to measure the
deformations of a sailing yacht

Francesco DI PAOLA1, Tommaso INGRASSIA2*, Mauro LO BRUTTO3 and
Antonio MANCUSO2
1 DARCH, Università di Palermo, viale delle Scienze, 90128 Palermo, Italy
2 DICGIM, Università di Palermo, viale delle Scienze, 90128 Palermo, Italy
3 DICAM, Università di Palermo, viale delle Scienze, 90128 Palermo, Italy
* Corresponding author. Tel.: +39-09123897263. E-mail address:
tommaso.ingrassia@unipa.it

Abstract In this work, a multidisciplinary experience aimed at studying the
permanent deformations of the hull of a regatta sailing yacht is described. In
particular, a procedure to compare two different surfaces of the hull of a small
sailing yacht, designed and manufactured at the University of Palermo, has been
developed. The first surface represents the original CAD model, while the second
one has been obtained by means of a reverse engineering approach. The reverse
engineering process was performed through an automatic close-range photogrammetry
survey, which allowed very accurate measurements of the hull to be obtained, and a
3D modelling step with the well-known 3D computer graphics software Rhinoceros.
The reverse engineering model was checked through two different procedures
implemented in the graphical algorithm editor Grasshopper. The first procedure
compares the photogrammetric measurements with the rebuilt surface, in order to
verify whether the reverse engineering process has led to reliable results. The
second one was implemented to measure the deviations between the original CAD
model and the rebuilt surface of the hull. This procedure makes it possible to
highlight any permanent deformation of the hull due to errors during the
production phase or to excessive loads during use. The obtained results
demonstrate that the developed procedure is very efficient and able to give
detailed information on the deviation values of the two compared surfaces.

Keywords: reverse engineering; close range photogrammetry; CAE tools; sailing
yacht; generative algorithms.

© Springer International Publishing AG 2017 555


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_56
556 F. Di Paola et al.

1 Introduction

After a boat has been constructed and used, differences in shape and
dimensions can occur due to production defects and/or excessive loads during use.
Measuring these differences is a very important task, because they have a direct
impact on boat speed, stability, strength and efficiency [1].
Two different approaches can be used to reconstruct the real hull shape of a
boat: a direct method and an indirect (or non-contact) one [2]. With the first
approach, based on manual measurement tools, direct contact with the boat is
needed. This kind of method, even if inexpensive and rather flexible,
usually does not produce very accurate results, especially for large
objects. The indirect approach, instead, does not require any contact between the
measurement tools and the boat hull [3]. The most common techniques used for
non-contact reconstruction of 3D objects [4-8] are based on laser scanning
systems and photogrammetry methods.
Photogrammetry allows the size and shape of an object to be determined by
analyzing recorded images. In particular, close-range photogrammetry is the
technique usually employed in reverse engineering processes [9]. Many studies have
demonstrated that close-range photogrammetry is a very accurate and low-cost
method and that, in many applications, its performance is comparable to that of
laser scanning methods and coordinate measuring arms [10-11]. For these reasons,
close-range photogrammetry, also thanks to recent computing, technological and
software improvements, has become more and more widespread and is largely used
in different application fields (such as structural monitoring, engineering and
manufacturing, cultural heritage, quantifying landform change, etc.).
In this paper a reverse engineering process has been carried out to reconstruct
the shape of the hull of a small sailing yacht, with the aim of measuring its
deformations after several regattas. The main objective of the work is to
demonstrate that deformation measurements can be made in a very fast and accurate
way through automatic digital close-range photogrammetry and automatic processes
developed using Grasshopper, an advanced visual programming environment for
Rhinoceros 3D. Close-range photogrammetry has been used to measure the real
shape of the hull with an accuracy of about ±0.1 mm; then, a 3D modeling process
has been performed to reconstruct the hull’s surface. Two different procedures
have been implemented in Grasshopper to check the 3D model against the
photogrammetric measurements and to compare the 3D model with the original CAD
model.
A reverse engineering approach to measure … 557

2 Case study

A small sailing yacht (i.e. a dinghy), designed and manufactured at the
University of Palermo, has been surveyed (Fig. 1 left). The hull shape of the
analyzed sailing yacht was defined following a simultaneous design approach
[12-13]. In particular, CFD codes and analytical resistance prediction models
were used simultaneously in order to find the optimal hull shape for a given
sailing condition (close-hauled in a breeze). The CAD model of the hull was first
created by setting typical design ratios (e.g. prismatic coefficient,
beam-to-length ratio) and then refined by identifying two mutually perpendicular
sets of curves (Fig. 1 right), the so-called waterlines and sections, used to
define the hull surface.

Fig. 1 – Crew over the wing of LED during regattas (left); yacht lines plan (right)
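The design ratios mentioned above are simple quotients; a minimal sketch for reference (the sample values are made up, not the actual figures of this dinghy):

```python
# Design-ratio helpers; all numeric values below are hypothetical.

def prismatic_coefficient(displaced_volume, max_section_area, waterline_length):
    # Cp = V / (Ax * Lwl): how much volume the hull carries toward its ends
    return displaced_volume / (max_section_area * waterline_length)

def beam_to_length_ratio(waterline_beam, waterline_length):
    return waterline_beam / waterline_length

cp = prismatic_coefficient(0.15, 0.06, 4.5)   # m^3, m^2, m
blr = beam_to_length_ratio(1.4, 4.5)          # m, m
```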

3 Reverse engineering model: the photogrammetric survey

The use of close-range photogrammetry in reverse engineering and manufacturing
applications is not new, because this technique is one of the main approaches to
obtaining accurate, precise and reliable 3D data. Applications are typically
carried out in scenarios where a measuring accuracy in the range of a few tens of
micrometers to tenths of a millimeter is required and where the object size is in
the range of 1-10 m [14]. In this context close-range photogrammetry is a very
powerful technology that allows the development of fully automated pipelines and
achieves performance comparable with active range sensors (e.g. short-range laser
scanners).
The reverse engineering surface of the hull was obtained by an automatic
close-range photogrammetry survey following the typical photogrammetric workflow:
camera network design, camera calibration, automatic image orientation and
point measurement by coded and non-coded targets, and accuracy evaluation.
For the hull’s survey, a very strong convergent camera network was planned by
turning around the hull along three different paths; every path was planned at a
different height but at the same average distance from the hull (about 1.5 meters)
(Fig. 2). The images from the highest level were acquired three times, once with
the camera in landscape orientation and twice with the camera rotated to the
portrait orientation (±90°).

Figure 2 - Image acquisition (left) and camera network (right)

In all, 169 images of the hull were taken with a Nikon D5100 digital camera
equipped with a 35 mm Nikkor AF-S f/1.8G fixed focal length lens; the camera has
a CMOS sensor with a size of 23.6 mm x 15.6 mm, a pixel size of 4.8 μm and an
effective resolution of 4928 pixels x 3264 pixels. The image scale was 1/43 and
the coverage of each image was about 1.0 m x 0.7 m. Since the camera focal length
was 35 mm, each pixel corresponded to about 0.21 mm in the object space.
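The scale figures quoted above can be cross-checked in a few lines (the camera distance is taken as the 1.5 m average named earlier):

```python
# Cross-check of the photogrammetric scale figures quoted in the text.
focal_mm = 35.0
distance_mm = 1500.0        # average camera-to-hull distance (~1.5 m)
pixel_um = 4.8
sensor_mm = (23.6, 15.6)

scale = distance_mm / focal_mm                 # ~42.9, i.e. image scale ~1/43
pixel_object_mm = pixel_um / 1000.0 * scale    # ~0.21 mm per pixel on the hull
coverage_m = (sensor_mm[0] * scale / 1000.0,   # ~1.0 m x 0.7 m image footprint
              sensor_mm[1] * scale / 1000.0)
```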
The photogrammetric survey was used to measure 23 profiles and many feature
details of the hull. The profiles and the details were indicated by non-coded
targets that were automatically detected and measured by the photogrammetric
system. Their positions were chosen in accordance with the positions of the
profiles used to generate the CAD model. About 200 coded targets were also placed
on the hull to automatically orient the images. Ten calibrated scale bars, with
two calibrated distances measured with a computer numerical control machine to an
accuracy of ±20 μm, were placed along the edge of the hull; the calibrated scale
bars were used to scale the photogrammetric model and to check the accuracy of
the survey. The use of calibrated scale bars is very frequent in industrial
applications to scale the photogrammetric model, because it allows accuracies of
the order of a few tens of micrometers to be obtained [15].
The accuracy evaluation of the photogrammetric project shows a maximum image
residual of 0.5 pixels and a scale-bar precision with an RMS of ±0.024 mm; the
independent check performed with the calibrated distances not used for orientation
shows an RMS value of ±0.018 mm. These results confirm the metric accuracy
of the photogrammetric measurement.
The points along the profile were used to generate the reverse engineering sur-
face, while the points over the hull were used to check the 3D model obtained dur-
ing the 3D modeling phase.

4 Reverse engineering model: the 3D modeling phase

The preliminary phase of acquisition of the hull geometry provided a numerical
model, which represented the basis for the formulation of the mathematical model.
The parametric NURBS surface was generated from controllable and adjustable
plane curves.
It was structured with specific morphological characteristics that allowed both
a comparison with the reference project surface and an assessment of the punctual
variations. The alignment with the project CAD model was possible after
designating the stern area as the reference system. During the post-processing
phase, the acquired points were imported into the well-known NURBS modelling
software Rhinoceros. The acquired points were collected and organised in several
layers depending on their origin (transom, transversal sections, gunwale line,
keel line) to ensure more efficient management and accurate control [16-18]. The
points acquired during the survey, which were necessary for the creation of the
main curves of the surface, were not evenly distributed. For this reason, a
rigorous process of optimisation and editing of the data was fundamental (Fig. 3).
Subsequently, the organised geometrical data were processed in order to generate
the isocurves fitting the acquired points. The generated main curves represented
the geometrical-spatial structure of the hull surface.
The next step was to generate a loft surface that interpolated, with small varia-
tions, the grid of points acquired during the survey.

Figure 3 – Creation of the section profiles and the lofted surface of the hull
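The lofting step can be pictured with a minimal stand-in (Rhinoceros builds a true NURBS loft; the toy sketch below only blends adjacent section curves linearly to illustrate the idea of turning sections into a surface grid):

```python
import numpy as np

def loft(sections, n_between=5):
    """Blend a list of section curves into a surface grid.
    sections: list of (m, 3) point arrays, one per transversal section,
    all sampled with the same number of points m.
    Returns an (n_rows, m, 3) array of surface points."""
    rows = []
    for a, b in zip(sections[:-1], sections[1:]):
        for t in np.linspace(0.0, 1.0, n_between, endpoint=False):
            rows.append((1.0 - t) * a + t * b)   # linear blend between sections
    rows.append(sections[-1])
    return np.stack(rows)
```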

5 The analysis of the surface’s hull

To analyze the hull surface, two customized procedures were developed with
the Rhinoceros plugin Grasshopper [19].
This approach allowed the implementation of a workflow that made the modelling
process of the hull parametric and enhanced the analysis and diagnostic tools
already existing within the software. It also helped in understanding the
geometric properties of the created surfaces (typology and class, curve
evolution, apparent boundary, construction origins).

The first procedure allowed the user to calculate the deviations of the
points/targets from the generated surface. This was possible after loading the
geometries of the hull surface and the points/targets of the photogrammetric
survey (input data) into the working environment (Fig. 4). The results were the
maximum and minimum deviation values. For this case study, the set parameters
provided output results with a maximum variation of the order of a tenth of a
millimetre, thus validating the congruity of the created surface with the
initially acquired data.
Once the reliability of the survey data was verified, the study proceeded with
the formulation of a second procedure with a more complex structure. This
procedure was capable of comparing the project CAD model with the one from the
photogrammetric survey, determining the surface variations as well as the
possible deformities resulting from the molding [20].

Figure 4 – Definition of the first algorithm for the calculation of the deviations of the
points/targets from the generated surface, providing max and min values as results

The structure of the procedure is shown in figure 5 and its workflow is the fol-
lowing:
1. Input: during the initial sequence, both the project and the photogrammetric
CAD models are loaded. Before proceeding with the data elaboration, the operator
adjusts the subdivision of the surface mesh in the u and v directions
(specifically, a subdivision of 200 fractions in the two directions was chosen).
The function linked to this command defines the isocurves in the two directions
describing the input surfaces.
2. Distance determination: during this phase, it is determined which shape is
used as the reference and which is the one to compare. Then, a structured grid of
points is created from the intersection of the isocurves of the reference surface
(specifically, a 40,000-point grid), and the distance of each point from the
surface to compare is determined using a vector. A logic function is then
introduced in order to determine the sign of the value of the measured distance.
From an operational point of view, an auxiliary plane was used, tangent to the
reference surface and orthogonal to the minimum distance segment of the point on
the surface.
3. Max and min deviation: maximum and minimum deviation values are extracted
and used as the extremes of a variation range of the two surface samples.
4. Remapping for colour gradient: a specific colorimetric value is assigned
to each distance value.
5. Output: a legend with a range of pre-selected values is generated.
6. Query: the query block represents the most important element of the
procedure. It includes a logic function that determines the punctual variations
between the surfaces, starting from their respective u-v coordinates.
This procedure has made it possible to calculate the deviations of the
photogrammetric model compared with the project one; the variations are of a few
millimeters (40,000-point grid; maximum deviation value: 4.26 mm; minimum
deviation value: -3.60 mm) (Figs. 5-6).
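The core of the comparison (the grid sampling, signed distance and extremes of steps 1-3, plus the remapping of step 4) can be re-sketched in plain numpy, with the two hulls replaced by analytic height fields over the u-v unit square; this is an illustrative stand-in, not the Grasshopper definition itself:

```python
import numpy as np

def compare_surfaces(z_ref, z_cmp, n=200):
    """Sample both surfaces on an n x n u-v grid and return the signed
    deviation field plus its extremes (positive = above the reference)."""
    u, v = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))
    dev = z_cmp(u, v) - z_ref(u, v)
    return dev, float(dev.max()), float(dev.min())

def remap(dev, dmin, dmax):
    # Map each deviation into [0, 1] for a colour-gradient legend
    return (dev - dmin) / (dmax - dmin)
```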

Figure 5 - Definition of the second procedure for the comparison of the project CAD model with
the photogrammetric model; scheme of the operational phases

Figure 6 – Determination of the punctual variation between the surfaces. Below, maximum and
minimum deviation values of the real hull shape with regard to the CAD model

6 Conclusion

The obtained results demonstrate that the developed procedure is very efficient
and able to give detailed information on the deviation values of two compared
surfaces. As regards the case study, remarkable differences have been noted on
the real hull surface compared to the initial CAD model. These deviations could
be due to excessive or asymmetric load conditions. The developed surface
comparison algorithm can be used with data coming from different reverse
engineering systems (laser scanner, DIC, moiré fringe based, etc.), thus allowing
a very large field of use. With regard to experiences comparable to the case
study, the experimental work proposes the use of customized algorithmic functions
in the geometric analysis process that optimize the punctual query of the output
data, ensuring tighter control of hull deformations.
The implemented process, of course, is not limited exclusively to sailing
yachts; it can represent a very useful tool to measure, in a simple, accurate and
parametric way, any dimensional and shape differences between reverse engineering
acquired data and the CAD models of any object.

References

1. Brewer, T., Understanding Boat Design, 4th Edition, International Marine, 1994
2. Ahmed, Y.M., Jamail, A.B., Yaakob, O.B., Boat survey using photogrammetry method
(2012) International Review of Mechanical Engineering, 6 (7), pp. 1643-1647.

3. Koelman, H.J., Application of a photogrammetry-based system to measure and re-engineer


ship hulls and ship parts: An industrial practices-based report (2010) CAD Computer Aided
Design, 42 (8), pp. 731-743.
4. Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D., Numerical study of the components po-
sitioning influence on the stability of a reverse shoulder prosthesis (2014) International Jour-
nal on Interactive Design and Manufacturing, 8 (3), pp. 187-197
5. Martorelli, M., Ausiello, P., Morrone, R., A new method to assess the accuracy of a Cone
Beam Computed Tomography scanner by using a non-contact reverse engineering technique
(2014) Journal of Dentistry, 42 (4), pp. 460-465
6. Cerniglia, D., Montinaro, N., Nigrelli, V., Detection of disbonds in multi-layer structures by
laser-based ultrasonic technique, (2008) Journal of Adhesion, 84 (10), pp. 811-829.
7. Cerniglia, D., Djordjevic, B.B., Ultrasonic detection by photo-EMF sensor and by wideband
air-coupled transducer, 2004, Research in Nondestructive Evaluation, 15 (3), pp. 111-117
8. Barone, S., Paoli, A., Razionale, A.V., Multiple alignments of range maps by active stereo
imaging and global marker framing, (2013) Optics and Lasers in Engineering, 51 (2), pp.
116-127
9. Atkinson, K., Close range photogrammetry and machine vision, Whittles Publishing, 2001
10. Skarlatos, D., Kiparissi, S., Comparison of laser scanning, photogrammetry and SfM-MVS
pipeline applied in structures and artificial surfaces, (2012), In Proceedings of ISPRS Annals
of the Photogrammetry, Remote Sensing and Spatial Information Sciences
11. Cuesta, E., Alvarez, B.J., Sanchez-Lasheras, F., Fernandez, R.I., Gonzalez-Madruga, D., Fea-
sibility Evaluation of Photogrammetry versus Coordinate Measuring Arms for the Assembly
of Welded Structures, (2012), Advanced Materials Research, Vol 498, pp. 103-108, Apr. 2
12. Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D., A multi-technique simultaneous ap-
proach for the design of a sailing yacht, (2015) International Journal on Interactive Design
and Manufacturing, DOI: 10.1007/s12008-015-0267-2
13. Cappello, F., Ingrassia, T., Mancuso, A., Nigrelli, V., Methodical redesign of a semitrailer,
2005, WIT Transactions on the Built Environment, 80, pp. 359-369
14. Robson, S., Shortis, M., Engineering and manufacturing. In Applications of 3D measurement
from images. Whittles Publishing, edited by Fryer, J., Mitchell, H., Chandler, J., (2007), pp.
65-101
15. Luhmann, T., Close range photogrammetry for industrial applications. ISPRS Journal of
Photogrammetry and Remote Sensing, 65 (2010), pp. 558-569.
16. Di Paola F, Inzerillo L, Santagati C, (2013). “Image-based modeling techniques for Architec-
tural heritage 3d digitalization: Limits and potentialities” International Symposium -CIPA
XXIV th. In: International Archives of the Photogrammetry, Remote Sensing and Spatial In-
formation Sciences, vol. XL-5/W2, p. 550-560, P. Grussenmeyer, Strasbourg. ISSN: 2194-
9034, DOI: 10.5194/isprsarchives-XL-5-W2-555-2013.
17. Di Paola F., Pizzurro MR, Pedone P., 2013. Digital and interactive Learning and Teaching
methods in descriptive Geometry. In Procedia - Social and Behavioral Sciences journal, 106
Published by Elsevier Ltd., pp. 873 – 885, ISSN: 1877-0428.
18. Di Paola F., Pedone P., Inzerillo L., Santagati C., 2014. Anamorphic Projection: Analogi-
cal/Digital Algorithms. In the International Journal Nexus Network Journal Architecture and
Mathematics, N.16, Kim Williams Books, Turin. ISSN (online): 1590-5896,
http://www.nexusjournal.com.
19. Tedeschi A., 2014. AAD Algorithms-Aided Design. Parametric Strategies Using Grasshop-
per. Potenza: Le Penseur, 2014, 495 p. ISBN 978-88-95315-30-0.
20. Dimcic M., 2011. Structural Optimization of Grid Shells Based on Genetic Algorithms.
PhD Thesis, Institut für Tragkonstruktionen und Konstruktives Entwerfen der Universität
Stuttgart, 2011.
A novel design of cubic stiffness for a Nonlinear
Energy Sink (NES) based on conical spring

Donghai QIU1*, Sébastien SEGUY1 and Manuel PAREDES1
1 Institut Clément Ader (ICA), Université de Toulouse, CNRS-INSA-ISAE-Mines Albi-UPS,
3 rue Caroline Aigle, 31400 Toulouse, France
* Corresponding author. Tel.: +33 (0) 5 61 17 11 80; E-mail address: qiu@insa-toulouse.fr

Abstract Mitigation of unwanted vibration is an important issue in the
aeronautics and space fields. Since the emergence of the innovative absorber
called the Nonlinear Energy Sink (NES), increasing attention has been paid to
this promising technique. This absorber is characterized by a secondary mass
highly coupled, via a nonlinear stiffness, to the main structure that needs to be
protected. Mastering the nonlinearity is a key element for obtaining optimum NES
performance. However, it is difficult to implement a cubic stiffness without a
linear part. In this paper, a novel NES design with cubic stiffness and no linear
part is presented. For this, two conical springs are specially sized to provide
polynomial force components with only linear and cubic terms. To counterbalance
the linear term, a negative stiffness mechanism is implemented with two
cylindrical compression springs. A small-sized NES system is developed. To
validate the concept, a load-displacement test is performed, and simulations
under periodic excitation and transient loading are studied. Future developments
will aim at experimental validation and application of the prototype.

Keywords: Nonlinear Energy Sink; cubic stiffness; conical spring; negative
stiffness mechanism; strongly modulated response

1 Introduction

Requirements on vibration mitigation devices become more rigorous as modern
mechanical products are designed to be faster, lighter and more sophisticated.
Particularly in the aeronautics and space fields, absorbers are required to be
light and to cover a broad spectrum of frequencies. Since traditional linear
absorbers can hardly meet these requirements, introducing the nonlinear energy
sink (NES) seems to be a good way of dealing with these issues. This type of
absorber is characterized by a secondary mass highly coupled, via a nonlinear
stiffness, to the main structure that needs to be protected [1]. It was first
investigated by Gendelman and Vakakis, who demonstrated that a system with a
strongly nonlinear element is able to absorb and dissipate energy efficiently
from the main structure [2].

© Springer International Publishing AG 2017 565

B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_57
566 D. Qiu et al.

With this performance, a wide variety of NES applications can be envisaged in
space and aero-structures, vibrating machinery, buildings and vehicle suspensions.
The concept of a NES relies on a vanishing linear stiffness, giving rise to an
essentially nonlinear restoring force. Having no natural frequency, the NES can
thus adapt itself to the frequency of the primary system. A targeted energy
transfer (TET) can then occur in an irreversible fashion [3]. Depending on the
type of nonlinearity, NESs can be categorized as cubic, vibro-impact, piecewise
and rotational NESs. As far as the cubic NES is concerned, it has been shown that
this configuration is most effective at moderate-energy regimes. However, this
NES has not been applied broadly in engineering practice. One key issue is that
it is difficult to implement a cubic stiffness without a linear part. In recent
approaches, the essential cubic stiffness was mostly realized by employing a
construction of two transverse linear springs with no pretension [4]. However,
this type may not be suitable for practical application due to its large size;
in addition, a relatively weak nonlinear stiffness exists at the beginning of
the extension, which makes the cubic term close to a linear one.
Another idea which has emerged recently is to use a nonlinear spring in the
translational direction. For example, it has been shown in [5] that an
elastomeric spring with a pyramidal shape can provide a nearly cubic restoring
force, yet this approach may be limited by the large displacements required of
the NES. Therefore, practically implementing a cubic stiffness element is still
an important issue to broaden the application of the NES.
In this article, a novel NES design providing a strongly cubic stiffness is
presented. The structure is as follows: section 2 is devoted to the theoretical
design, including the conception of the conical springs and the negative
stiffness mechanism; in section 3, the assembly of a small-sized NES system is
presented. In the next section, a simulation validation of this NES system under
periodic excitation and transient loading is studied. Finally, concluding remarks
and future developments are addressed.

2 The theoretical design

2.1 Conical spring


To provide strong nonlinearity and avoid buckling at large deflections, two
telescoping conical springs with a constant pitch are adopted. The behavior of a
conical spring with a constant pitch can be divided into a linear and a nonlinear
part. In the linear phase, the largest coil is free to deflect like the other
coils, so the load-deflection relation is linear and the stiffness can be
expressed as:

R = \frac{G d^4}{2 n_a (D_1^2 + D_2^2)(D_1 + D_2)}    (1)
A novel design of cubic stiffness for a Nonlinear … 567

In the nonlinear regime, the first elementary part of the largest coil has
reached its maximum physical deflection. It then becomes a non-active element of
the spring. During this second regime of compression, the number of active coils
continuously decreases, leading to a gradual increase of the spring stiffness.
The load-deflection relation is as follows:

\Delta(P) = \frac{2 P D_1^4 n_a}{G d^4 (D_2 - D_1)} \left[ \left( 1 + \left( \frac{D_2}{D_1} - 1 \right) \frac{n_f}{n_a} \right)^4 - 1 \right] + (L_a - L_s) \left( 1 - \frac{n_f}{n_a} \right)    (2)

A detailed description of Eq. (1) and Eq. (2) can be found in [6].
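Eqs. (1)-(2) can be evaluated numerically as below; this is a sketch based on the formulas as reconstructed here, with arbitrary sample values and the symbol n_f following the notation of [6]:

```python
def linear_rate(G, d, na, D1, D2):
    # Eq. (1): spring rate while all coils are still active
    return G * d**4 / (2.0 * na * (D1**2 + D2**2) * (D1 + D2))

def nonlinear_deflection(P, G, d, na, nf, D1, D2, La, Ls):
    # Eq. (2): deflection in the nonlinear regime (nf per the notation of [6])
    bracket = (1.0 + (D2 / D1 - 1.0) * nf / na) ** 4 - 1.0
    return (2.0 * P * D1**4 * na) / (G * d**4 * (D2 - D1)) * bracket \
        + (La - Ls) * (1.0 - nf / na)

# Sample evaluation (steel shear modulus, millimetre units; illustrative only)
R = linear_rate(G=81500.0, d=2.0, na=7.0, D1=15.0, D2=30.0)
```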
To benefit from the nonlinear behavior of the conical springs, the symmetrical
connection of the springs shown in Fig. 1 is proposed.

Fig.1 Symmetrical connecting type Fig.2 Pre-compressing at transition point

However, this configuration has the disadvantage that the stiffness curve is
piecewise. The behavior of the spring is still divided into a linear and a
nonlinear part. To skip the linear phase, a method of pre-compressing at the
transition point is adopted, as shown in Fig. 2. By changing the initial origin,
the two conical springs can work simultaneously in the linear and the nonlinear
regime, respectively (Fig. 3).
Combining the two springs’ curves, the composed stiffness curve in Fig. 4 is
obtained, and it can be observed that the new curve is smooth and no longer
piecewise.

Fig.3 Conical spring characteristic Fig.4 Pre-compressed characteristics


To analyse the internal polynomial components, a polynomial fitting is adopted
and the new load-deflection relation is expressed as follows:

P = a_1 x + a_2 x^2 + a_3 x^3    (3)

In this polynomial, the linear term a_1 x can hardly be eliminated, owing to the
superposition of the linear and nonlinear parts, while the square term a_2 x^2
can be made small.
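The fitting step behind Eq. (3) is a plain least-squares problem; a short sketch on synthetic data standing in for the composed spring curve:

```python
import numpy as np

# Synthetic load-deflection samples standing in for the composed spring curve
x = np.linspace(0.0, 10.0, 50)
P = 1.5 * x + 0.02 * x**2 + 0.08 * x**3

# Fit P = a1*x + a2*x^2 + a3*x^3 (no constant term) by linear least squares
A = np.vstack([x, x**2, x**3]).T
a1, a2, a3 = np.linalg.lstsq(A, P, rcond=None)[0]
```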

Fig. 5 shows the characteristics of the composed components. As the coefficient
of the square term is large, the sum of the cubic and linear terms cannot fit the
original curve well. In addition, it is hard to decrease the linear term while
keeping the cubic term within the fixed range. Therefore, an optimization step
is introduced next.

2.2 Optimization design


To describe the stiffness curve, the objective function of the conical spring is
set as the following piecewise expression:

F = \begin{cases} k_0 x & (x \le s_t) \\ a_3 (x - s_t)^3 + a_2 (x - s_t)^2 + a_1 (x - s_t) + p_t & (x > s_t) \end{cases}    (4)

where k_0 is the rate of the linear phase, and p_t and s_t correspond
respectively to the force and the displacement at the transition point.
The optimization model consists of the following parts: (a) Objective: minimizing
the absolute value of the square coefficient a_2; (b) Variables: the mean
diameter of the smallest coil D_1 and the mean diameter of the largest coil D_2
(these two values are the main factors determining the nonlinearity), the coil
diameter d, the number of active coils n_a, and the free length L_0;
(c) Constraints: the cubic coefficient a_3, the deflection of the transition
point s_t, and the linear stiffness k_0; (d) Optimization method: the
Multi-Island Genetic Algorithm (MIGA). This algorithm has the advantage of
handling problems with huge design spaces.
After optimization, a new polynomial component is obtained and presented in
Fig. 6. It can be observed that the curve of the cubic and linear terms is close
to the original one, meaning that the contribution of the square term is small
enough to be almost neglected.

Fig.5 Components before optimization Fig. 6 Components after optimization
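The optimization setup can be sketched as follows, with a plain random search standing in for MIGA and a toy model linking the coil diameters to the fitted coefficients; everything below (bounds, constraint value, coefficient model) is illustrative, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitted_coefficients(D1, D2):
    # Toy stand-in for "build the composed spring curve and fit Eq. (3)":
    # pretend a2 vanishes near a diameter ratio of about 2.22 (hypothetical)
    ratio = D2 / D1
    a2 = (ratio - 2.0) ** 2 - 0.05
    a3 = 0.1 * ratio
    return a2, a3

best, best_obj = None, float("inf")
for _ in range(2000):
    D1 = rng.uniform(5.0, 20.0)        # variable: smallest coil mean diameter
    D2 = rng.uniform(D1 + 1.0, 40.0)   # variable: largest coil mean diameter
    a2, a3 = fitted_coefficients(D1, D2)
    if a3 < 0.15:                      # constraint on the cubic coefficient
        continue
    if abs(a2) < best_obj:             # objective: |a2| -> min
        best, best_obj = (D1, D2), abs(a2)
```

A genetic algorithm such as MIGA explores the same space with populations and migration between "islands" rather than independent random draws.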

2.3 Negative stiffness


In order to counteract the linear term of the conical springs and obtain a pure
cubic force-displacement relation, adding a new term with negative stiffness in
the translational direction seems to be a way forward. For this, a negative
stiffness mechanism is implemented with two cylindrical compression springs, and
the structure is shown in Fig. 7.

Fig.7 Negative stiffness mechanism

Here l is the free length of the linear spring, P is the pre-compressing force,
l_p is the pre-compressing length, and u is the displacement of the NES.
After pre-compressing the springs by the length l_p, the force-displacement
relationship based on a Taylor expansion is given as:

f = \frac{2P}{l} u - \frac{k l + P}{l^3} u^3    (5)
Superposing this force with that of the conical springs in the translational
direction, the composed force can be expressed as:

P_m = \left( a_1 - \frac{2 k l_p}{l} \right) x + \left( a_3 + \frac{k (l + l_p)}{l^3} \right) x^3    (6)

According to this equation, if we set a_1 = 2 k l_p / l, the linear component is
canceled out by the negative stiffness mechanism. The equation is then left with
the pure cubic term, whose coefficient becomes slightly larger.
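The cancellation condition can be checked numerically against Eq. (6) as reconstructed here (all sample values are arbitrary):

```python
# Check that choosing k from a1 = 2*k*lp/l removes the linear term of Eq. (6).
l, lp = 60.0, 3.0        # spring length and pre-compression (mm, illustrative)
a1, a3 = 2.0, 0.05       # conical-spring polynomial coefficients (illustrative)
k = a1 * l / (2.0 * lp)  # negative-stiffness spring rate from the condition

lin = a1 - 2.0 * k * lp / l      # linear coefficient of Pm -> 0
cub = a3 + k * (l + lp) / l**3   # cubic coefficient of Pm, slightly above a3

def Pm(x):
    # Composed force, Eq. (6)
    return lin * x + cub * x**3
```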

3 Assembly of NES system

Based on the proposed methods, a small-sized NES system providing a strongly
nonlinear stiffness is designed; the assembly drawing is presented in Fig. 8. The
component parts are the spherical plain bearing, linear guide, conical springs,
linear springs and NES mass, where x and y correspond respectively to the
displacement of the primary system and of the NES mass. It is important to
highlight that the spring distance to the NES mass is adjustable so as to reach
the suitable force shape.
Fig.8 Assembly of NES system
570 D. Qiu et al.

3.1 Spring test


To characterize the performance of the conical spring, an identification study is
performed. The manufactured conical spring is presented in Fig. 9, and the test
equipment is shown in Fig. 10. Five conical springs are tested, and the results
are reported in Fig. 11. As expected, the linear part of the experimental curves
fits the theoretical curve perfectly, and the nonlinear part is close to the theoretical
one.

Fig.9 Telescoping conical spring Fig.10 Compression spring test

To make sure the conical spring works in the compression state, the maximum
displacement of the NES is limited to the deflection of the transition point. The
corresponding characteristic curve is presented in Fig. 12. It can be seen that the
composed force of the conical spring and the negative stiffness mechanism corresponds
well to the theoretical one, meaning that this novel conception can be extended to
design a strongly cubic NES without linear part.

Fig.11 Stiffness curve of manufactured spring Fig.12 Stiffness curve of NES system

3.2 NES mass calculation


The mass of the NES, m_NES, consists of the main mass, the spherical plain bearing,
the linear spring support base, and the conical spring support base. As the mass of
the NES is very small, the inertia of the springs is no longer negligible and has to
be considered. As a rough approximation, considering the spring as a beam and
neglecting axial inertia, the kinetic energy of the NES mass and linear spring is
written as follows:
T_{NES} = \int_{0}^{l_0} \frac{1}{2}\,\rho_s \left(\frac{x}{l_0}\,\dot{y}\right)^{2} dx + \frac{1}{2}\, m_{NES}\,\dot{y}^{2}    (7)

where \rho_s = m_l / l_0 is the mass density of the spring, m_l being the spring mass.
Thus the effective mass of a single linear spring can be expressed as
m_{linear} = m_l / 3; the detailed derivation can be found in [4] and [7].
For the single conical spring, the effective mass is written as follows:

m_{conical} = 2 m_c\, \frac{\frac{1}{10}\left(1-\beta^{10}\right) - \frac{1}{3}\beta^{4}\left(1-\beta^{6}\right) + \frac{1}{2}\beta^{8}\left(1-\beta^{2}\right)}{\left(1-\beta^{4}\right)^{2}\left(1-\beta^{2}\right)}, \quad \beta = \frac{R_2}{R_1}    (8)
Based on this, the total mass of the NES can be depicted as:

m_2 = m_{NES} + 2 m_{linear} + 2 m_{conical}    (9)
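A direct transcription of Eqs. (7)-(9) can be sketched as follows. The masses and radii below are hypothetical, and β is kept away from 1 to avoid the removable singularity of Eq. (8) (as β → 1 the conical spring tends to the familiar cylindrical-spring result m_c/3):

```python
# Sketch: effective spring masses and total NES mass, transcribing
# Eqs. (8)-(9). Input masses and radii are hypothetical values.

def m_linear_eff(m_l):
    """Effective mass of a linear spring (classical m_l/3 result)."""
    return m_l / 3.0

def m_conical_eff(m_c, R1, R2):
    """Effective mass of a conical spring, Eq. (8), with beta = R2/R1."""
    b = R2 / R1
    num = (1 - b**10) / 10.0 - b**4 * (1 - b**6) / 3.0 + b**8 * (1 - b**2) / 2.0
    den = (1 - b**4) ** 2 * (1 - b**2)
    return 2.0 * m_c * num / den

def nes_total_mass(m_nes, m_l, m_c, R1, R2):
    """Total NES mass, Eq. (9): main mass + two linear + two conical springs."""
    return m_nes + 2.0 * m_linear_eff(m_l) + 2.0 * m_conical_eff(m_c, R1, R2)

# Near-cylindrical limit: Eq. (8) approaches m_c/3 as beta -> 1
assert abs(m_conical_eff(0.030, 1.0, 0.999) - 0.010) < 1e-4
```

A call such as `nes_total_mass(0.070, 0.012, 0.030, 0.020, 0.012)` then combines the three contributions of Eq. (9) for the chosen (hypothetical) hardware.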

4 Simulation validations

4.1 Dynamical model


Based on the previous tests, the parameters of the NES system are obtained in
Tab. 1. To validate this concept, an analytical study of a harmonically excited
linear oscillator (LO) strongly coupled to a NES is presented.
Tab.1 Parameters of NES
m1 = 9 kg          k2 = 2.33×10^6 N/m^3
m2 = 0.09 kg       c1 = 4 N·s/m
k1 = 3×10^4 N/m    c2 = 0.4 N·s/m

The governing equations of motion of this system are given by:


m_1\,\ddot{x} + c_1\,\dot{x} + k_1 x + c_2(\dot{x} - \dot{y}) + k_2 (x - y)^{3} = k_1 x_e + c_1\,\dot{x}_e
m_2\,\ddot{y} + c_2(\dot{y} - \dot{x}) + k_2 (y - x)^{3} = 0    (10)

where the imposed harmonic displacement is x_e = G \cos \Omega t.
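The two equations of motion of Eq. (10) can be integrated numerically with the Tab. 1 parameters; the sketch below uses a basic fixed-step classical Runge-Kutta integrator (the step size and simulated duration are illustrative choices, not those of the paper):

```python
# Sketch: fixed-step RK4 integration of the LO-NES model, Eq. (10),
# with the Tab. 1 parameters. State z = (x, x', y, y').
from math import cos, sin

m1, m2 = 9.0, 0.09           # kg
k1, k2 = 3.0e4, 2.33e6       # N/m, N/m^3
c1, c2 = 4.0, 0.4            # N*s/m
G, W = 0.0, 0.0              # excitation x_e = G*cos(W*t); G = 0: transient case

def rhs(t, z):
    """Right-hand side of Eq. (10)."""
    x, dx, y, dy = z
    xe, dxe = G * cos(W * t), -G * W * sin(W * t)
    ddx = (k1 * xe + c1 * dxe - c1 * dx - k1 * x
           - c2 * (dx - dy) - k2 * (x - y) ** 3) / m1
    ddy = (-c2 * (dy - dx) - k2 * (y - x) ** 3) / m2
    return (dx, ddx, dy, ddy)

def rk4_step(t, z, h):
    """One classical Runge-Kutta step of size h."""
    a = rhs(t, z)
    b = rhs(t + h / 2, [u + h / 2 * v for u, v in zip(z, a)])
    c = rhs(t + h / 2, [u + h / 2 * v for u, v in zip(z, b)])
    d = rhs(t + h, [u + h * v for u, v in zip(z, c)])
    return [u + h / 6 * (p + 2 * q + 2 * r + s)
            for u, p, q, r, s in zip(z, a, b, c, d)]

# Transient case of Sec. 4.3: G = 0 and x(0) = 5 mm
z, t, h = [5e-3, 0.0, 0.0, 0.0], 0.0, 2e-4
for _ in range(100000):      # 20 s of simulated time
    z = rk4_step(t, z, h)
    t += h
assert abs(z[0]) < 5e-4      # LO displacement has decayed well below 5 mm
```

Setting G to a nonzero amplitude and W near the LO natural frequency reproduces the periodically forced case studied next.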

4.2 Periodic loading


Based on Eq. (10), setting the excitation amplitude to 0.1 mm, the numerical
responses of the NES and the LO are obtained in Fig. 13. A quasi-periodic regime with
a slow evolution of the amplitudes of both oscillators is observed. For the LO, the
amplitude increases and decreases repeatedly in a regular fashion. The
amplitude of the NES can be classified into two levels: a small one corresponding
to the growth of the LO amplitude, and a large one when the LO amplitude decreases.
This alternating regime of strongly modulated response (SMR) proves the jump
phenomenon of the SIM [8]. The instantaneous percentage of energy carried by the NES
and the LO is presented in Fig. 14. It can be observed that there exists a phase of
resonance capture: the energy is entirely transferred to the NES, which quickly
localizes almost 100 percent of the energy of the system.

Fig. 13 SMR with G = 0.1 mm Fig. 14 Instantaneous energy carried by NES and LO

4.3 Transient loading


Introducing the excitation G = 0 and the initial condition x = 5 mm into Eq.
(10), the numerical response is obtained in Fig. 15. It can be observed that, without
the NES, the vibration extinction of the LO follows a natural exponential decrease,
while with the NES it follows a quasi-linear decrease, much faster than the
exponential one, during which the NES vibrates with a large amplitude until the
energy in the LO has been almost completely cancelled. In Fig. 16, the evolution of
the energy and of the percentage of energy present in the NES and the LO is shown.
It can be seen that the energy is transferred irreversibly from the LO to the NES,
and almost 68 percent of the energy is dissipated by the NES.

Fig. 15 Transient response Fig.16 Evolution of the energy in NES and LO

5 Conclusion

In this paper, a novel NES design with cubic stiffness and no linear part is
presented. For this, two conical springs are specially sized to provide a polynomial
force with only linear and cubic terms. To counterbalance the linear term, a
negative stiffness mechanism built from two cylindrical compression springs is
proposed. A small-sized NES system providing strongly nonlinear stiffness is
developed, in which the spring distance is adjustable so as to reach the suitable
force shape. To validate the concept, the load-displacement relation test is
performed, and the simulation of a harmonically excited linear oscillator coupled
to this type of NES is studied. The results show that this structure can produce the
pure cubic stiffness as expected; at the specified periodic excitation amplitude, the
system can passively transfer the unwanted disturbance energy through a
strongly modulated response (SMR); under transient loading, it can dissipate the
targeted energy irreversibly, with the vibration extinction of the LO following a
quasi-linear decrease. Future developments will aim at experimental validation
and at the application of passive vibration control to spatially flexible
structures.

References

1. Gendelman O, Manevitch L I, Vakakis A F, et al. Energy pumping in nonlinear mechanical


oscillators: Part I—Dynamics of the underlying Hamiltonian systems[J]. Journal of Applied
Mechanics, 2001,68(1): 34-41.
2. Kerschen G, Kowtko J J, McFarland D M, et al. Theoretical and experimental study of
multimodal targeted energy transfer in a system of coupled oscillators[J]. Nonlinear Dynamics,
2007, 47(1-3): 285-309.
3. Lee Y S, Vakakis A F, Bergman L A, et al. Passive non-linear targeted energy transfer and its
applications to vibration absorption: a review[J]. Proceedings of the Institution of Mechanical
Engineers, Part K: Journal of Multi-body Dynamics, 2008,222(2): 77-134.
4. Gourc E, Michon G, Seguy S, et al. Experimental investigation and design optimization of
targeted energy transfer under periodic forcing[J]. Journal of Vibration and Acoustics,
2014,136(2): 021021.
5. Luo J, Wierschem N E, Hubbard S A, et al. Large-scale experimental evaluation and
numerical simulation of a system of nonlinear energy sinks for seismic mitigation[J].
Engineering Structures, 2014, 77: 34-48.
6. Rodriguez E, Paredes M, Sartor M. Analytical behavior law for a constant pitch conical
compression spring[J]. Journal of Mechanical Design, 2006, 128(6):1352-1356.
7. Yamamoto Y. Spring's Effective Mass in Spring Mass System Free Vibration [J]. Journal of
Sound and Vibration, 1999, 3(220): 564-570.
8. Gendelman O V, Gourdon E, Lamarque C H. Quasiperiodic energy pumping in coupled
oscillators under periodic forcing[J]. Journal of Sound and Vibration, 2006,294(4): 651-662.
Design of the stabilization control system of a
high-speed craft

Antonio GIALLANZA1, Luigi CANNIZZARO1, Mario PORRETTO1 and


Giuseppe MARANNANO1*
1
DICGIM - Dipartimento di Ingegneria Chimica, Gestionale, Informatica, Meccanica –
University of Palermo. Viale delle Scienze, 90128 Palermo.
* Corresponding author. Tel.: +39-091-238-97270.
E-mail address: giuseppe.marannano@unipa.it

Abstract In this paper, the main causes of technical malfunction of a hydrofoil


were analyzed. In particular, a preliminary analysis evaluates the economic impact
on the navigation company of the periodical maintenance related to keeping the
vessel in dry dock. The study demonstrated that the main critical points are focused
on the fragility of the stabilization control system. The increase in operating
costs has motivated a study aimed at redesigning the stabilization system. The
continuing failure of the stabilization system (usually immersed in water) severely
limits the use of the high-speed craft. The proposed design solution considers the
positioning of the control actuators of the flaps inside the hull.
Therefore, a kinematic system constituted by a slider-crank mechanism that is
driven by a double-acting hydraulic cylinder positioned above the waterline was
studied and developed. In order to design the mechanical system, it was necessary
to take into account the critical factors related to the transmission of high torque
loads with limited space available for the placement of the system components. In
fact, in order to reduce the motion resistance and to optimize the hydrodynamic
flows in the connection area of the wings to the central strut, it was necessary to
design a double cardan joint of reduced radial dimension. Several numerical
analyses conducted in the ANSYS environment validated the proposed solution.
Fatigue tests on an experimental prototype of the stabilization system ensured the
integrity of the solution during navigation.

Keywords: High-speed craft; hydrofoil; stabilization control system; FE analysis; fatigue tests.

© Springer International Publishing AG 2017 575


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_58

1 Introduction

The fast vessels used for passenger transportation refer to the High-Speed
Craft (HSC) regulations, which establish several conditions between cruising speed
and ship displacement. They are classified into three main categories: multihulls,
monohulls and hydrofoils [1,2] (Fig. 1a). The use of hydrofoils for maritime
passenger transportation has today reached a strategic importance. In fact, their usage
must satisfy the territorial necessities, whether they are residential, occupational or
related to the tourism sector. In particular, a foilmaster is a short/medium-range
surface-piercing hydrofoil for passenger transportation. Its benefits are the very
low fuel consumption compared to similar vessels and the excellent sea-keeping
characteristics.

Fig. 1. (a) Foilmaster HSC; (b) Current flap control system.

An acceptable comfort on board is also guaranteed in bad marine weather con-


ditions (sea conditions up to force 4-5). This is due to the fact that the hull, at
cruising speed, lifts out of the water and its movements are controlled by a stabili-
zation control system [3-7]. It follows that the drag forces (caused by the wave
motion produced by the vessel and by the frictional resistance of the hull surfaces)
are significantly reduced at cruising speed. Consequently, the fuel consumption
(expressed in kg per passenger per mile) is considerably lower than that of other
HSC types. Despite these indisputable advantages, the foilmaster presents a high
maintenance cost. The main critical points are focused on the fragility of the
stabilization control system. The stabilization system currently employed on the
hydrofoil consists of two flaps whose movements are controlled by two actuators
(one per side, see Fig. 1b). A control system records the ship motions in
terms of pitch and roll magnitude and, consequently, a hydraulic circuit (directly
connected to the flap actuators) stabilizes the hydrofoil. The technical
feasibility of roll motion control devices has been amply demonstrated for
over 100 years. Performance, however, can still fall short of expectations because
of difficulties associated with control system designs, which have proven to be far
from trivial due to fundamental performance limitations and large variations of the
spectral characteristics of wave-induced roll motion. Perez et al. [8] present an ac-
count of the development of various ship roll motion control systems together with
the challenges associated with their design. It discusses the assessment of
performance and the applicability of different mathematical models, and it surveys the
control methods that have been implemented and validated with full scale experi-
ments. The paper also presents an outlook on what are believed to be potential ar-
eas of research within this topic. Kang et al. [9] have investigated the usefulness of
an active stabilizing system to reduce ship rolling under disturbances, using the
reaction of the flaps. In the proposed anti-rolling system, the flaps, acting as the
actuator, are installed on the stern side in order to reject the rolling motion induced
by disturbances such as waves. Currently, in the system employed on the hydrofoil, a
control device detects the ship movements in terms of roll and pitch angles and,
consequently, it drives a hydraulic actuator which acts on the flaps, allowing the ship
stabilization. The system also operates during the takeoff phase by a synchronous
movement of the flaps. At a cruising speed of about 36 kn, the flap angles are
between ±10°. At the maximum flap angle position, a preliminary fluid dynamics
analysis [10] of the system showed that a torque of 3000 Nm is applied
to the flap itself. The main advantage of the analyzed system is
its design simplicity. On the other hand, the presence of the hydraulic actuators in
the water normally generates fouling phenomena on the cylinder and on the piston
rod. This process quickly degrades the cylinder seal and promotes oil leakage.
Moreover, frequent damage to the positioning sensors of the actuators, which are
fundamental for their control, is possible. An out-of-service stabilization
system severely limits the use of the hydrofoils. Frequent stops are also imposed by
the maritime control authority since the presence of leakage faults in hydraulic ac-
tuators produces pollution, especially in marine protected areas. The raised issues
have a profound economic impact. In fact, maintenance plays a key role in the
management policies of a shipping company. The analysis of maintenance costs
should consider the considerable costs related to the transferring and keeping of
the vessel in dry dock and, moreover, the induced costs due to the ship
unavailability. These considerations are closely related to the foilmaster HSC and
are not applicable to the other vessel types: although equipped
with stabilization systems, the latter have control actuators positioned inside the
hull and, therefore, are not affected by the mentioned damage problems. The
study is based on the current maintenance policy adopted by shipping companies
(fault-type policy), which involves the unpredictability of the ship unavailability
for maintenance. A different maintenance policy, for example preventive and/or
predictive maintenance, would require equipping the stabilization system with
sophisticated sensors in order to detect progressive damage. The installation of
such sensors (always immersed in water) is difficult to realize and increases the
complexity of the system. On the other hand, increasing the frequency of
maintenance interventions, even only for more frequent monitoring of the
stabilization systems, still requires keeping the vessel in dry dock, making a
preventive maintenance policy inefficient as well as ineffective. The
maintenance of a modern vessel for passenger transportation accounts for about 20
to 25% of operational cost. From these considerations arises the need to intervene
in the design phase in order to innovate the existing stabilization system, which can
be considered the main critical point for the operational management of the ship.
The criticality of the current system in terms of costs is evident and, in order to
allow a reduction of maintenance costs, this justifies the study conducted for a
radical redesign of the stabilization control system of the hydrofoil.

2 Design solution

The design solution consists in the repositioning of hydraulic cylinders of the


flaps within the hull. Therefore, a kinematic system (consisting of a crank
mechanism – see Fig. 2) was studied. A double-acting hydraulic cylinder, positioned
above the waterline within a metallic frame, drives the whole mechanism. This
solution allows instant access for routine maintenance and minimal waterproofing costs.

Fig. 2. CAD model of the kinematic system.

The system is constituted by a drive rod, a connecting rod and a rocker arm that
converts the linear motion of the hydraulic piston into a rotating motion of the flap
(Fig. 2a). An optimal wing structure consists of two half-wings, moderately
pitched down (by about 8°), that depart from the central strut (Fig. 2b), forming the
so-called “gull-wing disposition”. For this reason, the rotation axis of the flap does
not coincide with the horizontal axis of the rocker arm. Therefore, it is necessary
to use a double universal joint in order to realize the connection with the flap (Fig.
3a).
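The conversion of the piston stroke into a flap rotation can be sketched with elementary slider-crank kinematics. In the sketch below only the 3000 Nm torque and the ±10° flap range come from the paper; the crank radius r, the rod length L and the assumption that the flap range maps to crank angles around 90° are hypothetical illustration choices:

```python
# Sketch: slider-crank kinematics for the flap drive - piston travel and
# rod force for a +-10 deg flap rotation. r and L are hypothetical.
from math import cos, radians, sin, sqrt

def piston_position(theta, r, L):
    """Slider position for crank angle theta (rad), crank r, rod L."""
    return r * cos(theta) + sqrt(L**2 - (r * sin(theta)) ** 2)

r, L = 0.12, 0.40                    # m (hypothetical dimensions)
torque = 3000.0                      # Nm, heavy-load flap torque (Sec. 1)

# Flap range of +-10 deg, assumed to map to crank angles around 90 deg:
stroke = piston_position(radians(80), r, L) - piston_position(radians(100), r, L)

# Near 90 deg the velocity ratio |dx/dtheta| ~ r, so the rod force is about:
rod_force = torque / r               # N

print(f"stroke = {stroke*1000:.1f} mm, rod force = {rod_force/1000:.1f} kN")
```

This kind of back-of-the-envelope check shows how the crank radius trades piston stroke against rod force for a given flap torque.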

Fig. 3. (a) Rendering of the double cardan joint; (b) Connecting rod and rocker
arm of the system.
Design of the stabilization control system … 579

In particular, a hydraulic actuator drives a rod that is axially guided by means


of sliding bearings; the crank mechanism actuates a splined shaft that is connected (by
a flange) to the cardan joint. The realization of the broaching on the rocker arm was
rather difficult. For this reason, a sleeve with a unified polygonal external
profile, on which the broaching was realized, was used. The sleeve was inserted in a
slot realized on the rocker arm (Fig. 3b). The forces acting on the elements of the
mechanism depend on the different geometric configurations assumed during
operation. The heavy load configuration (for which the maximum force is applied)
was identified by means of several CFD analyses. In particular, in the heavy
load configuration, a torque of 3000 Nm is applied. The latter is generated for a
flap angle equal to -10° (Fig. 4a).

Fig. 4. (a) Flap angle in the heavy load configuration; (b) Exploded view of the
central body of the double cardan joint.

The double cardan joint must be of limited radial dimensions, in order to


optimize the hydrodynamic flows in the connection area of the wings to the
central structure. Moreover, disassembly and assembly should be facilitated in
order to allow the removal of the flap. These requirements have excluded the pos-
sibility of using a traditional type of universal joint. Therefore, the double cardan
joint was completely designed and developed ex-novo. Fig. 4b shows a detail of
the central body of the double joint. The double cardan joint, the splined shaft and
the broached sleeve were designed and realized using stainless steel AISI 420 [6],
quenched, with yield strength greater than 1200 MPa. The rocker arm, the crankpin,
the connecting rod and the actuating rod were designed and manufactured in
austenitic stainless steel AISI 316. All the sliding bearings used for the mechanism
were realized using a composite material with a steel core and a sintered bronze
surface layer. These bearings present a Polytetrafluoroethylene (PTFE) cover layer
with anti-friction additives (5-30 μm) and are used for low-speed or limited-angular-
motion applications. The sliding bearings used for the double cardan joint were
realized in bronze UNS96700. 3D solid modeling [11,12] was realized by means of
SolidWorks software. The structural analyses were performed in the ANSYS
Workbench environment. The whole mechanism was discretized with hexahedral
elements. The contact surfaces of the mechanism were modelled using appropriate
contact elements. Bonded contacts are used between the elements that do not present
relative motion; frictional contacts, instead, are used between components that
have sliding motion. To reduce computing time, the system analysis was carried
out considering two different subsystems:

1. Rod, rocker-arm and splined shaft;


2. Double cardan joint.
For each subsystem, load conditions and constraints are appropriately defined.
The first subsystem (Fig. 5a) is constrained by imposing that the displacements and
rotations of the external surfaces of the bushings are equal to zero. Finally, the
drive-rod movement was locked. The torque was directly applied to the splined
shaft flange. In the second subsystem (Fig. 5b) the torque is applied on the flange
and the polygonal shaft is constrained. Fig. 5 shows the discretization details
of the components of the two analyzed subsystems.

Fig. 5: Detail of the meshed elements: (A) mechanism, (B) double cardan joint.

Fig. 6 shows, in particular, the Von Mises stress distributions for the most
stressed components of the studied subsystems.

Fig. 6: Von Mises stress distribution on the rocker arm (a) and on the splined shaft
(b).

Fig. 6 shows that the stress concentration areas are located in proximity of the
fillet radius of the rocker arm polygonal profile and at the fillet radius of the
splined shaft. Regarding the analysis of the double cardan joint, the most stressed
parts are located at the base fillet radius of the central female
part (Fig. 7). The maximum stress, equal to σmax = 965 MPa, is located in a limited
area of the fillet radius characterized by high stress concentration. In particular,
Fig. 7 shows the double cardan joint areas in which the stress value is greater
than 800 MPa.

Fig. 7: Areas in which Von Mises stress values are greater than 800 MPa.

The contact stress on the sliding bearings is lower than the allowable stress of the
selected material. The static analysis was carried out in order to make sure that the
structure operates in a uniform manner. In order to confirm the numerical results, an
experimental study (presented in the following section) was carried out.

3 Experimental tests

In order to evaluate the mechanical behavior of the studied system, it was
necessary to design a load fixture through which to simulate the real load conditions
during operation (Fig. 8).

Fig. 8: (a) Scheme of the load fixture; (b) Load fixture.

Through the test configuration it is possible to simultaneously test two actuat-


ing systems that generate two torques of equal magnitude but opposite sign. The
two mechanisms are spaced in conformity with their real position within the hydrofoil.

The test equipment is constituted by a lower support member, by a connecting


frame between the cardan axes and by a support plate for the sliding bearings that
guide the axial displacement of the drive rods. In Fig. 9 the main components of
the actuating system, realized for the execution of the experimental tests, are
shown.

Fig. 9: (a) double cardan joint; (b) broached component; (c) rocker arm; (d)
splined shaft.

Several fatigue tests were conducted on an MTS servo-hydraulic testing machine
(Fig. 8b) with a 100 kN load cell. The experimental tests allowed evaluating
the response of the system under the real load conditions. The test setup
provides for a maximum number of load cycles equal to N = 100000. Experimental
cyclic tests were carried out using a frequency equal to 1 Hz with a load ratio
R = Pmin/Pmax = 0.1. The maximum load value is equal to P = 12.5 kN,
corresponding to a torque of 3000 Nm applied on the cardan joint axis.
During the fatigue test, the double cardan joint was damaged (number of load
cycles equal to n = 22465) due to the onset of two simultaneous defects that
originated in proximity of the radius of curvature of the central female part (Fig.
10), in the same region where the numerical analysis predicted high stress
concentration values (see Fig. 7).
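The cyclic-load parameters above can be summarized with a trivial transcription of the quoted values (the test duration is an inferred figure, assuming uninterrupted cycling at 1 Hz):

```python
# Sketch: cyclic-load parameters of the fatigue test, transcribed from
# the quoted values (Pmax = 12.5 kN, R = 0.1, f = 1 Hz, N = 100000).

P_max = 12.5e3            # N, maximum load
R = 0.1                   # load ratio R = Pmin/Pmax
f = 1.0                   # Hz, test frequency
N = 100000                # target number of load cycles

P_min = R * P_max                 # minimum load of the cycle
P_mean = 0.5 * (P_max + P_min)    # mean load
P_amp = 0.5 * (P_max - P_min)     # load amplitude
hours = N / f / 3600.0            # inferred machine time at 1 Hz

assert abs(P_min - 1250.0) < 1e-9
```

With these values the first joint design failed at n = 22465 cycles, i.e. well before the N = 100000 target.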

Fig. 10: Detail of the fatigue fracture surfaces on the main part of the cardan
joint.

The proposed design solution does not meet the strength requirements. Therefore,
the main body of the cardan joint was completely redesigned. The high stress
concentration in proximity of the central female component (which causes
the nucleation of the fatigue crack) suggests that it is appropriate to provide
more extensive contact surfaces between the female element and the corresponding
male support, in order to reduce the contact pressure. The optimization of the
component was carried out by redefining the coupling surfaces as shown in Fig. 11a.

Fig. 11: (a) Optimized double cardan joint; (b) Areas with Von Mises stress
values greater than 550 MPa.

The fillet radius on the female support is equal to 1.5 mm, three times greater
than in the previous design. The height of the male support profile is equal to 17.5
mm, twice that of the original profile. From the numerical analysis it is possible
to assert that the component has a globally uniform behavior, with a maximum
Von Mises stress (equal to σmax = 647 MPa) lower than that determined in the
previously analyzed configuration (equal to σmax = 965 MPa). Fig. 11b shows the
areas of the cardan joint that present Von Mises stress values greater than 550 MPa.
The maximum stress is localized in a limited area of the fillet radius.
Several experimental cyclic tests were carried out with the same load conditions
used in the previous analysis. The tests were carried out imposing a maximum number
of load cycles equal to N = 100000, and no anomaly was observed. The hysteresis
cycle shows that the stiffness of the mechanical system does not vary with
the increase of the cycle number. At the end of the test, a functional check of the
cardan joint was carried out. In particular, a Dye Penetrant Inspection (DPI) and a
Magnetic Particle Inspection (MPI) were conducted. From these checks, no
faults were observed and it was established that the system operates correctly.

4 Conclusions

The analysis of the maintenance interventions on Foilmaster has highlighted


the considerable criticality represented by the wing flap control system. Such
criticality is mainly due to the current position "in the water" of the flap hydraulic
actuators. The analysis of the costs arising from extraordinary interventions on
these systems has shown their considerable amount. The collection of
statistical data over fixed periods of time could certainly support a deeper analysis
of this aspect, determining, for instance, preventive and/or predictive maintenance
models. In any case, in this study, the objective was to reduce the accidental una-
vailability of the Foilmaster which is manifested when the stabilization system is
subject to failure. From the analysis of the annual maintenance cost it is evident
that the proposed stabilization system will allow a reduction in the number of
exceptional operations related to the hydraulic actuators. In addition, the proposed
control system achieves an important environmental protection objective because,
with this solution, all possibility of sea pollution due to hydraulic liquid leakage
is eliminated. The latter considerations justify the study
conducted in order to design and to implement an innovative control system. The
new control system, which involves the use of special materials and special heat
treatments, was validated by means of finite element analysis and experimental fa-
tigue tests in order to guarantee the integrity and the related safety of the
Foilmaster.

References

1. Lewis E.V. Principles of Naval Architecture, Volume 3, 1990 (Society of Naval Architects
and Marine Engineers).
2. Faltinsen O.M. Hydrodynamics of High-Speed Marine Vehicles, 2005 (Cambridge University
Press).
3. Molland A.F. and Turnock S.R. Marine Rudders and Control Surfaces, 2007 (Elsevier Ltd.).
4. Crossland P. The effect of roll stabilization controllers on warship operational performance.
Control Engineering Practice, 2003, 11, 423-431.
5. Gawad A., Ragab S., Nayfeh A., Mook D. Roll stabilization by anti-roll passive tanks. Ocean
Engineering, 2001, 28, 457-469.
6. Oda H., Ohtsu K., Hotta T., Mook D. Statistical analysis and design of a rudder roll stabiliza-
tion system. Control Engineering Practice, 1996, 4(3), 351-358.
7. Sellars F., Martin J. Selection and evaluation of ship roll stabilization systems. Marine Tech-
nology, SNAME, 1992, 29(2), 84-101.
8. Perez A. and Blanke M. Ship roll damping control. Annual Reviews in Control, 2012, 36(1),
129-147.
9. Kang G.B., Kim Y.B., Jang J.S., Zhai G., Ikeda M. and Choe Y.W. A Study on Anti-Rolling
System Design of a Ship with Flaps. SICE Annual Conference in Sapporo, 2004, 84-88.
10. Chow C.Y. An introduction to computational fluid mechanics, 1979 (John Wiley and Sons,
Inc.,New York, NY).
11. Ingrassia, T., Nigrelli, V. Design optimization and analysis of a new rear underrun protective
device for truck. Proceedings of the 8th International Symposium on Tools and Methods of
Competitive Engineering, 2010, 713-725.
12. Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D. A multi-technique simultaneous ap-
proach for the design of a sailing yacht. International Journal on Interactive Design and Man-
ufacturing, 2015.
Dynamic spinnaker performance through digital
photogrammetry, numerical analysis and
experimental tests

Michele Calì1, Domenico Speranza2,*, Massimo Martorelli3


1
Electric, Electronics and Computer Engineering Department, Università degli Studi di Catania,
V.le A. Doria, 6 - 95125 Catania (Italy)
2
Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio,
Via G. Di Biasio, 43 - 03043 Cassino (Fr) - Italy
3
Department of Industrial Engineering, Università degli Studi di Napoli Federico II, P.le Tecchio,
80 - 80125 Napoli – Italy
* Corresponding author: Speranza Domenico Tel./Fax: +39.0776.2993988; e-mail address:
d.speranza@unicas.it

Abstract Sail manufacture has undergone significant development due to sailing


races like the America’s Cup and the Volvo around the World Race. These
competitions require advanced technologies to help increase sail performance. Hull
design is fundamentally important, but the sails (the only propulsion instrument)
play a key role in the dynamics of sailboats. Under aerodynamic loads, the sail cloth
deforms, the aerodynamic interaction is modified and the pressure on the sails is
variously distributed, resulting in performance inconsistencies. The interaction
between fluid and structure necessitates a solution which combines aerodynamic
and structural numerical simulations. Furthermore, in numerical simulations the
aeroelastic sail characteristics must be known accurately. In this paper, the
dynamic performance of a Spinnaker was studied. Digital photogrammetry was
used to acquire the images, make the 3D reconstruction of the sail and validate the
models in Computational Fluid Dynamics (CFD) analysis. The orthotropic constitutive
characteristics of ten different sail cloths were measured by experimental tests.
The methodology allowed comparison of the dynamic performance in terms of forces,
pressure and vibration for the different sail cloths and different fiber orientations.

Keywords: Sail aerodynamics, CFD analysis, Turbulence models, Detached Eddy


Simulations, Pressure distributions.

© Springer International Publishing AG 2017 585


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_59

1 Introduction

Sports sailing is a field in which technological development (new aerodynamic and


hydrodynamic materials) is fundamentally important. Until the 19th century, the
natural fibres of cotton and linen were used for sails both of which degrade in UV
light and absorb a lot of water. The development of CGI (Computer Generated
Imagery) has improved sail performance and, alongside it, the development of sail
production technology has created synthetic fibres (e.g. polyester) which are ever
more suitable for a wide variety of sail types. It is well known that sail performance
(especially speed) is closely connected to the boat for which the sails were
designed and that a fundamental characteristic is how long sails can keep their
shape. This in turn strongly influences sail design and in particular the choice of
material and how the sail panels are assembled. Together with aerodynamic
developments and ever higher performing cloths, sail design today uses
optimisation criteria by taking advantage of material anisotropy and orienting the
sail panels according to the direction of greatest tension. The most common panel
assembly types are radial, bi-radial and full-radial for which the direction of
greatest tension is the warp whereas an alternative and also very common way is to
use the cross-cut where the weft direction is that of the leech. Sails are actively
three-dimensional since they are subject to very variable wind and sea conditions
during racing, so it is important that along the other weave directions (weft and bias
at 45° to the weft) low sail stretch maintains shape. The aim of this paper is to
study the dynamic performance of a Spinnaker with a radial panel assembly. A no-
contact passive Reverse Engineering (RE) technique, digital photogrammetry, was
used to reconstruct the sail in 3D. Today the basic principles of RE systems are
codified into complete sets of procedures specific to various applications [1-4]; in
particular, digital photogrammetry can be useful in naval applications [5-11].
Digital photogrammetry reconstruction is also used to validate models in
Computational Fluid Dynamics (CFD) analysis. Aerodynamic and structural
numerical simulations, in coupled mode, were performed and the constitutive
characteristics of ten different sail cloths were measured by experimental test. The
sail was modelled with Detached Eddy Simulations (DES), which provided
drawings of the topology of the turbulent structures in the sail wake and revealed
new flow features. In particular, the methodology compared the dynamic
performance of a spinnaker, in terms of forces, pressure distributions and vibration
for the ten principal sail cloths.

2 Materials and Methods

To evaluate the dynamic performance of a spinnaker, the ten most common sail
cloths were characterised. These cloths were supplied by Velerie Bainbridge,
Dimension-Polyant GmbH and Banks Sails Membrane. Table 1 shows the main
properties and characteristics of the cloths. The fourth column shows the weight in
Dynamic spinnaker performance through digital … 587

grams per square metre of cloth and, in brackets, the density of the sample; the
fifth column gives the thickness. The sail cloths are made from synthetic fibres
ranging from low-cost nylon (sample 2), polyester (samples 5 and 8) or kevlar
(samples 3, 6, 7 and 9) to expensive aramids/Dacron© (polyethylene
terephthalate, PET) (samples 10 and 10+) or carbon fibres (sample 1).

Table 1. Characteristics of sail cloths.


Sample N°  Cloth  Sail Supplier  Specific weight [g/m²] ([g/cm³])  Thickness [mm]  Symmetry
1 Custom Carb. Membrane 292.0 (1.73) 0.33 Orthotropic
2 Nylon SPI Polyant 61.8 (1.14) 0.06 Transversely isotropic
3 Kevlar FLEX Polyant 224.9 (1.45) 0.4 Orthotropic
4 Hydranet Polyant 394.6 (1.64) 0.45 Transversely isotropic
5 Diax 60P Bainbridge 156.8 (1.40) 0.3 Transversely isotropic
6 Kevlar CZ 15 Polyant 95.6 (1.45) 0.20 Orthotropic
7 Pentex FLEX 13 Polyant 230.1 (1.45) 0.35 Orthotropic
8 PX 15T Polyant 170.2 (1.40) 0.25 Orthotropic
9 Pentex FLEX 15 Polyant 229.1 (1.45) 0.30 Orthotropic
10 Dacron Polyant 243.6 (1.85) 0.30 Transversely isotropic
10 + Dacron Polyant 284.8 (2.15) 0.30 Transversely isotropic

The HydraNet® sailcloth (sample 4) is a durable polyester yarn combination with a


woven net of Dyneema which is five times stronger than polyester.

2.1 Experimental Set-Up

(a) (b)
Fig. 1. (a) Zwick&Roell Z100 type tensile testing machine; (b) sail cloth samples.

Conforming to ASTM D882, experimental tests were performed using a
Zwick/Roell Z100 tensile testing machine and TestXpert v11.02 software (Fig.
1a). Tensile tests in the warp and weft directions as well as in the bias direction
were performed on ten different sail cloths (Fig. 1b).
The tensile tests were carried out following UNI EN ISO 13934-1, 2000 standard
on samples 30 mm wide by 90 mm long. In the yield tests, the samples were pre-
loaded at 5 N with different load velocities (v1 = 0.2 mm/min; v2 = 2 mm/min; v3 =
20 mm/min). Analysing the sample results after load cycles in the plastic zone
produced energy-loss estimates therefore leading to a cloth damping value. Three
samples were tested for each cloth for each fibre direction (weft, warp and bias at
45° to the weft) to ensure repeatability and robustness.

(a) (b)
Fig. 2. Samples during some test phases: (a) Nylon SPI; (b) Custom Carbon.

2.2 Constitutive characteristics of sail cloths

Figure 3 shows the tensile test results in the weft direction for the six different
types of sail cloth: carbon fibre (sample 1 Fig. 1b); nylon (sample 2 Fig.
1b); kevlar (samples 3 Fig. 1b); HydraNet (sample 4 Fig. 1b); polyester (samples 5
Fig. 1b) and Dacron (polyethylene terephthalate-PET) (samples 10 Fig. 1b).

Fig. 3. Stress-Strain diagram in weft direction.

The curve trends can be divided into three different areas: the first, a
proportional linear elastic region; the second, an elastic/plastic region showing a
downward concavity; and the third, after the cloth has yielded but not separated,
where the cloth continues to provide some resistance (even 40% stretching for
some cloths) which lessens with increasing tensile stress. The slope of the curves in
the first region provides the cloth's Young's modulus. All the samples show
initially comparable elastic deformation. The final deformation is on average 200%
higher than the elastic one, whereas the reinforced cloths (samples 1, 3, 5, 6, 7, 8 and 9)
have much greater plastic deformation tolerance. In all the cases examined, cloth
fracture was ductile and occurred after much plastic deformation. Final cloth
shearing was quite variable: cloths 2, 3, 4 and 10 were about 25% higher than cloths
1, 5, 6, 7 and 9. Figure 4 shows the tensile test results in the warp and bias
directions (at 45°) in terms of nominal stress against nominal strain.
Constitutive behaviour for cloths 1, 3 in the warp direction is noticeably
different to that in the weft direction confirming their marked orthotropy. In these
diagrams the second area shows an entirely linear trend. Even in this case, the
initial curve gradient supplies Young's modulus transversely to the cloth. The
shearing behaviour is analogous to the tensile behaviour in the weft direction.
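As a worked illustration of how the Young's modulus is extracted from the initial linear region, the least-squares slope of the elastic branch can be computed as below; the stress-strain points are invented for illustration, not measured data from this study.

```python
def youngs_modulus(strains, stresses):
    """Least-squares slope of the initial (linear elastic) branch of a
    stress-strain curve; with stress in MPa and dimensionless strain,
    the slope is the Young's modulus in MPa."""
    n = len(strains)
    mean_e = sum(strains) / n
    mean_s = sum(stresses) / n
    num = sum((e - mean_e) * (s - mean_s) for e, s in zip(strains, stresses))
    den = sum((e - mean_e) ** 2 for e in strains)
    return num / den

# Invented points on a linear branch with slope 780 MPa (the nylon value in Table 2)
strain = [0.000, 0.010, 0.020, 0.030]
stress = [780.0 * e for e in strain]
E = youngs_modulus(strain, stress)  # recovers 780 MPa
```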

Fig. 4. Stress-Strain diagram in the warp and bias direction.

Table 2. Young's modulus, Tensile strength and Elongations.


Sample Cloth Tensile Young's mod. Young's mod. Maximum
N° strength weft direction warp direction Elongation
[MPa] [MPa] [MPa] [%]
1 Custom Carb. 208 3250 550 21.3
2 Nylon SPI 167 780 780 23.2
3 Kevlar FLEX 258 2580 350 9.8
4 Hydranet 221 1120 1120 26.8
5 Diax 60P 188 2200 2200 27.9
10 Dacron 213 5800 5800 26.2

It may be observed that the behaviours of samples 2, 4, 5 and 10 are similar in
the two directions, but the third area is practically absent. Cloths 2 and 4 show high
deformation at low loads (low Young's modulus). Table 2 shows Young's modulus
values in the directions of weft and warp. The deformation energy Ed dissipated by
the cloths in hysteresis cycles at various frequencies f provides the equivalent
viscous damping coefficients [12-14] thus:


E_d = k · Δl_max² · b^(−c·f)    (1)

c_eq = E_d / (2π² · f · Δl_max²)    (2)

where k, b and c are cloth-dependent constants, Δl_max is the maximum sample
strain and c_eq is the equivalent viscous damping coefficient. The dissipated energy
decreases with frequency f and is proportional to the square of the maximum strain
Δl_max. Table 3 shows the k, b and c constant values and the equivalent damping
coefficient c_eq.

Table 3. Equivalent damping coefficient.


Sample N°  Cloth  c_eq  k  b  c

1 Custom Carb. 0.09 4 1.08 1.12
2 Nylon SPI 0.03 4 1.08 6.73
3 Kevlar FLEX 0.05 4 1.08 4.45
4 Hydranet 0.07 4 1.08 2.36
5 Diax 60P 0.04 4 1.08 5.45
10 Dacron 0.06 4 1.08 2.95
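Assuming the reconstructed form of Eqs. (1)-(2) (including the negative exponent, inferred from the statement that the dissipated energy decreases with frequency), the damping calculation can be sketched as below; the test frequency is a placeholder, since the paper does not state the value used for Table 3.

```python
import math

def dissipated_energy(k, b, c, f, dl_max):
    """Eq. (1), reconstructed: E_d = k * dl_max^2 * b^(-c*f)."""
    return k * dl_max ** 2 * b ** (-c * f)

def equivalent_damping(k, b, c, f, dl_max):
    """Eq. (2): c_eq = E_d / (2 * pi^2 * f * dl_max^2)."""
    e_d = dissipated_energy(k, b, c, f, dl_max)
    return e_d / (2.0 * math.pi ** 2 * f * dl_max ** 2)

# Table 3 constants for Nylon SPI (sample 2): k = 4, b = 1.08, c = 6.73;
# f = 1 Hz is an arbitrary placeholder frequency
ceq = equivalent_damping(k=4.0, b=1.08, c=6.73, f=1.0, dl_max=0.01)
```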

2.3 Acquisition method

The spinnaker of an Elan 31, a 9-metre cruiser-racer, was chosen for the
acquisitions. It is a symmetric sail produced by the sail-maker Banks Sails Naples
with a surface area of 60 m². The sail is made of Superkote 75; two sheets (about
6 m long) connect its clews to the boat; its halyard is fixed to the head of the
mainmast. The Young's moduli in the weft, warp and bias directions are respectively
those of samples 2 and 8. Two different acquisitions were made: the first with the
boat docked, the second during navigation.

(a) (b)

Fig. 5. (a) Marker points acquired on the spinnaker; (b) interpolated 3D polygonal mesh surface.

The on-board instrumentation revealed a wind intensity of 7.0 knots (3.6 m/s) and
direction 185° (nearly aligned from stern to bow) in the docked acquisitions, and
21.0 knots (10 m/s) and direction 183° in the acquisition during navigation. The
sail was set by an expert sailor and kept fixed during the acquisition. Figure 5 (a)
shows the marker points on the surface acquired. Three digital cameras were used,
one on the wharf so that the sail could be photographed entirely from the stern, the
others on two small vessels on either side of the sail to completely include the luff
or leech and half of the foot. In this way the three photographs have more points in
common and each point of the grid was captured in at least 2 photographic
images. The three cameras shot the photos simultaneously to capture the same
deformed image of the spinnaker. It can be assumed that in the stationary wind
conditions the sail remained “still” during the entire operation.
The photos were also acquired with a constant focal length for all three cameras.
The acquired images were processed in PhotoModeler software, applying different
filters such as RGB-Green Extract, Convolution, Median-Smoothing and Edge
Detection. From the accurate point cloud detected, an interpolated 3D polygonal
mesh surface of the spinnaker was constructed (Fig. 5 b).

3 Aerodynamic and structural numerical simulations

As opposed to aircraft wings, sails are significantly twisted and cambered both
chord-wise and span-wise which produces a characteristic wake. The flow field in
the wake can only be computed numerically. The relatively complex RE and 3D
geometry make direct numerical simulations infeasible and therefore fluid
dynamics turbulence and structural simulations must be modelled jointly.
Reynolds-averaged Navier–Stokes (RANS) simulations, performed since 1996
(Hedges et al., 1996), provide a reasonable estimate of the pressure distributions on
sails, although they do not provide in-depth understanding of the turbulent
structures in the wake. This study presents a virtual wind tunnel test with RANS
and Detached Eddy Simulations (DES) using different grids and time steps. In
particular, the turbulence is modelled with RANS in the boundary layer and the
wake is modelled with Large Eddy Simulations.

3.1 Computational domain and boundary conditions

The sail flying shapes were used to perform numerical fluid dynamics and
structural simulations with ANSYS® software. The sails and sheets were modelled
with a no-slip condition. A prismatic computational domain 26 m high, 26 m
wide and 65 m long was used to model a wind tunnel (Fig. 6 a). The mean
longitudinal wind velocity during sail acquisition (3.6 m/s) [10-11] was used as the

inlet condition. The no-slip condition was used on the floor boundary which
extended 7.2 m downstream from the model. The wind tunnel side-walls and roof
were modelled with slip-conditions but the computational domain extended further
downstream (minimum 60 m from the sail) than the end of the physical roof and
floor, therefore pressure outlet conditions were used for these boundaries.

(a) (b)

Fig. 6. (a) Computational domain and boundary conditions; (b) Sail and air mesh.

The sail and the sheets were modelled with shell triangular elements (3980
elements). The air in the wind tunnel was modelled with 489844 tetrahedral
elements and 81838 nodes (Fig. 6 b).

(a) (b)

(c) (d)

Fig. 7. Wind particle track in virtual tunnel: (a) wind direction 165°; (b) wind direction 150°; (c)
wind direction 135°; (d) maximum displacement at the sail edge.

The boundary surfaces were made of triangular elements. The model was
refined with pinch control commands and face sizing commands until the growth
rate was 1.2 and the skewness was less than 0.95. The analysis also took into
account the reinforcements commonly applied at the head and clews, areas of
greater stress concentration. To model these reinforcements, the element thickness
in these areas was increased. In the Detached Eddy Simulations, the wake
turbulence used an SST k-omega RANS model (Cdes=0.65; Cb1=0.1355; Cb2=0.622;
Cv1=7.1). Having validated the model by verifying that for a longitudinal 7 knot

wind the sail shape matched the experimental photogrammetry one, simulations
were carried out using 3.6 m/s wind longitudinal velocities at the wind tunnel inlet.
Three other longitudinal wind velocities (10, 15 and 20 m/s) were taken into
account to evaluate and compare the influence of the sail cloths' mechanical
characteristics on the dynamic performance of the spinnaker (Fig. 7(a)). By rotating
the sail around its vertical axis by 15°, 30° and 45°, the forces, pressure distributions
and vibration for the different sail cloths were studied at wind directions of
165°, 150° and 135°. In the simulations the air flow at the tunnel inlet was assumed
to be nearly laminar, setting a turbulence intensity of 2%. Fig. 7(d) shows the
maximum displacement at the sail edge.

3.2 Results and Discussion

The dynamic performance of the same geometry spinnaker was measured by


comparing the forces, pressure distributions, velocity distributions and sail
vibrations obtained with the different sail cloths. The differences due to the sail
cloths were taken into account with the Young's modulus values in the weft and
warp directions reported in Table 2. In particular, the sail vibrations are seen to be
proportional to the eddy turbulence kinetic energy that acts on the sail.
Furthermore, using the inverse formula of the wind action, the shape (or
aerodynamic) coefficient cp is calculated with the following equation:

cp = 2p / (ρV²)    (3)

where ρ is the air density [g/dm³], p is the pressure simulated at the sail surface and
V is the perpendicular wind speed. This coefficient gives an equivalent measure of


the sail cloths influence on the dynamic sail performance. The results are
summarized in Tab. 4.

Table 4. Turbulence kinetic energy dissipation and shape coefficient.


Sample Cloth Turb. Kinetic Energy cp
N° [kJ] 165° 150° 135°
1 Custom Carb. 49 2.22 2.13 2.08
2 Nylon SPI 87 1.98 1.85 1.83
3 Kevlar FLEX 58 2.20 2.13 2.08
4 Hydranet 74 2.12 2.04 2.00
5 Diax 60P 63 2.18 2.10 2.04
10 Dacron 45 2.26 2.18 2.13

The three values of cp were calculated at wind directions of 165°, 150° and 135°,
and the eddy turbulence kinetic energy over 1 second for a wind velocity of
20 m/s. Similar behaviours were found in the sail mid-sections, but greater
differences in the highest spinnaker sections where the vortex grows.
Regarding the shape coefficient, the Dacron© cloth showed the best performance.
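As an illustration, the shape coefficient of Eq. (3) can be computed directly; the pressure and density values below are invented, not outputs of the simulations.

```python
def shape_coefficient(p, rho, v):
    """Eq. (3): cp = 2*p / (rho * V^2), with p the pressure at the sail
    surface, rho the air density and V the perpendicular wind speed."""
    return 2.0 * p / (rho * v ** 2)

# Invented example: 245 Pa surface pressure, standard air density, 20 m/s wind
cp = shape_coefficient(p=245.0, rho=1.225, v=20.0)  # -> 1.0
```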

4 Conclusion

This work has illustrated a methodology for improving the dynamic performance
of a spinnaker sail. Photogrammetry, CAD 3D reconstruction and numerical fluid
dynamics turbulence with structural simulations were used. The orthotropic and
transversely isotropic constitutive characteristics of ten different sail cloths were
measured by experimental tests. The methodology made it possible to measure the
influence of the different sail cloths on the dynamic performance of the sail.

References

1. Gerbino S., Del Giudice D.M., Staiano G., Lanzotti A., Martorelli M. On the influence of
scanning factors on the laser scanner-based 3D inspection process, Int J Adv Manuf Technol,
2015, doi 10.1007/s00170-015-7830-7.
2. Giordano M., Ausiello P., Martorelli M. Accuracy Evaluation of Surgical Guides in Implant
Dentistry by Non-Contact Reverse Engineering Techniques, Dental Materials, 2012, 28(9),
pp. 178-185, ISSN 0109-5641, Publisher Elsevier.
3. Franciosa P, Martorelli M. Stress-based performance comparison of dental implants by finite
element analysis. International Journal on Interactive Design and Manufacturing 2012, 6(2),
pp. 123–129, ISSN 1955-2513, Publisher Springer.
4. Ingrassia T., Mancuso A., Nigrelli V., Tumino D. Numerical study of the components
positioning influence on the stability of a reverse shoulder prosthesis, (2014) International
Journal on Interactive Design and Manufacturing, 8 (3), pp. 187-197, DOI: 10.1007/s12008-
014-0215-6.
5. Ma Y., Tang Y., West N., Zhang Z., Lin S., Zheng D. Numerical investigation on trimming of
a single sail in a regatta. Sports Engineering, pp. 1-10, Online: 17 November 2015.
6. Viola I.M., Bartesaghi S., Van-Renterghem T., Ponzini R. Detached Eddy Simulation of a
sailing yacht, Ocean Engineering 90 (2014) 93–103.
7. Parolini N., Quarteroni A. Mathematical models and numerical simulations for the America’s
cup, Comput. Methods Appl. Mech. Eng. 194 (2005) 1001-1026.
8. Viola I.M., Flay R.G.J. Pressure distribution on modern asymmetric spinnakers, International
Journal of Small Craft Technology RINA (part B1) 152 (2010) 41-50.
9. Viola I.M., Flay R.G.J. Sail aerodynamics: understanding pressure distributions on upwind
sails, Experimental Thermal and Fluid Science 35 (2011) 1497-1504.
10. Martorelli M., Speranza D. Photogrammetry, CAD Modeling and FE Analysis for Improving
the Performance of a Spinnaker, Journal of Mechanics Engineering and Automation, 2, Vol. 2
(2012), ISSN 2159
11. Martorelli M., Pensa C., Speranza D. Digital Photogrammetry for Documentation of Maritime
Heritage, Journal of Maritime Archaeology, 2014, 9(1), pp. 81-93, ISSN: 1557-2285,
Publisher Springer.
12. Calì M., et al. Meshing angles evaluation of silent chain drive by numerical analysis and
experimental test. Meccanica (2015): 1-15.

13. Sequenzia G., Oliveri S.M., Fatuzzo G., Calì M. An advanced multibody model for evaluating
rider’s influence on motorcycle dynamics. Proceedings of the Institution of Mechanical
Engineers, Part K: Journal of Multi-body Dynamics, (2014): 1464419314557686.
14. Sequenzia G., Oliveri S.M., Calabretta M., Fatuzzo G., Cali M. A New Methodology for
Calculating and Modelling Non-Linear Springs in the Valve Train of Internal Combustion
Engines (No. 2011-01-0780). SAE Technical Paper (2011).
GA multi-objective and experimental
optimization for a tail-sitter small UAV
Luca Piancastelli(1), Leonardo Frizziero(1), Marco Cremonini(2)
(1)
Alma Mater Studiorum University of Bologna, Department of Industrial
Engineering, viale Risorgimento, 2 – I-40136, Bologna (Italy)
(2)
Nuovamacut, Reggio Emilia (Italy)
Email: leonardo.frizziero@unibo.it

Abstract

This paper introduces a Montecarlo Genetic Algorithm, hierarchical, multi-


objective optimization of a Vertical Take-Off and Landing Unmanned Aerial Vehicle
having a tail sitter configuration. An optimization of the hierarchical type is
introduced in place of the methods generally used for multi-objective optimization,
such as Pareto fronts and “arbitrary” weighted sums. A Montecarlo method optimizes the
weights of the final objective function used by the Genetic Algorithm. A very
simple "spreadsheet based" algorithm defines the CAD model of the Genetic Al-
gorithm individuals in order to evaluate the performance of the candidates. The
optimization method described in this study appears to be very effective. Then ex-
perimental tests were conducted with scaled-down prototypes. Four flight tests
were performed: Take Off, Cruise, Slow flight, Landing. A Taguchi matrix was
defined for each experiment. The tests started from a prototype that comes directly
from the Montecarlo Genetic Algorithm optimization and led to the final
prototype shown later in the paper (page 7, right figure). Unfortunately, the tail sitter
approach showed poor control authority in the final phase of the vertical landing.
Even the "final" prototype showed unsatisfactory behavior in case of erratic wind
gusts. This unsolved problem is common to the tail sitter configuration, which
requires power control by air jets or an additional propeller to control the aircraft in
the final phase of landing. Unfortunately, this necessity renders the tail sitter
configuration inconvenient for small Unmanned Aerial Vehicles.

Keywords: Genetic algorithm, Montecarlo random optimization, UAV, tail sitter

1 Introduction

The Convair XFY-1 is an experimental tail-sitter aircraft built in the 1950s. The
main problem declared was linked to the awkward position of the pilot during the
vertical flight phases, with the pilot lacking a rearward view. This claim was true,

© Springer International Publishing AG 2017 597


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_60
598 L. Piancastelli et al.

but the real problem is the lack of control authority during the touch-down and the
limited static stability while parking. This is due to the fact that the vertical wing
works as a sail with horizontal winds. In the 1990s Boeing tried to solve the lack
of control at low speeds in its tail-sitter Heliwing UAV. This tail sitter had a digi-
tal flight controller using cyclic-pitch rotor control for its vertical flight phases.
More recently the University of Sydney proposed a tail sitter with control surfaces
in the slipstream of two fixed-pitch propellers. The static stability is obtained
through a T-Wing. In this highly unstable configuration an autopilot controls the
vertical flight phases. Another interesting configuration is a small conventional

Fig. 1. The commercial “tail sitter” model (left) and the MC-GA optimized UAV (right).

airplane with control authority in hover obtained by using propeller slipstream


over conventional control surfaces. The problems connected with the large surface
exposed to the wind are still unsolved. Stability and control problems grow amid
erratic wind gusts. This paper starts from a model of the original Convair design that
was tested indoors by the authors with extremely encouraging results. The vertical
transition was extremely easy and the top speed was extremely high. The RECCE
application of the project of this paper requires high cruise speed and vertical take-
off and landing. The original project was developed through a few virtual proto-
types simulated with Flow Simulation of Solid Works. Again the results were ex-
tremely satisfactory. So the authors optimized the design with a hierarchical MC
(Montecarlo) method and an elitist GA (Genetic Algorithm), both written in “C”. The
physical laws governing the operation of an aircraft were interpolated from the
Flow Simulation tests. In this way it was possible to evaluate cruise, speed and
fuel consumption. The costs of the components are derived from the ultralight air-
craft industry. The very limited choice of aircraft engines suitable for this
application simplified the optimization process.
GA multi-objective and experimental … 599

2 The individual of the GA (the UAV)

In order to improve the performance of the commercial model (figure 1, left),

the UAV semi-wing (figure 1, right) has an approximately square planform. The tail
is a trapezoidal profile that follows the commercial model. The ideal cruise
velocity is Mach 0.6 (Mach 1 = 340 m/s). The wing span varies from 0.5 m to 5 m while the
tail-span varies from 0.2 m to 2 m. Also for the airfoil choice the same approach is
used. Therefore, on the basis of known features of the various existing models, the
tail uses the NACA 0018; the thick-cambered airfoil N60R is used for the wing.
The wing geometry is obtained with an interpolation function that is cubic along
the airfoil and linear along the wing-span. The design of the fuselage starts from
the engine size and the equipment that needs to be carried. The fuselage has a
cylindrical-conical shape (Figure 2). Uppermost is the propulsion power-pack, then
come the payload and the additional fuselage fuel tank. The fuselage conical part
has the tails and the wing attached. Geometrically, the fuselage is designed to
contain the engine and the payload. The main fuel tanks are integral in the tail and
the wing. Finally, the characteristics of the propeller are obtained from an algorithm
which derives masses, radius, angular velocity and other input data from
the equations that define the best propeller for the specific individual (UAV). The
function that performs this calculation is called "daldiskload"; it calls the
"ifunz" function, whose output parameter "grossweightsupower",
starting from the disk loading, gives the power necessary to lift the entire weight
of the aircraft (MTOW). The most important of the numerous functions that
implement the algorithm is undoubtedly the “Weight11b” function that takes as input
the geometric data of engine, propeller, wings, tail and additional fuel tank. Then
it defines the CAD model of the aircraft. From the CAD model it is possible to
evaluate all the masses of components of the aircraft, the speed of the aircraft, the
maximum range and the flight time. These latter data come from the fuel
consumption, power, rotor efficiency, power plant installation efficiency, power required
for take-off, the hovering and all the other power related parameters: maximum
speed and true cruise-fuel consumption. In fact, the fuel mass depends on integral
and additional tanks capacity. The aircraft design takes into account the vertical
takeoff phase and the cruise. The optimization of the main parameters defines the
plane geometry. The idea is to obtain results that are not only realistic but at the
same time are optimal. As the first parameter of the aircraft project is the vertical
take-off, it is necessary to have a thrust-to-MTOW (Maximum Take Off Weight)
ratio larger than unity. Its minimum acceptable value is 1.05. The parameters tak-
en into account not only define the aircraft's geometry, but also the feasibility of
flight operations. On this basis, the single individual is defined and its perfor-
mance is calculated.

3 The fitness and the GA loop

Fig. 2. The fuselage of the CAD model.

The fitness function has been designed so as to always keep in mind the basic
parameters that are necessary to make the most effective UAV possible. The RECCE
mission consists of take-off, cruise, image acquisition and return to base in the
shortest possible time. Therefore, the fitness looks for a UAV which optimizes all
of these phases.
The genetic algorithm's central core [1] is the fitness function that calculates the
individual (aircraft-UAV) excellence level (merit). The merit therefore appears to
be the central key of the “GA loop” and it is calculated as a weighted sum of the
performance of the UAV. This individual is completely described by five
fundamental parameters (wingspan, tailspan at root and tip, additional fuel tank
capacity, engine type). All the other data are derived from them through proper
functions. For each engine type a full MC-GA optimization is performed;
therefore, the parameters of each individual (UAV) are only four. The fitness
calculates the CAD model and performance of the single UAV individual. The
performance parameters are the range, the cruise velocity and the time-on-target.
These values are added in a "weighted sum" with weight1, weight2 and weight3.
These weight values, which are optimized using the Monte Carlo method, are aimed
at cost minimization. The crossover function of the GA is used as many times as
necessary to obtain a “qualified” population, or a population of individuals that fulfil the
minimum requirements. Since only five engines are available for this UAV, a sep-
arate Montecarlo-loop+GA-loop optimization is performed for each engine type.
The GA is of the elitist type without mutation. The crossover is the standard
one: a single crossover point on both parents' organism strings is selected, and all
data beyond that point in either organism string are swapped between the two
parent organisms. The resulting organisms are the children. Fitness proportionate
selection, also known as roulette wheel selection, is used. The initial population is
composed of 10 individuals. The optimization ends when the population's best
individual doesn't change for 5 generations or when there is no significant
improvement in the values of fitness of the population from one generation to the
next. A history is kept to define a hierarchy of “best individuals” based not only
on the population but also on the fitness value.
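The GA loop just described (elitist, no mutation, roulette-wheel selection, single-point crossover, stop after 5 stalled generations) can be sketched as follows. The fitness function here is only a placeholder standing in for the CAD-model evaluation of the paper, and the parameter bounds are illustrative; this is not the authors' actual "C" implementation.

```python
import random

POP_SIZE, MAX_STALL = 10, 5

def fitness(ind, weights):
    # Placeholder merit: fake range, cruise velocity and time-on-target derived
    # from the four design parameters, combined with weight1..weight3.
    wingspan, tail_root, tail_tip, tank = ind
    perf = (wingspan * 100.0, 200.0 / (1.0 + tail_root + tail_tip), tank * 10.0)
    return sum(w * p for w, p in zip(weights, perf))

def roulette(pop, fits):
    # Fitness-proportionate (roulette wheel) selection.
    r, acc = random.uniform(0.0, sum(fits)), 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return ind
    return pop[-1]

def crossover(p1, p2):
    # Standard single-point crossover: swap all genes beyond the cut point.
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def ga(weights, n_params=4, bounds=(0.2, 5.0)):
    pop = [[random.uniform(*bounds) for _ in range(n_params)]
           for _ in range(POP_SIZE)]
    best, stall = None, 0
    while stall < MAX_STALL:
        fits = [fitness(ind, weights) for ind in pop]
        elite = max(pop, key=lambda ind: fitness(ind, weights))
        if best is None or fitness(elite, weights) > fitness(best, weights):
            best, stall = elite, 0
        else:
            stall += 1
        children = [elite]          # elitism: the best individual survives
        while len(children) < POP_SIZE:
            c1, c2 = crossover(roulette(pop, fits), roulette(pop, fits))
            children.extend([c1, c2])
        pop = children[:POP_SIZE]   # no mutation, as in the paper
    return best
```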

3.1 Basic equations

The fitness is based on the cruise speed evaluation. The cruise speed is the
speed of maximum range for the UAV (GA individual). It is evaluated by an
“interpolation function”, which has been validated through Flow Simulation
CFD tests on a few individuals (UAVs). Equations (1-4) make it possible to
evaluate the cruise speed Vcruise starting from the maximum-efficiency thrust T of
the propeller [N], the MTOW (Maximum Take Off Weight) [N], and the wing Aw
and tail At areas [m²].
Vcruise = (−2A + 2√C) / (2B)    (1)

A = 0.44·10^0.3·MTOW − 2.44·10^0.3·T    (2)

B = 0.24·10^0.3·Aw + 7.5·10^0.3·At    (3)

C = A² − 9·10^0.3·MTOW − 5·10^0.3·Aw²    (4)
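Under the reconstructed signs of Eqs. (1)-(4) (the minus signs and square root are uncertain in the extracted equations), the interpolation can be sketched as below; the inputs in the usage line are invented, not from the paper.

```python
import math

SCALE = 10 ** 0.3  # the 10^0.3 factor appearing in Eqs. (2)-(4)

def cruise_speed(mtow, thrust, a_wing, a_tail):
    """Cruise speed [m/s] from the interpolated Eqs. (1)-(4), as reconstructed.

    mtow and thrust in [N]; a_wing and a_tail in [m^2]. Returns None when the
    discriminant C is negative (no real root of the interpolation).
    """
    a = 0.44 * SCALE * mtow - 2.44 * SCALE * thrust          # Eq. (2)
    b = 0.24 * SCALE * a_wing + 7.5 * SCALE * a_tail         # Eq. (3)
    c = a ** 2 - 9 * SCALE * mtow - 5 * SCALE * a_wing ** 2  # Eq. (4)
    if c < 0:
        return None
    return (-2 * a + 2 * math.sqrt(c)) / (2 * b)             # Eq. (1)

# Invented example: 2000 N MTOW, thrust/MTOW = 1.05, 4 m^2 wing, 0.25 m^2 tail
v = cruise_speed(mtow=2000.0, thrust=2100.0, a_wing=4.0, a_tail=0.25)
```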

4 The cost evaluation and the Montecarlo loop

The MC [2] cost optimization is the process that optimizes the weights used by the
GA loop. In addition to seeking the optimal performances, the inclusion into the
optimization of the costs introduces the concept of cost-effectiveness of the final
design (best population). This result is obtained by inserting into the algorithm
fixed costs, such as engines, PSRU (Power Speed Reduction Unit), propellers and
the cost of CFRP (Carbon Fiber Reinforced Plastic) per unit mass, autopilot,
communication equipment and sensors.

Flow diagram 1 The Montecarlo algorithm defines the


weights of the GA optimization. For each weight array a new
optimization is performed.

The aim of the cost function is to evaluate the costs as a sum of all the parameters
that contribute to the final bill for the construction, assembly, maintenance and
operations of the UAV (individual). Therefore, the cost function outputs
the total life-cycle cost. This evaluation is then used in the
Montecarlo loop, through which the optimum weights for the GA-loop fitness are
calculated. The Monte Carlo method uses the three-element weight vector as
input to the GA loop. The MC optimizes the cost-effectiveness by finding the
optimum vector that identifies the lowest costs and the best performance. For the
MC the performance is the cruise time. Therefore, the Montecarlo merit function
is the ratio between total life-cycle cost and cruise time. The best individual of the
hierarchical dual-loop optimization is shown in figure 1 (right). The results of the
optimization are summarized in Table 1.

Description           Original   Optimized
Cruise Speed [Mach]   0.9        0.7
Length [m]            1.7        0.5
Range [km]            640        1000
Mass [kg]             800        200
Wingspan [m]          1.8        2
Wingtail [m]          1.8        0.5
Power [kW]            550        150
Cost [USD×100]        1000       100

Table 1: Optimization results
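The hierarchical dual loop can be sketched as follows; ga_optimize and
lifecycle_cost are hypothetical stand-ins, since the paper does not list the
implementations of the inner GA loop or of the cost function.

```python
import random

# Schematic sketch of the hierarchical Monte Carlo / GA optimization.
# ga_optimize and lifecycle_cost are hypothetical stand-ins: the real inner
# loop evolves full UAV individuals, and the real cost function sums the
# fixed items (engine, PSRU, propellers, CFRP per unit mass, autopilot,
# communication equipment, sensors).
def ga_optimize(weights):
    # stand-in: returns (cruise_time_h, best_design) for a weight array
    return 10.0 + 20.0 * weights[0], {"weights": weights}

def lifecycle_cost(design):
    # stand-in: construction + assembly + maintenance + operations
    return 150.0 - 40.0 * design["weights"][1]

def monte_carlo_loop(n_trials=200, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        w = [rng.random() for _ in range(3)]          # three-element weight vector
        cruise_time, design = ga_optimize(w)
        merit = lifecycle_cost(design) / cruise_time  # lower is better
        if best is None or merit < best[0]:
            best = (merit, w)
    return best
```

In the real algorithm each trial re-runs the full GA with the sampled weight
array; the merit is exactly the life-cost to cruise-time ratio described above.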


GA multi-objective and experimental … 603

5 The experimental tests

In order to validate the method, a CFD simulation of the prototype with Flow
Simulation was performed [3]. Then an experimental campaign was conducted with
scaled-down prototypes. Four flight configurations were tested with the Taguchi
method: TO (Take-Off), Cruise, Slow flight and Landing. A Taguchi matrix was
defined for the experiment. The tests performed identified numerous problems in
the concept of the aircraft. The experiments started from the prototype of
Figure 3 (left), which comes directly from the MC-GA optimization, and led to
the final prototype of Figure 3 (right). Unfortunately, the tail-sitter approach
showed poor control authority in the final phase of the vertical landing in high
wind. Static stability when parked on the runway also proved to be critical due
to capsizing. The final prototype uses a stabilizing platform composed of a
steel pipe with a concrete foundation, Figure 3 (right). Even the "final"
prototype showed unsatisfactory behavior on landing in the case of erratic wind
gusts. This problem is common to the tail-sitter configuration, which requires
high-authority control by air jets or an additional propeller in the final phase
of landing. Unfortunately, this necessity makes the tail-sitter landing
configuration inconvenient for small UAVs. In the case of high winds, a
horizontal landing is adopted by the prototype of Figure 3 (right) with the aid
of a foldable propeller. Therefore, the contra-rotating propellers were
eliminated.

Fig. 3. The original (left) and the final configuration (right).
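A Taguchi-style plan of the kind described above can be illustrated as follows;
the three factors and their levels are hypothetical, since the paper reports
only that a matrix was defined for the four flight configurations.

```python
# Illustrative Taguchi-style plan for the four flight configurations.
# The factors and levels are hypothetical: the paper does not report the
# actual matrix, only that one was defined for the experiment.
L4 = [  # standard L4 (2^3) orthogonal array, levels coded 0/1
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
FACTORS = {"wind": ("calm", "gusty"),
           "cg_position": ("fwd", "aft"),
           "payload": ("light", "heavy")}

def experiment_plan(phase):
    names = list(FACTORS)
    return [{"phase": phase,
             **{n: FACTORS[n][lvl] for n, lvl in zip(names, row)}}
            for row in L4]

PLAN = [run for phase in ("TO", "Cruise", "Slow flight", "Landing")
        for run in experiment_plan(phase)]  # 16 runs vs 32 full-factorial
```

The orthogonal array tests each factor level twice per phase while keeping
column pairs balanced, which is what makes the reduced plan informative.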



6 Conclusions

The hierarchical MC-GA [4] algorithm proved to be extremely efficient in
dimensioning the most cost-effective UAV population. The SolidWorks batch CAD
modelling proved to be extremely effective for the final purpose of the
automatic modelling of the best solution. A CFD-based interpolating function is
used to evaluate the cruise speed of the different UAV designs [5-8]. However,
the experimental tests on a model of the final prototype highlighted all the
shortcomings of the tail-sitter POGO design. Therefore, the final design used a
different "flying wing" configuration with a "starting structure" to stabilize
the UAV while parked in high winds [9-10]. Vertical landing was possible only in
very low winds and horizontal landing was required on windy days.

References

1. Poli, R. et al. A Field Guide to Genetic Programming, 2008 (Lulu.com, book-freeware)
ISBN 978-1-4092-0073-4.
2. Motwani, R. and Raghavan, P. Randomized Algorithms, 1995 (Cambridge University Press,
New York) ISBN 0-521-47465-5.
3. Wong, K.C. et al. Attitude Stabilization in Hover Flight of a Mini Tail-Sitter UAV, Int. Conf.
on Intelligent Robots and Systems, IEEE/RSJ, San Diego CA, Oct. 2007, pp. 2642-2647.
4. Piancastelli, L. and Frizziero L. GA based optimization of the preliminary design of an ex-
tremely high pressure centrifugal compressor for a small common rail diesel engine, 2015
(ARPN Journal of Engineering and Applied Sciences), ISSN: 1819-6608, Volume 10, Issue
4, 2015, Pages 1623-1630
5. Piancastelli, L., Bernabeo, R.A., Frizziero L. UAV remote control distraction prevention
trough synthetic augmented virtual imaging and oculus rift-style headsets, 2015 (ARPN Jour-
nal of Engineering and Applied Sciences), ISSN: 1819-6608, Volume 10, Issue 10, 2015,
Pages 4359-4365
6. Emel'yanov, S., Makarov, D., Panov, A.I., Yakovlev, K. Multilayer cognitive architecture for
UAV control, 2016 (Cognitive Systems Research), ISSN: 1389-0417, Volume 39, 1 Septem-
ber 2016, Pages 58-72
7. Junaid, A.B., Lee, Y., Kim, Y., Design and implementation of autonomous wireless charging
station for rotary-wing UAVs, 2016 (Aerospace Science and Technology), ISSN: 1270-9638,
Volume 54, 1 July 2016, Pages 253-266
8. Negrello, F., Silvestri, P., Lucifredi, A., Guerrero, J.E., Bottaro, A., Preliminary design of a
small-sized flapping UAV: II. Kinematic and structural aspects, 2016 (Meccanica), ISSN:
0025-6455, Volume 51, Issue 6, 1 June 2016, Pages 1369-1385
9. De Marchi, L., Ceruti, A., Marzani, A., Liverani, A. Augmented reality to support on-field
post-impact maintenance operations on thin structures, 2013 (Journal of Sensors), ISSN:
1687-725X, Volume 2013, 2013, Article number 619570
10. Liverani, A., Leali, F., Pellicciari, M., Real-time 3D features reconstruction through monocu-
lar vision, 2010 (International Journal on Interactive Design and Manufacturing), ISSN:
1955-2513, Volume 4, Issue 2, May 2010, Pages 103-112
Part V
Computer Aided Design and
Virtual Simulation

In recent decades Computer Aided Design and Virtual Simulation have seen a
revolution in the technology and range of applications available. The focus of
this track is therefore to explore the means by which this group of technologies
has emerged from the use of computers to provide an entirely digital environment
covering the whole engineering design process, thus speeding up the transition
from preliminary design to camera-ready and functional products. It is worth
noting that the main pillar supporting this track turns out to be an already
well-known and mature discipline, Computer Graphics. In fact, this section
comprises several topics going through the different structuring components of
Computer Aided Design and Virtual Simulation, such as Simulation and Virtual
Approaches, Virtual and Augmented Reality, Reverse Engineering, Geometric
Modeling and Analysis, Product Data Exchange and Management, Surveying, Mapping
and GIS techniques and, finally, Building Information Modeling.
The topic Simulation and Virtual Approaches is composed of different papers
focused on the description of novel and efficient models of complex systems to
simulate several application case studies and optimise their specific
performance. Significant attention is given to the simulation of manufacturing
processes, especially those concerning automated robotic functions in the
advanced manufacturing of components. Process functions such as multi-robot pick
and place, cutting, milling and deburring are specifically described. Simulation
also proves to be a valuable tool to support the design of component properties,
subsystems or systems, such as sliding contact forces, tolerances in automotive,
solder joint reliability, motorcycle suspensions, hydraulic components and
Active Noise Control (ANC) systems. Finally, one paper deals with the use of
Virtual Modelling to gain knowledge of the past heritage of machine design,
presenting an example of an Industrial Archaeological Study.
In the case of the topic Virtual Reality and Augmented Reality, two papers are
presented. The first one implements Virtual Reality to evaluate the visual
(landscape) impact of the installation of wind energy systems. In line with the
growing attention to Augmented Reality in maintenance and manufacturing
processes, the second paper concerns a case study in this field.
606 Part V: Computer Aided Design and Virtual Simulation

The papers belonging to the topic of Reverse Engineering prove the growing
number of research activities boosted by this emerging discipline. These
activities concern both the 3D scanning process and the development of original
methods for processing point clouds to recognize specific features in different
areas (industrial, biomedical, etc.). Regarding the acquisition phase, we can
highlight a paper that aims to bridge the gap of an economic scanning system for
capturing full 3D human body models through low-resolution depth cameras.
Through in-depth and critical reviews, three papers analyze the state of the art
of point cloud processing methods, both within classic industrial engineering
and in increasingly important areas such as biomedicine. Finally, a paper
demonstrates the potential of a reverse engineering based method to develop a
semiautomatic process for repairing forging tools.
Within the topic of Geometric Modeling and Analysis, we can find interesting
applications proving that Virtual Reality is not just science or reasoning about
the physical world. Actually, all reasoning, all thinking and all external
experience are forms of virtual reality. For example, Geometric Modeling
supports Industrial Archeology to rescue old and maybe forgotten mechanisms, or
provides new geometrical features to describe the three-dimensional face. It can
also be very valuable in designing a huge range of mechanisms, such as a
Stirling engine or a spiral bevel gear. Finally, it proves to be extremely
useful to cope with complex optimization problems related to the design of
tissue engineering scaffolds, the structural parts of a racing solar car or even
the geometric shape of organic solar cells.
In the context of the topic Product Data Exchange and Management, it can be
underlined that product data and information are still of prime interest for
computer-aided design and simulation in order to speed up the preparation
process and improve collaboration. Two of the presented papers focus on data
exchange and interoperability for improving collaboration during the design
phase. The first one applies such approaches in the smart oil & gas production
industry. The second one proposes a CAD add-on for meeting preparation. The last
paper opens on a new domain: medicine. With more and more digital frameworks to
support medical tool design, and looking for better collaboration support during
this phase, this paper explores the challenge of introducing the widely known
PLM strategy in this context. In this sense, this paper draws an interesting
portrait of opportunities and risks.
Entering into the topic of Surveying, Mapping and GIS techniques, the emergence
in recent years of Computer Vision methods based on Structure from Motion with
Multi-View Stereo has revolutionized 3D topographic surveys by significantly
boosting efficiency in collecting and processing data. In this sense, we can
highlight the presence of a paper that tests the potential use of unmanned
aerial vehicles as a platform to flexibly obtain sequences of images along
coastal areas, from which high quality and dense 3D geospatial data can be
efficiently produced.
The last paper focuses on the topic of Building Information Modeling (BIM),
i.e. applying the PLM approach to building data. It discusses path planning
within a building by using both BIM and IFC data to either improve building
design or design evacuation.

Fernando Aguilar - Univ. Almeria

Vincent Cheutet - DISP INSA Lyon

Francesca De Crescenzio - Univ. Bologna

Luca Di Angelo - Univ. L'Aquila


Section 5.1
Simulation and Virtual Approaches
An integrated approach to design an innovative
motorcycle rear suspension with eccentric
mechanism

R. Barbagallo1*, G. Sequenzia1, A. Cammarata and S. M. Oliveri1


1
University of Catania, Catania, Italy
*
Corresponding author – email: rbarbaga@dii.unict.it

Abstract In the present work, by means of an integrated approach, a new rear
suspension for motorcycles, able to achieve the required progressiveness in
terms of rigidity by using a constant-stiffness spring and a compact mechanism,
has been studied. The key component is an eccentric system inserted in the shock
absorber head. As reference, we analyzed the rear suspension of the Ducati
Multistrada MY 2010, characterized by the use of a variable-stiffness spring.
The aim of the paper is to prove that the new proposed solution can obtain a
response, in terms of load to the wheel, similar to that of the actual system.
First, a mathematical model to simulate the kinematics of the new suspension is
presented. This model evaluates the influence of the geometric dimensions of the
components, successfully checking the ability to reproduce the behavior of the
original suspension. After the preliminary design, the kinematic and static
models are included within an ad-hoc optimization algorithm created to calculate
the exact dimensions of each component. Two Matlab/Simulink® lumped mass models,
referring respectively to the novel and reference suspensions, are used to
compare the dynamic responses while travelling over a particular road profile
used in Ducati's experimental tests. Finally, an accurate modeling of the
components, also considering the production processes to be used for their
creation, is provided.

Keywords: Dynamics; motorcycle rear suspension; integrated simulation

© Springer International Publishing AG 2017 611


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_61
612 R. Barbagallo et al.

1 Introduction

The main function of the suspension of a vehicle is to maximize comfort by
reducing the vertical acceleration of the passenger seat. To achieve this goal
the rear suspension of a motorcycle should have a progressive wheel rate. A
non-progressive behaviour would be obtained by connecting a spring (with
constant stiffness) and a viscous damper (with constant damping) directly to the
swing-arm. A non-linear behaviour can instead be obtained using a mechanism
while keeping the stiffness of the spring and the damping of the shock absorber
constant. In motorcycles, such systems are usually planar mechanisms (i.e.
four-bar linkages) placed between the rear wheel swing-arm and the rear
spring-damper to achieve a non-linear wheel rate [1]. There are various types of
rear suspensions for motorcycles [2-3] with different solutions to achieve the
required progressiveness of response in terms of rigidity. Such solutions can be
divided into two groups: those that obtain such progressivity using
variable-stiffness springs, with the shortening of the shock absorber
proportional to the movement of the wheel relative to the frame [4], and those
based on fixed-stiffness springs that resort to mechanisms which allow the law
of shortening to be changed [5]. Generally, these mechanisms include relatively
bulky and heavy components, while variable-stiffness springs are significantly
more complex than their constant-stiffness counterparts.
In the present work, by means of an integrated approach [6], a new rear suspen-
sion for motorcycles, able to achieve the required progressiveness in terms of ri-
gidity by using a constant-stiffness spring and a compact mechanism, has been
studied. The novel system is also able to yield a continuous stiffness in the
wheel travel/wheel load curve, improving ride comfort by reducing the vertical
acceleration of the suspended masses.

2 Methods

The basic idea of the present work is to obtain a new and compact rear
suspension with a behaviour similar to that of a suspension with variable spring
stiffness, in terms of the Wheel Load (WL) / Wheel Travel (WT) curve. The trend
of the WL depends essentially on the stiffness of the spring and on the Lever
Ratio (LR) of the suspension. The latter is the ratio between the Wheel Ordinate
(WO) and the shock-absorber elongation. Usually, suspensions are required to
respond with increasing stiffness to increasing WO. In the suspension taken as a
reference, such an effect is obtained thanks to a spring which varies its
rigidity at a certain value of WO; this implies that two lines with different
slopes represent the trend of the WL in terms of WT. The new suspension is
required to have similar progressivity but using a single-stiffness spring
instead of the double-stiffness spring of the original. A constant spring
stiffness requires that the progressivity of the load be given by the geometry
and, in particular, by a very marked decrease of LR for growing values of WT. In
this way, a non-linear trend of WL, able to approximate the double linear
behaviour of the original suspension, can be obtained. The solutions generally
adopted for this purpose have the drawback of providing bulky and heavy
components. Here, an eccentric link pivoted to the frame and inserted into the
shock's head is proposed. In this way, it is possible to obtain the wanted
variation of LR with reduced overall dimensions. The layout of the system is
represented in Figure 1, where the eccentric link is schematically indicated and
the external constraints are relative to the frame. As can be seen in the
figure, the pivot constrained to the frame is achieved through the eccentric
component linked to the damper's head, in turn attached to the head of the
connecting rod. The latter is connected to the damper's base. Thanks to this
mechanism, the wheel travel corresponds to the shortening of the damper length
through a variable LR. In particular, LR is determined by the ratio between the
distances from the pivot point to the damper and connecting rod heads.

Fig. 1. Layout of the novel suspension system.
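The role of the eccentric can be sketched numerically; the lengths below are
illustrative (the optimized values appear later in Table 1), and the relation
WL = spring force / LR is the simplified reading of the lever-ratio definition
given above.

```python
# Sketch of how a decreasing lever ratio LR makes the wheel load
# progressive even with a constant-stiffness spring. Values are
# illustrative, not the optimized geometry of Table 1.
def lever_ratio(d_damper_head, d_rod_head):
    """LR from the distances of the eccentric pivot to the two heads."""
    return d_damper_head / d_rod_head

def wheel_load(k_spring, damper_shortening, lr):
    """WL from the spring force through the (instantaneous) lever ratio."""
    return k_spring * damper_shortening / lr

# as the suspension compresses, the eccentric rotates and LR drops,
# so the same spring rate produces a stiffer response at the wheel
wl_soft = wheel_load(90.4, 10.0, lever_ratio(40.0, 20.0))  # LR = 2.0
wl_hard = wheel_load(90.4, 10.0, lever_ratio(30.0, 20.0))  # LR = 1.5
```

The marked decrease of LR with growing WT is thus what replaces the
double-stiffness spring of the reference design.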

2.1 Kinematics

To evaluate the feasibility of the solution and to analyse the influence of the
geometrical parameters on the suspension behaviour, a mathematical model has
been developed. The latter simulates the inverse positioning problem, giving the
location of all points of the mechanism, plotted in Figure 2, for an assigned
position of the wheel. In Figure 2, the angles characterizing the eccentric
component and the fixed lengths that geometrically describe the novel suspension
are marked with green points; the variables that identify the positioning of the
mechanism in space are blue-coloured. The reference system, centred at the swing
arm-frame hinge, is shown in red. The fundamental reference parameter, taken as
input for the inverse positioning problem, is WO. The spring with variable
length HG is in series with a prismatic joint between elements HG and CH. Points
C, H and G must necessarily remain aligned during the motion, as all belong to
the damper. The circles represent the revolute pairs while the prismatic one is
represented by a rectangle. The link EFG is the eccentric link hinged to the
frame at point F and connected to the damper head at point G. Point E is linked
to the connecting rod. The system has only one degree of freedom (dof), as
obtained from the application of Grübler's formula for planar mechanisms.
Consequently, by setting the value of WO it is possible to know the location of
all points of the system. For a given WO the inverse positioning problem gives
two solutions. The resolution of the non-linear system of equations,
representing the relationships between the geometric variables, was obtained
using the Matlab software. By defining the range of variation of WO, the code
solves the mechanism for all desired configurations, as shown in Figure 3.

Fig. 2. Kinematics and notation of the mechanism

Fig. 3. Inverse positioning problem solutions varying WO
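The inverse positioning solve can be illustrated with a drastically simplified
one-link closure; the swing-arm length is a hypothetical value, and the real
model solves the full nonlinear system of the mechanism in Matlab.

```python
import math

# Toy sketch of the inverse positioning problem: for an assigned wheel
# ordinate WO we solve a loop-closure equation for the swing-arm angle.
# This single-link geometry is hypothetical and far simpler than the real
# mechanism, which requires a full nonlinear system solve.
SWING_ARM = 500.0  # mm, hinge-to-wheel distance (assumed)

def closure(theta, wo):
    # residual of the loop closure: vertical wheel position minus target WO
    return SWING_ARM * math.sin(theta) - wo

def solve_theta(wo, lo=-math.pi / 2, hi=math.pi / 2, tol=1e-9):
    # bisection on the closure residual; the full problem has two
    # solutions, and the bracket selects which one is kept
    f_lo = closure(lo, wo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = closure(mid, wo)
        if abs(f_mid) < tol:
            return mid
        if f_lo * f_mid < 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

theta = solve_theta(120.0)  # configuration for WO = 120 mm
```

Sweeping WO over its range and re-solving at each value reproduces the family
of configurations plotted in Figure 3.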

2.2 Statics

Within the same code, we implemented the calculation of the static forces to
which the suspension components are subjected during operation. The calculation
has been made by considering the different configurations assumed by the
mechanism and imposing the following parameters: spring stiffness (K), spring
preload and the force produced by the gas inside the damper.
Then, let us consider the spring at rest when the damper is completely extended
and let us introduce the damper elongation ld w.r.t. the rest-condition length
l0. By means of the principle of virtual work it is possible to equate the work
done by the external forces (in this case only the vertical force FB applied at
the wheel) to that performed by the internal forces, i.e. the elastic force Fel,
obtaining:

FB = Fel · (ld − l0) / (WOd − WO0)    (1)

For each configuration, the values of FB and Fel are the input data from which
to obtain the entire system of forces. From the free-body diagrams used to
derive the statics equations, reported in Figure 4, the static equilibrium of
each component has been set.

Fig. 4. Free-body diagram for the components of the novel suspension
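Reading equation (1) as the ratio of the damper-elongation increment to the
wheel-ordinate increment, the wheel force follows directly; the numbers in this
sketch are illustrative only.

```python
# Virtual-work balance of Eq. (1), as we read it from the typesetting:
# FB * (WOd - WO0) = Fel * (ld - l0). Numbers are illustrative.
def wheel_force(f_el, l_d, l_0, wo_d, wo_0):
    """FB from the elastic force Fel and the two displacement increments."""
    return f_el * (l_d - l_0) / (wo_d - wo_0)

f_b = wheel_force(f_el=2000.0, l_d=310.0, l_0=300.0, wo_d=120.0, wo_0=100.0)
```

Evaluating this at each solved configuration gives the load input for the
component-by-component equilibrium of Figure 4.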

Then, by implementing the numerical resolution of the system, it is possible to
assess the forces at every point of the mechanism for the different
configurations of the suspension. An optimization algorithm to better
approximate the behaviour of the reference suspension has also been implemented.
The target defined for the optimization algorithm was the minimization of the
distance, calculated along the ordinate axis, between the WL/WT graphs of the
novel and reference suspensions, at the value of WT at which the original
suspension changes its stiffness. As constraints for the novel mechanism curve,
passage through the end-points has been imposed.

2.3 Dynamics

In order to assess the behaviour of the suspension subjected to the time-varying
stresses coming from road irregularities, two models, for the proposed and
reference suspensions, have been implemented in the Simulink environment. In
particular, we adopted a Half-Vehicle model that describes the vertical dynamics
of half of the vehicle, focusing the analysis on the rear wheel and on its
suspension system. In order to use such a model, however, it was necessary to
obtain an equivalent system, shown in Figure 5 (left), which takes into account
the suspension geometry.
Indicating with M1 and M2, respectively, the masses of the suspended and
non-suspended bodies, the free-body diagram of the resulting double-mass lumped
model is shown in Figure 5 (right).

Fig. 5. Double-mass model and equivalent system: layout and free body diagram

The variables X1 and X2 respectively represent the vertical coordinates of the
centres of gravity of the suspended and non-suspended masses, while the input
signal W describes the road surface profile.
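A minimal time-domain sketch of this double-mass model is given below; all
parameter values are illustrative (not the paper's), and a plain explicit-Euler
step replaces the Simulink solver for brevity.

```python
# Minimal sketch of the double-mass (half-vehicle, rear end) lumped model:
# suspended mass M1 over non-suspended mass M2 over the road profile W.
# All parameter values are illustrative, not the paper's.
M1, M2 = 180.0, 25.0          # kg, suspended / non-suspended masses
K_S, C_S = 25_000.0, 1_800.0  # N/m, N s/m, equivalent spring and damper
K_T = 180_000.0               # N/m, tyre radial stiffness

def step(state, w, dt=1e-4):
    x1, v1, x2, v2 = state                      # X1, X2 and their rates
    f_susp = K_S * (x2 - x1) + C_S * (v2 - v1)  # suspension force on M1
    f_tyre = K_T * (w - x2)                     # tyre force from road input W
    a1 = f_susp / M1
    a2 = (f_tyre - f_susp) / M2
    return (x1 + v1 * dt, v1 + a1 * dt, x2 + v2 * dt, v2 + a2 * dt)

# traverse a 30 mm step bump and record the suspended-mass acceleration
state, acc1 = (0.0, 0.0, 0.0, 0.0), []
for i in range(20_000):
    w = 0.03 if i > 1_000 else 0.0
    state = step(state, w)
    acc1.append((K_S * (state[2] - state[0]) + C_S * (state[3] - state[1])) / M1)
```

Feeding a measured road profile in place of the step input yields the
acceleration traces used in the comparisons below.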
Before proceeding with the dynamic analysis, it is necessary to derive the
static conditions. In particular, the WL value resulting from the application of
a load Fpr, representing passengers and baggage at the saddle position, must be
defined.

Fig. 6. Static equilibrium under a load Fpr acting at the saddle position

Referring to Fig. 6, let us consider the motorcycle in static equilibrium under
the application of the load Fpr; the balance of moments w.r.t. the theoretical
contact point between the front wheel and the ground is obtained through the
expression:

WLCS = Fpr · (i − a) / i + Fp · p / i    (2)

where:
i = motorcycle wheelbase
a = distance, measured along the x-axis, between the Fpr application point and
the rear wheel-ground plane contact point
p = distance, measured along the x-axis, between the centre of gravity G and the
rear wheel-ground plane contact point
Fp = motorcycle weight force
Fpr = load applied at the seat position

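Equation (2) can be evaluated directly; the geometry and loads in this sketch
are illustrative, not the Multistrada's.

```python
# Static rear wheel load of Eq. (2): moment balance about the front tyre
# contact point. The geometry and loads below are illustrative values.
def rear_wheel_load(f_pr, f_p, i, a, p):
    """WL_CS = Fpr*(i - a)/i + Fp*p/i, with wheelbase i and arms a, p."""
    return f_pr * (i - a) / i + f_p * p / i

wl_cs = rear_wheel_load(f_pr=1500.0, f_p=2200.0, i=1.53, a=0.25, p=0.70)
```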
3 Results

The optimum parameters of the new suspension able to approximate the behaviour
of the original suspension have been identified through a constrained
optimization process, as reported in Table 1.
Figure 7 reports the comparison between the curves representing the WL of the
new and original suspensions. As can be observed, the results of the proposed
suspension are in very good agreement with those of the original suspension
(maximum error about 3.8%). It should also be noted that, while the original
suspension is characterized by a curve with a discontinuity at the change of
spring stiffness, the new suspension varies its rigidity continuously.

Table 1. Optimal parameters of the novel suspension.

Parameters Units Value


CH [mm] 57.0
DH [mm] 36.2
DE [mm] 258.4
EF [mm] 42.2
FG [mm] 8.8
ω [deg] 85.1
K [N/mm] 90.4
Pre-load [mm] 24.0
Gas force [N] 84.5

Fig. 7. WL/WT comparison between the proposed and the original suspensions.

Once the optimal parameters were obtained, we analysed the equivalent dynamic
models of the two suspensions with the Simulink software. In particular, the
traversal of different road bumps (30, 60 and 90 mm in height) has been
simulated; the first of these is used in Leyni's test bench (used at Ducati for
suspensions).
The performance evaluation of a suspension is extremely subjective. A
passenger's judgement is based on personal feelings, which may vary from person
to person. However, there are some objective criteria to quantify the
performance of a vehicle suspension, and one of these is the vertical component
of acceleration. Several studies [7-8] have shown that acceleration is the main
parameter directly connected to the driver's feeling of comfort. A further
criterion comes from the minimization of the changes in the suspended masses'
height w.r.t. the roadway. Based on these considerations, we decided to analyse
the acceleration of the non-suspended masses of the new suspension and to
compare these values with those of the original suspension (Figure 8).
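The comparison criterion then reduces to the standard deviation of the sampled
acceleration trace; the two short traces below are illustrative, not the
simulated signals behind the reported 0.187 m/s2.

```python
import math

# Comfort criterion used for the comparison: the standard deviation of the
# vertical acceleration signal. The sample traces are illustrative only.
def std_dev(samples):
    mean = sum(samples) / len(samples)
    return math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

# e.g. two simulated acceleration traces sampled at the same instants
a_new = [0.0, 0.4, -0.3, 0.2, -0.1]
a_ref = [0.0, 0.6, -0.5, 0.3, -0.2]
better = std_dev(a_new) < std_dev(a_ref)
```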

Analysing the graph reported in Figure 8, a similar dynamic response (standard
deviation equal to 0.187 m/s2), with peaks slightly less pronounced for the new
suspension, was obtained.
Once the new suspension had been verified from the point of view of kinematics
and dynamics, a 3D CAD model was created using the commercial software Autodesk
Inventor. In analysing the parts of the suspension from the structural point of
view, we considered only the maximum static loads. The material chosen for the
components is Avional 14, an aluminium alloy with Young's modulus E = 72500 MPa
and yield stress σsn = 345 MPa. This choice was made because it is commonly used
at Ducati in component manufacturing. Finally, we calculated the stresses in
each component due to the maximum loads. All the checks carried out have shown
that the components are subjected to stresses significantly below yield, with
safety factors greater than 3. The final suspension 3D model bound to the swing
arm is shown in Figure 9.

Fig. 8. Comparison between the vertical accelerations of the suspended masses.

Fig. 9. Final render model of the novel suspension.



4 Conclusions

A novel rear suspension for motorcycles has been presented. An eccentric
mechanism allows reproducing the characteristics of a reference suspension of
the Ducati Multistrada MY 2010 based on a variable-stiffness spring. Here we
have demonstrated that the novel mechanism yields the same response in terms of
wheel load/wheel travel. Besides, the progressiveness of this curve can be
changed by varying the geometric parameters of the mechanism, revealing great
adaptability to the driver's request. A lumped model with two masses and
equivalent springs/dampers has given good performance in terms of comfort on
Leyni's test bench. Finally, after structural verification, a compact design has
been developed, demonstrating that the increased complexity does not imply a
severe change in the mass budget. Besides, the presence of a constant-stiffness
spring, the continuity of the wheel load/wheel travel curve and the opportunity
of adjusting the stiffness of the suspension to match different driving
conditions make this system promising for future developments and manufacturing.

Acknowledgments Authors would like to thank Eng. Simone Di Piazza and Eng. Stefano Isani
from Ducati Motor Holding SpA for their support.

References

1. Noriega, A., Mántaras, D. A., & Blanco, D. (2014). Kinetostatic Benchmark of Rear Suspen-
sion Systems for Motorcycle. In New Advances in Mechanisms, Transmissions and Applica-
tions, pp. 1-8 (Springer Netherlands).
2. Cossalter, V. (2006). Motorcycle dynamics (Lulu.com).
3. Cossalter, V., Lot, R., & Massaro, M. (2014). Motorcycle Dynamics. Modelling, Simulation
and Control of Two-Wheeled Vehicles, pp. 1-42 (John Wiley & Sons).
4. Bradley, J. (1996). The Racing Motorcycle: A Technical Guide for Constructors. Gearing,
Geometry, Aerodynamics and Suspension (Broadland leisure publications).
5. Croccolo, D., and De Agostinis, M. (2013). The Rear Suspension Equilibrium. In Motorbike
Suspensions, pp. 17-31 (Springer London).
6. Nadeau, J. P., and Fischer, X. (2011). Research in interactive design (Vol. 3): virtual, interac-
tive and integrated product design and manufacturing for industrial innovation (Springer Sci-
ence & Business Media).
7. Jazar, R. N. (2013). Vehicle dynamics: theory and application. Springer Science & Business
Media.
8. Cossalter, V., Doria, A., Garbin, S., & Lot, R. (2006). Frequency-domain method for evaluat-
ing the ride comfort of a motorcycle. Vehicle System Dynamics, 44(4), 339-355.
Design of Active Noise Control Systems for
Pulse Noise

Alessandro LAPINI1*, Massimiliano BIAGINI1, Francesco BORCHI1, Monica
CARFAGNI1 and Fabrizio ARGENTI2
1
Department of Industrial Engineering, University of Florence, via S. Marta, 3, Florence
2
Department of Information Engineering, University of Florence, via S. Marta, 3, Florence
* Corresponding author. Tel.: +39-055-275-8687; fax: +39-055-275-8755. E-mail address:
alessandro.lapini@unifi.it

Abstract Active noise control (ANC) methods have been successfully studied and
tested for the cancellation of stationary noise. In the last decade, some
adaptive solutions for the case of impulsive noise have been proposed in the
literature. Nevertheless, such models fit only a limited class of the impulsive
disturbances that characterize practical scenarios. In this paper a preliminary
study on the design of a non-adaptive deterministic ANC system for pulse
signals, relying on no statistical assumptions, is developed. The spatial audio
rendering framework of Wave Field Synthesis is formally adopted in order to
synthesize the cancelling sound field by means of an array of secondary sources.
A set of preliminary simulations in a free-field environment, assessing also the
impact of array geometry and extension, has been carried out in view of the
forthcoming geometry and shape optimization of the system.

Keywords: Active noise control, Wave field synthesis, Pulse noise, Array de-
sign, Virtual sources.

1 Introduction

Active noise control (ANC) techniques aim at reducing the effect of a noisy
acoustic source by means of a cancelling sound wave generated by a suitable
array of secondary sources. In the past decades, interest in the design of ANC
systems has grown considerably because of their advantages over passive methods
in abating low-frequency noise [1]. ANC techniques have been applied in various
industrial applications such as active noise reduction headsets, long-duct noise
cancellation systems and cabin noise cancellation.
A key feature shared by most of the existing ANC techniques is adaptivity. The
primary noise and the residual signal (that is, the signal remaining after the appli-
© Springer International Publishing AG 2017 621
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_62
cation of the cancelling wave) are measured by means of reference and error microphones, respectively. Then, the acquired signals are continuously processed in order to control an adaptive ANC system that pilots the array of secondary sources by varying its response in real time. Adaptive algorithms, such as the filtered-x least mean square (FxLMS) [2] and its extensions, have been effectively exploited in scenarios where the noise can be modeled as a stationary process [3].
The application of ANC systems in practical situations where the target noise is
impulsive is still limited. Typical examples are represented by man-made disturb-
ances, such as sounds generated by manufacturing plants, vehicle transit, punching
machines or construction sites. The interest in these issues is also witnessed
by European efforts in funding research projects and feasibility studies aiming at
increasing, for instance, acoustical comfort in urban and peri-urban areas [4, 5].
In the scenarios mentioned above, FxLMS-based algorithms might fail to correctly adapt their response, causing a degradation of the cancellation performance and raising stability problems. Extensions of adaptive ANC techniques to the case of impulsive noise, based on the minimization of the p-norm (1 ≤ p < 2) [6], robust statistics [7] and hard thresholding [8], have been proposed. Nevertheless, such methods rely on statistical assumptions about the noise process.
The aim of this paper is to address the problem of designing ANC systems for impulsive acoustic disturbances by means of a non-statistical and non-adaptive approach, in order to deal with a broader class of scenarios than the ones mentioned above. Indeed, there exist practical situations where the typical framework of impulsive noise does not fit the experimental scenario, such as, for example, shooting ranges. In such cases, the statistical modeling of the noise is quite complex and the event is very short, so the adaptive cancelling procedure can barely be performed. On the contrary, in this paper we only assume that the acoustic disturbance is a deterministic train of short pulses, whose occurrences can be unpredictably shaped and distributed across time. For the sake of brevity, we will refer to this characterization as pulse noise, to distinguish it from the impulsive-noise framework.
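As an illustration of this pulse-noise model, the following minimal sketch builds a deterministic train of short pulses with arbitrary arrival times, shapes and amplitudes. The sampling rate, pulse envelope and carrier frequency below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def pulse_train(fs, duration, arrivals, widths, amplitudes):
    """Deterministic train of short pulses: each pulse is an exponentially
    decaying burst placed at an arbitrary arrival time (no statistical model)."""
    t = np.arange(int(fs * duration)) / fs
    x = np.zeros_like(t)
    for t0, w, a in zip(arrivals, widths, amplitudes):
        env = np.exp(-np.maximum(t - t0, 0.0) / w) * (t >= t0)  # one-sided decay
        x += a * env * np.cos(2 * np.pi * 800.0 * (t - t0))     # short burst
    return t, x

# Three pulses with unequal spacing, duration and amplitude.
t, x = pulse_train(fs=5120, duration=1.0,
                   arrivals=[0.10, 0.35, 0.82],
                   widths=[0.005, 0.003, 0.008],
                   amplitudes=[1.0, 0.6, 0.9])
```

Each pulse here is a damped 800 Hz burst; under the pulse-noise assumption, any other short waveform would serve equally well.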
In order to cope with the above issues, the design of a deterministic ANC system for pulse noise based on Wave Field Synthesis (WFS) is considered in this paper. WFS is a theoretical framework introduced by Berkhout [9] that allows the synthesis of a desired sound field in a specific area by means of appropriate secondary sources. Unfortunately, WFS theory strictly requires unfeasible geometries, such as infinite arrays consisting of elementary infinitesimal sources; thus, several approximations are usually introduced in the practical design process.
The idea of using the WFS framework to synthesize virtual “anti-sources” whose acoustic field is used in a destructive manner has been previously proposed, e.g. in [10, 11]. In this paper, the idea is formalized and specifically applied to the case of pulse noise. By considering a preliminary design of a WFS-based ANC system operating in a free-field environment, the impact of the most important design variables, such as array geometry and extension (aperture), is studied. We make use of numerical simulations in 2-D and 3-D scenarios in order to assess the
cancellation performance. Regarding the specific case of the 3-D scenario, solutions based on both planar and linear arrays are designed and tested, in view of a forthcoming geometry and shape optimization of the system. The paper is organized as follows: Section 2 summarizes the concepts of WFS. The formalization of WFS-based ANC systems is proposed in Section 3. Section 4 presents the results obtained in the simulations, as well as a discussion of the design parameters. Finally, conclusions and remarks are presented in Section 5.

2 Practical Wave Field Synthesis solutions
According to the Kirchhoff–Helmholtz integral [9], the pressure field inside a volume V generated by a distribution of primary acoustic sources can be synthesized by a continuous distribution of secondary elementary sources placed over the boundary of V, i.e. the surface S. Unfortunately, only a few geometrical configurations have an analytical closed form that can be easily applied.
A typical example is the Rayleigh I integral. For instance, in a 2-D space, we consider a line L that divides the plane in two parts and we assume that the primary sources are completely contained in one of the two half-planes. It can be shown that the sound pressure field P generated in the half-plane not containing the primary sources can be equivalently synthesized by an infinitely extended linear array of elementary monopoles lying on the line L:
P(r, ω) = −(1/2π) ∫_L G(r_L | r, ω) ∇_{r_L} P(r_L, ω) · n_{r_L} dL ,   (1)

where P(r, ω) denotes the pressure field at point r and angular frequency ω; r_L ∈ L is a point on the line L; n_{r_L} represents the normal to L in r_L, pointing outward from the half-plane containing the sources. Moreover, ∇_{r_L} is the gradient evaluated in r_L and G(r_L | r, ω) is the appropriate Green function for the 2-D space.
The above relation means that the effect of the primary sources can be synthesized (or virtualized) by knowing the values of the pressure field’s gradient ∇_{r_L} P(r_L, ω) that the primary sources induce on the line L. Indeed, such values are used to excite an array of elementary monopole secondary sources G(r_L | r, ω).
An analogous formulation for a 3-D space is
P(r, ω) = −(1/4π) ∫_A G(r_A | r, ω) ∇_{r_A} P(r_A, ω) · n_{r_A} dA ,   (2)

which states that the pressure can be virtualized by an infinitely extended planar array of elementary monopoles lying on a plane A, where n_{r_A} is the normal to A. For a homogeneous 3-D medium, assuming that the primary source is a monopole located in r_0 with excitation S(ω) and that we are mainly interested in the acoustic
field on a plane B passing through r_0, a simplified solution can also be considered. The 2D½ Rayleigh I integral [12] states that the pressure field can be approximated by means of a linear array L of secondary sources lying on B:
P(r, ω) ≈ −g_0 ∫_L G(r_L | r, ω) √(jω |r_L − r_0| / (2πc)) S(ω) cos(φ_inc) G(r_0 | r_L, ω) dL ,   (3)

where c is the speed of sound in the medium; φ_inc is the angle between the normal to the array line L lying on B and the line passing through r_0 and r; g_0 is a positive constant lower than 1. Since eq. (3) is only an approximation, the quality of the synthesized field is expected to become increasingly poor as r moves farther from the plane B [12].
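The two building blocks of eq. (3) can be sketched numerically as follows. This is a minimal sketch assuming the common free-field convention G(r | r_0, ω) = e^{−jωd/c}/(4πd) for the 3-D Green function and the square-root form of the 2D½ amplitude factor; the function names are illustrative:

```python
import numpy as np

C0 = 343.0  # speed of sound in air [m/s]

def green_3d(r, r0, omega, c=C0):
    """Free-field 3-D Green function of a monopole, using the
    e^{-j omega d / c} / (4 pi d) convention assumed here."""
    d = np.linalg.norm(np.asarray(r, float) - np.asarray(r0, float))
    return np.exp(-1j * omega * d / c) / (4 * np.pi * d)

def factor_2d5(rL, r0, omega, c=C0):
    """Amplitude correction sqrt(j omega |rL - r0| / (2 pi c)) appearing
    in the 2D1/2 Rayleigh I integrand of eq. (3)."""
    d = np.linalg.norm(np.asarray(rL, float) - np.asarray(r0, float))
    return np.sqrt(1j * omega * d / (2 * np.pi * c))
```

The magnitude of green_3d decays as 1/(4πd), so at d = 1 m it equals 1/(4π), as expected for a point source.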
3 WFS for active noise control of pulse signals
The primary noise source is represented by an acoustic monopole located in a homogeneous medium at r_0 and excited at time t by a pulse signal s(t). Such an excitation induces a sound pressure field p(r, t) at position r. Due to its limited duration, s(t) is generally a wideband signal having Fourier transform S(ω); analogously, the pressure field is related to its Fourier transform P(r, ω) according to
p(r, t) = (1/2π) ∫_{−∞}^{+∞} P(r, ω) e^{jωt} dω .   (4)

We aim to synthesize a cancelling pressure field, i.e.
P̂(r, ω) = −P(r, ω) ,   (5)
in the half-space not containing the primary source. In order to achieve this goal, we need to derive the excitation signal Ŝ_i(ω) of each secondary source. In a 2-D scenario, the Rayleigh I integral in eq. (1) can be substituted into eq. (5) and combined with eq. (4) to obtain the cancelling linear array on the line L, yielding
p̂(r, t) = (1/2π) ∫_{−∞}^{+∞} [ Σ_i G(r_i | r, ω) (Δ_L/2π) ∇_{r_i} P(r_i, ω) · n_{r_i} ] e^{jωt} dω ,   (6)
where, for practical purposes, the integral over L has been replaced by a summation over discrete spatial sources uniformly spaced by Δ_L and located at positions r_i. Equality in the previous relation strictly holds if Δ_L ≤ πc/ω_max, where ω_max is the maximum frequency such that S(ω) ≠ 0; otherwise, antialiasing strategies have to be considered
[12]. By matching eq. (4) with eq. (6), we conclude that the i-th monopole belonging to the cancelling array has frequency excitation Ŝ_i(ω):
Ŝ_i(ω) = (Δ_L/2π) ∇_{r_i} P(r_i, ω) · n_{r_i} .   (7)

For the 3–D space, the excitation of a monopole in a cancelling planar array
can be derived as an extension of the previous case by considering eq. (2)
Ŝ_{iq}(ω) = (Δ_I Δ_Q/4π) ∇_{r_{iq}} P(r_{iq}, ω) · n_{r_{iq}} ,   (8)

where i, q are sampling indices along the orthogonal directions of the plane A, with sampling intervals Δ_I and Δ_Q, respectively.
Finally, for the cancelling linear array in the 3-D space given by the 2D½ Rayleigh I integral in eq. (3), the excitations are given by
Ŝ_i(ω) = Δ_L g_0 √(jω |r_i − r_0| / (2πc)) S(ω) cos(φ_inc) G(r_0 | r_i, ω) .   (9)

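The excitations above can be sketched numerically. The following minimal sketch evaluates eq. (7) for a monopole primary source; for simplicity it uses the 3-D point-source field e^{−jkd}/(4πd) for the primary field (the paper's 2-D case would instead use the 2-D, Hankel-function Green function), and the function name is illustrative:

```python
import numpy as np

C0 = 343.0  # speed of sound [m/s]

def excitation_eq7(ri, n, r0, S_omega, omega, dL, c=C0):
    """Driving signal of the i-th cancelling monopole, eq. (7):
    S_i(omega) = (dL / 2 pi) * grad P(ri, omega) . n,
    with P the field of a monopole primary source at r0 and
    G = e^{-j k d} / (4 pi d) (free-field assumption of this sketch)."""
    ri, r0, n = (np.asarray(v, float) for v in (ri, r0, n))
    d = np.linalg.norm(ri - r0)
    k = omega / c
    G = np.exp(-1j * k * d) / (4 * np.pi * d)
    # radial derivative of G, projected on the array normal n
    dP_dn = S_omega * (-(1j * k + 1.0 / d)) * G * np.dot((ri - r0) / d, n)
    return dL / (2 * np.pi) * dP_dn

# Secondary source 4 m in front of the primary source, array normal (0, 1, 0).
s_i = excitation_eq7(ri=(0.0, 4.0, 0.0), n=(0.0, 1.0, 0.0),
                     r0=(0.0, 0.0, 0.0), S_omega=1.0,
                     omega=2 * np.pi * 500.0, dL=0.067)
```

A secondary source displaced orthogonally to the array normal sees a zero normal derivative of the incident field, so its driving signal vanishes.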
4 Simulations
The k-Wave [13] software tool has been used to carry out all the simulations pre-
sented in this paper.
4.1 Setup
The sound pressure generated by a real shot of a competition shotgun has been
digitally recorded at 51.2 kHz, resampled at fs = 5.12 kHz and then used to excite
the primary source.
Figure 1 depicts the simulation scenario for a WFS-based ANC system in a 2-D space. We assume the primary source is placed in a homogeneous medium constituted by air, and the cancelling array is placed s_a = 4 m from it. The array has a variable aperture a; it is composed of discrete elementary monopoles excited according to eq. (7) and spaced by d_a = λ/2 ≈ 67 mm, where λ = c_0/(f_s/2) is the shortest wavelength associated with the primary acoustic field and c_0 = 343 m/s is the speed of sound in air. Such a value of d_a is set in order to theoretically prevent spatial aliasing effects on the synthesized field.
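The spacing follows directly from the anti-aliasing condition Δ_L ≤ πc/ω_max, i.e. half the shortest wavelength in the band:

```python
# Anti-aliasing spacing of the secondary-source array (values from the paper).
fs = 5120.0            # sampling frequency after resampling [Hz]
c0 = 343.0             # speed of sound in air [m/s]
lam = c0 / (fs / 2)    # shortest wavelength in the band [m]
da = lam / 2           # array spacing: half the shortest wavelength [m]
print(round(lam * 1000), round(da * 1000))  # -> 134 67  (values in mm)
```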
Fig. 1. 2-D simulation scenario of the ANC system by means of a linear array.
Fig. 2. 3-D simulation scenario of the ANC system by means of a planar array.
Fig. 3. 3-D simulation scenario of the ANC system by means of a linear array.
A linear microphone array, located s_m = 17 m from the primary source and having a variable angular aperture β, is used to measure the sound energy flowing into the lower half-plane. The sound energy gain is considered in order to assess the performance of the simulated ANC configurations. It is defined as
G_dB = 10 log_10(Ẽ_β / E_β), where Ẽ_β and E_β are the overall sound energies measured by the microphone array with angular aperture β, with the ANC system switched on and off, respectively. Hence, the lower the sound energy gain, the better the achieved performance.
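This figure of merit can be computed directly from the simulated microphone signals; a minimal sketch (the array shapes and the 10× example are illustrative):

```python
import numpy as np

def sound_energy_gain_db(p_on, p_off):
    """G_dB = 10 log10(E_on / E_off): overall energy over the microphone
    array with the ANC system switched on vs. switched off.
    p_on, p_off: arrays of shape (n_microphones, n_samples)."""
    e_on = np.sum(np.abs(p_on) ** 2)
    e_off = np.sum(np.abs(p_off) ** 2)
    return 10.0 * np.log10(e_on / e_off)

# A 10x reduction of the residual energy corresponds to -10 dB.
p_off = np.ones((4, 100))
p_on = np.sqrt(0.1) * p_off
g = sound_energy_gain_db(p_on, p_off)
print(round(g, 1))  # -> -10.0
```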
Fig. 4. Local gain (dB) in the frequency-angle plane obtained by means of a 2–D linear array
with aperture a = 20 m.
Fig. 5. Local gain (dB) in the frequency-angle plane obtained by means of a 2–D linear array
with aperture a = 30 m.
Regarding the 3-D scenarios, the simulation setup is a straightforward extension of the previous case. As depicted in Figure 2, a_y and a_z are the horizontal and vertical apertures of the planar WFS array, respectively. The array is still placed s_a = 4 m from the primary source. Elementary monopoles are distributed over a rectangular grid spaced by d_ay = d_az ≈ 67 mm and excited according to eq. (8). A planar microphone array is located s_m = 12 m from the primary source and is characterized by variable horizontal and vertical angular apertures, β_y and β_z,
respectively. Thus, the sound energy gain is measured as G_dB = 10 log_10(Ẽ_{βy,βz} / E_{βy,βz}), the overall energies now depending on both the horizontal and the vertical angular apertures of the microphone array. An analogous setup to that of the planar array has been considered for the linear array (specifically, d_a is equal to d_ay of the planar array). The scenario is depicted in Figure 3.
Table 1. Sound energy gain in the 2–D scenario obtained by varying the array aperture and the
measurement aperture angle.
d_a = λ/2 = 67 mm
a        β = 30°    60°      90°      120°     150°
20 m     -24.7      -24.5    -24.4    -23.7    -10.3
30 m     -23.7      -24.0    -24.0    -24.0    -23.0

d_a = λ/4 = 33.5 mm
a        β = 30°    60°      90°      120°     150°
20 m     -25.4      -25.6    -25.4    -24.6    -10.3
30 m     -25.4      -25.6    -25.7    -25.6    -23.7
4.2 Results
Two different array apertures, a = 20 m and a = 30 m, have been considered for the simulations in the case of a cancelling array in a 2-D space. In order to verify the antialiasing capabilities of the system, a finer sampling space d_a = λ/4 = 33.5 mm has also been simulated. The results are reported in Table 1. In accordance with the theoretical argument, there is no appreciable advantage in decreasing the value of d_a. The effects of the different array apertures are noticeable only for β = 150°, for which the configuration a = 20 m exhibits a loss of about 12 dB.
Figures 4–5 depict the local sound energy gains measured in the frequency–angle plane for the apertures a = 20 m and a = 30 m, respectively. The effects of the finite array apertures are clearly visible in correspondence with the horizontal cut-offs (about 70° and 80° for a = 20 m and a = 30 m, respectively). The surplus gain observable at 1700 Hz is due to a local attenuation in the original shot signal that generates numerical errors due to finite arithmetic.
In the 3-D space, two different horizontal apertures, a_y = 10 m and a_y = 20 m, and two different vertical apertures, a_z = 1 m and a_z = 3 m, have been set to simulate a cancelling rectangular array. The sampling space d_a = λ/2 = 67 mm has been kept. Table 2 reports the obtained results in terms of sound energy gain. The effects of the horizontal aperture a_y are negligible for the tested measurement angles. On the contrary, performance is strongly influenced by the vertical aperture a_z. Since the array has a shorter vertical aperture w.r.t. the horizontal one, the gain gradually increases as β_z grows.
Two different horizontal apertures, a = 10 m and a = 20 m, have been set up in the simulations regarding the cancelling linear array in a 3-D space. The sampling space d_a = λ/2 = 67 mm has been preserved. The results, presented in Table 3, show that a noticeable increase of the gain is measured as β_z increases, consistent with the limited validity of the 2D½ Rayleigh I integral outside the horizontal plane. On the contrary, no appreciable discrepancies appear due to the different horizontal apertures for the tested measurement angles.
Table 2. Sound energy gain in the 3–D scenario obtained by varying the horizontal and the verti-
cal array apertures and the horizontal and the vertical measurement aperture angles.
a_y = 10 m
          a_z = 1 m                  a_z = 3 m
β_y       β_z = 10°   20°     30°    β_z = 10°   20°     30°
10°       -7.5        -6.4    -3.8   -15.5       -15.1   -14.2
30°       -7.6        -6.4    -3.9   -15.7       -15.1   -14.1
60°       -7.6        -6.3    -3.9   -15.7       -15.0   -14.1
90°       -7.4        -5.9    -3.9   -14.8       -14.0   -13.5

a_y = 20 m
          a_z = 1 m                  a_z = 3 m
β_y       β_z = 10°   20°     30°    β_z = 10°   20°     30°
10°       -7.5        -6.4    -3.8   -15.5       -15.2   -14.3
30°       -7.6        -6.4    -3.9   -15.8       -15.2   -14.2
60°       -7.6        -6.3    -3.9   -15.9       -15.2   -14.2
90°       -7.5        -5.8    -3.9   -14.7       -14.9   -14.1
Table 3. Sound energy gain in the 3–D scenario obtained by varying the linear array aperture
and both the horizontal and the vertical measurement aperture angles.
          a = 10 m                   a = 20 m
β_y       β_z = 10°   20°     30°    β_z = 10°   20°     30°
10°       -12.5       -1.8    0.8    -12.5       -1.8    0.8
30°       -11.7       -1.6    0.9    -11.7       -1.6    0.9
60°       -10.9       -1.3    0.9    -11.0       -1.3    0.9
90°       -10.3       -0.9    0.8    -10.6       -1.0    0.8
5 Conclusions
In this paper, the design of active noise control (ANC) for pulse signals has been preliminarily addressed by modelling some realistic scenarios without relying on any statistical assumptions. By means of Wave Field Synthesis, a non-adaptive
and deterministic design strategy for the synthesis of cancelling pulse acoustic
fields has been reported according to the Rayleigh I integral. Explicit formulations
of the secondary sources’ excitations have also been shown. The simulations of
both 2–D and 3–D scenarios by means of finite aperture arrays have highlighted
that an attenuation of sound energy greater than 10 dB is achievable in the portions of space strictly located in front of the array. It is remarkable that, in the specific case of a linear array in a 3-D space, the performance suddenly worsens as the vertical measurement angle increases.
References
1. S. Kuo and D. Morgan, “Active noise control: a tutorial review,” Proceedings of the IEEE,
vol. 87, no. 6, pp. 943–973, Jun 1999.
2. B. Widrow and S. D. Stearns, Adaptive Signal Processing. Upper Saddle River, NJ, USA:
Prentice-Hall, Inc., 1985.
3. S. M. Kuo and D. Morgan, Active Noise Control Systems: Algorithms and DSP Implementa-
tions, 1st ed. New York, NY, USA: John Wiley & Sons, Inc., 1995.
4. M. Carfagni, C. Bartalucci, F. Borchi, L. Governi, A. Petrucci, M. Weber, I. Aspuru, R.
Bellomini, P. Gaudibert, “Life+2010 quadmap project (quiet areas definition and manage-
ment in action plans): The new methodology obtained after applying the optimization proce-
dures”, 21st International Congress on Sound and Vibration 2014, ICSV 2014, 3, pp. 2576-
2583.
5. R. Bellomini, S. Luzzi, M. Carfagni, F. Borchi, L. Governi, C. Bartalucci, “Life+2010
quadmap project (quiet areas definition and management in action plans): The methodology
tested and optimized in pilot case in Florence, Rotterdam and Bilbao”, 7th Forum Acusticum
2014, FA 2014, vol. 2014-January, 2014.
6. R. Leahy, Z. Zhou, and Y.-C. Hsu, “Adaptive filtering of stable processes for active attenua-
tion of impulsive noise,” in Acoustics, Speech, and Signal Processing, 1995. ICASSP-95.,
1995 International Conference on, vol. 5, May 1995, pp. 2983–2986.
7. P. Thanigai, S. Kuo, and R. Yenduri, “Nonlinear active noise control for infant incubators in
neo-natal intensive care units,” in Acoustics, Speech and Signal Processing, 2007. ICASSP
2007. IEEE International Conference on, vol. 1, April 2007, pp. I–109–I–112.
8. X. Sun, S. M. Kuo, and G. Meng, “Adaptive algorithm for active control of impulsive noise,”
Journal of Sound and Vibration, vol. 291, no. 12, pp. 516 – 522, 2006.
9. A. J. Berkhout, D. de Vries, and P. Vogel, “Acoustic control by wave field synthesis,” The
Journal of the Acoustical Society of America, vol. 93, no. 5, pp. 2764–2778, 1993.
10. M. Zanolin, P. Podini, A. Farina, S. De Stabile, and P. Vezzoni, “Active control of noise by
wave field synthesis,” in Audio Engineering Society Convention 108, Feb 2000.
11. A. Kuntz and R. Rabenstein, “An approach to global noise control by wave field synthesis,”
in Signal Processing Conference, 2004 12th European, Sept 2004, pp. 1999–2002.
12. E. W. Start, “Direct sound enhancement by wave field synthesis,” Ph.D. dissertation, TU
Delft, 1997.
13. B. E. Treeby, A. P. Rendell, and B. T. Cox, “Modeling nonlinear ultrasound propagation in
heterogeneous media with power law absorption using a k-space pseudospectral method,”
Journal of the Acoustical Society of America, vol. 131, no. 6, pp. 4324–4336, 2012.
Disassembly Process Simulation in Virtual
Reality Environment
Peter MITROUCHEV*, Cheng-gang WANG and Jing-tao CHEN
Univ. Grenoble Alpes, G-SCOP, F-38000 Grenoble, France
CNRS, G-SCOP, F-38000 Grenoble, France
* Corresponding author. Tel.: +33-47-657-4700; fax: +33-47-657-4695. E-mail address:
Peter.Mitrouchev@grenoble-inp.fr
Abstract: Integration of disassembly operations simulation during product de-
sign is an important issue today. A method for ergonomic evaluation of disassem-
bly operations is presented here. It is based on three new criteria, presented by di-
mensionless coefficients: visibility score, neck score and bending score. The
method is integrated in a Virtual Reality Disassembly Environment (VRDE) based
on Python programming language using mixed VTK (Visualization Toolkit) and
ODE (Open Dynamics Engine) libraries. The framework is based on STEP, WRL
and STL exchange formats. The proposed method is tested and an example for
disassembly sequences evaluation is presented. The results of the analysis and
findings demonstrate the feasibility of the proposed approach thus providing sig-
nificant improvement in Product Development Process (PDP).
Keywords: Virtual Reality Environment; disassembly operations; evaluation.
1 Introduction
In a Virtual Reality Environment (VRE), where interac-
tive simulation is critical, fast and accurate evaluation of Assembly/Disassembly
(A/D) operations is also a challenging problem. Most of the recent work on A/D
related with Virtual Reality (VR) technology focuses on the simulation itself.
Concerning the evaluation of disassembly sequences, different tools using novel
human–computer interfaces of VR have been proposed [1, 2]. A new method based on work factors and genetic algorithm models was put forward by the team of Hwacho Yi in [3]. These studies aimed at obtaining approximate disassembly times for a product based on the information provided by the links amongst the parts, instead of disassembling the product in reality. In addition, none of them considered
© Springer International Publishing AG 2017 631
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_63
the ergonomic evaluation related to product disassembly operations. Yet, this ergonomic evaluation should be involved in the initial design phase. Some commercial software and tools are proposed to perform ergonomic evaluation during assembly [4, 5]. However, this evaluation is relatively expensive and often used only by mass-production industries. In this context, a new method for disassembly operations evaluation in a VRE is proposed here. Instead of ergonomic simulations with a human model, it introduces some new scores for performing the disassembling task in a VRE. It improves on the method of Atsuko [6], which only involves one ergonomic score, and on the method of Niu [7], which uses a human model in a digital mock-up (DMU) and whose main areas of application are limited to big mass-production lines because of its high cost. In order to improve the efficiency of disassembly operation evaluation, three new criteria for ergonomic evaluation are proposed here. The final purpose of the evaluation is to estimate the difficulty of the product disassembly process during the initial phase of product design. Thus, the results of this study may be useful for designers, allowing them to take into account the constraints of disassembly operations by automatically evaluating disassembly sequences in a VRE.
2 Method for Disassembly operations’ efficiency evaluation
Prior work has focused on the traditional processing evaluation of disassembly operations in a VRE [8], where a method based on four criteria, namely disassembly angle, number of tool changes, path orientation changes and sub-assembly stability, was proposed. Here, three new criteria for ergonomic evaluation, namely visibility score (VS), neck score (NS) and bending score (BS), are introduced [9].
The purpose of ergonomic engineering is to fit the task to the human and not the human to the task; the key point for an effective application is to achieve a balance between the characteristics of the human body and the demands of the task. Thus, a method for disassembly evaluation integrated in a VRE is proposed here. Instead of focusing on authenticity assessment by comparing the results of VR and the real task, the proposed Geometric Removability Analysis method focuses on the evaluation of disassembly difficulty in the VRE. The analysis consists in: i) determining the physical position of the operator; ii) using a camera to replace the 3D human model (and in particular the eyes of the operator). Finally, the analysis of some distances and angles related to the component disassembly direction and the component position in the VRE is performed for the removability evaluation by considering the proposed three ergonomic criteria.
• Visibility score (VS):
In order to evaluate the visibility of the target part (here a bolt, Fig. 1a), the camera in the realised Virtual Reality Disassembly Environment (VRDE) is placed in the position of the human eyes and pointed in the direction of the bolt.
Fig. 1. a) Integration of the camera as the eyes of the operator. b) Pixel calculation.
The pixels’ counting of the target component is based on the OpenCV library (http://opencv.org/). In the proposed method, two images are taken by the camera, namely: the bolt itself, labelled in red (Picture 1 of Fig. 1a), and the bolt in its assembly surroundings (Picture 2 of Fig. 1a). The operator turns the camera around the target component in order to obtain maximum visibility. Thus, the proposed visibility score v for a target is defined as the ratio between the number of its red-coloured pixels in the current image, va (Picture 2), and the number of red-coloured pixels of its whole image, vb (Picture 1), captured by the camera: v = va/vb. The average visibility score for a disassembly sequence is:
VS = (1/m) Σ_{i=0}^{m} v_i ,   (1)

where m is the number of components (parts) in the assembly. The value of v ranges from 0 to 1. If the target part is completely hidden by other parts (va = 0), the visibility score v is 0 (the worst situation). For va = vb, v = 1 and consequently the visibility is maximum.
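The pixel-counting step can be sketched as follows. This is a minimal sketch: the red-detection thresholds and the toy images are assumptions of the example (in the actual system the two pictures would be captured by the camera and loaded through OpenCV):

```python
import numpy as np

def count_red(img):
    """Count pixels labelled red in an RGB image array of shape (H, W, 3).
    A pixel is taken as 'red' when R is high and G, B are low; the
    thresholds below are an assumption of this sketch."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return int(np.sum((r > 200) & (g < 50) & (b < 50)))

def visibility(img_in_assembly, img_isolated):
    """v = va / vb: red pixels of the target seen in the assembly view (va)
    over red pixels of the isolated target (vb)."""
    vb = count_red(img_isolated)
    return count_red(img_in_assembly) / vb if vb else 0.0

# Toy images: the isolated bolt shows 100 red pixels, the assembly view 60.
iso = np.zeros((10, 10, 3), np.uint8); iso[..., 0] = 255
asm = iso.copy(); asm[0:4, :, 0] = 0  # 40 pixels occluded by other parts
v = visibility(asm, iso)
print(v)  # -> 0.6
```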
• Neck score (NS):
Two types of Neck Score are usually used for ergonomic evaluation: component heads and text heads. The method proposed in this paper is based on the Rapid Upper Limb Assessment (RULA) algorithm [10], which allows evaluating the exposure of workers to the risk of upper limb disorders. The neck score (NS) measures the lateral and forward rotation angles of the neck.
Fig. 2. a) Geometrical parameters related to the human operation. b) Neck lateral rotation.
Thus, the proposed average Neck score NS is:
NS = 1 − 9(c_1 + c_3)/(2π) ,   (2)
where c1 is the forward rotation angle between the visual direction and the vertical
direction (Fig. 2a), c3 is the lateral rotation angle of the neck (Fig. 2b). In the real-
ized application, the value of c3 is less than 20°. The value of NS ranges from 0
(worst situation) to 1 (best situation).
• Bending score (BS):
Another criterion which affects the ergonomics of the disassembly operation is the bending score (BS). Its value is calculated from the trunk bending angle c_2 between the visual direction and the component moving direction (Fig. 2a). According to RULA, angle c_2 ranges from 0° to 60°. Thus, the proposed bending score is:
BS = 1 − 6c_2/π ,   (3)
Note that for c2>60°, the bending score is 0 (worst situation).
However, a problem arises as to how to use this approach in the absence of a 3D human model. For this purpose, as previously said, the proposed method consists in replacing the human model by a camera. The latter allows detecting all the angles and distances necessary to calculate the overall score of the proposed three ergonomic criteria. Thus, the proposed procedure for ergonomic evaluation using a camera consists in:
- Defining the work environment. First, the target component is set in the OYZ plane (Fig. 3). Then, the human operation plane is defined as the plane parallel to OYZ in the positive x direction.
- Defining the position of the camera according to the workspace and the position
of the target component. The initial position should consider the operator height
(size) and the real distance between the operator and the camera. For example, distances d_1 (between the operator's position and the center of the component) and d_2 (between the operator's eye and the center of the component) (Fig. 2a) should be within the ranges of the RULA sheet [10]. As a camera is used instead of the human body, the suitable position for the camera is defined by the operator before the beginning of the disassembly operation simulation.
Fig. 3. Angles and Camera position relationship.
- Using the camera in order to find its position by detecting the geometrical parameters, namely distances d_1, d_2, and calculating angles c_1, c_2 and c_3. The three scores (VS, NS, BS) proposed above form the basis of the ergonomic evaluation. According to equations (1), (2) and (3), the overall score OS for the ergonomic evaluation of a disassembly operation is:

OS = VS + NS + BS   (4)
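The three scores and their combination can be sketched directly from eqs. (2)–(4). This is a minimal sketch: the clamping of each score to [0, 1] (matching the stated worst/best cases) and the function names are assumptions of the example:

```python
import numpy as np

def neck_score(c1, c3):
    """NS = 1 - 9 (c1 + c3) / (2 pi), eq. (2); angles in radians.
    Clamping to [0, 1] is an assumption of this sketch."""
    return float(np.clip(1.0 - 9.0 * (c1 + c3) / (2.0 * np.pi), 0.0, 1.0))

def bending_score(c2):
    """BS = 1 - 6 c2 / pi, eq. (3), clamped so that large trunk bending
    yields the stated worst-case score of 0."""
    return float(np.clip(1.0 - 6.0 * c2 / np.pi, 0.0, 1.0))

def overall_score(vs, c1, c2, c3):
    """OS = VS + NS + BS, eq. (4)."""
    return vs + neck_score(c1, c3) + bending_score(c2)

# Ideal posture (all angles zero) with full visibility gives OS = 3.
print(overall_score(1.0, 0.0, 0.0, 0.0))  # -> 3.0
```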
3 Case study
In order to demonstrate and validate the proposed method for disassembly evaluation, an example is presented here below. The experiment consisted in virtually disassembling two screws from a mechanical assembly (Fig. 1a) in the created VRDE (see Section 4). The original operation using a 3D human model is shown in Fig. 4a. Instead, in order to avoid using the 3D human model, a camera is applied which replaces the eyes of the 3D human model, as shown in Fig. 4b. As previously said, the initial position of the camera should be at the eyes of the operator, considering his/her real height. The movements should be limited to the movable ranges of the human head and body. Note that this is a little awkward in the scene of the VRE, since, in general, the camera can observe the objects and be moved anywhere the operator wants. However, in the realized application, the movement of the camera is restricted considering the dimensions of the human body (height 175 cm). The mechanical assembly is imported from the CAD system in WRL format. As presented here above, the positions of the camera and the object (target) are first set (see Fig. 4b). Then, the operator may move or rotate the camera to a convenient position for observation. When the target is selected, its image pixels and position are recorded automatically for later analysis (note: the cursor of the tool disappears first in order to save the image pixels). Then, angles c_1, c_2 and c_3 are calculated according to the relative position between the
camera and the targets (screws). Finally, the overall score OS for the operation difficulty evaluation is calculated by eq. (4).

Fig. 4. a) Integration of the human body. b) Original positions of the camera and the targets.
Let us note that the values of the visibility score VS depend on the way the operator handles the components in the VRE. The disassembly operations for the two screws were involved in the performed experiments for disassembly simulation. According to the proposed method for disassembly operation evaluation, the results in terms of overall score (OS) for each disassembly operation (Screws 1 and 2) are shown in Table 1.
Table 1. Overall score for screws disassembly operation evaluation.

Geometric Removability Analysis
Operation   Visibility score VS   Neck score NS   Bending score BS   Overall score OS
Screw 1     0.6543                0.7183          1.0                2.3726
Screw 2     0.5479                0.3690          0.8324             1.7493
It can be seen that Screw 2 is more demanding on the operator's neck, as the NS of
Screw 2 is smaller than that of Screw 1. Concerning the bending score, the BS of
Screw 2 is smaller than that of Screw 1, which means that the operator needs to
bend over more when disassembling Screw 2. With regard to the visibility score
(VS), Screw 2 is more difficult to see than Screw 1, as its VS is smaller. In
conclusion, the overall score of Screw 1 is higher than that of Screw 2, which
indicates that it is easier to disassemble from an ergonomic point of view.
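As a minimal illustration, the overall score in Table 1 can be reproduced by summing the three criterion scores (eq. (4) is assumed here to be an unweighted sum, which matches the values in the table; the function name is ours, not from the paper):

```python
def overall_score(vs: float, ns: float, bs: float) -> float:
    """Overall disassembly difficulty score OS as the (unweighted) sum of the
    visibility (VS), neck (NS) and bending (BS) scores, consistent with the
    values reported in Table 1: a higher OS means an easier operation."""
    return vs + ns + bs

# Values from Table 1
os_screw1 = overall_score(0.6543, 0.7183, 1.0)     # 2.3726
os_screw2 = overall_score(0.5479, 0.3690, 0.8324)  # 1.7493
```

With these values, `os_screw1 > os_screw2`, matching the conclusion that Screw 1 is easier to disassemble.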

4 Virtual Reality Disassembly Environment (VRDE)


The methods for generation of the possible disassembly sequences presented in
[11] and the method for disassembly sequences evaluation, presented here, were
Disassembly Process Simulation in Virtual Reality Environment 637

integrated in a Virtual Reality Disassembly Environment (VRDE). The general
structure of the VRDE is shown in Fig. 5. The whole system is based on the
Python programming language. The outputs are 3D sound and stereo displays. The
interface is developed based on the Visualization Toolkit (VTK) library for
creating, interacting with and displaying 3D models. The central structure of VTK
is a data pipeline, starting from a source of information and arriving at an image
rendered on the screen. In VTK, the 3D models are represented by Actors.

[Figure 5 diagram: the 3D model data of the assembly feeds a render window
(lights, colour, camera); a VTK picker catches target-part events (actor and prop
picking); a render-window interactor handles part movement (translation,
rotation) through an interaction style; ODE loops perform collision and contact
detection with force feedback, creating joints and the ODE body and geom
objects; the whole pipeline runs in Python.]
Fig. 5. Structure of the mixed VTK and ODE VR Disassembly environment (VRDE).

During the disassembly operation simulation, the generation of the possible
disassembly sequences is based on the contact identification method performed by
the ODE libraries. The collision detections amongst the parts of the assembly are
performed by the VTK libraries (programmed in C++ and Python) [11]. The
position and the orientation of the objects in ODE are sent back to the centres of
the Actors in VTK in real time. In order to apply constraint forces to an object in
the ODE world, a model called Body is created, containing the full information
for the part, such as material, mass, dimensions, inertia, centre of gravity, etc. At
the same time, another model called Geometry (geom.) is defined to represent the
shape information of the part. It is used to detect collisions between bodies and to
apply forces between them.
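The Body/Geometry split described above can be sketched schematically as follows. This is a plain-Python illustration of the design pattern, not the actual ODE API; all class and field names are ours, and the screw values are made up:

```python
from dataclasses import dataclass

@dataclass
class Body:
    """Dynamic model of a part (the role played by an ODE body): carries the
    physical information needed to apply constraint forces to the part."""
    name: str
    material: str
    mass: float          # kg
    dimensions: tuple    # (lx, ly, lz) in metres
    center_of_gravity: tuple = (0.0, 0.0, 0.0)

@dataclass
class Geometry:
    """Collision model of the same part (the role played by an ODE geom):
    carries only the shape information used for collision detection."""
    body: Body
    shape: str = "box"

    def half_extents(self):
        # Half-dimensions of the bounding box, as used by box-box tests.
        return tuple(d / 2.0 for d in self.body.dimensions)

# A screw represented by its paired dynamic and collision models
screw_body = Body("screw1", material="steel", mass=0.012,
                  dimensions=(0.005, 0.005, 0.03))
screw_geom = Geometry(screw_body)
```

Keeping dynamics (Body) and collision shape (Geometry) separate lets the collision layer query shapes without touching the force computation, mirroring the split the paper describes.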

5 Conclusion
Some limitations of the available techniques for disassembly operation simulation
stimulated this research on disassembly operation evaluation. In comparison with
the analyzed literature, where different concepts are used to evaluate disassembly
operations, our work is based on three criteria for ergonomic evaluation. They
allow the evaluation of disassembly operation complexity during the initial
stage of product design or during the Product Life Cycle (PLC) in: production
processes, product maintenance and at the end of the PLC. Based on the proposed
methods for disassembly sequence evaluation, an application integrated in a
Virtual Reality Disassembly Environment (VRDE), based on the Python
programming language associated with mixed VTK and ODE libraries, is
developed. The example
studies demonstrated the efficiency of the proposed methods for disassembly
sequence generation and evaluation. The overall score resulting from the proposed
criteria for ergonomic evaluation allows estimation of the difficulty of performing
disassembly operations. This was confirmed by experimental tests, thus validating
the proposed method. However, at this stage, the work does not consider the
ranking of the proposed criteria. Future work will focus on ranking the criteria
according to their importance. For this purpose, different weights will be allocated
to each of them, thus allowing a more comprehensive evaluation method.

Acknowledgments The research leading to these results has been partly supported by the
LabEx PERSYVAL-Lab (ANR-11-LABX-0025) (http://www.persyval-lab.org/index.html).

References
1. Su Q., Lai S.J. and Liu J. Geometric computation based assembly sequencing and evaluating
in terms of assembly angle, direction, reorientation, and stability, Computer-Aided Design,
2009, 41(7), 479–489.
2. Mohd F.F.R., Windo H. and Ashutosh T. A review on assembly sequence planning and
assembly line balancing optimisation using soft computing approaches, International Journal
of Advanced Manufacturing Technology, 2012, 59(1–4), 335–349.
3. Yi H.C., Yu B., Du L., Li C. and Hu D. A study on the method of disassembly time
evaluation of a product using work factor method, In Proceedings of the 2003 IEEE
International Conference on Systems, Man and Cybernetics, 2003, 1753–1759.
4. Seth A., Vance J.M. and James H.O. Virtual reality for assembly methods prototyping: a
review, Virtual Reality, 2011, 15(1), 5–20.
5. Jayaram U., Jayaram S., Shaikh I., Kim Y. and Palmer C. Introducing quantitative analysis
methods into virtual environments for real-time and continuous ergonomic evaluations,
Computers in Industry, 2006, 57(3), 283–296.
6. Atsuko E., Noriaki Y. and Tasuya S. Automatic evaluation of the ergonomics parameters of
assembly operations, CIRP Annals - Manufacturing Technology, 2013, 62(1), 13–16.
7. Niu J.W., Zhang X.W., Zhang X. and Ran L.H. Investigation of ergonomics in automotive
assembly line using Jack, In Proceedings of the 2010 IEEE IEEM, 2010, 1381–1385.
8. Wang C., Mitrouchev P., Li G. and Lu L. Disassembly operations' efficiency evaluation in
virtual environment, International Journal of Computer Integrated Manufacturing, Taylor &
Francis, 2015 (DOI: 10.1080/0951192X.2015.1033752).
9. Wang C. Disassembly sequences generation and evaluation. Integration in a virtual reality
environment, PhD Thesis, University Grenoble Alpes, France, November 2014.
10. McAtamney L. and Corlett E.N. RULA: a survey method for the investigation of work-
related upper limb disorders, Applied Ergonomics, 1993, 24(2), 91–99.
11. Mitrouchev P., Wang C., Li G. and Lu L. Sequences planning for products disassembly based
on lowest levels disassembly graph method, International Journal of Advanced
Manufacturing Technology, 2015, 80(1), 141–159.
Development of a methodology for performance
analysis and synthesis of control strategies of
multi-robot pick & place applications

Gaël Humbert1, Minh Tu Pham1, Xavier Brun1, Mady Guillemot2, Didier Noterman3

1 Université de Lyon, INSA-Lyon, Laboratoire Ampère, 20 Avenue Albert Einstein,
Villeurbanne 69621, France
2 Université de Lyon, INSA-Lyon, Laboratoire LAMCOS, Villeurbanne 69621, France
3 Université de Lyon, INSA-Lyon, Laboratoire DISP, Villeurbanne 69621, France
* Corresponding author. E-mail address: gael.humbert@insa-lyon.fr

Abstract This paper deals with a new simulation tool for improving the
performance of multi-robot pick & place applications, combining the behavioral
simulation of multiple robots and of product flows. A novelty of the proposed
work is to take into account in the simulation not only the scheduling rules of
each robot, but also the collaborative aspect of the robots, to ensure the desired
overall performance for a given task. The transition from simulation to the
implementation of pick & place strategies is also an issue tackled in this paper.
Using a typical example consisting of comparing techniques to optimize the
workflow, the utility of the simulation tool is demonstrated. First experimental
results validate the simulation results.

Keywords: pick & place application; collaborative strategies; scheduling rules;
software tool; experimentation

1 Introduction

In recent years, customers' demand for productivity and flexibility in their
production lines has largely increased. This is why robots and robotic pick & place
cells are more and more present in industrial fields such as the food industry.
In high-performance applications, the typical characteristics of a pick & place
robot can reach the following values: velocity 10 m/s, acceleration 100 m/s²,
precision +/- 0.1 mm, pick & place cycle 0.40 s on average. To improve the
performance of these applications, it is necessary to improve the design of current
production systems (number of robots, performance, etc.) whilst also improving
the management of flows and of the workload when several robots are used.

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_64
640 G. Humbert et al.

A pick & place application is usually composed of a series of several robots
installed in a line one after the other, taking products from a first conveyor and
placing them in boxes located on a second conveyor, see Figure 1 [1, 2].
In a multi-robot packaging cell with no workflow optimization system, "pick"
instructions are divided equally between the first robots. A final robot is added to
try to recover the products that could not be taken by the previous robots. Products
initially assigned to a robot may not be taken because they end up out of the
robot's workspace, for example because of a lack of boxes to fill. In addition, the
workload is unbalanced between the robots.

Fig. 1. Robotic cell with delta robot

To the best of our knowledge, in industrial and academic contexts, there are no
simulation tools that take into account the four following aspects: a behavioral
simulation of the robots, a simulation of the work environment (product flow,
box flow), the collaborative work of several robots and, finally, the possibility to
go from simulation to experimentation.
The first contribution of this paper is the development of a software interface
that represents the robotic cell in a 3D environment. The developed software
is able to simulate realistic product and box flows, generate the trajectory of the
end effector, and propose several collaboration strategies between robots.
The second contribution is to propose a tool that includes experimental aspects
in order to go directly from simulation to implementation. Simulation must be
done in such a manner that the translation is as easy as possible: to obtain a fast
in-situ implementation, a simple language and a similar controller architecture are
used in simulation and in practice.
The third contribution is to show a comparative study of the simulation of
different pick & place strategies for several robots. An experimental validation is
also presented. The results show that simulation and experimentation results are
close.
Section 2 presents a new simulation tool dedicated to pick & place applications;
the software environment and the pick & place strategies are described. Section 3
presents a comparative study of the simulation of different pick & place strategies
for several robots, together with an experimental validation showing that
simulation and experimentation results are close.
Development of a methodology for performance … 641

2 A new simulation tool dedicated to pick & place applications

2.1 Software tool

In the literature, several works dedicated to robotic pick & place simulation are
only used for visualization, to verify the kinematics and dynamics. They are also
used for robot design, to validate the robot's behavior, its movements and its
interaction with the environment (collision detection). Johari et al. [3] have used
Workspace5 to visualize an entire robotic application system in order to detect
collisions between robots and the environment. Sam et al. [4] have designed a
pick & place robotic system using SolidWorks SoftMotion software to study the
motion of the modeled articulated robot.
To improve the productivity of a multi-robot pick & place application, the flow
management has to be improved. Several programs are able to simulate this.
Mirzapourrezaei et al. [5] have used Witness to evaluate various aspects of
manufacturing systems; the objective was to increase the productivity and
efficiency of the line. Hindle et al. [6] have used Simul8 to answer the complex
scheduling problem of sequencing part requirements through a composites
manufacturing center. Nikakhtar et al. [7] have compared two simulation tools,
Arena and Witness. However, these programs are dedicated to flow simulation:
visualization is very basic and mainly focused on the flows, 3D visualization does
not exist, and it is difficult to represent the kinematic and dynamic behavior of the
robots.
Unlike other works, the software environment used allows the creation of a
virtual machine, in 3D and in real time. This simulated robotic cell has the same
kinematics and the same dynamics as the real one, and its environment can also be
simulated: products arriving on a conveyor, etc. Scenarios can be implemented to
verify its behavior. A high-level layer can be used to implement a product pick &
place strategy and a collaboration strategy between several robots if necessary.
This software is also modular: the production system (robot, conveyor, etc.), its
environment (products, boxes, etc.) and the different scenarios can be configured.
The creation of a pick & place application consists of several steps. First, the
definition of the simulation model is carried out; at this stage the graphic objects
and the kinematic behaviors of the application objects (robots, conveyors, etc.) are
defined. The second step is the development of collaborative strategies between
robots in simulation, see sub-section 2.2. Finally, the simulations can be run, see
Figure 2, to test the model behavior and analyze the results of the different
strategies, see sub-section 3.1. Once the simulation parameters are optimized,
in-situ experimentation is carried out, see sub-section 3.2. Experimental tests can
be done to check the algorithms' operation and test their performance.
642 G. Humbert et al.

Fig. 2. Simulation example with four robots and two co-current conveyors.

2.2 Pick & place strategies

When a single robot is used, a queue or a basic sort direction is sufficient.
Mattone et al. [8] have proposed innovative online scheduling rules based on
queues.
If several robots are used, algorithms more complicated than a queue are
necessary. The aim is to manage the robots with respect to products, boxes and
conveyors. To do this, it is better to use optimization algorithms. Several research
works are related to optimization algorithms used in robotic applications. Huang
et al. [9] have utilized the greedy randomized adaptive search procedure to search
for the optimal combination of part dispatching rules. Daoud et al. [10] have
compared three metaheuristics: ant colony optimization, genetic algorithm and
particle swarm optimization; the aim is to maximize the throughput rate of a pick
& place robotic system, taking the execution time into account. Fujimoto et al.
[11] have used a genetic algorithm to seek the best combination of dispatching
rules in order to obtain an appropriate production schedule. In these works, only
simulation is used; the translation from simulation to experimentation is not
tackled.
In the literature, a few patents are related to pick & place strategies. Izumi et al.
[12] have filed a patent about conveyor sharing in order to share the robots'
workload.
The tool developed incorporates two levels of strategies, shown in Figure 3.
Simple individual scheduling rules for a single robot can be:
- FIFO: First In First Out. The robot picks the first product entering its workspace.
- LIFO: Last In First Out. The robot picks the last product entering its workspace.
- SPT: Shortest Processing Time. The robot picks the product nearest to its end effector.
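The three individual rules above can be sketched as selection functions over the products currently in a robot's workspace. This is only an illustration with hypothetical product tuples (arrival index plus belt position), not the authors' implementation:

```python
import math

# Each product is (arrival_index, x, y): its arrival order in the workspace
# and its position on the belt, in metres.
def fifo(products):
    """First In First Out: pick the product that entered the workspace first."""
    return min(products, key=lambda p: p[0])

def lifo(products):
    """Last In First Out: pick the product that entered the workspace last."""
    return max(products, key=lambda p: p[0])

def spt(products, effector_xy):
    """Shortest Processing Time: pick the product nearest the end effector."""
    ex, ey = effector_xy
    return min(products, key=lambda p: math.hypot(p[1] - ex, p[2] - ey))

workspace = [(0, 0.40, 0.10), (1, 0.10, 0.05), (2, 0.25, 0.00)]
fifo(workspace)             # -> (0, 0.40, 0.10)
lifo(workspace)             # -> (2, 0.25, 0.00)
spt(workspace, (0.0, 0.0))  # -> (1, 0.10, 0.05), the closest product
```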
Development of a methodology for performance … 643

Fig. 3. Simulation architecture with two levels of strategies.

There are also collaborative strategies that assign the products to the robots
before they arrive in their workspace. An example with four robots is given in
Figure 4: products 1 are assigned to robot 1, products 2 are assigned to robot 2,
and so on.
- DownToUp: assign the products to the robots one by one, from the downstream to the upstream of the conveyor (Figure 4.a).
- Horizontal: assign to each robot a horizontal area corresponding to its number (Figure 4.b).
- Vertical: assign to each robot a vertical area corresponding to its number (Figure 4.c).
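Two of these pre-assignment strategies can be sketched as follows. This is an illustrative reading of the figure, not the paper's code: DownToUp is modelled as a round-robin over arrival order, and Vertical as equal-width lanes across the conveyor; all names and values are ours:

```python
def down_to_up(products, n_robots):
    """DownToUp: assign products one by one, cycling over the robots.
    Returns {product_index: robot_index}."""
    return {i: i % n_robots for i, _ in enumerate(products)}

def vertical(products, n_robots, belt_width):
    """Vertical: give each robot a lane of equal width across the conveyor,
    and assign each product to the robot owning its lane."""
    lane = belt_width / n_robots
    return {i: min(int(y / lane), n_robots - 1)
            for i, (x, y) in enumerate(products)}

# Hypothetical products as (x, y) positions on a 0.6 m wide belt
products = [(0.1, 0.05), (0.2, 0.35), (0.3, 0.55), (0.4, 0.20)]
down_to_up(products, 4)   # robots 0, 1, 2, 3 in turn
vertical(products, 3, 0.6)  # lanes of 0.2 m: robots 0, 1, 2, 1
```

Both strategies distribute the products deterministically before they reach the workspaces, which is what balances the workloads in Table 2.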

Fig. 4. Example of different collaborative strategies.

3 Simulation and experimentation results

3.1 Simulation results

Tests were conducted with the algorithms explained in section 2.2. The
performance of the different algorithms can be assessed using several indicators:
the number of products picked by each robot, the total number of picked products,
the average picking-placing time and finally the workload percentage, which is
defined by the following equation (1), with TPick, TPlace and TWait respectively
the picking, placing and waiting times in seconds.

Workload = (TPick + TPlace) / (TPick + TPlace + TWait)    (1)
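Equation (1) can be computed directly from the accumulated times; a minimal sketch (the example times are made up, not taken from the paper's experiments):

```python
def workload(t_pick: float, t_place: float, t_wait: float) -> float:
    """Workload percentage of a robot, eq. (1): the fraction of total time
    spent actively picking and placing, expressed in percent."""
    busy = t_pick + t_place
    return 100.0 * busy / (busy + t_wait)

# Example: a robot that picks for 600 s, places for 620 s and waits for 380 s
workload(600.0, 620.0, 380.0)  # -> 76.25 (%)
```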
The simulations are performed with the following arbitrary parameters:

3 robots: speed 10 m/s, acceleration 100 m/s², linear movement; conveyor speed:
0.15 m/s, co-current; 5.5 products per second with random position; simulation
time: 30 min.

Table 1. Results of the individual scheduling rules in steady state.

Picking / Placing rules   Product picked (R1 / R2 / R3 / Total)   Workload % (R1 / R2 / R3)   Average pick-place time s (R1 / R2 / R3)
FIFO / FIFO               4242 / 4316 / 1344 / 9902               92.2 / 92.2 / 30.4          0.414 / 0.407 / 0.431
FIFO / LIFO               3884 / 3974 / 2042 / 9900               94.3 / 94.2 / 43.6          0.455 / 0.442 / 0.403
SPT / SPT                 4605 / 4080 / 1229 / 9914               92.1 / 91.6 / 29.1          0.379 / 0.429 / 0.451

Table 2. Results of the collaborative strategies in steady state with FIFO rule.

Collaborative strategies   Product picked (R1 / R2 / R3 / Total)   Workload % (R1 / R2 / R3)   Average pick-place time s (R1 / R2 / R3)
DownToUp                   3302 / 3299 / 3300 / 9901               76.8 / 76.4 / 75.8          0.442 / 0.439 / 0.437
Horizontal                 3662 / 3263 / 2899 / 9824               73.9 / 67.7 / 67.1          0.396 / 0.421 / 0.451
Vertical                   3310 / 3312 / 3279 / 9901               74.3 / 73.5 / 75.8          0.425 / 0.420 / 0.42
Table 1 shows the results in steady state of a simulation where only one
individual scheduling rule is applied. It first appears that, without any
collaborative strategy, the workloads of the robots are unbalanced for all the
scheduling rules: the first robots pick the maximum of products while the last one
picks the remaining products. The SPT rule increases the unbalance between
robots. With the FIFO/FIFO rule the robot remains on one side of its workspace,
while with the FIFO/LIFO rule the robot moves over a larger area; this is why the
picking and placing times with the FIFO/FIFO rule are smaller than with the
FIFO/LIFO rule. Table 2 gathers the results in steady state of a simulation with
the individual scheduling rule FIFO/FIFO but with additional collaborative
strategies between the robots. The workloads of the robots are balanced because
the products are distributed equally. It is noteworthy that the horizontal strategy is
not a good assignment method, because the picking and placing time increases
from the first robot to the last robot; the main reason is that the areas assigned to
the robots are increasingly far from the robot centers. It is clear that the best
strategy is the vertical one: the workload is equal, with a reduced picking and
placing time.

3.2 Experimentation validation


After the simulations, the translation of the algorithms and strategies into PLC
language is carried out. To facilitate this translation, the programs are written with
the simplest possible functions, which also reduces the execution time. In
addition, the controller software uses an object-based language similar to that of
the simulation software. The architecture of Figure 5 is the implementation of that
of Figure 3: it is composed of the same scheduling rule and collaborative strategy
blocks as for the simulation.

Fig. 5. Controller architecture with two levels of strategies.

A demonstrator is used to check the operation of the translated program
architecture, algorithms and strategies. It is composed of a Schneider Electric P4
delta robot for the pick & place, a ring conveyor with a sensor for detecting the
products, and a vacuum gripper for taking the objects; it is commanded by a
Schneider Electric LMC400C controller and programmed with SoMachine
Motion. The results of Table 3 are obtained with the following nominal
conditions: conveyor speed: 0.1 m/s in counter-current; end effector nominal
speed: 1.2 m/s; end effector acceleration: 20 m/s²; linear movement; products
every 50 mm; boxes with two places every 200 mm; picking logic: FIFO; placing
logic: LIFO; time: 10 min. The different tests are done with the following
conditions:
1. Nominal condition.
2. Nominal condition with picking logic: LIFO.
3. Nominal condition with picking logic: SPT.

Table 3. Simulation and experimentation results.

Test condition   Average pick-place time (s)   Products picked
                 (Simulation / Experimentation)
1                1.65 / 1.64                   298 / 296
2                1.82 / 1.84                   284 / 282
3                1.51 / 1.51                   319 / 298
Table 3 shows a comparison between simulation and experimentation results.
For all the test conditions, the picking and placing times and the numbers of taken
products are similar; the error is very low. The experimental results are in
accordance with those presented in Table 1. The results of the LIFO/FIFO and
LIFO/LIFO rule tests are reversed because in the simulation test the conveyors are
in the same direction, while they are in the opposite direction in practice. This
shows the interest of the simulation tool to test and improve multi-robot pick &
place performance.

4 Conclusion
In this paper, we proposed a new tool to improve the performance of multi-
robot pick and place applications. This tool is based on the real-time 3D
simulation of the robot and of its environment, and also allows the implementation
of individual and collaborative control strategies. Several tests have been done:
first, a simulation test to compare different individual and collaborative control
strategies; then, a comparison between simulation and experimental results, which
shows that the simulation is very close to reality. This tool also allows a fast
translation of algorithms from simulation to implementation. One of the interests
of this tool is to test different algorithms for the robots before in-situ
implementation, to check that they operate properly and to determine which
performs best. This avoids stopping a production line for these tests, or saves time
if the line is in development. Another interest is that it takes into account the four
following aspects: a behavioral simulation of the robots, a simulation of the work
environment, the collaborative work of several robots and, finally, the possibility
to go from simulation to experimentation. Future work will be to develop other
multi-robot collaboration algorithms using this tool, before a validation and a
performance analysis on a test bench composed of three robots and two
independent conveyors.

Acknowledgment The research work reported here was made possible by Schneider Elec-
tric with the CIFRE 158/2013.

References
1. Schubert, R. (2000). Process and apparatus for introducing products into containers. Patent US
6122895 A.
2. Sahin, H. (2005). Design of a secondary packaging robotic system. PhD thesis, Middle East
Technical University.
3. Johari, N., Haron, H., and Jaya, A. (2007). Robotic modeling and simulation of palletizer
robot using Workspace5. In 4th International Conference on Computer Graphics, Imaging
and Visualization (CGIV 2007), pages 217–222. IEEE.
4. Sam, R., Arriffin, K., and Buniyamin, N. (2012). Simulation of pick and place robotics system
using SolidWorks SoftMotion. In International Conference on System Engineering and
Technology (ICSET), pages 1–6. IEEE.
5. Mirzapourrezaei, S., Lalmazloumian, M., Dargi, A., and Wong, K. Y. (2011). Simulation of a
manufacturing assembly line based on Witness. In Third International Conference on
Computational Intelligence, Communication Systems and Networks (CICSyN), pages
132–137.
6. Hindle, K. and Duffin, M. (2006). Simul8-planner for composites manufacturing. In
Proceedings of the Winter Simulation Conference, 2006 (WSC 06), pages 1779–1784. IEEE.
7. Nikakhtar, A., Wong, K. Y., Zarei, M., and Memari, A. (2011). Comparison of two simulation
software for modeling a construction process. In Third International Conference on
Computational Intelligence, Modelling and Simulation (CIMSiM), pages 200–205. IEEE.
8. Mattone, R., Adduci, L., and Wolf, A. (1998). Online scheduling algorithms for improving
performance of pick-and-place operations on a moving conveyor belt. In Proceedings of the
IEEE International Conference on Robotics and Automation, ICRA-98, pages 2099–2105.
9. Huang, Y., Chiba, R., Arai, T., Ueyama, T., and Ota, J. (2012). Part dispatching rule-based
multi-robot coordination in pick-and-place task. In 2012 IEEE International Conference on
Robotics and Biomimetics (ROBIO), pages 1887–1892. IEEE.
10. Daoud, S., Hicham, C., Farouk, Y., and Lionel, A. (2014). Efficient metaheuristics for pick
and place robotic systems optimization. Journal of Intelligent Manufacturing, 25:27–41.
11. Fujimoto, H., Tanigawa, I., Yasuda, K., and Iwahashi, K. (1995). Applications of genetic
algorithm and simulation to dispatching rule-based FMS scheduling. In Proceedings 1995
IEEE International Conference on Robotics and Automation, volume 1, pages 190–195. IEEE.
12. Izumi, T., Koyanagi, K., Matsukuma, K., and Hashiguchi, Y. (2013). Robot system. Patent
US 8606400 B2.
3D modelling of the mechanical actions of
cutting: application to milling

Wadii YOUSFI1*, Olivier CAHUC1, Raynald LAHEURTE1, Philippe DARNIS1
and Madalina CALAMAZ2

1 University of Bordeaux, I2M, UMR 5295, F-33400 Talence, France
2 Arts et Métiers ParisTech, I2M, UMR 5295, F-33400 Talence, France
* Corresponding author: Wadii Yousfi. Tel.: +33 (0)5 56 84 79 77. E-mail address:
wadii.yousfi@u-bordeaux.fr

Abstract: Along the cutting edge, the geometric and kinematic parameters vary
greatly, and the velocity vector at each point is very sensitive to the current
position of the point considered on the cutting edge. The proposed study includes,
for each of the three shear zones, the effect of velocity gradients on the strain
fields and strain rates. These velocity gradients generate additional displacements
of the chip, in three dimensions, and therefore new force components and cutting
moments. This study presents the overall approach for calculating the cutting
actions, starting with a detailed description of each characteristic zone. The
wrench of the actions is determined at the tip of the tool based on the elementary
forces along the edge.

Keywords: milling, strain gradients, strain rate gradients, cutting actions torsor,
cutting moment.

1 Introduction

Analytical and semi-analytical modeling of manufacturing processes by
machining presents great scientific and industrial interest. This modeling
identifies the optimal cutting parameters from the geometrical and
thermomechanical quantities without having to go through experimental tests or
expensive simulations. Compared with other machining operations, the milling
operation presents additional complexities arising from the variation of the
geometrical parameters of the machining configuration and of the kinematics
during the operation [1].
In the orthogonal cutting configuration, in the three cutting areas [2, 3]
(primary, secondary and third shear zones), displacements, strains and strain rates
are determined from the trajectory of each material particle. An element passing
through the primary shear zone is assumed to follow a hyperbolic trajectory [4].
The component of the velocity parallel to the shear plane is assumed to be
constant. The secondary shear zone reflects the sliding speed in the primary shear
zone [2]. The ve-
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_65
648 W. Yousfi et al.

locity component perpendicular to the rake face is considered; its value varies
linearly along the chip-tool contact length. A linear variation of the chip sliding
speed at the interface is considered, starting from a sticking contact (zero velocity)
and reaching a maximum value at the end of the contact (the chip velocity).
The tool acuity causes part of the material to be pushed back along the flank
face, deforming plastically. Elastic recovery occurs on the surface of the machined
material. The section of the tool in the calculation plane is represented by a
circular arc whose two asymptotes are the flank face and the rake face. A power
balance between the areas above and below the stagnation point is used to
determine the exact position of this point.

2 Volumic modeling of strain fields and strain rate

In each shear zone [3, 5, 6], and for each direction, the velocity field is determined
according to the kinematic and geometric parameters of the cutting operation. The
maximum displacements in the three directions of the cutting area are calculated
for an elementary time, which corresponds to the time taken for a volume element
to pass through the entire thickness of the primary shear zone. The next
paragraphs give a detailed description of the different velocity fields in the main
cutting areas.

2.1 Study of the primary shear zone

For each direction of the local coordinate system R_M(x_M, y_M, z_M) associated
with this zone, the determination of the velocity fields is based on boundary
conditions. Two extreme points are defined on the cutting edge, P2,inf and P2,sup
(Fig. 1). The orientation of the cutting edge within the modelling space generates
a gradient dV_N of the velocity V_N (Fig. 1). On the contact surface between the
tool and the entire thickness of the primary shear zone, the tool-material contact is
sticking (V_N = 0 for y_M = 0).
The cutting speed gradient between the two extreme points on the cutting edge
(Fig. 1) generates a velocity gradient normal to the primary shear zone, V_N.
Coming out of the primary shear zone (x_M = h_moy), the sliding speed varies
linearly between points P2,inf and P2,sup. This speed is considered to be zero on
the part side (x_M = 0) [7]. The speed carried by z_M is zero at the part-primary
shear zone interface (x_M = 0) and reaches its maximum on leaving this zone.
Starting from the material-tool contact surface (y_M = 0) and moving towards the
free surface of the primary shear zone, this new velocity component varies
linearly.
3D modelling of the mechanical actions of cutting … 649

Fig. 1. Velocity distribution and displacement field carried by x_M.

The minimum value at the interface (x_M = h_moy, y_M = 0) depends on the
sliding speed. The expressions of the velocity and displacement are defined in
Table 1.

Table 1. Velocity and displacement vectors in the primary shear zone.

Direction x_M:
  Velocity:      V_N(y_M, z_M) = (dV_N / (a_p · l)) · y_M · z_M + (V_N,P2inf / l) · y_M                                       (1)
  Displacement:  U_xM(y_M, z_M, t) = V_N(y_M, z_M) · t = (dV_N / (a_p · l)) · y_M · z_M · t + (V_N,P2inf / l) · y_M · t       (2)

Direction y_M:
  Velocity:      V_S(x_M, z_M) = (dV_S / (a_p · h_moy)) · x_M · z_M + (V_S,P2inf / h_moy) · x_M                               (3)
  Displacement:  U_yM(x_M, z_M, t) = V_S(x_M, z_M) · t = (dV_S / (a_p · h_moy)) · x_M · z_M · t + (V_S,P2inf / h_moy) · x_M · t   (4)

Direction z_M:
  Velocity:      V_zM(x_M, y_M) = ((V_z − V_gz) / (l · h_moy)) · x_M · y_M + (V_gz / h_moy) · x_M                             (5)
  Displacement:  U_zM(x_M, y_M, t) = V_zM(x_M, y_M) · t = ((V_z − V_gz) / (l · h_moy)) · x_M · y_M · t + (V_gz / h_moy) · x_M · t  (6)
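Each velocity component in Table 1 is a bilinear interpolation between its boundary values; for instance, eq. (1) vanishes on the sticking surface (y_M = 0) and varies from V_N,P2inf to V_N,P2inf + dV_N along the edge. A minimal numerical check of this structure (the parameter values are made up for illustration, not taken from the paper):

```python
def v_n(y_m, z_m, dv_n, v_n_p2inf, a_p, l):
    """Velocity normal to the primary shear zone, eq. (1) of Table 1:
    bilinear in (y_m, z_m), zero on the sticking surface y_m = 0, varying
    from V_N,P2inf to V_N,P2inf + dV_N along the edge direction z_m."""
    return (dv_n / (a_p * l)) * y_m * z_m + (v_n_p2inf / l) * y_m

# Hypothetical values: depth of cut a_p = 2 mm, zone length l = 0.1 mm,
# edge velocity gradient dV_N = 0.5 m/s, V_N,P2inf = 2.0 m/s
a_p, l, dv_n, v0 = 2e-3, 1e-4, 0.5, 2.0
v_n(0.0, 1e-3, dv_n, v0, a_p, l)  # -> 0.0 (sticking contact at y_m = 0)
v_n(l, 0.0, dv_n, v0, a_p, l)     # ~ v0 at the P2,inf end of the edge
v_n(l, a_p, dv_n, v0, a_p, l)     # ~ v0 + dv_n at the P2,sup end
```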

2.2 Study of the secondary shear zone

The secondary shear zone is assumed to be triangular (Fig. 2). The velocity
carried by x_c varies linearly along the axis normal to the cutting face, starting
from the sliding speed at the interface and reaching a value equal to the chip speed
at the surface between the shear zone and the rest of the chip (plane z_c x_c').
Along axis x_c (chip-cutting face contact), the speed increases from zero at the
tool tip to a value equal to the chip speed as it leaves the contact zone. Starting
from point P2,inf and moving towards point P2,sup, this speed component
increases linearly (Fig. 2a). The trajectory of a volume element entering the
secondary shear zone is considered to be curvilinear (Fig. 2b).
Fig. 2. Velocity fields in the secondary shear zone: carried by xc (a); carried by yc (b).
A new velocity component Vy, normal to the cutting face, varies from a maximum value at the entry of this zone to zero at the exit. Fig. 2b shows the variation of this velocity between the two extreme points on the edge. The velocity component carried by the cutting edge varies linearly across the thickness of the secondary shear zone. It increases from a value equal to the sliding speed across the entire cutting face to a maximum value on entering the secondary shear zone. Velocity and displacement vectors were determined in each direction in the same way as for the primary shear zone.
3 Determination of mechanical actions in the case of a round insert

3.1 Edge discretization

For the round insert, and in order to remain close to the linear velocity distribution fields in the different cutting areas, the cutting edge was discretized into elements of constant length dy (Fig. 3a). This discretization generates elements with different cutting depths api. The number of elements is determined using two criteria (Fig. 3b):
- The evolution of the temperature calculated with the Komanduri approach [8], a parameter which is strongly influenced by the cutting speed and therefore by the position along the cutting edge.
- The evolution of the computing time.
Fig. 3. (a) Discretization of the edge for a round insert; (b) evolution of the temperature gradient between the two extreme points of the cutting edge, and of the model computation time, as a function of the number of elements.
For a range of 2 to 39 elements, the temperature gradient ΔT reaches a threshold equal to 9.2 °C. Discretizing the edge into 9 elements (instead of 39) modifies ΔT by only 9% while reducing the computation time by 330%. Nine elements are therefore used for all the following calculations.
3.2 Calculating the strain and the strain rate along the cutting edge
The equivalent strain is determined from the strain tensor calculated by a spatial
derivation of the displacement field. The generalized strain rate is determined
from the strain rate tensor calculated by a spatial derivation of the velocity vector.
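The spatial-derivation step can be illustrated with central differences: from any displacement field U(x), build the small-strain tensor and a von Mises-type equivalent strain. The definition of the "generalized" strain used in the paper is not spelled out here; the sketch below uses the standard \(\sqrt{\tfrac{2}{3}\,\varepsilon : \varepsilon}\) form and omits the deviatoric correction, which is exact for traceless (shear-dominated) fields.

```python
import math

def strain_tensor(U, x, h=1e-6):
    """Small-strain tensor eps = 1/2 (grad U + grad U^T), with the
    gradient of U approximated by central differences at point x."""
    grad = [[0.0] * 3 for _ in range(3)]
    for j in range(3):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        up, um = U(xp), U(xm)
        for i in range(3):
            grad[i][j] = (up[i] - um[i]) / (2 * h)
    return [[0.5 * (grad[i][j] + grad[j][i]) for j in range(3)]
            for i in range(3)]

def equivalent_strain(eps):
    """Equivalent strain sqrt(2/3 * eps:eps) (von Mises form)."""
    s = sum(eps[i][j] ** 2 for i in range(3) for j in range(3))
    return math.sqrt(2.0 / 3.0 * s)
```

For a simple shear displacement \(U = (\gamma\, y, 0, 0)\) this reproduces the classical result \(\bar\varepsilon = \gamma/\sqrt{3}\).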
Fig. 4. (a) Discretization of the cutting edge to determine the strain and strain-rate fields (round insert); (b) the angular positions.
To take into account the instantaneous variation of the feed, two angular positions θ1 and θ2 were chosen, corresponding to rotation angles of 90° and 135° respectively (Fig. 4b). The curves in Fig. 5a and 5b show the evolution of the equivalent strain and of the generalized strain rate along the cutting edge in the secondary shear zone. For these two angular positions θ1 and θ2, the equivalent strain increases sharply when approaching the generated surface (Fig. 5a); this variation is due to the geometry of the insert. The generalized strain rate decreases along the cutting edge when approaching the generated surface (Fig. 5b).
Fig. 5. Variation of (a) the strain and (b) the strain rate in the secondary shear zone (ZCS) for each element of the cutting edge (round insert, 9 elements), for an insert radius of 4 mm.
The strain and strain rate calculated in the secondary shear zone are used to calculate the force perpendicular to the cutting face in this area. The equivalent stress is calculated with a Johnson–Cook behavior law [9]. A and B are the strain-hardening parameters, whereas C is a dimensionless strain-rate strengthening coefficient. Parameters n and m are the power exponents of the strain-hardening and thermal-softening terms. The values of these parameters for the 42CD4 steel are given in Table 2.
Table 2. Mechanical characteristics of 42CD4 and Johnson–Cook parameters [10].

Hardness   Young's modulus   A     B     C      m      n
260 Hv     210 GPa           598   768   0.013  0.209  0.807
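The Johnson–Cook evaluation with the Table 2 parameters can be sketched directly. The reference strain rate and the room/melting temperatures used below are assumptions for illustration (they are not given in the text); the stress units follow whatever units A and B carry.

```python
import math

def johnson_cook_stress(eps, eps_rate, T,
                        A=598.0, B=768.0, C=0.013, n=0.807, m=0.209,
                        eps_rate_0=1.0, T_room=20.0, T_melt=1500.0):
    """Johnson-Cook equivalent stress:
    sigma = (A + B*eps^n) * (1 + C*ln(eps_rate/eps_rate_0)) * (1 - T*^m),
    with the homologous temperature T* = (T - T_room)/(T_melt - T_room).
    eps_rate_0, T_room and T_melt are assumed values, not from the paper."""
    T_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * eps ** n)
            * (1.0 + C * math.log(eps_rate / eps_rate_0))
            * (1.0 - T_star ** m))
```

At zero strain, reference strain rate and room temperature the law reduces to the yield parameter A; the stress grows with strain and strain rate and softens with temperature, as expected.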
To calculate the stress, the average temperature in the secondary shear zone is determined with the elementary cutting model, by integrating the cutting velocity corresponding to each length element. The curves illustrated in Fig. 6b show the evolution of the temperature along the cutting edge for each element. The average temperature in the secondary shear zone increases while approaching the generated surface of the workpiece. This temperature gradient is pronounced for the round insert, owing to the significant decrease of the local chip section.
The temperature gradient decreases from position θ1 to position θ2. This variation is due to a greater propagation of heat at θ2, generated by the increase of the chip section (variation from θ1 to θ2).
Fig. 6. (a) Variation of the normal force on the cutting face; (b) variation of the average temperature in the secondary shear zone (ZCS) for each element of the cutting edge.
In the coordinate system R4 bound to the insert after orientation (Fig. 7), the elementary action torsor is applied to each element of the cutting length.
Fig. 7. Round insert orientation in the coordinate system R4.
This torsor is based on cutting forces determined with an orthogonal cutting model, incorporating the normal force applied on the rake face. The global action torsor, expressed in the reference frame R4 at the theoretical tool tip, is then:
\[
\{\tau_{\text{Pièce}\rightarrow\text{Outil}}\}_{P,R_4} =
\begin{Bmatrix}
\sum_{i=1}^{n} F_{ci,P_i} \\
\sum_{i=1}^{n} F_{ti,P_i} \\
\sum_{i=1}^{n} F_{zi,P_i} \\
\sum_{i=1}^{n} \big[\, d_{1iy}(y)\,F_{zi,P_i}\cos(\gamma_0) - d_{1iz}(y)\,F_{ti,P_i} \,\big] \\
\sum_{i=1}^{n} \big[\, d_{1iz}(y)\,F_{ci,P_i} - d_{1iy}(y)\,F_{zi,P_i}\sin(\gamma_0) \,\big] \\
\sum_{i=1}^{n} \big[\, d_{1iy}(y)\sin(\gamma_0)\,F_{ti,P_i} - d_{1iy}(y)\cos(\gamma_0)\,F_{ci,P_i} \,\big]
\end{Bmatrix}
=
\begin{Bmatrix}
F_{c4} \\ F_{t4} \\ F_{z4} \\ M_{x4} \\ M_{y4} \\ M_{z4}
\end{Bmatrix}_{R_4},
\qquad
\overrightarrow{PP_i} =
\begin{bmatrix} 0 \\ d_{1iy}(y) \\ d_{1iz}(y) \end{bmatrix}_{R_4}
\tag{7}
\]
This torsor is then expressed in the coordinate system R1 (before orientation). The moments predicted by the model contribute on average about ten percent of the total cutting power. This percentage represents the contribution of the moments generated by the geometry and the kinematics to the total power; it does not include the moments created by the rotation of the material in the cutting regions, nor the tribological phenomena occurring at the tool–material interfaces [11].
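The element-wise summation of Eq. (7) is straightforward to implement once the elementary forces and lever arms are known. The data layout below (one tuple per edge element) is hypothetical, and the signs follow Eq. (7) as reconstructed above.

```python
import math

def global_torsor(elements, gamma0):
    """Sum the elementary cutting actions (Eq. 7) into the global torsor
    at the theoretical tool tip, in the insert frame R4.
    elements: iterable of (F_c, F_t, F_z, d1y, d1z) per edge element."""
    Fc4 = Ft4 = Fz4 = Mx4 = My4 = Mz4 = 0.0
    c, s = math.cos(gamma0), math.sin(gamma0)
    for F_c, F_t, F_z, d1y, d1z in elements:
        Fc4 += F_c                       # resultant components
        Ft4 += F_t
        Fz4 += F_z
        Mx4 += d1y * F_z * c - d1z * F_t # moment components
        My4 += d1z * F_c - d1y * F_z * s
        Mz4 += d1y * s * F_t - d1y * c * F_c
    return (Fc4, Ft4, Fz4), (Mx4, My4, Mz4)
```

For the nine-element discretization used above, `elements` would hold one tuple per element, each computed from the orthogonal cutting model at its local cutting speed.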
4 Conclusions and prospects
This work presents part of a new approach for calculating cutting actions, based on the calculation of the strains and strain rates in the main shear zones. The orientation of the insert in space creates strong stress gradients which directly affect the elementary cutting forces. These gradients generate a cutting-force gradient between the volume elements, which is the cause of the cutting moments at the tool tip; these moments contribute to the total power consumption and help explain the phenomena encountered during milling operations. The model results are compared with experimental tests [12], and the continuation of this work aims at a complete simulator of the mechanical actions in milling.
References
1. Yousfi, W., et al., 3D Modelling of kinematic fields in the cutting area: application to milling.
International Journal of Advanced Manufacturing Technology, 2016. 82.
2. Merchant, M.E., Mechanics of the Metal Cutting Process. I. Orthogonal Cutting and a Type 2 Chip.
Journal of Applied Physics, 1945. 16: p. 267-275.
3. Oxley, P.L.B., Mechanics of metal cutting. International Journal of Machine Tool Design and Research,
1961. 1: p. 89-97.
4. Dargnat, F., Modélisation semi-analytique par approche énergétique du procédé de perçage de
matériaux monolithiques. 2006, Thèse Université de Bordeaux 1, N° d'ordre : 3216. p. 204.
5. Calamaz, M., Approche expérimentale et numérique de l'usinage à sec de l'alliage aeronautique
Ti6V. 2008, Thèse Université Bordeaux 1, N° d'ordre : 3605.
6. Laheurte, R., Application de la théorie du second gradient à la coupe des métaux 2004, Thèse
université de Bordeaux 1, N° d'ordre : 2935.
7. Yousfi, W., et al., 3D modeling of strain fields and strain rate in the cutting area: application to
milling. International Journal of Advanced Manufacturing Technology, 2015. 81: p. 1-12.
8. Komanduri, R. and Z.B. Hou, Thermal modeling of the metal cutting process — Part III:
temperature rise distribution due to the combined effects of shear plane heat source and the tool–
chip interface frictional heat source. International Journal of Mechanical Sciences, 2001. 43(1): p.
89-107.
9. Johnson, G.R. and W.H. Cook, A constitutive model and data for metals subjected to large strain,
high strain rates and high temperatures. Proceedings of the 7th International Symposium on
Ballistics, 1983: p. 541–547.
10. Hamann, J.C., V. Grolleau, and F. Le Maître, Machinability improvement of steels at high cutting
speeds-study of tool/work material interaction. Annals of the CIRP, 1996. 45: p. 87-92.
11. Royer, R., Finite strain gradient plasticity theory for machining simulation. 2012, Thesis University
of Bordeaux, N° : 4640.
12. Albert, G., Identification et modélisation du torseur des actions de coupe en fraisage. 2010, Thesis
University of Bordeaux 1, N° : 4152. p. 240.
Engineering methods and tools enabling
reconfigurable and adaptive robotic deburring
Giovanni Berselli1*, Michele Gadaleta2, Andrea Genovesi2, Marcello Pellicciari2, Margherita Peruzzini2, Roberto Razzoli1
1
Department of Mechanics, Energetics, Management and Transportation, University of
Genova, Via all’Opera Pia 15/A, 16145 Genova, Italy
2
Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia, Via
Vivarelli 10, 41125 Modena, Italy
* Corresponding author. Tel.: +39335809236; E-mail address: giovanni.berselli@unige.it.
Abstract According to recent research, it is desirable to extend Industrial Robots' (IR) applicability to strategic fields such as heavy and/or fine deburring of
customized parts with complex geometry. In fact, from a conceptual point of view,
anthropomorphic manipulators could effectively provide an excellent alternative
to dedicated machine tools (lathes, milling machines, etc.), by being both flexible
(due to their lay-out) and cost efficient (20-50% cost reduction as compared to
traditional CNC machining). Nonetheless, in order to successfully enable high-
quality Robotic Deburring (RD), it is necessary to overcome the intrinsic robot
limitations (e.g. reduced structural stiffness, backlash, time-consuming process
planning/optimization) by means of suitable design strategies and additional engi-
neering tools. Within this context, the purpose of this paper is to present recent
advances in design methods and software platforms for RD effective exploitation.
Focusing on offline methods for robot programming, two novel approaches are described. First, practical design guidelines (devised via a DOE method) for optimal IR positioning within the robotic workcell are presented. Second, a
virtual prototyping technique for simulating a class of passively compliant spin-
dles is introduced, which allows for the offline tuning of the RD process parame-
ters (e.g. feed rate and tool compliance). Both approaches are applied in the design
of a robotic workcell for high-accuracy deburring of aerospace turbine blades.
Keywords: Virtual prototyping; Engineering methods; Industrial robotics; Intelligent factory.
1 Introduction
Modern factories are required to be efficient and flexible, assuring the highest
manufacturing quality while appropriately responding to fast-varying changes in
© Springer International Publishing AG 2017 655
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_66
market requests [1,2]. In this context, generic machining processes of complex
components may heavily affect the overall factory productivity, due to the relevant
and hardly predictable set-up and cycle times. For instance, deburring operations
are usually performed on cast parts characterized by variable geometries, material
properties and burr thickness, so that a lot of time is wasted, especially at change-
over, in order to tailor the process parameters to the application at hand. Ideally, these kinds of processes would require a level of adaptation and flexibility that can be achieved only by human operators. In parallel, the ever-increasing quality specifications demand accuracy and repeatability that are achievable only by CNC
machines, which provide the necessary stiffness, low downtimes, and high
productivity required by the market. However, CNC machining centers suffer from a limited workspace, inflexibility (due to the limited number of axes), and rather high initial investments.
Within this scenario, due to the intrinsic cost-effectiveness and ease-of-
installation, Robotic Deburring (RD) may provide the ideal trade-off between op-
erational accuracy/repeatability and re-configurability/adaptability. As an exam-
ple, Fig. 1 depicts two industrial RD processes, the workpiece being either fixed
(tool-in-hand) or attached to the robot end-effector (part-in-hand).

Fig. 1. Examples of RD processes, starting from a workpiece CAD representation (a). Tool-in-
hand (b) or part-in-hand (c) approach. Courtesy of SIR SpA, Italy.
In particular, generic RD processes may be classified as follows [3]:
• Robotic heavy deburring. In this case, the main technical challenges are relat-
ed to the limited stiffness of industrial robots, as compared to CNC tooling
[4,5], along with the relevant geometrical shape errors on the workpieces. On
one hand, the significant process forces arising during heavy deburring, if not
correctly predicted (and possibly compensated), may cause non-negligible de-
flections or even compromise the robot structure, thus leading to a process fail-
ure. In parallel, workpieces with large geometrical errors and variations require
custom-defined process sequences and deburring parameters.
• Robotic high-accuracy deburring. In this case, the main technical challenges are not solely related to the process forces, but also to the complex motions that must be performed with greater accuracy than that available with state-of-the-art industrial robots [4]. This is still an unsolved problem, since the robot kinematic calibration is not sufficient for a complete compensation of the motion errors. In addition, similarly to the previous case, the process control
is complicated by the presence of uneven burrs that may introduce disturbances
and oscillations. Hence, when strict tolerances and a good surface roughness are required, a uniform contact pressure between the tool and the workpiece must be guaranteed at all times, regardless of the burr thickness. In these
instances, either an active force feedback or a passive compliant tool are usual-
ly adopted, the passive solution being more industrially common thanks to its
cost-effectiveness, ease of use and seamless/faster adaptation to unexpected
process variations or collisions [5,6,7].
Regardless of the type of deburring operation considered, the initial design of the cell
layout and the subsequent parameter tuning of a RD process (e.g. choice of the
tools, feed-rate, optimal deburring path) are usually based on the designer experi-
ence and on several physical tests, which actually reduce the cell productivity and
its real operating flexibility. Therefore, a virtual engineering approach would be
advisable, in order to predict the RD performance without any on-field testing,
possibly leading to a “first-time-right”, “plug-and-produce” technology applica-
tion. To this purpose, the Intelligent Factory Cluster Adaptive, an Italian collabo-
rative industry-academia joint project, aims at developing novel engineering
methods and, possibly, computer-aided tools to optimally design and manage flex-
ible RD cells. In particular, as a part of a more general design framework partially
described in past publications [3], this paper reports about recent advances in RD
design methods by focusing on two main innovations:
• A set of practical design guidelines (devised via a DOE method) for the op-
timal IR positioning within the robotic workcell. These guidelines will help the
RD cell designer in the choice of a near-optimal layout.
• A virtual prototyping technique for simulating a class of passively compliant
spindles. On one hand, these CAD-based models allow for the offline tuning of
the spindle parameters (i.e. feed rate and tool compliance) before physical test-
ing. In parallel, cutting forces and, consequently, process & robot induced er-
rors can be readily estimated.
As an example of the practical applicability of the abovementioned tools, an in-
dustrial case study is briefly presented, concerning the design of a robotic
workcell for high-accuracy deburring of aerospace turbine blades.
2 Related Work on Offline RD Cell Design and Optimization
The main drawback of robotic machining (e.g. deburring, grinding, milling), as compared to the CNC alternative, is the limited accuracy of industrial manipulators, which strongly depends on the robot mechanical structure, control system, and
tor, which strongly depends on the robot mechanical structure, control system, and
full chain of components placed between Tool Center Point (TCP), spindle and
floor. In particular, the error sources in a RD process may be classified as follows:
• External (environmental) sources of error due to the overall cell design, here
including a non-ideal behavior of any auxiliary device that should be properly
selected/located within the cell. According to Schneider et al. [4], environmen-
tal errors comprise temperature-dependent disturbances, deflections/vibrations
of the robot base attachment, and non-negligible compliance of workpiece
holder (gripper) and tooling (spindle). As a particular case, the tool compliance
potentially represents an important error source (rather than a positive effect) if
its contribution is neglected when programming the robot path.
• Robot-dependent sources, which may be further subdivided into geometrical and non-geometrical errors [8]. Geometrical errors, which entail a discrepancy between predicted and actual TCP pose (whether the robot is moving or not), are due to imperfect geometries and mating/assembly errors. Non-geometrical errors arise during the robot motion and comprise un-modeled dynamic ef-
fects, which are not compensated by the traditional robot controller (e.g. struc-
tural deformations, non-linear joint stiffness, backlash in the gear reducers,
stick-slip effects, electric motors hysteresis). For instance, as reported in [4],
link and joint compliance during free motions may contribute up to 8–10% of the TCP pose error.
• Process-dependent errors, due to force-induced vibrations (chatter) during
deburring. Given the overall cell design, and once the main components are de-
cided, the values of the machining forces strongly depend on the process pa-
rameters (i.e. spindle speed, axial/radial depth of cut, and chip load [9]). Tool
forces are exchanged at the robot TCP, thus causing undesired deflections (i.e. robot-dependent errors).
Owing to the abovementioned issues, several error compensation strategies
have been proposed in the literature, which may be conceptually divided into
online methods and offline methods. The latter include the robot kinemat-
ic/dynamic calibration and the offline generation of compensated robot paths. For
instance, a recent outcome of an EU-funded project named COMET [10] is a set of software tools that allow one to quickly compute a TCP trajectory suitably adjusted
according to the predicted variations from the desired (ideal) IR motion. In prac-
tice, the main idea is to program the robot in order to nominally provide a trajecto-
ry which differs from the desired one, but that will closely approximate the ideal
path as a consequence of all the errors affecting the robot arm. The necessary inputs for such a “compensated” offline path planner are the process forces, on one hand, and, on the other, a rather complex robot model which ideally accounts for all the main error sources.
As for online compensation techniques, they employ a set of supplementary
sensors (e.g. vision-based systems, either commercial or custom-made such as the one described in [11], vibration-compensating mechanisms, and force-feedback
devices). Surely, online compensation methods positively affect the final product
quality. However, they can hardly be implemented in the state-of-the-art industrial scenario, due to the initial hardware costs and to IR control architectures that are difficult to interface with. In summary, the typical RD cell components, related error sources,
Order of Magnitude (OoM), and possible compensation strategies are depicted in Fig. 2.
Fig. 2. Main Components of a RD Workcell; error sources and related Order of Magnitude
(OoM); possible compensation strategies (online strategies are underlined).
In any case, a quick overview of the state of the art confirms that engineering methods and software tools which allow RD to be improved without further hardware/software investments are genuinely needed, so that further research efforts in this direction are fully motivated.
3 Optimal Workcell Design
The RD design procedure starts with the analysis of the workpiece geome-
try/material (collected into a CAD model) and, therefore, a rough definition of the
process requirements (e.g. heavy or high-precision deburring). Then, a set of design
steps should be consequently performed, namely:
• Step 1: an initial cell layout is defined and the main off-the-shelf components
are selected. In this phase, a computer-aided OLP (offline programming) soft-
ware is used to simulate the robot motions and check for reachability & colli-
sion avoidance between IR and auxiliary devices. The main advantage of OLP
packages (e.g. Delmia Robotics or Siemens Technomatix) is the possibility to
directly generate the IR code to be fed into the industrial controller. On the oth-
er hand, the main OLP limitation is their purely kinematic nature, so that IR
dynamic effects and process forces cannot be computed. In any case, once
reachability constraints are enforced, multiple choices for the IR base position
are always available, since an RD process usually employs an extremely lim-
ited portion of the accessible robot workspace. Therefore, it is desirable to se-
lect an IR placement that allows minimizing (if possible) robot-induced errors
(see Sec. 3.1).
• Step 2: a CAM system is employed to determine the deburring parameters and
the magnitude/direction of the cutting forces along the tool path. Nonetheless,
when a passively compliant spindle is used, commercial CAM packages do not
allow a correct prediction of the process-induced errors. As an alternative solu-
tion, the tool/workpiece interaction can be modelled within a general-purpose
Multi-Body Dynamic software, where the spindle deformations may be accurately estimated.
• Step 3: if a robot dynamic model (robot signature) is available, the cutting
forces from Step 2 can be used to simulate the actual robot motion and, eventu-
ally, provide compensated trajectories. At last, kinematic calibration is applied
(e.g. employing the method described in [3], namely a subset of the design pro-
cedure described hereafter).
• Step 4: finally, the compensated IR path is re-introduced in the initial OLP
software for robot code generation.
The cell design workflow is summarized in Fig. 3. Focusing on the first two de-
sign steps, it is necessary to provide a suitable initial guess for the robot base posi-
tion and, subsequently, a simulation model of the (compliant) tool behavior.
Fig. 3. Workflow for RD process design & parameter tuning.
3.1 Guidelines for Workcell Layout Design
Once an IR model is chosen, a good estimation of a nearly optimal robot
placement can be achieved by simply measuring the trajectory tracking error in the
proximity of the RD working area. In this work, the IR actual pose is captured via
a Faro Laser Tracker (the receiver being mounted on the TCP - accuracy up to
0.015 mm). As depicted in Fig. 4, a set of linear paths along the vertical (z-axis),
lateral (y-axis) and forward (x-axis) directions are performed, followed by a set of
circular motions.
Fig. 4. Experimental evaluation of the most practical IR base position for a given task.
It should be highlighted that: a) the actual IR trajectories during deburring will surely differ from the test paths, although a linear motion roughly approximates the actual tool path for a high curvature radius of the workpiece; b) no external forces are applied on the TCP, so that the experimental results are mainly useful when process forces are low (i.e. high-precision deburring). In summary, the following conclusions can be drawn:
• Robot-induced errors increase as the number of joints involved in the movements increases;
• The linear motions along the x direction are (on average) those suffering from the largest errors, movements along the y and z directions being generally preferable;
• As the TCP speed increases, the y-axis motion errors remain mostly unaltered, whereas errors in the x and z directions increase;
• Concerning y-axis motions, the error abruptly increases as the robot symmetry axis is crossed (i.e. during the transition from positive to negative y coordinate), due to the motion inversion of the shoulder and elbow IR joints.
In conclusion, lateral and vertical paths should be preferred; that is, the spindle should be approached mainly through y-axis and z-axis motions. As an example of
the experimental campaign outcomes, Fig. 5 reports the average error between
ideal and actual paths along equally spaced vertical lines (hereafter referred to as
“zones”, the zone numbering method being highlighted in Fig. 6). As it can be
seen, certain zones should be preferred since the motion error is lower.
Fig. 5. Example of mean error evaluation for vertical (linear) TCP motions.
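The zone comparison of Fig. 5 amounts to ranking candidate regions by their mean tracking error. The sketch below illustrates this with made-up per-zone error samples (mm); zone identifiers and values are hypothetical, not measured data from the paper.

```python
# Hypothetical per-zone tracking-error samples (mm), e.g. collected with
# a laser tracker along vertical test paths. All values are made up.
zone_errors = {
    1: [0.21, 0.25, 0.23],
    2: [0.12, 0.10, 0.11],
    3: [0.18, 0.17, 0.19],
}

def rank_zones(zone_errors):
    """Return zone ids sorted by mean tracking error (best zone first)."""
    mean = {z: sum(v) / len(v) for z, v in zone_errors.items()}
    return sorted(mean, key=mean.get)

print(rank_zones(zone_errors))  # -> [2, 3, 1]
```

The best-ranked zones are then the preferred candidates for placing the spindle relative to the robot base.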
3.2 Offline Tuning of RD Process Parameters
Owing to their simplicity as compared to force-feedback devices, passive com-
pliant tools are commonly adopted in industry. The tool considered in this paper
(chosen among other alternatives due to its widespread adoption and classical ar-
chitecture) is characterized by pneumatic actuation and radial compliance (see
[12,13] for a detailed description of the device and [14] for a possible design strat-
egy of pneumatic spindles). In this case, a suboptimal parameter tuning may lead
to either partial or excessive deburring (where part of the workpiece is accidental-
ly removed). With reference to Fig. 6, the spindle comprises a pneumatic motor
(fed through inlet 1) that provides the energy for the cutter rotation, and a pressur-
ized compliant support mechanism (fed through inlet 2). The prediction of the cut-
ting forces and of the optimal process parameters is achieved by means of Virtual
Prototyping approach. A spindle CAD model is exported (via .igs files) into a
Multi-Body Dynamic software (Recurdyn MBD package), which allows for a co-
simulation procedure with an external model of the process forces (built into a
Matlab/Simulink environment). The MBD model accounts for the dynamic and
frictional effects of every moving component, here including the contribution of
the compliant mechanism (always neglected in previous studies). In parallel, the
cutting forces are predicted via a modified version of the Altintas model (the inter-
ested reader may refer to [15,16]). A simulated measure of the process error, e, is
then defined as a function of workpiece material, feed-rate, vf, and spindle compli-
ance (via the inlet pressure, p). A positive error indicates a partial deburring,
whereas a negative error indicates an excessive deburring. As an example, Fig. 6
reports a contour map of the function e(vf, p), highlighting the point locus leading
to an optimal parameter tuning (i.e. e=0). Note that, due to the high number of in-
teracting physical phenomena, the optimal process parameters are hardly predicta-
ble without an integrated modeling technique. Currently, the same virtual proto-
typing method is being used to describe also other types of cutting devices (e.g.
axially compliant spindles), in order to provide a library of re-usable solutions for
the RD designer.

Fig. 6. Description of the compliant spindle (a). Contour map of process-induced errors and de-
termination of optimal feed-rate, vf, and tool compliance, via input pressure p (b).
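Extracting the optimal-tuning locus e(vf, p) = 0 from such a map reduces to a zero-crossing search along the pressure axis. The error surface below is an analytic stand-in (a made-up linear trade-off), not the paper's MBD/Simulink co-simulation output.

```python
import numpy as np

# Stand-in error surface e(vf, p): positive = partial deburring,
# negative = excessive deburring. The linear form is for illustration only.
def e(vf, p):
    return 0.05 * vf - 0.02 * p

def optimal_pressure(vf, p_grid):
    """For a given feed rate, locate the inlet pressure where e crosses
    zero, by linear interpolation between grid points."""
    errs = np.array([e(vf, p) for p in p_grid])
    crossings = np.where(np.diff(np.sign(errs)) != 0)[0]
    if crossings.size == 0:
        return None                       # no zero crossing on this grid
    i = crossings[0]
    p0, p1, e0, e1 = p_grid[i], p_grid[i + 1], errs[i], errs[i + 1]
    return p0 - e0 * (p1 - p0) / (e1 - e0)

p_grid = np.linspace(0.0, 6.0, 61)
print(optimal_pressure(2.0, p_grid))      # pressure giving e = 0
```

On this illustrative surface, the zero crossing for vf = 2.0 comes out at p ≈ 5.0; repeating the search over a range of feed rates traces the e = 0 locus shown in the contour map.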
3.3 Case Study: High-accuracy deburring of aerospace turbine blades
The final cell layout is depicted in Fig. 7 and comprises an industrial robot ABB
IRB 6640 185 kg / 2.8 m, a set of interchangeable grippers (Schunk 160 plus), five
compliant spindles (e.g. ATI RC-660, with ±9 mm of compliance-induced dis-
placement capabilities) mounted on an indexed table, a Renishaw touch probe for
cell calibration, and a rotating table for loading/unloading workpieces.
Fig. 7. Final Workcell Layout.
The robot base positioning has been chosen according to the design guidelines reported in Sec. 3.1, whereas the spindle parameters have been initially set according to the outcomes of Sec. 3.2. In particular, the DOE activity suggested increasing the height of the robot base by about 500 mm (or, alternatively, decreasing the height of the tool holder by the same amount).
Experimental evaluation is in progress in order to assess the added benefit of an
offline compensation method for robot path generation [4].
4 Discussion and Conclusions
After a brief description of the state-of-the-art design practice in robotic deburring, this paper has presented two main innovations for improving the initial cell layout
and for tuning the spindle parameters according to the process requirements. The
first innovation, concerning design guidelines for robot base positioning, is based
on a set of experimental measures rather than a mathematical model. In this phase,
it is simply assumed that, generally speaking, the overall robot workspace is al-
ways larger than the effective workspace required for deburring operations. The
second innovation concerns the model of a class of passively compliant spindles,
and follows from the observation that tool compliance is mostly neglected in
commercial CAM software. Both these results can be used as an effective aid for
the RD engineer in the initial design phases of a complete cell. In addition, the
outcome of the spindle model (namely, the process forces) can be used as input of
more complex (offline) trajectory planning techniques.
Acknowledgments The authors want to acknowledge SIR SpA for the fundamental technical
and managerial contribution to the development of the present research project.
References
1. Heisel U. and Meitzner M. Progress in Reconfigurable Manufacturing Systems. Journal for Manufacturing Science and Production, 2011, 6(1-2), 1-8.
2. Chen Y. and Dong F. Robot machining: recent development and future research issues. Inter-
national Journal of Advanced Manufacturing Technology, 2013, 66(9-12), 1489-1497.
3. Leali F., Vergnano A., Pini F., Pellicciari M. and Berselli G. A Workcell Calibration Method
for Enhancing Accuracy in Robot Machining of Aerospace Parts. International Journal of
Advanced Manufacturing Technology, 2014, DOI: 10.1007/s00170-014-6025-y, (available
online).
4. Schneider U. et al. Improving robotic machining accuracy through experimental error investi-
gation and modular compensation. International Journal of Advanced Manufacturing Tech-
nology, 2014, DOI: 10.1007/s00170-014-6021-2y, (available online).
5. Liang L., Xi F. and Liu K. Modeling and Control of Automated Polishing/deburring Process
Using a Dual-Purpose Compliant Toolhead. International Journal of Machine Tools and
Manufacture, 2008, 48(12-13).
6. Acaccia G.M., Callegari M., Michelini R.C., Molfino R.M. and Razzoli R.P. Functional as-
sessment of the impedance controller of a parallely actuated robotic six d.o.f. rig. In Proc. 6th
IEEE Mediterranean Conference on Control and Systems (MCCS), June 9-11, 1998, Alghero,
Italy, pp. 397-402.
7. Acaccia G.M., Callegari M., Michelini R.C., Molfino R.M. and Razzoli R.P. Dynamics of a
co-operating robotic fixture for supporting automatic deburring tasks. In ICI&C ‘97 - Interna-
tional Conference on Informatics and Control, St. Petersburg, Russia, June 9-13, 1997, pp.
1244-1254.
8. Mustafa S.K., Pey Y.T., Yang G. and Chen I. A geometrical approach for online error com-
pensation of industrial manipulator. In: IEEE/ASME International Conference on Advanced
Intelligent Mechatronics, 2010, pp. 738-743.
9. Denkena B. and Hollmann F. Process Machine Interactions - Prediction and Manipulation of
Interactions between Manufacturing Processes and Machine Tool Structures, 2013 (Springer,
Berlin).
10. COMET Project – Plug-and-Produce COmponents and METhods for Adaptive Control of In-
dustrial Robots Enabling Cost Effective, High Precision Manufacturing in Factories of the
Future. European 7th Framework Programme, reference number 258769,
http://www.cometproject.eu.
11. Furferi R., Governi L., Volpe Y. and Carfagni M. Design and assessment of a machine vision
system for automatic vehicle wheel alignment. International Journal of Advanced Robotic
Systems, 2013, 10(1).
12. Ryuh B. and Pennock G.R. Robot Automation Systems for Deburring. Industrial Robotics:
Programming, Simulation and Applications, L. K. Huat (Ed.), 2006, ISBN: 3-86611-286-6,
InTech.
13. Lawson D.K. Deburring tool. U.S. Patent 6,974,286 B2, filed Jul. 25, 2003, and issued Dec.
13, 2005.
14. Carfagni M., Furferi R., Governi L. and Volpe Y. A vane motor automatic design procedure,
International Journal on Interactive Design and Manufacturing, 2013, 7(3), 147-157.
15. Altintas Y. Manufacturing automation, Metal Cutting Mechanics, Machine Tool Vibrations,
and CNC Design, 2012 (Cambridge University Press, New York).
16. Berselli G., Pellicciari M., Bigi G. and Andrisano A.O. Virtual prototyping of a compliant
spindle for robotic deburring. Springer, Lecture Notes in Electrical Engineering, 2016, ISBN:
978-981287986-8, vol. 365, pp. 17-30.
Tolerances and uncertainties effects
on interference fit of automotive steel wheels

Stefano Tornincasa1, Elvio Bonisoli1,*, Marco Brino1


1 Politecnico di Torino, Dept. of Management and Production Engineering, Torino, Italy
* Corresponding author. Tel.: +39-011-090-7274; fax: +39-011-090-7299.
E-mail address: elvio.bonisoli@polito.it

Abstract Indirect estimation of the stiffening effect caused by the fitting process
of an automotive wheel is hereby presented to detect optimal interference of au-
tomotive steel wheels. The effects are related to components and assembly charac-
teristics, such as masses and natural frequencies. Both the components of the
wheel, which are disc and rim, are subject to generalised tolerances and uncertain-
ties, mainly related to elasto-plastic material properties, dimensional and geomet-
rical tolerances and manufacturing process parameters. Taking into account the
theoretical change in the dynamic properties of a pre-stressed structure with re-
spect to its non-stressed condition, the stiffening effect caused by the fitting pro-
cess is expected to bring consequences on the natural frequencies of particular and
representative modes of the assembly. Moreover, the dynamic behaviour of the as-
sembly can be related to the one of the two separate components, in order to im-
prove the indirect estimation of the pre-stressed condition. The methodology is
developed starting from numerical and experimental modal analysis, building a
meta-model based on these training data, and then evaluating its performance on a production wheel case. The optimal interference fit estimations are tested on
a standard steel wheel for the Iveco Ducato commercial vehicle. Then to evaluate
the robustness of the method, the meta-model is used for a compact spare tyre of a
saloon car.

Keywords: Dimensional and geometrical tolerances, automotive wheels, inter-


ference fit, finite element method, experimental modal analysis.

1 Introduction

Indirect estimation of the stiffening effect caused by the fitting process of an au-
tomotive wheel is the aim of a research collaboration between the Virtual Product
Development Team of Politecnico di Torino and Magnetto Wheels company
(MW) [1-3]. The industrial partner is leader in the manufacturing of automotive

© Springer International Publishing AG 2017 665


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_67

steel wheels. Tolerances and uncertainties on materials, dimensional and geomet-


rical properties, and manufacturing process parameters affect the final result of op-
timal fitting of the components of a steel wheel [4-6]. The idea to control the op-
timal result in complex industrial applications is handled using emerging
numerical techniques such as meta-modelling approaches coupled to well-known
dynamic tools like modal analysis [7, 8]. The effects of interference fit of the two
components of a wheel, disc and rim, are related to components and assembly
characteristics, such as masses and natural frequencies [3, 9-12]. Suitable experi-
mental tests or numerical simulations of the fitting process can be tuned to define
a training phase for evaluating a black-box meta-model able to detect the interfer-
ence fit between disc and rim through indirect estimations. The final goal is the in-
line evaluation during the production of the interference fit realised between disc
and rim in a steel wheel through indirect measurements [12]. The internal stress
induced in the fitting process is a direct estimator of the interference fit. Natural
frequencies of the wheel depend on this internal stress state, and can therefore be related to the natural frequencies and masses of the disc and rim.

The research activity is divided into two main steps: in the first, the methodology is defined and applied to a selected case-study, the Iveco Ducato commercial vehi-
cle. The main parameters of the fitting process are characterised by their reference values and acceptability ranges. A meta-model is built starting from a training
phase on the model. Experimental modal analysis of disc, rim and then of the
stressed wheel are used to define and verify the indirect estimation of interference
fit realised on the wheel. In the second phase the methodology is applied to a different case-study, a compact spare tyre of a saloon car, for which prior knowledge was more limited and the behaviour had not yet been fully analysed.

2 Test case, geometry and process

The optimal interference fit estimations are tested on standard steel wheels for the
Iveco Ducato commercial vehicle. Then to evaluate the robustness of the method,
the meta-model is used for a compact spare tyre of a saloon car.
In this last case, Figure 1 shows the two components, disc and rim, in the nominal
case. Both components are obtained from a calibrated steel sheet through different steps of cutting, stamping and forming.
Semi-finished products are sensitive to sheet thickness, plastic deformations, spring-back and residual internal stresses of the process itself. MW provides the acceptability ranges for the most important parameters involved in the semi-finished products.
The automotive wheels are assembled through a manufacturing process called “fit-
ting” where, by mutual deformation of the disc and the rim during the process, the
wheel is assembled. Due to the interference fit condition defined on the compo-
nent design stage, residual stresses are present after the process (Figure 2). These
residual stresses might have small or large effect on the wheel behaviour, thus
they must not be neglected [11, 12]. Figure 3 shows the equivalent Von Mises
stresses after a nonlinear simulation of the fitting process. The vehicle loads acting
on a pre-stressed structure produce relevant differences in the overall stress pattern
and, thus, the internal stresses due to the process cannot be neglected. The quantification of these stresses cannot be experimentally performed on each wheel, therefore a simple, fast and reliable method for an indirect estimation could be
useful for the purpose.

Fig. 1. Isometric view (left) and cross section (right) of the disc (a) and rim (b) considered.

Fig. 2. Fitting process in elasto-plastic FEM simulation.

A list of 17 parameters is defined as independent and relevant to describe the fitting process. The most relevant ones are selected through nonlinear simulations of the fitting process with finite element (FE) approaches.
The main dimensional and geometrical parameters which most affect the residual stress condition are:

• the diameter of the disc Φdisc and the diameter of the rim Φrim, which are the two direct causes of the interference fit amount of the assembled wheel

    Φfit = Φrim − Φdisc ,    (1)

• the thickness of the disc tdisc and the thickness of the rim trim, which determine the different associated stiffness of the sections involved in the mutual deformation of the components,

Fig. 3. Simulations of the Von Mises equivalent pattern on the pre-stressed wheel after fitting
process (left) and effect of vehicle loads (right).

• the flange angle of the disc, which affects the surface involved, together with the axial symmetry of the two components, which leads to some particular properties.

The main physical parameters taken into account are:

• the mass of the two components, mdisc and mrim,
• the elastic isotropic Young's moduli of the materials, Edisc and Erim,
• the friction coefficient fwheel (dry or greased).

The main manufacturing process parameter considered is the reaction force Ffit between disc and rim during the fitting process. Additionally, the maximum Von Mises stress can be monitored.
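The tolerance sensitivity of the interference fit in eq. (1) can be illustrated with a simple Monte Carlo stack-up. The nominal diameters and tolerance half-widths below are hypothetical placeholders, not MW's actual acceptability ranges:

```python
import random

# Hypothetical nominal diameters and tolerance half-widths in mm;
# the real acceptability ranges are proprietary to MW and not shown here.
PHI_RIM_NOM, PHI_RIM_TOL = 380.00, 0.20
PHI_DISC_NOM, PHI_DISC_TOL = 379.60, 0.10

def sample_interference(n=10_000, seed=42):
    """Monte Carlo stack-up of eq. (1): Phi_fit = Phi_rim - Phi_disc."""
    rng = random.Random(seed)
    fits = []
    for _ in range(n):
        phi_rim = rng.uniform(PHI_RIM_NOM - PHI_RIM_TOL, PHI_RIM_NOM + PHI_RIM_TOL)
        phi_disc = rng.uniform(PHI_DISC_NOM - PHI_DISC_TOL, PHI_DISC_NOM + PHI_DISC_TOL)
        fits.append(phi_rim - phi_disc)
    return fits

fits = sample_interference()
# Worst-case band: nominal 0.40 mm +/- (0.20 + 0.10) mm
print(min(fits), max(fits))
```

With independent uniform variations, the extreme stack-up values occur rarely, which is exactly why a purely worst-case tolerance analysis overestimates the spread of Φfit.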

3 Modal analysis and effects of fitting

Modal analysis on both the components and the assembly can be performed nu-
merically, using Finite Element Method (FEM), and experimentally, using Exper-
imental Modal Analysis (EMA) techniques, in order to find possible relevant rela-
tionships and validate them on the real components.

The EMA involved is impact testing with an instrumented load-cell hammer and two uniaxial accelerometers. For both component and assembly EMA, three setups are consid-
ered in different sections of the objects to acquire the information needed. The ac-
quisition and the post-processing are performed in LMS Test.Lab.
Taking into account the theoretical change in the dynamic properties of a pre-
stressed structure with respect to its non-stressed condition, the stiffening effect
caused by the fitting process is expected to bring consequences on the natural fre-
quencies of particular and representative modes of the assembly.
The reaction force between disc and rim produces a state of compression in the
disc and of traction in the rim. For the wheel, the system is linearised in the final
state of the fitting process [10, 11]. Thus, the dynamics can be described by the
expression:


    M ẍ + (Kgeom + ΔKstress) x = 0    (2)

where the mass matrix of the wheel M is composed of the corresponding mass terms of disc and rim, while the global stiffness matrix is composed of a geometric stiffness matrix Kgeom, due to the two components, and a differential matrix term ΔKstress, due to the fitting process and the internal stresses inside the system.
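Eq. (2) implies that a positive stress-stiffening contribution ΔKstress shifts every natural frequency upwards. A minimal 2-DOF numerical sketch, with illustrative matrices rather than identified wheel data:

```python
import numpy as np

# Illustrative 2-DOF system for eq. (2): M*x'' + (K_geom + dK_stress)*x = 0.
# Matrix values are placeholders, not identified wheel data.
M = np.diag([1.0, 1.0])
K_geom = np.array([[200.0, -50.0],
                   [-50.0, 120.0]])
dK_stress = 0.05 * K_geom          # stiffening contribution of the fitting process

def natural_frequencies(M, K):
    """Natural frequencies in Hz from the eigenvalues of M^-1 K."""
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sqrt(np.sort(w2.real)) / (2.0 * np.pi)

f_unstressed = natural_frequencies(M, K_geom)
f_fitted = natural_frequencies(M, K_geom + dK_stress)
print(f_fitted - f_unstressed)     # both frequency shifts are positive
```

This is the mechanism exploited by the indirect estimation: the measured frequency shift of fit-sensitive modes carries information about ΔKstress, and hence about the interference fit.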

The analyses considered are: a numerical modal analysis of the single compo-
nents, a quasi-static nonlinear analysis of the fitting process, then a numerical
modal analysis of the assembly with a pre-stressed condition after the fitting step.
The inputs of the analyses are, in particular, the parameters related to the dimen-
sional and geometrical characteristics, the material properties and the friction co-
efficients, while the outputs of the modal analyses are the natural frequencies and
the mode-shapes, and for the fitting process the reaction force and the maximum
Von Mises stress. These FEM models are performed using Abaqus and, in particular for the fitting process, with a large-displacement approach.
A parametric sensitivity analysis is carried out in order to determine which param-
eters, among all the aforesaid ones, affect more these outputs, together with the re-
lated correlations.
Moreover, the dynamic behaviour of the assembly can be related to the one of the
two separate components, in order to improve the indirect estimation of the pre-
stressed condition. Regarding the axisymmetric characteristic, a direct linear correlation between the mass of the single components and the frequencies of some relevant vibration modes is evidenced, in contrast to the general expectation that a higher mass leads to a lower resonance frequency.
The main cause of the difference in mass is the starting coil thickness which, for some particular mode-shapes, sets the height of the cross-section, with the consequence of a cubic influence on the stiffness and a linear influence on the mass.

The linear relationship between the natural frequency of the component mode r
and the metal sheet thickness t results [3]:

    fr = (1/2π) √(kr/mr) ∝ (1/2π) · t/(π² Φ²) · √(E/ρ)    (3)

This result allows an indirect estimation of these particular resonant frequencies


by means of the mass of the component alone, without performing experimental modal identification on each component.
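Under the proportionality of eq. (3), both the mass and the natural frequency of these modes scale linearly with the sheet thickness t, so the mass alone predicts the frequency. A sketch with hypothetical reference values, not measured data:

```python
# Indirect frequency estimate implied by eq. (3): for the thickness-driven
# modes, m ∝ t and f_r ∝ t, hence f_r scales linearly with the mass.
# The reference disc values below are hypothetical, not measured data.
F_REF_HZ = 250.0    # natural frequency of a reference disc mode
MASS_REF_KG = 6.80  # mass of the same reference disc

def estimate_frequency(mass_kg):
    """Predict the mode frequency of a produced disc from its weighed mass."""
    return F_REF_HZ * (mass_kg / MASS_REF_KG)

# A disc 2% heavier (thicker sheet) is predicted ~2% higher in frequency:
print(estimate_frequency(1.02 * MASS_REF_KG))
```

In production, weighing each component is far cheaper than an impact test, which is what makes this linear mapping attractive for in-line use.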

Regarding the reaction force during the fitting process, a direct relationship between the amount of interference fit and the force required is evidenced, as expected, in both numerical analyses and experimental tests, while a significant influence of small uncertainties in the friction coefficient is numerically observed. Because the friction coefficient is difficult to determine accurately, the reaction force alone is not conclusive, but its behaviour can be helpful together with the other outputs.
A component-to-assembly correlation of the mode-shapes is then carried out in
order to couple the position on the mode order of each component mode to the
same mode-shape on the final assembly. This relationship makes it possible to relate the change in frequencies of the component to the change in frequencies of the assembly, and to “follow” the modes and relate them to the interference fit amount. The stiffening effect due to the interference fit is thus evi-
denced with an increase of the natural frequencies of the modes with deformed
shape that are influenced by the components interactions in the assembly.

4 Meta-modelling and prediction

A simulation in the reference condition for all the parameters can provide the desired wheel, but it neglects the variability of each parameter: this ideal case is not representative of production. Nor is a simple sensitivity analysis, in which all the parameters but one are set to their nominal values and two simulations are run with the lower and upper bounds of the remaining parameter. This One Factor At a Time (OFAT) analysis shows that the most critical dimensional tolerance concerns Φrim, due to the wide range within which this dimension can be controlled in the manufacturing of the rim; however, the tool cannot capture the interactions arising from simultaneous variations of several parameters.
Meta-models represent suitable approaches for identifying such complex interactions during in-line production, and they provide simplified models suitable for Monte Carlo simulations [7, 8].

A Polynomial Chaos Expansion (PCE) approach is proposed for describing the re-
lationship of the interference fit with disc and rim masses and some natural fre-
quencies of specific flexible modes of the entire wheel:

    Φfit^PCE = Φfit^PCE(mdisc, mrim, fwheel) = Σ_{j=0}^{P−1} aj Ψj(ξ1, ξ2, ξ3)    (4)

A training phase to evaluate the unknown coefficients of the PCE model is con-
ducted, starting from experimental measurements:

    Φfit^PCE = Σ_{j=0}^{P−1} aj Ψj(ξ1, ξ2, ξ3)    (5)

Rewriting eq. (5) as a function of the unknown coefficients, a least-squares inverse problem is obtained and solved:

    [ y1 ]   [ Ψ0(x1)   Ψ1(x1)   …  ΨP−1(x1) ] [ a0   ]
    [ y2 ] = [ Ψ0(x2)   Ψ1(x2)   …  ΨP−1(x2) ] [ a1   ]
    [ ⋮  ]   [ ⋮        ⋮        ⋱  ⋮        ] [ ⋮    ]
    [ yn ]   [ Ψ0(xn)   Ψ1(xn)   …  ΨP−1(xn) ] [ aP−1 ]

    i.e. {Y} = [Ψ]{A}    (6)
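The identification in eq. (6) is an ordinary least-squares problem. A minimal numerical sketch with a one-dimensional probabilists' Hermite basis; the paper's basis is multivariate in (ξ1, ξ2, ξ3), and all names and values here are illustrative:

```python
import numpy as np

# Least-squares solution of eq. (6): recover the PCE coefficients a_j from
# training outputs y_i and basis evaluations Psi_j(x_i). A 1-D probabilists'
# Hermite basis up to order 2 is used purely as an illustration.

def hermite_basis(xi):
    """Rows of the [Psi] matrix: He_0 = 1, He_1 = xi, He_2 = xi^2 - 1."""
    return np.column_stack([np.ones_like(xi), xi, xi**2 - 1.0])

rng = np.random.default_rng(0)
xi = rng.standard_normal(50)                   # training inputs
a_true = np.array([0.40, 0.10, 0.02])          # "unknown" coefficients
y = hermite_basis(xi) @ a_true                 # synthetic training outputs {Y}

Psi = hermite_basis(xi)                        # [Psi] matrix of eq. (6)
a_hat, *_ = np.linalg.lstsq(Psi, y, rcond=None)
print(a_hat)                                   # recovers a_true
```

With more training samples than basis terms (here 50 vs. 3), the rectangular system is solved in the least-squares sense, which is exactly the inverse problem stated in eq. (6).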

This meta-model is then used with experimentally measured or numerically simulated discs, rims and corresponding wheels to predict the interference fit. This indirect diagnostic method is the core of the possible in-line production application.
The flow-chart of the interference fit estimation is sketched in Figure 4, and the
implementation is built with MATLAB routines.

Fig. 4. Flow-chart of the interference fit evaluation with 2 stages of meta-modelling.

The inputs of the approach are:

• the experimental mass of the two components, mdisc and mrim,
• some experimental frequencies fr of the corresponding wheel, acquired through EMA after the fitting process.

A first linear meta-model, obtained from the component weights, provides the frequencies of the disc and rim modes sensitive to fitting. A second, more complex, nonlinear meta-model is used to estimate the interference fit starting from some disc and rim natural frequencies of specific flexible modes and from natural frequencies of the entire wheel.
The outputs of the method are:

• the maximum force during the fitting process Ffit and an indirect estimation of the interference fit Φfit^PCE,
• additional in-line production parameters, such as the stiffness sensitivity to parameter variability or its effect on the manufacturing process.
Figure 5 shows the results achieved on Monte Carlo simulations acting on two dif-
ferent modes. The wheel mode 3 is mainly a “bell” mode of the flange
(modeshape represented in the left side of Figure 5). It is monotonically sensitive
to the fitting process and it is used in the meta-model to estimate the maximum fit-
ting force and the interference fit. The first bell modes are taken into account, be-
ing more sensitive to the interference fit amount with respect to the higher order
ones. The wheel mode 11 is, instead, mainly a “drum” mode of the central part of
the disc (modeshape represented in the right side of Figure 5). It represents a false
candidate in the evaluation of the interference fit and it is not considered in the in-
terference fit estimation. In both graphs black dots are experimental measures to
train the meta-model. If needed, numerical training data can also be adopted. The red
triangles are the meta-model outputs corresponding to the training data, obtained with the same inputs as the black dots.

Fig. 5. Comparison of natural frequencies of experimental training data, meta-model results and
meta-model interpolations in the case of a sensitive mode (left) and of a negligible mode (right)
for the evaluation of the interference fit.

They lie close to the training data, due to the interpolating nature of the meta-model approach. Finally, blue dots represent Monte Carlo simulations, using
the meta-model.

From this example, the approach appears sufficiently robust for this industrial application. Once formalised, it could enable an in-line implementation for a warning-based check of out-of-bound wheels, with a consequent product quality improvement.

5 Conclusions

The indirect estimation of the stiffening effect caused by the fitting process of an
automotive wheel is proposed to detect the optimal interference of automotive
steel wheels. The method is based on components and assembly dynamic proper-
ties and their effects on natural frequencies of the stressed wheel structure, after
the fitting process. Generalised tolerances and uncertainties of disc, rim and manu-
facturing process are taken into account in the stiffening effect caused by the fit-
ting process through meta-modelling approaches. The indirect estimation of the
pre-stressed condition and the interference fit are tested on two different cases of
standard steel wheels. The FEM analysis process time for a single wheel (compo-
nent modes, fitting and wheel modes) is in the order of an hour, while the PCE
meta-model generation from training data and the estimation time for a sample of
500 generated items is in the order of 4-5 minutes. The future possibility of ob-
taining an in-line implementation is introduced for a warning-based check of out-
of-bound wheels with product quality improvement.

Acknowledgments The authors thank D. Rovarino, R. Majocchi and G. Gallio of MW Italia for
providing the technical material, suggestions and supporting the research in this study. The au-
thors would like to thank the MW Italia S.p.A. for permission to publish this work. Special
thanks go to all of those who have contributed to the result achievement of this project.

References

1. Bonisoli E., Marcuccio G., Tornincasa S., Detection of stress-stiffening effect on automotive
components. Model Validation and Uncertainty Quantification, Vol. 3, 2014, 335-343.
2. Bonisoli E., Marcuccio G., Tornincasa S., Vibration-based stress stiffening effect detection
on automotive wheels through PCE meta-model. Submitted for publication, 2015.
3. Bonisoli E., Brino M., Scapolan M., Lisitano D., Stochastic modelling and experimental out-
comes of modal analysis on automotive wheels. International Journal of Mechanics and Con-
trol, 2015, 16(2), 17-23.
4. Wang X., Zhang X., Simulation of dynamic cornering fatigue test of a steel passenger car
wheel. International Journal of Fatigue, 2010, 32, 434-442.

5. Firat M., Kozan R., Ozsoy M., Mete O.M., Numerical modeling and simulation of wheel ra-
dial fatigue tests. Engineering Failure Analysis, 2009, 16, 1533-1541.
6. Grubisic V., Fischer G., Procedure for optimal lightweight design and durability testing of
wheels. International Journal of Vehicle design, 1984, 5(6), 659-671.
7. Forrester A.I.J., Sobester A., Keane A., Engineering design via surrogate modelling. 2008
(John Wiley & Sons, Chichester).
8. Blatman G., Sudret B., Efficient computation of global sensitivity indices using sparse poly-
nomial chaos expansion. Reliability Engineering and System Safety, 2010, 95, 1216-1229.
9. Lieven N.A.J., Greening P., Effect of experimental pre-stress and residual stress on modal
behaviour. Philosophical Transaction of the Royal Society Lond., 2001, 359, 97-111.
10. Zhang Y., McClain B., Fang X.D., Design of interference fits via finite element method. In-
ternational Journal of Mechanical Sciences, 2000, 42, 1835-1850.
11. Kompella R.S., Bernhard R.J., Variation of structural-acoustic characteristics of automotive
vehicles. Noise Control Engineering Journal, 1996, 44(2), 93-99.
12. Hasselman T., Chrostowski J.D., Effects of product and experimental variability on model
verification of automobile structures. Proceedings of IMAC, International Modal Analysis
Conference XVI, 1997, 1, 612-620.
An effective model for the sliding contact forces
in a multibody environment

Michele Calì1*, Salvatore Massimo Oliveri1, Gaetano Sequenzia1 and Gabriele


Fatuzzo1

1 Dipartimento di Ingegneria Elettrica, Elettronica e Informatica, V.le A. Doria, 6 – 95125 Catania (Italy)
* Corresponding author: Michele Calì. Tel.: +39.095.738.2400; fax: +39.095.33.79.94. E-mail address: mcali@dii.unict.it

Abstract This work describes an integrated method of 3D modelling algorithms


with a modal approach in a multibody environment which provides a leaner and more efficient simulation of flexible component contacts, realistically reproducing
system impacts and vibrations. A non-linear numerical model of the impulse con-
tact forces based on the continuity approach of Lankarani and Nikravesh is devel-
oped. The model developed can evaluate deformation energy taking into account
the material's characteristics, surface geometries and the velocity variations of the
bodies in contact. ADAMS®-type modelling is applied to the sliding contacts of
the links of a chain and its mechanical tensioner (“blade”) in the timing of an in-
ternal combustion engine. The blade was discretized by subdividing it into smaller
components inter-connected with corresponding centres of gravity through 3D
General Forces. Static and dynamic tests were performed to evaluate the stiffness,
damping and friction parameters for the multibody model and to validate the
methodology.

Keywords: Impact, friction forces, flexible body, slip, hysteresis damping.

1 Introduction

Analysing the dynamic behaviour of mechanical systems with numerous sliding contacts between deformable bodies requires the numerical solution of complex differential-algebraic equations with calculation codes, especially in multibody simulations
[1-3]. The need to assess structural elasticity for stress and strain and set up effi-
cient control systems for positioning and/or vibration which take into account sys-
tem or component flexibility requires an integrated approach using dynamic mul-
tibody and structural analysis simulation [4, 5].

© Springer International Publishing AG 2017 675


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_68

There are two approaches to resolving impacts within multibody systems: continuous and discontinuous. Both methods resolve contact by considering the impacting bodies as infinitely rigid and, in calculating the forces exchanged, take into account the elastic and deformation characteristics of the contact zones through appropriate constants.
is extensive and the deformations are such that body shape is altered. In such
situations, body deformation must be taken into account, and the non-linear contact force calculation must be made through continuous finite element models.
The flexibility factor within the multibody simulation software does not currently
encompass extended contacts and mobile impacts between deformable bodies
since it cannot deal with mobile reference systems on flexible bodies. So, simula-
tions often utilise Model Order Reduction (MOR) models to lower the dimension-
ality of the issue by minimising the loss of precision. However, in these reductions
there remain the notable shortcomings of high non-linearity, non-smoothness and
excessive computation associated with contact management algorithms [6, 7]. The
proposed methodology provides a realistic simulation of systems with extended
sliding contact between deformable bodies able to reproduce impacts and vibra-
tions. The numerical model developed can correctly analyse mechanical systems
in which compliance constraints and geometry pliability during mechanical con-
tact must be taken into account.
The methodology uses the continuous model of Lankarani and Nikravesh, which is well-suited to calculating non-linear contact forces and, by discretizing the deformable body, allows for a continuous calculation of the perpendicular contact and tangential friction forces on a highly flexible body. To be able to easily inte-
grate the motion equations, automatic time-step selection in the multibody envi-
ronment was used, via a subroutine, based on a predictor-corrector algorithm by
Shampine and Gordon. Experimental static and dynamic measurements helped
evaluate the parameters of rigidity, friction and damping to insert into the model,
taking into account the effects of the lubricant, the material properties of the im-
pacting bodies and the geometric characteristics of the contacting surfaces. The
methodology was applied to modelling the contact between chain links and a
highly deformable mechanical tensioner in the drive timing system of an ICE.
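For reference, the Lankarani–Nikravesh continuous contact model expresses the normal force as F = K δⁿ [1 + 3(1 − cₑ²)/4 · δ̇/δ̇⁽⁻⁾], a Hertzian term augmented with hysteresis damping. A minimal sketch; the stiffness K, exponent n and restitution coefficient below are placeholder values, not the parameters identified experimentally in this work:

```python
# Lankarani-Nikravesh continuous contact force with hysteresis damping:
#   F = K * d**n * (1 + 3*(1 - e**2)/4 * d_dot / d_dot_minus)
# d: penetration, d_dot: penetration rate, d_dot_minus: initial impact
# velocity, e: restitution coefficient. K, n, e are illustrative placeholders.

def contact_force(d, d_dot, d_dot_minus, K=5.0e8, n=1.5, e=0.8):
    """Normal contact force; zero when the bodies are not in contact."""
    if d <= 0.0:
        return 0.0
    hysteresis = 3.0 * (1.0 - e * e) / 4.0 * d_dot / d_dot_minus
    return K * d**n * (1.0 + hysteresis)

# Compression (d_dot > 0) yields a larger force than restitution (d_dot < 0),
# producing the hysteresis loop that dissipates the impact energy:
print(contact_force(1e-4, 0.05, 0.05) > contact_force(1e-4, -0.05, 0.05))
```

The velocity-dependent term makes the loading and unloading branches differ, which is how the model captures the energy loss of each impact without a separate restitution step.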

2 Methodology

The tensioner under analysis is that used by the primary stage of the timing chain
drive system of a high performance V-12 ICE studied also in its subsystems by the
authors previously [8, 9]. The tensioner has 5 parts: bracket, blade and 3 leaf
springs (Fig. 1). The steel bracket holds the blade with a pin and a cantilevered
sliding plane allows the blade to flex which transmits the damped transverse
forces of the chain to the cylinder block. The polyamide (PA 66) blade has a con-
cave seat which holds three steel leaf springs which work in parallel and provide
non-linear flexional rigidity.

(a) (b)
Fig. 1. Mechanical tensioner: (a) system components; (b) geometric model.

The tensioner guides the chain in slack side; the blade's convex surface is sub-
ject to impacts and sliding which opposes the force of the chain's pull and facili-
tates regular sprocket meshing even in the harshest conditions.

2.1 Tensioner modelling

Multibody modelling the flexibility of the blade is based on Component Mode


Synthesis (CMS) or the modal reduction proposed by Craig-Bampton. The com-
ponent is discretized into smaller interconnected constituent parts according to
their centres of gravity by means of 3D General Forces. These forces allow the
parts to exchange both axial force components and torsional ones in the three main
directions (Fig. 2).
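The Craig–Bampton reduction mentioned above keeps the boundary DOFs physically and replaces the interior DOFs with a few fixed-interface modes. A minimal sketch on a toy spring–mass chain; the matrices are illustrative, not the blade model:

```python
import numpy as np
from scipy.linalg import eigh

# Minimal Craig-Bampton (CMS) reduction: boundary DOFs b are retained,
# interior DOFs i are condensed into constraint modes plus a few
# fixed-interface normal modes. Toy matrices, not the blade's FE model.

def craig_bampton(M, K, b_idx, n_modes):
    n = K.shape[0]
    i_idx = [j for j in range(n) if j not in b_idx]
    Kii = K[np.ix_(i_idx, i_idx)]
    Kib = K[np.ix_(i_idx, b_idx)]
    Mii = M[np.ix_(i_idx, i_idx)]
    # Static (constraint) modes and fixed-interface dynamic modes
    Phi_c = -np.linalg.solve(Kii, Kib)
    _, Phi_n = eigh(Kii, Mii)
    Phi_n = Phi_n[:, :n_modes]
    # Transformation T maps reduced coordinates (u_b, eta) to full DOFs
    T = np.zeros((n, len(b_idx) + n_modes))
    for r, j in enumerate(b_idx):
        T[j, r] = 1.0
    T[np.ix_(i_idx, range(len(b_idx)))] = Phi_c
    T[np.ix_(i_idx, range(len(b_idx), len(b_idx) + n_modes))] = Phi_n
    return T.T @ M @ T, T.T @ K @ T

# 5-DOF spring-mass chain: keep the two end DOFs plus 1 interior mode
n = 5
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
Mr, Kr = craig_bampton(M, K, b_idx=[0, n - 1], n_modes=1)
print(Mr.shape)   # reduced to 2 boundary DOFs + 1 modal coordinate
```

Keeping the boundary DOFs physical is what lets the reduced flexible body exchange contact forces with the rigid parts of the multibody model at its interfaces.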

Fig. 2. Contact model.

The parts as a whole, together with the rigidity and damping components of the General Forces, provide a numerical model with the same frequencies, damping and deformation capability as the real component. Fig. 3 shows a block diagram of the methodology. The subdivision of the blade into parts was carried out on the undeformed blade obtained by reverse engineering (Fig. 4a). Since the continuity of the blade's upper curvature (the contact area) had to be assured during deformation, the following modelling strategies were set up. The blade's upper curvature
profile was approximated by three contiguous circular arcs of radii R1, R2 and R3 and angles θ1, θ2 and θ3 (Fig. 4b).

Fig. 3. Summary diagram of the proposed methodology.

(a)

(b)
Fig. 4. (a) Blade obtained by reverse engineering; (b) profile subdivision.

The radii and angles are set so that the blade is subdivided into an integer number z of parts of equal volume. Appropriate constraints allow the radii R1, R2 and R3 as well as the angles θ1, θ2 and θ3 to be varied during blade deformation, thereby keeping the three zones of the upper curvature tangential at their boundaries. Setting to four the maximum number of parts which can come into contact simultaneously with each chain link, the blade is discretized into z = 22 parts. Part 1 is con-
strained by a revolute constraint to the bracket pin while part 22 is constrained
unilaterally to slide on the bracket's shelf tip (Fig. 5b). These are the two extreme
parts of the deformable model which interface with the rigid parts of the multi-
body model. The 20 parts between these two extremes, each of the same mass and
volume, have equivalent density (Tab.1) and inertia, weighted averages of density
of the polyamide and steel.
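The subdivision rule lends itself to a short numerical sketch. Everything below is illustrative: the radii and angles are hypothetical placeholders (not the measured blade values), and equal volume is approximated as equal arc length under a constant cross-section.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

// Total length of the upper-curvature profile: three tangent circular
// arcs of radii R[i] spanning angles theta[i] (radians).
double profileLength(const std::array<double, 3>& R,
                     const std::array<double, 3>& theta) {
    double L = 0.0;
    for (int i = 0; i < 3; ++i) L += R[i] * theta[i];
    return L;
}

// Curvilinear abscissae splitting the profile into z parts of equal
// length (equal volume under a constant cross-section, an assumption).
std::vector<double> partBoundaries(const std::array<double, 3>& R,
                                   const std::array<double, 3>& theta, int z) {
    const double L = profileLength(R, theta);
    std::vector<double> s(z + 1);
    for (int k = 0; k <= z; ++k) s[k] = L * k / z;
    return s;
}
```

With hypothetical radii {120, 200, 150} mm and angles {0.6, 0.9, 0.5} rad, `profileLength` gives 327 mm and `partBoundaries(..., 22)` yields 23 equally spaced boundary abscissae.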
An effective model for the sliding contact forces ... 679

2.2 Sliding contact force between links and deformable elements

The contact forces between the tensioner parts and the chain links in the multibody model are calculated through a C++ subroutine which, at each time step, identifies the contacting point pairs and the exchanged forces. In particular, the action between the mobile parts and the flexible body, discretized into parts, is exchanged between the centres of mass using master points. It was assumed in the modelling that all the points move in the XY plane.

(a) (b)

Fig. 5. (a) Position vectors between links and parts; (b) blade-bracket unilateral constraint.

The contact subroutine evaluates the distances of the part boundary points Aj and Bj closest to the centre of mass Oi of link i and, through them, obtains the orientation (in the XY plane) of the mobile parts and the positions of points Pi and Qi relative to them. Fig. 5(a) shows the relative and absolute position vectors.
The perpendicular distance of points P and Q from the blade surface provides the two perpendicular contact forces (F) between the chain links and the blade parts:

(1)

where H is the minimum distance between the centre of gravity of the i-th link and the contact surface; H is also the minimum distance between the link centres (points Pi and Qi) and the blade's contact surface. These forces are shared, through a weight function, between points Pi and Qi and the two pairs of centres of mass Oj of the closest blade parts. The weight function assumes a unitary value when point Pi or Qi is aligned at a right angle with the centre of gravity of a blade part; between two successive alignments it varies linearly between 1 and 0, inversely proportional to the relative distances PiOj and PiOj+1. To classify the possible chain movements relative to the tensioner surface, three configurations were identified for each link: (1) no contact between the blade parts and the link, so no contact force is exchanged; (2) one guiding blade-part point is in contact with the link surface; (3) both blade-part edges are in contact with the link surface, which corresponds to having the sliding face (Ai-Bi) in contact with the entire guiding surface of the tensioner. To evaluate the contact force, the following geometric characteristics were used: the length L of the blade part; the chain pitch P; the distance H between the link's centre of gravity and the link's contact surface; the rotation of the link around its own centre of gravity. The forces and moments applied to the link centres of gravity are then:

(2)

(3)

To generalise the procedure, it can be stated that the number and positions of the master points are linked to the length L of the blade parts. This number is established such that the deformation of the flexible body is always reliably reproduced (as evaluated experimentally) in terms of velocity, acceleration and frequency.
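A simplified 1D reading of this contact evaluation can be sketched as follows; the penalty force form and the inverse-distance weights are assumptions of the sketch, with k and c taken from the sliding-contact values of Table 2.

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Unilateral penalty contact force between a link and the blade surface:
// H is the current link-to-surface distance, Hdot its rate, H0 the
// contact threshold; k [N/mm] and c [N*s/mm] follow Table 2.
double contactForce(double H, double Hdot, double H0,
                    double k = 1000.0, double c = 950.0) {
    const double delta = H0 - H;              // penetration depth
    if (delta <= 0.0) return 0.0;             // configuration 1: no contact
    const double F = k * delta - c * Hdot;    // damping opposes approach
    return F > 0.0 ? F : 0.0;                 // push-only (unilateral)
}

// Linear weight function sharing a contact force between the two closest
// part centres, inversely proportional to the distances dj and dj1.
std::pair<double, double> shareForce(double F, double dj, double dj1) {
    const double wj = dj1 / (dj + dj1);       // closer centre, larger share
    return {wj * F, (1.0 - wj) * F};
}
```

For instance, a penetration of 0.1 mm at rest gives a 100 N force, and a point three times closer to one centre sends 75% of the force there.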

2.3 Time step selection

The numerous impacts produce very different instantaneous velocities and accelerations, so the equations of motion have widely spread eigenvalues. The system becomes stiff, requiring time step adjustments [10]. Baumgarte's method was applied to define the time step. In this way the equations of motion can be solved with the GSTIFF integrator using Shampine and Gordon's predictor-corrector algorithms.
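GSTIFF itself belongs to the commercial solver, but the underlying idea, rejecting and shrinking the step when predictor and corrector disagree, can be illustrated with a toy one-step scheme (Euler predictor, trapezoidal corrector); all names and tolerances here are illustrative, not the solver's.

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Toy predictor-corrector integration with step-size control for
// y' = f(t, y): Euler predicts, the trapezoidal rule corrects, and the
// predictor-corrector difference drives the step size, loosely mimicking
// how a stiff solver shrinks the step during impacts.
double integrate(const std::function<double(double, double)>& f,
                 double y, double t, double tEnd,
                 double h = 1e-3, double tol = 1e-6) {
    while (t < tEnd) {
        if (t + h > tEnd) h = tEnd - t;
        const double fp = f(t, y);
        const double yPred = y + h * fp;                        // predictor
        const double yCorr = y + 0.5 * h * (fp + f(t + h, yPred)); // corrector
        const double err = std::fabs(yCorr - yPred);
        if (err > tol && h > 1e-12) { h *= 0.5; continue; }     // reject step
        y = yCorr;
        t += h;
        if (err < 0.1 * tol) h *= 2.0;                          // relax step
    }
    return y;
}
```

On the stiff test problem y' = -50y, y(0) = 1, the routine automatically settles on a small step early on and grows it as the solution decays.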

3 Experimental validation

Static and dynamic experimental trials provided the working-frequency stiffness and damping values to assign to the General Forces. The dynamic trials were carried out by cyclically applying surface loads, impacting the chain with continuous increments from 0 to 300 N at frequencies varying from 0.00769 Hz to 0.333 Hz, while measuring the transverse displacements of the tensioner surface and the lengthening of the blade tip on the bracket shelf. The experimental set-up comprised: a Pfaff screw jack with a 0.1 mm/turn reduction gear; a 25 kN Laumas strain-gauge load cell with a sensitivity of 2 mV/V; an inductive HBM vertical displacement transducer (WA20, 20 mm, <0.1% f.s.); an inductive HBM horizontal displacement transducer (WA50, 50 mm, <0.1% f.s.); an HBM MGCplus acquisition system (24-bit ADC). The trials were carried out with lubricated leaf springs, applying work loads to the tensioner between the preload (21.6 N) (Fig. 6a) and the maximum load (138.3 N) (Fig. 6b) within the frequency range (0.00769-0.333 Hz) and measuring the corresponding deformation.

(a) (b)

Fig. 6. Overlapping experimental–multibody simulation: (a) preload 21.6 N; (b) load 138.3 N.

The results highlight the tensioner's non-linear behaviour, showing hysteresis cycles whose areas decrease as frequency increases. The acquisition of the blade profile during the dynamic tests, and its comparison with the multibody model, yielded the best combination of stiffness, damping and preload to assign to the General Forces which connect the blade parts. By means of a DoE (Design of Experiments) analysis, the differences between the experimental hysteresis cycles and the multibody ones were minimised (Fig. 7).

Fig. 7. Experimental and numerical transverse displacement at 0.00769 Hz and 0.333 Hz.

To take into account the deformation energy dissipated in each hysteresis cycle and to best characterise the tensioner, increasingly severe load cycles were applied up to the greatest elongation of the leaf spring (about 250 N), measuring the transverse displacements and blade elongations (Fig. 8a) along the bracket shelf. For these trials, the vertical displacement as a function of increasing load is interpolated by a 2nd-degree polynomial (Fig. 8b) with a determination coefficient R² = 0.93:

(4)
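Such a fit can be reproduced with a small least-squares routine; this generic sketch (not the authors' processing code) solves the normal equations by Gaussian elimination and reports R².

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Least-squares fit of a 2nd-degree polynomial y = c0 + c1*x + c2*x^2
// to (load, displacement) samples, with the determination coefficient
// R^2, the same kind of interpolation used for the curve of Fig. 8b.
struct QuadFit { std::array<double, 3> c; double r2; };

QuadFit fitQuadratic(const std::vector<double>& x, const std::vector<double>& y) {
    const int n = static_cast<int>(x.size());
    double S[5] = {0.0}, T[3] = {0.0};             // power sums and moments
    for (int i = 0; i < n; ++i) {
        double p = 1.0;
        for (int k = 0; k < 5; ++k) {
            if (k < 3) T[k] += y[i] * p;
            S[k] += p;
            p *= x[i];
        }
    }
    double A[3][4];                                // augmented normal matrix
    for (int r = 0; r < 3; ++r) {
        for (int cc = 0; cc < 3; ++cc) A[r][cc] = S[r + cc];
        A[r][3] = T[r];
    }
    for (int col = 0; col < 3; ++col) {            // elimination, partial pivot
        int piv = col;
        for (int r = col + 1; r < 3; ++r)
            if (std::fabs(A[r][col]) > std::fabs(A[piv][col])) piv = r;
        std::swap(A[col], A[piv]);
        for (int r = col + 1; r < 3; ++r) {
            const double f = A[r][col] / A[col][col];
            for (int k = col; k < 4; ++k) A[r][k] -= f * A[col][k];
        }
    }
    QuadFit out{};
    for (int r = 2; r >= 0; --r) {                 // back substitution
        double s = A[r][3];
        for (int k = r + 1; k < 3; ++k) s -= A[r][k] * out.c[k];
        out.c[r] = s / A[r][r];
    }
    double ybar = T[0] / n, ssRes = 0.0, ssTot = 0.0;
    for (int i = 0; i < n; ++i) {
        const double yi = out.c[0] + out.c[1] * x[i] + out.c[2] * x[i] * x[i];
        ssRes += (y[i] - yi) * (y[i] - yi);
        ssTot += (y[i] - ybar) * (y[i] - ybar);
    }
    out.r2 = 1.0 - ssRes / ssTot;
    return out;
}
```

Fitting data generated from an exact quadratic recovers its coefficients with R² = 1.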

Table 1 shows the mechanical properties of the extruded polyamide part (PA66) and of the flat steel leaf (48Si7).

(a) (b)
Fig. 8. (a) Maximum displacements; (b) curve displacement-load.

The fourth column shows the mechanical values of the blade parts in the multibody model. They are average values weighted by the volumes of polyamide and steel. Table 2 shows the values assigned to the General Force components, the density and inertia values of the blade parts, together with the stiffness and damping used during impacts. Table 3 shows the oil properties.
Table 1. Mechanical properties of tensioner materials.

Parameter                          Polyamide PA 66   Steel 48Si7   Equivalent material in M.B. model
Young Modulus [N/mm2]              3100              10000         28000
Density [kg/dm3]                   1.14              8             3.2
Poisson's Ratio                    0.41              0.3           0.8
Yield strength [N/mm2]             90                510           -
Operating Temperature range [°C]   -40 ÷ 170

Table 2. General-Force and sliding contact parameters.

General-Force parameters           Contact parameters
Stiffness [N*mm/degree]   1488     Stiffness [N/mm]   1000
Friction                  0.1      Friction           0.05
Damping [N*mm*s/degree]   1650     Damping [N*s/mm]   950
Preload [N*mm]            3500     Preload [N]        n.a.

Table 3. Oil properties.

Oil property               Value
Density [Ns2/mm4]          8.75 × 10^-10
Viscosity [Ns/mm2]         9.54 × 10^-9
Bulk modulus [MPa]         1450
Initial air fraction [%]   5

Table 4. Frequencies and relative damping factors.

Mode   Frequency [Hz]   Damping factor
1      109.06           1.8%
2      1131.2           1.95%
3      1404.7           1.9%
4      3167.5           2.13%
5      3588.4           2.01%
6      5420.9           2.1%
7      6520.2           2.22%
8      8163.4           2.31%
Table 4 shows the values of the first eight frequencies and the related tensioner damping, whose constraints and preload were calculated through a finite element model with 9857 nodes and 3076 TET10 elements.

4 Results

The proposed methodology can precisely evaluate the forces exchanged between the tensioner and the chain. The following diagrams show the impact forces as a function of time in a simulation in which the drive shaft angular velocity is 13000°/s. Figs. 9a, 9b and 9c show the forces exchanged in parts 1, 9 and 17. The maximum exchanged forces of part 17 are lower by almost an order of magnitude compared to the first four parts (1-4). The results show different impact behaviours along the tensioner surface, which can only be assessed by modelling the blade flexibly. These behaviours are even more noticeable in the diagram of the average forces during the start-up transient and at the constant velocity of 13000°/s (Fig. 9d).

(a) (b)

(c) (d)
Fig. 9. Contact force between link and blade parts.

With the chain accelerating, the average impact forces are at their highest in all the tensioner's parts. The experimental confirmation of the results' precision is shown by how well the frequency content corresponds to the experimental oscillation orders (for crankshaft and idler) shown in Fig. 10 of the authors' previous work [9]. The exchanged forces provide an accurate evaluation of the stress state of the blade. Figure 10 shows the maximum equivalent von Mises stress.

(a) (b)

Fig. 10. Von Mises stresses at 7750rpm: (a) on the upper surface; (b) on the internal leaves.

The maximum stresses (22.6 MPa) develop on the upper blade surface (Fig. 10a) and on the three steel leaves (96.6 MPa) (Fig. 10b). The results are well below the yield stress values of the polyamide (σs = 90 MPa) and of the harmonic spring steel (σs = 510 MPa). The results of the variable-frequency load testing reported in section 3 enabled calculating the deformation energy dissipated during the hysteresis cycles. The dissipated energy decreases with the impact frequency f and is proportional to the square of the maximum deflection Dtr,max:

(5)

Using a minimisation procedure, the optimum values of the constants k, a, b and c were calculated:

(6)

5 Conclusion

This work has illustrated a simulation methodology to model systems with extended sliding impacts between deformable bodies. Using appropriate constitutive laws, which take into account the material properties of the bodies in contact, the geometric characteristics of the colliding surfaces and their impact velocities, and applying Lankarani and Nikravesh's numerical model of continuous contact forces, the system could be modelled in a multibody simulation. The proposed model provided evaluations of the discontinuous impulsive contact forces and of the deformation energy. An effective computational strategy using Shampine and Gordon's predictor-corrector algorithm provided the most suitable time steps for the differential equations of motion during contacts. The method's validity was demonstrated by applying the proposed modelling to a study of the mechanical tensioner of an ICE timing drive. The developed methodology can be applied to systems with compliant elements and to many biomechanical systems.

References

1. Hu S. and Guo X. A dissipative contact force model for impact analysis in multibody dynam-
ics. Multibody System Dynamics 35.2 (2015) pp. 131-151.
2. Negrut D., Jay L. O. and Khude N. A discussion of low-order numerical integration formulas
for rigid and flexible multibody dynamics. Journal of Computational and Nonlinear Dynam-
ics 4.2 (2009) 021008.
3. Stenzel I., Pourroy, F. Integration of experimental and computational analysis in the product
development and proposals for the sharing of technical knowledge. Inter. Journal on Interac-
tive Design and Manufac. (IJIDeM), February 2008, Vol.2, Issue 1, pp. 1-8.
4. Geradin M., Cardona A. Flexible Multibody Dynamics: A Finite Element Approach. Wiley,
New York, 2000.
5. Fiszer J., Tamarozzi T., Desmet W. A semi-analytic strategy for the system-level modelling of flexibly supported ball bearings. Meccanica (2015) pp. 1-30.

6. Machado M., Moreira P., Flores P. and Lankarani H. M. Compliant contact force models in
multibody dynamics: Evolution of the Hertz contact theory. Mechanism and Machine Theory
53 (2012) pp. 99-121.
7. Fischer M., Eberhard P. Application of parametric model reduction with matrix interpolation
for simulation of moving loads in elastic multibody systems. Advances in Computational
Mathematics, October 2015, Volume 41, Issue 5, pp. 1049-1072.
8. Calì, M., Sequenzia, G., Oliveri, S.M., & Fatuzzo, G. Meshing angles evaluation of silent
chain drive by numerical analysis and experimental test. Meccanica 51.3 (2016) pp. 475-489.
9. Sequenzia G., Oliveri S. M. and Calì M. Experimental methodology for the tappet characterization of timing system in ICE. Meccanica 48.3 (2013) pp. 753-764.
10. Schindler T., Rezaei S., Kursawe J., Acary V. Half-explicit timestepping schemes on velocity level based on time-discontinuous Galerkin methods. Computer Methods in Applied Mechanics and Engineering, Vol. 290, 15 June 2015, pp. 250-276.
Systems engineering and hydroacoustic
modelling applied in simulation of hydraulic
components

Arnaud Maillard1,*, Eric Noppe1, Benoît Eynard1 and Xavier Carniel2


1
Université de Technologie de Compiègne, Laboratoire Roberval UMR CNRS 737,
CS 60319, 60203 Compiègne Cedex, France
2
Centre Technique des Industries Mécaniques (CETIM), 52 Avenue Félix-Louat,
CS 80067, 60304 Senlis Cedex, France
* Tel.: +33-(0)3 44 67 30 09; E-mail address: arnaud.maillard@utc.fr

Abstract

Numerical modelling and simulation technology, whose rise in power has followed the evolution of computer science, has become increasingly used from the beginning of the V-model in the field of systems engineering and product development. As a matter of fact, numerical simulation allows a reduction in costs and lead times by avoiding or limiting physical prototyping.
The proposed work deals with design aid to meet requirements, and with the modelling, analysis and simulation of a component assembly constituting a hydraulic power transmission, taking into account the fluid borne noise. This paper first establishes the aid for architecture design, analysis and optimization to meet the different requirements. Among the different kinds of requirements that a hydraulic circuit has to meet, this paper focuses especially on the fluid borne noise and the noise generated during operation. Section 3 therefore presents a state of the art on fluid borne noise modelling in the frequency domain applied to different hydraulic components, and also on the test rigs which are used to adjust certain parameters of the hydroacoustic laws. Finally, the last part of the paper focuses on the different computer tools and simulation software used to model a hydraulic circuit taking into account hydroacoustic and vibroacoustic phenomena. This part also discusses how they are interfaced with each other and which kinds of information have to be exchanged in order to obtain a relevant modelling and simulation of wave propagation in the time domain.

Keywords: Computer-Aided Engineering; Systems Modelling; Hydroacoustic


Analysis; Fluid Borne Noise; Numerical Simulation

© Springer International Publishing AG 2017 687


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_69
688 A. Maillard et al.

1 Introduction

Hydraulic systems are widespread in off-road and on-road vehicles due to their reputation as a mature technology providing an unrivalled specific power. Nevertheless, this kind of technology has two identified weaknesses: energy efficiency and the noise generated during operation. The components responsible for this noise, called 'active components', are the hydraulic pumps and motors, due to the flow fluctuation at their outputs, which results from the kinematic variation and the oil compressibility (Figure 1, left). Noise is spread by the fluid, the structures and the air according to the hierarchy represented in Figure 1 (right).

Figure 1: Flow ripple characteristics for piston pumps (on the left) and noise
sources in a hydraulic circuit (on the right)
Each component connected to a pump and/or motor has a specific impedance Z (equation 1) which reacts to the flow fluctuation:

Z = P / Q   (1)

where P is the pressure ripple and Q the flow ripple.
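Numerically, P and Q at a given frequency line are complex phasors, so equation (1) is a single complex division; a minimal sketch:

```cpp
#include <cassert>
#include <cmath>
#include <complex>

// Component impedance at one frequency line (equation 1): ratio of the
// pressure-ripple phasor P to the flow-ripple phasor Q.
std::complex<double> impedance(std::complex<double> P, std::complex<double> Q) {
    return P / Q;
}

// Magnitude and phase are often reported instead of the raw complex Z.
double impedanceMagnitude(std::complex<double> Z) { return std::abs(Z); }
double impedancePhase(std::complex<double> Z) { return std::arg(Z); }
```

For example, a unit pressure ripple in phase quadrature with a unit flow ripple gives a purely reactive impedance.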

The assembly of components forming a whole hydraulic circuit (pumps/motors, pipes, hoses, valves, actuators…) may, if poorly designed, spread and even amplify these pressure ripples. This fluid borne noise then generates vibration at the interfaces and is thus the cause of part of the noise emitted by the whole hydraulic system. In order to aid the suitable design of a hydraulic system, it is necessary to provide a way to predict hydroacoustic phenomena using modelling and simulation software. This involves knowing the characteristics of each component constituting the hydraulic system, which can be obtained either by measurement on a test rig or from mathematical models.
Systems engineering and hydroacoustic … 689

2 Aid for the design of a hydraulic circuit

Special care must be taken with the design of a hydraulic system to meet requirements, with a focus on the overall noise emitted by such a system. The design of hydraulic architectures from requirements is a complex task: it consists in defining alternative architectures, optimizing them, then assessing them in order to find the most relevant solution with respect to the different requirements. To provide an aid for these three main tasks, the OMG SysML language is well adapted. Indeed, this language makes it possible to capture the requirements, the structure and the behaviour of a system or a component. Thus, design knowledge about hydraulic components or groups of components (subsystems) can be captured by defining their specific attributes, functionalities and behaviour laws. These attributes are any properties or characteristics of a component or system, such as maximum pressure, flow rate… Once an alternative architecture has been defined according to these attributes and the desired features of the hydraulic system, each component has to be sized to meet the specifications of the hydraulic circuit according to its maximum operating conditions (maximal pressure, maximal speed, duty cycle…), which is possible using the parametric capabilities of SysML. The sizing of a hydraulic circuit can be divided into four stages:
- 1/ The linear or rotary actuators are sized according to the maximal operating pressure, maximum load and load displacement. This allows choosing the minimum characteristics of these components (such as the displacement of a pump…).
- 2/ The system parameters, such as hydraulic oil flow rate, pressure changeover, etc., are then determined.
- 3/ The suitable actuation sub-circuit (e.g. the pumps providing hydraulic power) is selected based on the system parameters and the design requirements.
- 4/ The other hydraulic components used in the circuit are finally chosen.
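As a purely illustrative sketch of stages 1 and 2 (the bore, speed and shaft-speed values used below are hypothetical, not from the paper):

```cpp
#include <cassert>
#include <cmath>

// Stage 1-2 sizing helpers (illustrative): flow needed by a cylinder
// and the minimum pump displacement that delivers it.
const double PI = std::acos(-1.0);

// Flow [L/min] to move a cylinder of bore d [mm] at speed v [m/s].
double requiredFlowLpm(double bore_mm, double v_mps) {
    const double area_mm2 = PI * bore_mm * bore_mm / 4.0;
    return area_mm2 * (v_mps * 1000.0) * 60.0 / 1.0e6;  // mm^3/min -> L/min
}

// Minimum pump displacement [cm^3/rev] for a shaft speed n [rev/min].
double minDisplacementCc(double flow_lpm, double n_rpm) {
    return flow_lpm * 1000.0 / n_rpm;                   // L/min -> cm^3/min
}
```

A 50 mm bore moving at 0.2 m/s needs about 23.6 L/min; delivering 30 L/min at 1500 rev/min requires at least a 20 cm³/rev pump.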
Stages 1 to 3 depend on the performance specifications, so the other requirements, such as fluid borne noise and vibroacoustic characteristics, can be minimized by correctly choosing the dimensions of the other hydraulic components (valves, hoses, rigid pipes…). Thus, according to the maximum flow rate, the minimum diameter of the other hydraulic components can be defined in order to limit the maximum oil velocity and hence part of the pressure drops. The other characteristics of these components can be adjusted in order to minimize the fluid borne noise, by minimizing the impedance response of the hydraulic circuit using the mathematical models of hydroacoustic phenomena described in the next section. The precision of the mathematical model of each component differs. As a matter of fact, certain models require running measurements on a test rig in order to adjust some of their parameters (accumulators and hoses, for example). In the case of a pump or a motor, no mathematical model exists for all kinds of pumps or motors. This implies that it is necessary to have examples of hydroacoustic results for similar components in order to set the hydroacoustic characteristics of this kind of component.
In [1], Peak et al. discuss a design-analysis integration strategy which links, in their case, CAD and CAE software in order to make a link between the design (CAD) and an analysis tool (CAE). This method is called the 'Multi-Representation Approach' (MRA). In this research project, this method could be used to link a SysML model for the generation of alternative architectures (design part) with the hydroacoustic and vibroacoustic simulations using specialized software such as Dymola© or AMESim© for the hydroacoustic part and Virtual.Lab© for the vibroacoustic modelling.
Thus, this method makes it possible to decide on the changes to apply either in the architecture generation part or in the sizing of the different components, so as to obtain an optimized solution meeting the whole set of requirements.

3 State-of-the-art

3.1 Hydroacoustic modelling in frequency domain

This subsection is an overview of the hydroacoustic characteristics of the different components used in a hydrostatic transmission, described in the frequency domain. These characteristics have to be specific to each component, i.e. independent of the rest of the hydraulic circuit connected to it. As mentioned in the introduction, the hydroacoustic characteristics of each hydraulic component use the notion of impedance Z (equation 1) but also the notion of admittance Y, which is the reciprocal of Z. The hydroacoustic characteristics of a component are thus represented as a square matrix relating the pressure and flow ripples, whose dimension is equal to the number of component ports. Each term of this matrix is a vector of complex numbers over a frequency range. Characterizing a component means defining each matrix term either by a mathematical model or by characterization on a test rig. Below are the impedance matrix relating the pressures as outputs and the flows as inputs (equation 2a) and the admittance matrix (equation 2b), which is its reciprocal:

[P1]   [Z11  Z12]   [Q1]             [Q1]   [Y11  Y12]   [P1]
[P2] = [Z21  Z22] * [Q2]   (2a)      [Q2] = [Y21  Y22] * [P2]   (2b)
Both equations are written for components with two ports; for a one-port component, the impedance or admittance matrix reduces to the single term Z11 or Y11. Moreover, for certain components (pumps, motors, valves…) these terms vary according to the operating point (speed, displacement, mean pressure and flow…).
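Passing between the impedance and admittance forms is a 2x2 complex matrix inversion; a minimal sketch:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <complex>

using cd = std::complex<double>;

// Invert a two-port matrix given as (m11, m12, m21, m22): turns an
// impedance matrix Z into the admittance matrix Y = Z^-1, and back.
std::array<cd, 4> invertTwoPort(cd m11, cd m12, cd m21, cd m22) {
    const cd det = m11 * m22 - m12 * m21;   // must be non-zero
    return { m22 / det, -m12 / det, -m21 / det, m11 / det };
}
```

For a diagonal Z = diag(2, 4), the admittance terms come out as 0.5 and 0.25.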

- Rigid pipe modelling


The mathematical model for wave propagation in a rigid pipe is described in the ISO standard 15086-1 [2]. The matrix terms are defined by equations which depend only on the geometric features of the rigid pipe (internal diameter d and length L) and on the oil features (density ρ, kinematic viscosity ν and speed of sound in oil c). The main advantage is that no measurement on a test rig is needed. The terms of the admittance matrix are expressed in equations (3) and (4):

Y11 = Y22 = [π d² ω L / (4 ρ c² (a - jb))] * coth[ j(a - jb) ]   (3)

Y12 = Y21 = [π d² ω L / (4 ρ c² (a - jb))] * j / sin(a - jb)   (4)

where

a = (L/c) [ ω + sqrt(2ων)/d ]   and   b = (L/c) [ 4ν/d² + sqrt(2ων)/d ]
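These admittance terms translate directly into code; the viscous-approximation constants below are an assumed reading of the standard and should be checked against ISO 15086-1 before being relied upon.

```cpp
#include <cassert>
#include <cmath>
#include <complex>

using cd = std::complex<double>;

struct PipeY { cd y11, y12; };   // with y22 = y11 and y21 = y12

// Admittance terms of a rigid pipe at angular frequency w [rad/s], for
// internal diameter d [m], length L [m], oil density rho [kg/m^3],
// kinematic viscosity nu [m^2/s] and speed of sound c [m/s], using a
// first-order viscous-friction approximation (assumed form).
PipeY pipeAdmittance(double w, double d, double L,
                     double rho, double nu, double c) {
    const double PI = std::acos(-1.0);
    const double visc = std::sqrt(2.0 * w * nu) / d;          // unsteady friction
    const double a = (L / c) * (w + visc);                    // phase term
    const double b = (L / c) * (4.0 * nu / (d * d) + visc);   // attenuation term
    const cd g(a, -b);                                        // a - jb
    const cd j(0.0, 1.0);
    const cd pref = PI * d * d * w * L / (4.0 * rho * c * c) / g;
    return { pref / std::tanh(j * g), pref * j / std::sin(g) };
}
```

In the inviscid limit at the quarter-wave frequency (ωL/c = π/2), Y11 vanishes while the transfer term Y12 stays finite, as expected of a lossless line.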

- Flexible hoses model


According to Johnston [3], the impedance matrix relating the pressures and flows at the hose ends is given by equations (5) and (6):

Z11 = [ N2 Z1 / tanh(γ1 L) - N1 Z2 / tanh(γ2 L) ] / [ (N2 - N1) A ]   (5)

Z12 = [ N1 Z2 / sinh(γ2 L) - N2 Z1 / sinh(γ1 L) ] / [ (N2 - N1) A ]   (6)

where
- Z21 = Z12 and Z22 = Z11
- Zi = ρω / (j γi) and γi = (jω/ci) e^(-jϕi)
- ρ, ci, L and A are respectively the oil density, the speed of sound in oil, the length of the hose and its internal area.
To define these terms, the parameters of the model (c1, c2, ϕ1, ϕ2, the modal ratio N2/N1 and the internal area A) have to be adjusted to fit the measurements obtained on a test rig, by minimizing the error between model and measurements.

- Hydraulic orifice model


According to Lau et al. [4], the admittance matrix of a hydraulic orifice can be expressed as in equations (7) and (8):

Y11 = Y22 = 1 / (R + jωL)   (7)   and   Y12 = Y21 = -Y11   (8)

where
- R is a resistive term: R = n ΔP / Q, where ΔP and Q are respectively the mean pressure difference and the mean flow rate
- jωL is an inductive term, where L = ρ lE / A0
- ρ, A0 and lE are respectively the oil density, the orifice area and the effective length of the orifice.
The characterization of a hydraulic orifice requires a value for the exponent n (in general n = 2) and a value for lE, which can be defined in two ways: either by adjusting it to fit measurements on a test rig, or by using experimental data sheets which relate lE to the geometrical features of the orifice, as done in [4].
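A sketch of this orifice model in code (the parameter values used in the usage example are hypothetical):

```cpp
#include <cassert>
#include <cmath>
#include <complex>

using cd = std::complex<double>;

// Orifice admittance per Lau et al.: Y11 = 1/(R + jwL) with resistive
// term R = n*dPmean/Qmean and inductive term L = rho*lE/A0 (SI units).
cd orificeY11(double w, double dPmean, double Qmean,
              double rho, double lE, double A0, double n = 2.0) {
    const double R = n * dPmean / Qmean;
    const double Lind = rho * lE / A0;
    return 1.0 / cd(R, w * Lind);
}
// Y22 = Y11 and Y12 = Y21 = -Y11 for this two-port element.
```

At zero frequency the admittance reduces to the purely resistive value Q/(n ΔP), and its magnitude decreases as the inductive term grows with frequency.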

- Pressure relief valve

The model of a single-stage relief valve in the frequency domain is expressed in the same way as a hydraulic orifice, without the inductive term, according to Edge et al. [5]. The admittance matrix thus gives (equations (9) and (10)):

Y11 = Y22 = Q / (n ΔP)   (9)   and   Y12 = Y21 = -Y11   (10)
- Accumulator
The paper [5] also deals with accumulator modelling: neglecting the resistive term, the impedance is composed of a capacitive and an inductive term. The impedance exhibits an anti-resonance at the frequency fn, at which its amplitude is at a minimum. Below the anti-resonance the impedance has a capacitive behaviour; above it, an inductive behaviour. Equation (11) gives the impedance term:

Z11 = 1 / (jω Ca)   for f ≤ fn
Z11 = jω La         for f > fn   (11)

where
- Ca and La are respectively the capacitive and inductive coefficients, which have to be defined by measurements
- fn is the anti-resonance frequency.
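A sketch of the piecewise impedance of equation (11); the L-C anti-resonance helper is a standard relation added here for illustration only, since in the paper fn, Ca and La are identified from measurements.

```cpp
#include <cassert>
#include <cmath>
#include <complex>

using cd = std::complex<double>;

// Accumulator impedance (equation 11), resistive term neglected:
// capacitive branch below the anti-resonance fn, inductive above it.
cd accumulatorZ11(double f, double Ca, double La, double fn) {
    const double w = 2.0 * std::acos(-1.0) * f;
    if (f <= fn) return 1.0 / cd(0.0, w * Ca);   // 1/(j w Ca)
    return cd(0.0, w * La);                      // j w La
}

// For an ideal L-C pair the anti-resonance would fall at
// 1/(2*pi*sqrt(La*Ca)); an illustrative helper, not the paper's method.
double antiResonance(double Ca, double La) {
    return 1.0 / (2.0 * std::acos(-1.0) * std::sqrt(La * Ca));
}
```

Below fn the reactance is negative (capacitive); above fn it is positive (inductive).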

- Hydraulic pumps and motors



Hydraulic pumps and motors can be modelled by a Norton model, shown in Figure 2.

Figure 2: Modelling of hydraulic pumps or motors using the Norton model (flow source Qs in parallel with source impedance Zs)

This kind of component is characterized by a source flow ripple Qs and a source impedance Zs. The modelling thus requires knowing these two specific physical quantities. Since they depend on the internal geometry of the component, a test rig is necessary to identify them.

3.2 Hydroacoustic characterization on test rig

Hydroacoustic characterization is performed by measuring the wave propagation characteristics of the pressure ripples in a reference pipe whose mathematical model for wave propagation is well known from its physical characteristics. The hydroacoustic characterization of pumps and motors can be performed on a test rig using a method of two impedances, as shown in the simplified schema below.

Figure 3: Test rig for active components using a method of two impedances
This method is applied on both sides of the tested component (high-pressure and low-pressure). Each operating point (mean values of the high and low pressure, component speed, displacement…) requires measurements with two different impedances of the hydraulic circuit connected to the tested component. The impedance change on each side is performed by opening a manual valve in order to add a capacity. The reason for a second impedance value is to have as many equations as unknowns, so that the equations can be solved. Both sides of the tested component are connected to a straight rigid pipe carrying three dynamic (piezoelectric) pressure sensors, one of which is closest to the component port. From this pressure, on both sides, the flow ripple is computed using the mathematical model for wave propagation in a rigid pipe for the two impedances (equations 12 and 13):

Qc = Y11 P1 + Y12 P2   (12)   and   Qc' = Y11 P1' + Y12 P2'   (13)

where
- Qc and Qc' are respectively the flow ripple at the port of the component (at P1a or P1b) without and with the capacity
- P1 and P1' are respectively the pressure at the port of the component (at the P1a or P1b sensor) without and with the capacity
- P2 and P2' are respectively the pressure at the second pressure sensor (P2a or P2b) without and with the capacity
- Y11 and Y12 are the admittance terms of the rigid pipe portion between the two pressure sensors (between P1a and P2a, or P1b and P2b).
A third pressure sensor is only necessary to measure the speed of sound, which is needed to compute the admittance matrix terms of the rigid pipe portion (as explained in 3.1 for the rigid pipe).
According to Figure 2, the expressions of the source flow ripple Qs for the two impedances are (equations 14 and 15):

Qs = P1/Zs + Qc   (14)   and   Qs = P1'/Zs + Qc'   (15)

These two equations give Qs (equation 16) and Zs (equation 17):

Qs = Y12 (P1 P2' - P2 P1') / (P1 - P1')   (16)

Zs = (P1 - P1') / [ Y11 (P1' - P1) + Y12 (P2' - P2) ]   (17)
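Equations (16) and (17) recover the Norton source directly from the two measured pressure states; a minimal sketch:

```cpp
#include <cassert>
#include <cmath>
#include <complex>

using cd = std::complex<double>;

// Norton source flow ripple Qs and source impedance Zs recovered from
// pressure phasors measured with two different circuit impedances
// (equations 16 and 17); Y11, Y12 are the admittance terms of the pipe
// portion between the two sensors.
struct Source { cd Qs, Zs; };

Source sourceFromTwoImpedances(cd P1, cd P2, cd P1p, cd P2p, cd Y11, cd Y12) {
    const cd Qs = Y12 * (P1 * P2p - P2 * P1p) / (P1 - P1p);
    const cd Zs = (P1 - P1p) / (Y11 * (P1p - P1) + Y12 * (P2p - P2));
    return {Qs, Zs};
}
```

With the hand-checkable values P1 = 2, P2 = 1, P1' = 1, P2' = 3, Y11 = 0, Y12 = 1, the formulas give Qs = 5 and Zs = 0.5.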
The hydroacoustic characterization of passive components (other than pumps and motors) is performed according to the ISO standard 15086-3 [6]. A simplified schema of the test rig is shown below:

Figure 4: Simplified schema of a test rig for passive components


The mean upstream pressure and the pressure ripples are generated by a high-frequency valve. The downstream pressure is adjusted by an adjustable hydraulic restriction. The pressure ripples are measured by a tube carrying three dynamic pressure sensors on each side of the component. From these sensors and the mathematical model for wave propagation in a rigid tube, the admittance matrix terms of the tested component are computed.

4 Computer tools for hydroacoustic modelling and simulation

Hydroacoustic modelling and simulation requires the use of several software tools in order to perform the different processing steps leading to simulation results of a hydraulic system in the time domain. The first step is the determination of the impedance or admittance matrix terms, described in the frequency domain, of each component constituting the modelled hydraulic system, according to the operating point sent by AMESim©, using the Matlab© software. The second step is the transition from the frequency to the time domain using the "Vector Fitting" method [8,9]; in this case, Matlab©/Simulink© seems to be the only software with a built-in function for this. The last step is then the modelling in the time domain, taking into account the hydroacoustic phenomena, which is carried out in AMESim©. The aim of the research project is to be able to predict by modelling the overall noise generated by this kind of technology. To predict it, the hydraulic simulation results are then included in a vibroacoustic synthesis using the Siemens Virtual.Lab© software with an overlay of VBA code. This synthesis is a design or diagnostic method consisting in splitting a system into noisy or vibrating components and transfer functions in order to control its vibroacoustic behaviour. Figure 5 summarizes the interactions and the data exchanged between the software tools.

Figure 5: Schema showing the interactions and data exchanged for hydroacoustic modelling (vibroacoustic modelling and fluid borne noise modelling in the time domain).

5 Conclusions

Starting from the requirements of a hydraulic circuit, the SysML language is proposed to aid the design of a hydraulic system meeting these requirements. In order to validate a hydraulic system against the whole set of requirements, this paper discusses the choice of the different software tools used to predict the fluid borne noise of a hydraulic component or system. A first tool (Matlab©/Simulink©) is used to define the hydroacoustic characteristics of the hydraulic components and to transform them into the time domain using the "Vector Fitting" method. A second tool (AMESim©) allows running simulations in the time domain using the output data given by Matlab©. The simulation results on pressure ripples are then sent to a vibroacoustic synthesis using the software Virtual.Lab© to obtain the overall noise of the hydraulic system. Future work will be essentially focused on the creation of numerical hydraulic components taking into account the hydroacoustic phenomena, and on applying these proposals to a concrete example.

References

[1] R. S. Peak, R. M. Burkhart, S. A. Friedenthal, M. W. Wilson, M. Bajaj, and I. Kim, "Simulation-Based Design Using SysML Part 2: Celebrating Diversity by Example," INCOSE Intl. Symposium, San Diego, 2007.

[2] ISO 15086-1, Hydraulic fluid power: Determination of the fluid-borne noise characteristics of components and systems - Part 1: Introduction, 2001.

[3] D. N. Johnston, "A time-domain model of axial wave propagation in liquid-filled flexible hoses," Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, November 2006, pp. 517-530, vol. 220, no. 7.

[4] K. K. Lau, K. A. Edge, and D. N. Johnston, "Impedance characteristics of hydraulic orifices," Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, November 1995, pp. 241-253, vol. 209, no. 4.

[5] K. A. Edge and D. N. Johnston, "The impedance characteristics of fluid power components: relief valves and accumulators," Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 1991, pp. 11-22, vol. 205.

[6] ISO 15086-3, Hydraulic fluid power: Determination of the fluid-borne noise characteristics of components and systems - Part 3: Measurement of hydraulic impedance, 2008.

[7] B. Gustavsen, "Improving the pole relocating properties of vector fitting," IEEE Transactions on Power Delivery, July 2006, pp. 1587-1592, vol. 21, no. 3.

[8] B. Gustavsen and A. Semlyen, "Rational approximation of frequency domain responses by vector fitting," IEEE Transactions on Power Delivery, 1999, pp. 1052-1059, vol. 14, no. 3.
LINDE’S ICE-MAKING MACHINE. AN
EXAMPLE OF INDUSTRIAL
ARCHEOLOGY STUDY

Pérez Delgado, Belén 1, Andrés Díaz, José R. 2, García Ceballos, María L. 2,


Contreras López, Miguel A. 2

(1): University of Málaga, Student; (2): University of Málaga, Professors
+34951952272 / +34951952600
belen2404@hotmail.com; jrandres, mlgarcia, macontreras@uma.es

Abstract:
This paper proposes a way to study an ancient machine from the viewpoint of industrial archeology. The methodology is based on a number of steps and a schema for conducting the study, supported by a simulation of the machine. The approach is illustrated by its application to a nineteenth-century ice-making machine, the one developed by Linde in 1880. This type of study and analysis may be very valuable for any student or researcher who wants to understand the basis of this technology.

Key words: Industrial archeology, simulation, virtual reality, patents.

1- Introduction
One of the definitions of industrial archeology that we like most is provided by the Society for Industrial Archeology (SIA): the study, interpretation, and preservation of historically significant industrial sites, structures, artifacts, and technology [1]. A more concise definition is: the systematic study of material evidence associated with the industrial past [2].
Industrial archeology is essential to understanding the past of industry and its processes. It also helps us to understand the evolution of technologies and explains the basics of current machines and processes. Modelling technical machines and contextualizing them in their environment makes it possible to restore the working situation of the socio-technical production system [3].
In our case, one of the main applications of our industrial archeology studies is to introduce students to the origins and technological basics of current machines. This requires, in addition to simulating the machine, studying and justifying the technology of the time when the machine was used, and relating that technology to its current state by contrasting processes and equipment.

© Springer International Publishing AG 2017 697


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_70
698 P.Delgado et al.

2- Objectives
The aim of this study is to propose a methodology for carrying out an industrial
archeological research of a machine supported by the development of a simulation
of the machine.
To this end, in addition to describing the approach, an example of its application is developed for the ice-making machine designed by Carl von Linde in 1880.

3- Simulation-based approach
Our methodology is based on the one described by Laroche et al. [4], who call it Advanced Industrial Archeology.
The study of a machine from the point of view of industrial archeology can be conducted through the following phases:
- Compilation and analysis of the technical and historical documentation related to the studied machine. If possible, this documentation will include both graphic and textual descriptions. It does not need to concern exclusively the investigated machine, but may also relate to similar machines that can serve as a guide to understanding it.
- Investigation of the historical evolution of the technology and fundamentals applied in the machine. This involves topics as varied as constituent materials, control systems, etc.
- Based on the two previous phases, technical description and comparison of the machine with current machinery and technologies.
- Finally, representation through a three-dimensional model and an animation of the machine in which the basics of its operation can be observed.
The end of this process should bring us to a reliable simulation of the machine that was the origin of current equipment, and whose visualization should help to understand the operation of this equipment.

4- Illustration of the simulation-based approach


The application of the previous approach is exemplified below on the ice-making machine designed by Carl von Linde in 1880.

4.1 – Study of historical documentation


The first document studied was USA Patent nº 228,364, which addresses the ice-making apparatus designed by Linde. The drawings of the machine were analyzed to clearly differentiate its parts and to understand how it works.
The limited information found, sometimes in a language unfamiliar to the investigators (as the author was German), made this analysis difficult.
linde’s ice-making machine… 699

Figure 1: Linde’s machine. USA patent nº 228.364 [5].

The previous analysis, supported by the information found in technically related documents [6] and by the knowledge and experience of technical personnel working with this type of machinery and of other professors of the University, allowed us to identify almost all its parts and to understand the basis of its performance.

4.2 – Historical study


A historical study of the technology used to produce artificial ice has been carried out. A brief summary of this study is presented here, with several characteristic figures that have helped to identify each of the elements of Linde's machine.

Throughout the nineteenth century, particularly from 1834 onwards, artificial ice production started with the development of the following techniques:
- Liquefiable Gases Compression Systems. Jacob Perkins designed the compression machine with ethyl ether in 1834. The machine of our study uses this compression system.
- Air Systems. The first open cycle machine was built by John Gorrie in the USA, in 1844.
- Absorption Systems. Ferdinand Carré created in France the absorption systems based on mixtures of ammonia and water in 1859.
- Water Evaporation Systems. Edmond Carré developed the water evaporation machine under reduced pressure in 1860.

Figure 2: Lehnert’s compression scheme. 1910

Among the systems using liquefiable gas compression we find Lehnert's scheme. Figure 2 shows the compression scheme taken from Lehnert, where we can already distinguish the main elements: evaporator, compressor, condenser and expansion valve. These elements have the same function in the case of Linde's machine.

The pioneer of this system was Oliver Evans (called the inventor of the refrigerator), who designed a steam-driven chiller in 1805, but he never got to build it.

Some years later, Jacob Perkins was inspired by Evans's description and built his refrigeration machine, which used methyl ether as refrigerant. Perkins obtained British patent number 6,662 in 1834. This machine can be considered a prototype for all subsequent compression machines.

Figure 3: Perkins machine. 1834



In Europe, the development of compression equipment is primarily represented by Carl von Linde. His personal profile was well suited to the task, as he was an engineer, a scientist and a businessman, besides being a professor. These circumstances led him first to study cooling from the thermodynamic point of view, and afterwards to apply his findings to the manufacture of refrigeration equipment. This equipment widely outperformed the existing machines of its time in efficiency and performance.

Among the elements studied to understand the operation of his ice machine, we note:
- First Machine: it worked with methyl ether and a vertical compression system.
- Machine number 5759, 12.5.1877 (Spanish Patent): it worked with ammonia as refrigerant and involved a double-effect horizontal compression system.
- And, of course, Linde's machine number 228,364, 1880 (USA Patent): the machine under study, an evolution of the Spanish patent; it worked with ammonia and a double-effect horizontal compression system.

Figure 4: First Machine by Linde

Figure 4 illustrates a recreation of his first machine, and Figure 5 reproduces the drawing included in his Invention Privilege application in Spain.
4.3 – Technical study of the machine
The machine to study is a refrigeration system, and its performance corresponds to a single compression cycle. This section should analyze the theoretical performance of the compression cycle of the machine (e.g., from the Mollier diagram), in order to understand the different thermodynamic processes that occur in its main components as well as the characteristics of the refrigerant [7].
For example, the fluid used was justified in terms of the historical evolution of the use of these fluids [7][8].

Figure 5: Linde's Machine. Spanish patent nº 5759. 1877
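The theoretical single-compression cycle can also be sketched numerically. The minimal example below computes the coefficient of performance (COP) from state-point enthalpies; the enthalpy values, mass flow and variable names are hypothetical placeholders standing in for values read off a Mollier (p-h) diagram, not data from the paper.

```python
# State-point specific enthalpies [kJ/kg] around the single-compression cycle:
# 1: evaporator outlet, 2: compressor outlet, 3: condenser outlet, 4: evaporator inlet.
# All numbers below are hypothetical placeholders, not ammonia data from the paper.
h1, h2, h3 = 1450.0, 1650.0, 350.0
h4 = h3  # the expansion valve is isenthalpic: pressure drops at constant enthalpy

refrigeration_effect = h1 - h4   # heat absorbed per kg in the evaporator [kJ/kg]
compressor_work = h2 - h1        # work added per kg by the compressor [kJ/kg]
cop = refrigeration_effect / compressor_work

mass_flow = 0.05                                        # [kg/s], hypothetical
cooling_capacity_kw = mass_flow * refrigeration_effect  # [kW]
```

With these placeholder values the cycle yields a COP of 5.5; the same bookkeeping applies to the numbered elements of Linde's machine (boiler/evaporator, compressor, condenser, expansion valve).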

Figure 6: evolution of use of frigorific fluids



Linde analyzed the different principles on which refrigerating machines rest. He recognized as the most perfect method the one based on the evaporation of highly volatile liquids such as ammonia or methyl ether, which are brought back to the liquid state by means of force pumps. The employment of these high-priced volatile liquids presented certain operating difficulties that had to be overcome in order to minimize consumption and halt their destructive effect on fats.

Linde arrived at this result by arranging the machine so that no part of the volatile fluid was separated from the atmosphere merely by an ordinary oakum fitting. The oakum fittings were applied only to parts that move from time to time and for an instant of work; for example, the valve fittings are provided with bolted enclosures below.

Regarding the piston and stuffing boxes, the volatile fluid is separated from the atmosphere by glycerin. To achieve this, an external force always higher than the internal vapor pressure is applied.

To sum up, the technical features which constitute his invention are:

- The volatile liquid is separated from the atmosphere by glycerin (or another liquid) subjected to high pressure by the effect of an external force; for example, by compressed air or a column of mercury.
- The glycerin is accumulated and collected by special equipment (collecting vessels) and replaces the fat circulating through the pump.
- The ice is made on the surface of a drum, cooled inside by chilled salt water and immersed in the water that must be transformed into ice.

4.4 – Modeling Planning

Based on the above, this section studies how the machine is to be modeled, together with the different stages of its operation. This leads us to define the sets of parts to model and the way to approach the animation, so that each step in the operation of the machine can be clearly seen.

In our case, in order to display the operation clearly, the constituent elements of the machine will be referred to by the corresponding numbers in the image shown below:

Figure 7: parts of the simulated machine

The machine operation can be described in a series of steps that will be presented
in the final video:
1º. The compression cycle begins at the distiller boiler "1". To fill the machine with volatile fluid, a saturated ammonia solution is poured into the boiler, where it is evaporated by aspiration due to the action of a resistance, taking from it the heat required for evaporation.

Figure 8: Distiller boiler


After evaporation, the refrigerant (ammonia) is at low pressure and low temperature, suitable for supplying vapor to the compressor "2".

2º. Once supplied, the vapor enters the compressor "2" through the aspiration valves on the low-pressure side.

The compressor piston is driven by a crank mechanism assembled with a wheel. It is on this wheel that the operation of Linde's machine starts: the wheel is set in motion by another wheel, joined to it through a belt.

Figure 9: Compressor System


The double-effect compressor compresses the vapor in both directions of movement, increasing the fluid pressure and hence its temperature. The compressed vapor exits through two discharge valves and a pipe located in the center of the discharge duct, and goes to the oil separator filter "3".

3º. The oil separator "3" filters the vapor, removing the oil drops it may carry, so that it enters the condenser free of impurities. The oil obtained from the filtration returns to the compressor to lubricate the piston and ease its operation.

Figure 10: Oil separator


4º. At the condenser intake "4" the vapor is at high pressure and high temperature. The vapor enters the condenser and passes through a hollow coil immersed in cold water, stirred by a vertical helix-shaped stirrer. This causes the fluid to condense, becoming liquid on its way through the condenser. The latent heat absorbed by evaporation comes from the fluid that surrounds the tubes: water or air when water or air must be cooled directly, and salt water when ice is to be produced.

At the condenser outlet we find liquid ammonia at high pressure, which enters the evaporator "6" through an expansion valve "5" placed at the evaporator intake.

Figure 11: Condenser

5º. The expansion valve "5" expands the fluid, lowering its pressure and temperature, so that at the evaporator intake the fluid is largely liquid with a portion of vapor.

Figure 12: Expansion Valve

6º. The evaporator "6" is a heat exchanger similar to the condenser. The fluid passes through the evaporator along another coil, which is immersed in chilled salt water. This is a drum evaporator, characterized by a rectangular section that produces an ice bar.
The drum is immersed in the liquid to be frozen, in this case water, to produce ice blocks. The outer surface of the drum, rotating slowly (a movement produced by the manual action of an external wheel), is constantly covered with a layer of water that turns to ice under the action of the chilled salt water. When the ice has reached the appropriate thickness at the end of a complete revolution of the drum, and before it is immersed again, the ice is detached from the drum and removed from the output platform.
7º. Once the process in the evaporator is finished and the final ice block obtained, the cycle begins again. The ammonia vapor at the evaporator outlet goes to the compressor, entering through the aspiration valves, thus closing the compression circuit.

Figure 13: Evaporator

Each of these steps is accompanied by a frame of the final video in which that step is reproduced.

Regarding modeling, the machine components were created separately with the three-dimensional creation and animation program Autodesk 3DStudio Max 2011 [9].

This was definitely the hardest part and the one that required the most working hours. It was necessary to delve into the parameters and functions of the program to obtain the intended result. The most difficult points in performing the modeling and simulation were:

- Modeling the drum-type evaporator, about whose inner section there was little information.
- Locating suitable textures to represent the original materials from which those parts would have been made at the time, and applying them to the model.
- Simulating the movement of a fluid through a pipe.
- Simulating the state changes caused in the refrigerant fluid.

To visualize the finished machine with its materials applied, as shown in the simulation, Figure 14 shows frame nº 683, corresponding to the beginning of the machine operation.

5- Conclusions
Once the detailed study of all the characteristics of the machine was finished, and after editing the video in which its performance is simulated, the way in which this 1880 refrigerating machine works is considered to be fully defined.

Figure 14: Simulated Machine

This study and analysis may be very rewarding for any student or researcher of this machinery or of the technology derived from it.
As future lines of work, it is proposed to develop a portfolio of different refrigerating machines, making it possible to distinguish and evaluate the advances in materials and technology throughout history.

References
[1] Society for Industrial Archeology. www.sia-web.org
[2] Neaverson P., Palmer M. Industrial Archaeology: Principles and Practice, Ed.: Routledge, 1998
[3] Bernard A., Hasan R. [2002] Working situation model as the base of life-cycle modelling of socio-technical production systems. CIRP Design Seminar, Conference Proceedings, Berlin
[4] Laroche F., Bernard A., Cotte M. [2008] Advanced Industrial Archaeology: A new reverse-engineering process for contextualizing and digitizing ancient technical objects. Virtual and Physical Prototyping. ISSN: 1745-2759, pp 105-122
[5] Linde, C.P.G. [1880] Refrigerating and Ice-Making Apparatus. United States patent number 228,364
[6] Martínez Villén A. Orígenes y Desarrollo de los Sistemas de Producción y Utilización del Frío, Ed.: Caja Rural Jaén
[7] Rapin P. J., Jacquard P. Instalaciones Frigoríficas (tomo I - II), Ed.: Marcombo
[8] William C. Whitman, William M. Johnson. Tecnología de la refrigeración y el aire acondicionado I - II, Ed.: Ediciones Paraninfo, S.A., 2002
[9] Mediaactive, El Gran Libro de 3D Studio Max 9, Ed.: Marcombo, S.A.
Solder Joint Reliability: Thermo-mechanical
analysis on Power Flat Packages

Alessandro Sitta (a,b,*), Michele Calabretta (b), Marco Renna (b) and Daniela Cavallaro (b)

(a) Scuola Superiore di Catania, Università di Catania, via Valdisavoia 9, 95123, Catania
(b) STMicroelectronics, Stradale Primosole 50, 95121, Catania
(*) Corresponding author. Tel.: +393471436550; E-mail address: alessandro.sitta@studium.unict.it

Abstract In the last decades, the main focus for improvements in Power Electronics has been chip technology. As a result, Power Electronics performance now depends to a large extent on package technologies and on their interconnections. In particular, the automotive industry has high requirements regarding cost efficiency, reliability and compactness. Increasing power densities, cost pressure and more stringent reliability targets for modern power semiconductors are making thermal system optimization more and more important alongside electrical optimization.
This article gives an overview of a new methodological approach, led by Finite Element (FE) simulation, for new package and interconnection solution ideas. A viscoplastic creep model is adopted for the solder, taking into account time, temperature and stress dependences during thermal cycling. A parametric study is performed by changing geometrical solutions.
The results obtained from the modelling have been used to form design guidelines that were also matched against experimental data.

Keywords: Power Electronics, Solder Joint, Finite Element Modeling, viscoplasticity.

1 Introduction

A constant trend in the electronics industry is the decreasing die size of devices: this resizing implies increased operating temperature and power density. As a consequence, the choice of materials with good behavior at high temperature and under fatigue becomes fundamental to guarantee the performance and reliability of high-power devices.

© Springer International Publishing AG 2017 709


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_71
710 A. Sitta et al.

In recent years, there has been an increasing need in power electronics for high-temperature solders that remain reliable in ever-higher temperature applications such as avionics, military, oil exploration and automotive.
Solder joint interconnections generally serve two important purposes: (1) to form the electrical connection between the component and the substrate, and (2) to form the mechanical bond that holds the component to the substrate. Solders are often used as large-area die attaches and, due to the high power involved, they need to dissipate large amounts of heat that can further raise the thermal load on the devices, which implies thermal expansion or contraction of each part of the package. Unfortunately, each material expands with its own thermal expansion coefficient; the CTE mismatch can therefore strain the solder joint connections and, over the component lifetime, contribute to mechanical solder joint fatigue. Due to the operating conditions of the mission profile, the solder joints may accumulate different kinds of mechanical damage that sooner or later lead to delamination. A decreasing solder area raises the thermal resistance of the assembly, and the extreme consequence is device failure linked to a drastic junction temperature rise.
Typically, in power electronics applications like a Power Flat Package, the solder joint is the weakest point for reliability. Simulation tools are becoming fundamental for design optimization and for achieving the challenging reliability targets: with a predictive fatigue model it is also possible to drastically reduce the time to market, while optimizing the experimental tests for product validation.
In order to evaluate fatigue behavior, packages are tested using specific cyclic loads described by JEDEC rules. One of these is the thermal cycling test, which is achieved by alternating fluid flow from hot and cold sources. The temperature variation of the package is typically less than 15 °C per minute in order to avoid thermal transients in the sample [1].
Due to its high temperature and repetitive nature, the thermal cycle load makes the solder the reliability bottleneck: its failure is described by a Low Cycle Fatigue model and, because of the high temperature and long dwell times, the accumulated strain is mainly given by creep [2].
This paper develops a creep fatigue model for thermal cycling in order to evaluate PFLAT package reliability. It was employed to predict reliability, with the goal of reducing physical prototyping. The model was generated by correlating nonlinear finite element analysis, performed with the commercial FE code COMSOL Multiphysics, with experimental data, taking into account the non-linear mechanical behavior of the package components. The cycles to failure coming from the FE model were finally fitted to experimental data, so that the model can predict the fatigue behavior of similar packages.
Solder Joint Reliability: Thermo-mechanical … 711

2 Model description

The model analyzed in this paper is a PFLAT package (fig. 1). It consists of a copper lead-frame, PbSnAg die solder, silicon die, PbSnAg clip solder and copper clip. The package is molded with epoxy resin.

Fig. 1. PFLAT package (without molding compound)

The fatigue modeling approach used for this work consisted of four primary
steps. First, a theoretical or constitutive equation, which forms the basis for mod-
eling, was assumed. Second, the constitutive equations were used for calculating
stress and strain using a Finite Element (FE) approach. Third, the FE results were
given as input data to predict the number of cycle to failure according to a fatigue
curve. Finally, the simulation results were validated by experimental test data.

2.1 Viscoplastic constitutive equation

For the 95.5Pb-2Sn-2.5Ag solder, a viscoplastic constitutive equation was considered, taking into account time and temperature dependence in thermal cycling. It was assumed that the total strain rate (1) consists of an elastic and an inelastic creep component,

\dot{\varepsilon}_{ij} = \dot{\varepsilon}_{ij}^{el} + \dot{\varepsilon}_{ij}^{cr}    (1)

where \dot{\varepsilon}_{ij}^{el} is the elastic strain rate and \dot{\varepsilon}_{ij}^{cr} is the inelastic creep strain rate. In this work, inelastic plastic strain was neglected and only creep, which is a time-dependent mechanism, was taken into account to characterize the inelastic deformation. The evolution of creep over time can be described in three stages. In the initial stage, called primary creep, the strain rate reaches high values.

During the next stage, the creep strain grows with a low, constant slope (the strain rate); for this reason, the second part of the curve is called steady-state or secondary creep. The tertiary stage begins after necking, and in it the strain rate increases exponentially with stress until fracture occurs.

Fig. 2. Evolution of creep during time

In order to evaluate the creep strain cumulated during the test cycles, only secondary creep was considered. The correlation between stress and creep strain rate is given by the Garofalo equation [3]:

\dot{\varepsilon}_{ij}^{cr} = A \left[ \sinh\!\left( \frac{\sigma_{vm}}{\sigma_{ref}} \right) \right]^{n} e^{-\frac{Q}{RT}} \, n_{ij}    (2)

where the constants A (creep rate), n (stress exponent) and \sigma_{ref} (reference effective stress level) depend on the material. The exponential term of (2) introduces an Arrhenius-type temperature dependency: Q is the creep activation energy, R is the universal gas constant and T is the absolute temperature. As shown in (2), the correlation between creep and stress is given by the von Mises stress \sigma_{vm}, defined as

\sigma_{vm} = \sqrt{ \tfrac{3}{2} \, S_{ij} S_{ij} }    (3)

where S_{ij} is the generic deviatoric stress tensor component, so the Garofalo model is a deviatoric creep model with a creep rate proportional to the hyperbolic sine function.

2.2 FEA Simulation

Assuming the relations explained in the previous paragraph, the FE model was developed using COMSOL Multiphysics. The simulation reproduces the thermal cycle test, which consists of heating/cooling between -65 °C and 150 °C in 15 minutes, with each ramp followed by a dwell time of 15 minutes. The entire cycle lasts 60 minutes and is shown in fig. 3.

Fig. 3. Thermal cycle load (1 cycle = 60 min)
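The imposed temperature profile can be written down as a simple piecewise function. A minimal sketch, assuming the phase order ramp up, hot dwell, ramp down, cold dwell (the function and variable names are illustrative):

```python
LOW, HIGH = -65.0, 150.0     # dwell temperatures [deg C] from the test profile
RAMP, DWELL = 15.0, 15.0     # ramp and dwell durations [min]
PERIOD = 2 * (RAMP + DWELL)  # one full cycle = 60 min

def cycle_temperature(t_min):
    """Package temperature [deg C] at time t_min [minutes] in the thermal cycle."""
    t = t_min % PERIOD
    if t < RAMP:                       # heating ramp
        return LOW + (HIGH - LOW) * t / RAMP
    if t < RAMP + DWELL:               # hot dwell
        return HIGH
    if t < 2 * RAMP + DWELL:           # cooling ramp
        return HIGH - (HIGH - LOW) * (t - RAMP - DWELL) / RAMP
    return LOW                         # cold dwell
```

The resulting ramp rate is (150 - (-65))/15 ≈ 14.3 °C/min, consistent with the JEDEC-motivated limit of less than 15 °C per minute mentioned in the introduction.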

In order to reproduce the assembly process, three different sub-models were implemented (fig. 4): in the first, the die and clip attach were simulated as cooling from a temperature T0, at which the whole package is considered stress free, down to room temperature. The second part reproduced first the application of the molding compound, which occurs at a temperature T1, considering the stress coming from the die and clip attach, and then the cooling down to the starting temperature of the thermal cycle. The resulting stress and strain from the assembly FE model were the initial conditions for the sub-model that reproduced the thermal cycle.

Fig. 4. Schematic of FEM setup

All mechanical properties of the materials used for the simulation came from experimental characterization. In particular, the properties of the epoxy molding compound, considered as a function of temperature, came from DMA and TMA thermoanalytical characterization, which measure respectively the CTE and the flexural modulus of the resin versus temperature.

2.3 Fatigue curve and experimental correlation

Due to the large strains, the thermal fatigue life of the solder joint was estimated using the Coffin-Manson equation (4):

\frac{\Delta\varepsilon_{p}}{2} = \varepsilon'_{f} \, (2N)^{c}    (4)

where \Delta\varepsilon_{p} is the cumulated creep strain per cycle, N is the number of cycles to failure, and c and \varepsilon'_{f} are experimental coefficients. These coefficients were evaluated by regressing the cumulated creep strain, computed with the FE model, on the experimental life cycle data; once they are fixed, the simulation model can predict the life cycle of packages with a similar geometry.
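The calibration and prediction steps around equation (4) can be sketched as a log-log least-squares fit. Everything below is illustrative: the data points are synthetic and the coefficient values are hypothetical, not the calibrated ones from the paper.

```python
import math

def fit_coffin_manson(data):
    """Fit eps_f and c in  deps/2 = eps_f * (2N)^c  by linear least squares
    in log-log space, from (creep strain per cycle, cycles to failure) pairs."""
    xs = [math.log(2 * n) for _, n in data]
    ys = [math.log(deps / 2) for deps, _ in data]
    m = len(data)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    c = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    eps_f = math.exp(ybar - c * xbar)
    return eps_f, c

def cycles_to_failure(deps, eps_f, c):
    """Invert equation (4): N = 0.5 * (deps / (2 * eps_f))^(1/c)."""
    return 0.5 * (deps / (2 * eps_f)) ** (1 / c)

# Synthetic (deps, N) pairs generated from eps_f = 0.3, c = -0.5:
data = [(0.02, 450.0), (0.04, 112.5), (0.06, 50.0)]
eps_f, c = fit_coffin_manson(data)
```

On this synthetic data the fit recovers eps_f ≈ 0.3 and c ≈ -0.5, and cycles_to_failure(0.02, eps_f, c) returns the original 450 cycles; with real inputs, deps would come from the FE creep results and N from the experimental tests.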

3 Simulation activity

Once the theoretical and computational model was established, the assembly processes and the thermal cycle load were simulated with the FE software for several configurations of the PFLAT package for which experimental fatigue life data are available.
Finally, after the model was calibrated, the impact of different die thicknesses was evaluated through FE simulation, predicting the fatigue life with the Coffin-Manson equation found before.

3.1 Model calibration

In the analyzed packages, both the front metal-clip interface and the back metal-lead frame interface were soldered, so the cumulated creep strain was evaluated for each solder. As shown in fig. 6, the die solder joint is more critical than the clip solder joint, in agreement with the experimental observations that localize the onset of delamination at the borders of the die solder.

Fig. 6. Cumulated creep in solder joints
Fig. 7. Effective creep strain

The time history of the equivalent creep strain at the maximum strain location is shown in fig. 7. It indicates that the initial material response of the solder is not stable because primary creep occurs; therefore, the cumulated creep strain was computed after some cycles in order to let the solder response stabilize. These calculated values were used to determine the Coffin-Manson formula and to make predictions in further studies on similar packages.

3.2 Parametric analysis

After model calibration, a parametric simulation was performed to evaluate the impact of the die thickness on device reliability.
In more detail, this simulation compared the standard PFLAT (model #1) with two others having a reduced die thickness (models #2 and #3). The analysis results are summarized in table 1.

Table 1. Normalized cumulated creep and life cycle evaluation for PFLAT.

Model  Die thickness  Clip solder            Die solder
       [μm]           Cumulated  Cycles to   Cumulated  Cycles to
                      creep      failure     creep      failure
1      120            0.57       3.93        1.00       1.00
2      90             0.40       9.12        0.94       1.17
3      60             0.30       19.32       0.84       1.54

4 Conclusion

An FE model for solder joint reliability was built and calibrated against
experimental results. It makes it possible to predict the solder fatigue behavior of
Power Flat packages, and it enables design optimization and forecasting of
experimental results. In particular, the maximum cumulated creep was observed
along the borders, in agreement with the experimental observations.
The parametric analysis evaluated the impact of the silicon die thickness on
reliability improvements. The die solder remained the reliability bottleneck in every
analyzed model. A thickness decrease from 120 to 60 μm produced a creep reduction
of 16% for the die solder and of 48% for the clip solder, while the number of cycles
to failure rose by 54%.
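The percentage changes quoted in this conclusion can be recovered from the normalized values in Table 1; a quick arithmetic check (the clip solder reduction evaluates to roughly 47%, consistent with the quoted 48% given the table's two-decimal rounding):

```python
# Normalized values from Table 1, models #1 (120 um die) and #3 (60 um die).
clip_creep_120, clip_creep_60 = 0.57, 0.30
die_creep_120, die_creep_60 = 1.00, 0.84
die_cycles_120, die_cycles_60 = 1.00, 1.54

die_creep_reduction = (die_creep_120 - die_creep_60) / die_creep_120      # 16%
clip_creep_reduction = (clip_creep_120 - clip_creep_60) / clip_creep_120  # ~47-48%
cycle_increase = (die_cycles_60 - die_cycles_120) / die_cycles_120        # 54%
```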

References

1. JEDEC Standard. "Temperature cycling." JESD22-A104D, JEDEC Solid State Technology
Association, Arlington, VA (2009).
2. Lee, W. W., L. T. Nguyen, and Guna S. Selvaduray. "Solder joint fatigue models: review
and applicability to chip scale packages." Microelectronics Reliability 40.2 (2000): 231-244.
3. Garofalo, F. "An empirical relation defining the stress dependence of minimum creep rate
in metals." Trans Metall Soc AIME 227.2 (1963): 351-355.
4. Zhang, Xiaowu, et al. "Analysis of solder joint reliability in flip chip package." Interna-
tional Journal of Microcircuits and Electronic Packaging 25.1 (2002): 147-159.
5. Yao, Hua-Tang, et al. "A review of creep analysis and design under multi-axial stress
states." Nuclear Engineering and Design 237.18 (2007): 1969-1986.
6. Wong, E. H., et al. "Creep fatigue models of solder joints: a critical review." Microelec-
tronics Reliability 59 (2016): 1-12.
7. Dugal, Franc, and Mauro Ciappa. "Study of thermal cycling and temperature aging on
PbSnAg die attach solder joints for high power modules." Microelectronics Reliability 54.9
(2014): 1856-1861.
Section 5.2
Virtual and Augmented Reality
Virtual reality to assess visual impact in wind
energy projects

Piedad Eliana Lizcano1, Cristina Manchado1, Valentin Gomez-Jauregui1, César Otero1

1 Research group EGICAD, Civil Engineering School, Universidad de Cantabria, Los Castros, s/n
* Corresponding author. Tel.: +34 942 206 757. E-mail address: manchadoc@unican.es

Abstract Virtual reality techniques have been used for several decades to
complement three-dimensional modelling in engineering and other disciplines.
This technology allows engineers to view their projects in a 3D environment,
helping them better understand the different designs and bringing a new perspective
on them, and also to simulate different states of the construction process. When
assessing the visual impact of a wind farm, the use of virtual reality scenery can help
the designer integrate the visual aspects into the planning process and show the
changes produced in the different views. It is also a very powerful communication
tool that provides an effective way of presenting the visual impact, helping
stakeholders understand and imagine the future landscape.

Keywords: visual impact, wind energy, virtual reality, VIA, VR

1 Introduction

The effects produced in the landscape by large structures, such as wind farms,
have been deeply analyzed in recent decades, applying different methodologies
in order to quantitatively assess these effects [1, 2, 3, 4, 5]. Along with those
methodologies, virtual reality makes it possible to represent, simulate and compare
different proposed solutions with very realistic visualization, not only on common
devices, such as computers or laptops, but also on immersive displays, such as
large-screen displays (e.g. caves), 3D screens, or virtual reality gadgets (e.g. the
new Google Cardboard or more specific virtual reality headsets and other
wearables), after different steps of the graphic handling process. It is also a great
tool in planning or siting processes [6], providing the possibility to explore the
design in real time.

© Springer International Publishing AG 2017 719


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_72

This paper describes the virtual environment and models we are creating to il-
lustrate the visual effects generated by a wind farm project located in the North of
Spain, as well as the software and devices used to carry it out. Based on a previous
quantitative analysis carried out with the software tool Moyses [1] using different
indicators [7], its results will be explored in a virtual environment. This paper is
structured as follows: Section 2 describes the software and technologies used, the
input data needed, and the process followed to create the virtual models; Section 3
presents the results and conclusions.

2 Developing the models

2.1 Software and technologies

We are using the design software Autodesk Infraworks 360 [8] to create a GIS-
based virtual reality model. It is created from different types of input data, such as
DEMs, orthophotographs and shapefiles, which are described in Section 2.2. In
this environment it is possible to represent different scenarios, with several turbine
parameters, including height, rotor dimensions or rotation speed. 3D models can
also be included, such as vegetation (both existing forests and new vegetal barriers
proposed to reduce the visual effects) or buildings with a high level of detail. Sun,
wind and clouds can also be configured to produce a wide range of atmospheric
effects on the environment (including shadows, which allows flicker simulations),
helping to emphasize the feeling of realism. Figure 1 shows the overall appearance
of the software.

Fig. 1. Autodesk Infraworks 360 (desktop application)



The models are visualized in two different ways: on the one hand, we check
them on a workstation equipped with a 3D screen; on the other hand, a low-cost
cardboard headset viewer is used to obtain an immersive environment. This
platform, developed by Google [9], is made of pre-cut cardboard and allows VR
applications to be displayed on one's own smartphone, as shown in figure 2. The
device is inserted in the back of the cardboard and an application generates
stereoscopic renderings that are viewed through focal lenses. It also features a
conductive mechanism to interact with the device, triggering a touch event on the
phone's screen. Moreover, the smartphone's gyroscope sensor allows the pointer on
the screen to be controlled by head movements, letting the user operate interactive
elements in the virtual world. The provided SDK also opens up great possibilities,
not only for displaying static scenery, but also for creating truly interactive immer-
sive worlds or for offering augmented or mixed reality experiences.

Fig. 2. Cardboard: the smartphone is placed inside the VR headset and the app generates the two
images. The pair of lenses and the midpiece make stereoscopic vision possible.

In this project, the first attempt at virtual immersion used the application called
"Cardboard Camera" [10], which allows immersive panorama pictures to be
visualized directly. Originally, the app was created by Google to visualize only
stereo photographs taken with the application itself, but other pictures can be used
as long as the same file format is followed. This file is encoded as a compressed
JPG, with the stereoscopic information included in the Exif header in XMP format
[11].

To compose the panoramic picture (from a set of overlapping photographs
obtained from Autodesk Infraworks at a single location), we are using Image
Composite Editor (ICE) [12], a free panoramic image stitcher created by the
Microsoft Research Computational Photography group, able to automatically align
images and stitch them using different types of projection.

2.2 Workflow

Figure 3 shows the workflow for building the 3D virtual environment. The first
step is to collect and arrange different types of input data, which in our case are:
(i) digital terrain models with 5 m resolution, obtained from LIDAR flights;
(ii) shapefiles for the different features to be evaluated, such as population nuclei,
roads, vantage and touristic areas, and built-up and forest areas; (iii)
orthophotographs to superimpose on the terrain surface and represent the study
area in a realistic way; and (iv) 3D models of the wind turbines and their locations.

Except for the 3D models, all the data have been obtained from official sources,
such as the National Geographic Institute of Spain or the local government. The
turbine model used is the animated one provided by the software, with suitable
physical parameters according to our project (nacelle height and rotor diameter).

Fig. 3. Workflow for building the 3D virtual environment



Then, we have modeled the current terrain using Infraworks 360, which allows
several types of data sets to be collected in the same 3D environment. All data are
classified in different layers with a BIM approach, and it is possible to recreate the
rotation of the turbine rotors to increase the feeling of realism.

Also, a layer with the visibility values obtained by the software Moyses (the
analysis is not described here, because it is beyond the scope of this publication; in
any case it is similar to the one presented in ref [1]) is loaded in order to check in
real time which points are visible and which ones are not. A detail of the finished
model is shown in figure 4.

Fig. 4. Autodesk Infraworks model, including digital terrain model, orthophotographs and wind
turbine 3D models. Above, on the right, top view; on the left, 3D view. Below, 3D view with the
visibility map superimposed. The color indicates the number of visible turbines from every point.

Once the model is completely developed, some vantage points are defined to
capture the sets of pictures and generate the panorama images. The selection of
these points is based on the previous quantitative analysis carried out with Moyses,
and they represent specific touristic and historically valuable viewpoints. From each
spot the point of view is rotated sequentially with a 50%-60% overlap to ensure the
subsequent stitching, and screenshots are captured.
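The number of screenshots needed per vantage point follows directly from the camera's field of view and the chosen overlap; a small sketch (the 60° FOV below is an illustrative value, not one given in the paper):

```python
import math

# Screenshots needed for a full 360-degree sweep from one vantage point,
# given the horizontal field of view and the overlap required by the stitcher.
def shots_for_panorama(fov_deg, overlap):
    new_coverage = fov_deg * (1.0 - overlap)  # degrees of fresh scenery per shot
    return math.ceil(360.0 / new_coverage)

shots_for_panorama(60.0, 0.55)  # 60-degree FOV with 55% overlap -> 14 shots
```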
724 P.E. Lizcano et al.

Then, the set of captured pictures is automatically aligned, stitched and com-
pleted with ICE software (see figure 5). Several approaches exist inside the soft-
ware for representing panoramas, like spherical, cylindrical or perspective projec-
tions. In this case we have used the cylindrical projection, which produces an
image out of a series of strips that form a 360º view of the terrain to be represent-
ed, mapping the images onto the surface of a cylinder.
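The cylindrical mapping mentioned above has a standard closed form; a minimal sketch, illustrating the projection geometry only (not ICE's internal implementation):

```python
import math

# Standard cylindrical projection of a camera-frame point (X, Y, Z) onto
# the unrolled cylinder (f = focal length in pixels).
def cylindrical_project(X, Y, Z, f):
    x = f * math.atan2(X, Z)       # horizontal angle around the cylinder
    y = f * Y / math.hypot(X, Z)   # height along the cylinder axis
    return x, y
```

A point straight ahead (X = 0) maps to x = 0, and points at equal angular offsets from the optical axis land at equal horizontal spacing, which is why the stitched strips join seamlessly into a 360º band.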

Fig. 5. Detail of one of the panoramic pictures created. Above, the stitched picture before com-
pletion. Below, the final result.

After all the desired image strips have been created, the stereoscopic pair must
be composed, with its XMP information, to be properly viewed in the Cardboard
app. The panorama file should contain a single JPG photo for the left eye, and a
Base64-encoded image for the right eye in the XMP data (see figure 6). We are
currently optimizing an application to automatically encode all this information in
the compressed image.
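A minimal sketch of composing such a container, assuming the GImage field names reported in ref [11]; the exact XMP serialization expected by the Cardboard Camera app is an assumption here, not a verified writer:

```python
import base64

# The right-eye JPG is Base64-encoded and stored in an XMP packet under
# the GImage namespace; field names follow ref [11], the exact layout the
# app expects is an ASSUMPTION.
def build_stereo_xmp(right_eye_jpg: bytes) -> str:
    data = base64.b64encode(right_eye_jpg).decode("ascii")
    return (
        '<x:xmpmeta xmlns:x="adobe:ns:meta/">'
        '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">'
        '<rdf:Description '
        'xmlns:GImage="http://ns.google.com/photos/1.0/image/" '
        'GImage:Mime="image/jpeg" '
        f'GImage:Data="{data}"/>'
        '</rdf:RDF>'
        '</x:xmpmeta>'
    )
```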

Fig. 6. Detail of the XMP content inside the JPG file. The second image's data is encoded in the
XMP-GImage:Data node. Information obtained with exiftool.

After encoding the images, we stored them in a particular folder inside the
smartphone, named /DCIM/CardboardCamera/, where the application finds them
directly. The Cardboard Camera app then generates a pair of images from the
panorama on the smartphone screen, as shown in figure 8. The biconvex optical
lenses and the midpiece let the user see the stereoscopic image through parallel
vision, looking around simply by moving the head.

Fig. 8. Screenshot from cardboard app.

3 Results and conclusion

In this paper we have briefly described the way we have built a BIM terrain
model using Autodesk Infraworks, as well as how we have adapted panorama
pictures to be viewed in stereoscopic mode using a smartphone and a Google
Cardboard. The combination of the smartphone and the cardboard headset allows
immersive representation, provides an improved user experience, and opens up
many new possibilities for on-site exploration of terrain with a low-cost device. It
is also possible to work at real scale, in a dynamic environment that helps us better
understand the project and facilitates presentations.

In the case of wind farm evaluation, virtual reality and 3D models are a
powerful combination for stakeholders to understand and experience the induced
landscape and visual effects. Moreover, such a tool allows greater public partici-
pation in planning and decision processes.

Further works

Currently, our work is limited to viewing 360º static images, but an application
able to visualize the complete virtual model is being developed using both the
Cardboard SDK and the software platform Unity. This improvement will allow the
user to visualize any view in the model, avoiding subjectivity in the selection of
viewpoints.

Also, to improve the VR system, it is necessary to create new ways to interact
with the model, because common devices such as mice and keyboards are no
longer used; gestures, like head or hand movements, replace them. An acoustic
dimension, introducing sounds such as wind or wind turbine noise, could also
improve the system.

References

1. Manchado C., Otero C., Gómez-Jauregui V., Arias R., Bruschi V., and Cendrero A. Visibility
analysis and visibility software for the optimisation of wind farm design. Renewable Energy,
2013, 60, 388-401.
2. Bishop I.D. and Miller D. R. Visual assessment of off-shore wind turbines: The influence of
distance, contrast, movement and social variables. Renewable Energy, 2007, 32(5), 814-831.
3. Torres-Sibille A., Cloquell-Ballester V., Cloquell-Ballester V., and Darton R. Development
and validation of a multicriteria indicator for the assessment of objective aesthetic impact of
wind farms. Renewable and Sustainable Energy Reviews, 2009, 13(1), 40-66.
4. Minelli A., Marchesini I., Taylor F., De Rosa P., Casagrande L., and Cenci M. An open source
GIS tool to quantify the visual impact of wind turbines and photovoltaic panels. Environmen-
tal Impact Assessment Review, 2014, 49, 70-78.
5. Gibbons S. Gone with the wind: Valuing the visual impacts of wind turbines through house
prices. Journal of Environmental Economics and Management, 2015, 72, 177-196.
6. Bishop I., and Stock C. Using collaborative virtual environments to plan wind energy installa-
tions. Renewable Energy, 2010, 35(10), 2348-2355.
7. Manchado C., Gómez-Jauregui V., and Otero C. A review on the Spanish Method of visual
impact assessment of wind farms: SPM2. Renewable and Sustainable Energy Reviews, 2015,
49, 756-767.
8. Autodesk Infraworks 360. http://www.autodesk.com/products/infraworks-360, (accessed
March 2016).
9. Google cardboard. https://www.google.com/get/cardboard (accessed March 2016).
10. Camera cardboard app. https://play.google.com/store/apps/details?id=com.google.vr.cyclops
(accessed March 2016).
11. Extracting the audio & stereo pair from Cardboard Camera 3D panoramic images.
http://vectorcult.com/2015/12/extracting-the-audio-stereo-pair-from-cardboard-camera-3d-
panoramic-images (accessed March 2016).
12. Image composite editor: http://research.microsoft.com/en-us/um/redmond/projects/ice, (ac-
cessed March 2016).
Visual Aided Assembly of Scale Models with AR

Alessandro Ceruti1*, Leonardo Frizziero1 and Alfredo Liverani1

1 DIN Department, University of Bologna
* Corresponding author. Tel.: +39-051-2093452; fax: +39-051-2093412. E-mail address:
alessandro.ceruti@unibo.it

Abstract The study of methodologies to support the assembly of parts is a
challenging engineering task which can benefit from the most recent innovations
in computer graphics and visualization technologies. This paper proposes an
innovative methodology, based on Virtual and Augmented Reality, to support the
assembly of components. The strategy introduced here is based on a four-stage
procedure: first, the designer conceives the assembly sequence using a CAD
system, visualizing the scene while wearing an immersive Virtual Reality device. In
the second stage, the same sequence is developed by an inexperienced user using
the same equipment: the differences between the two assembly sequences are
recorded and exploited to detect critical points in the assembly sequence and to
develop a Knowledge Based System. Finally, a virtual user manual is produced in
Augmented Reality. When the final user employs the tool, the position of the object
to assemble is detected by tracking the position of the user's fingers. A series of
symbols and text is added to the external scene to help the end user in the assembly
procedure. A test case based on the assembly of a scale model has been developed
to evaluate the methodology. After an evaluation process, the procedure appears
feasible and presents some advantages over the state-of-the-art methodologies
proposed in the literature.

Keywords: Virtual Reality, Augmented Reality, Assembly, Marker, Task automation.

1 Introduction

Augmented Reality (AR) is a technique based on the superimposition of virtual
symbols or text on a real-time video stream of the external environment framed by
a camera. This technique was developed [1] as an alternative to Virtual Reality, in
which the user is completely immersed in a synthetic environment: in this latter
case a Head Mounted Display (HMD) or similar device

© Springer International Publishing AG 2017 727

B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_73

separates the user from the external world; with AR there is closer contact between
the user and the external environment, since the virtual images are added to a scene
belonging to the real world, to which humans are accustomed. The hardware
required by AR applications consists of a camera, a display and a marker (or a
referenced picture of the external zone where one operates). Alternatively, more
advanced devices such as see-through glasses equipped with a camera and
projectors can be used. The basic idea underlying AR applications is to synchronize
a virtual object with the external environment in a real-time, interactive way,
detecting the relative position between the observer and the external environment
using a marker: a geometrical symbol whose dimensions and features are known
a priori, and which is framed in AR applications to determine the relationships
between reference systems. Fast algorithms can be used to detect the marker in the
scene, to recognize its corners and edges, and finally its internal content; in this way
the 4x4 transformation matrix between the camera reference system and the marker
reference system (homogeneous coordinates) can be computed. Finally, the 3D
object to be added to the scene can be referenced at the correct position, so that its
position remains coherent with changes in the position and angular orientation of
the camera. A more detailed description of these algorithms can be found in [2].
It is worth noting that referenced images
of the external environment can be used as an alternative to a physical marker:
this technique is called "Markerless AR". Since the introduction of AR in the
literature, several studies have been carried out to improve the method, to find new
applications, and to test it in both industrial and everyday-life environments. From
an industrial perspective, the activities that can benefit most from AR are: manufac-
turing [3], maintenance [2], digital user manuals, assembly of complex systems,
and design checks for ergonomics and aesthetic evaluations [4]. Focusing on
studies of AR-assisted assembly tasks, the literature proposes several approaches.
To give some examples, [5] is one of the earlier studies highlighting how assembly
tasks can benefit from the support of AR techniques. In [6], a Virtual Interaction
Panel was developed to help the user correctly follow the assembly sequence of a
toy. An assembly procedure design based on AR, with both marker-based and
markerless approaches, is also described in [7], while the monitoring of a whole
industrial assembly line with the support of AR is presented in [8]. Useful
indications on how to develop an AR-based application to support maintenance
can be found in [9]. The literature suggests that further studies and methodologies
must be investigated and evaluated, since developing a user-friendly and effective
environment to support assembly in AR is not a trivial task. Another problem is
related to the preparation of the models to be used in AR: a trade-off between model
detail and computational weight must be found to provide good realism with
real-time capability in rendering the augmented scene. The paper is organized as
follows: Section 2 describes the layout of the environment we developed to support
assembly tasks in AR. Section 3 describes a case study, based on the AR-assisted
assembly of a scale model, developed to validate the methodology. Section 4 offers
some brief comments on the advantages and drawbacks of the methodology; a
conclusion ends the paper.
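As a minimal illustration of the marker-based referencing step described in this section, the 4x4 homogeneous transform estimated from the marker maps marker-frame points into the camera frame (R and t below are illustrative values, not the output of a real detector):

```python
import numpy as np

# Build the 4x4 homogeneous transform from the marker's estimated
# rotation R (3x3) and translation t (3,) relative to the camera.
def make_transform(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Map a point expressed in marker coordinates into camera coordinates.
def to_camera_frame(T, p_marker):
    p = np.append(p_marker, 1.0)  # homogeneous coordinates
    return (T @ p)[:3]

# Marker 5 units in front of the camera, no rotation: a virtual point at
# (1, 2, 0) in marker coordinates lands at (1, 2, 5) in camera coordinates.
T = make_transform(np.eye(3), [0.0, 0.0, 5.0])
p_cam = to_camera_frame(T, [1.0, 2.0, 0.0])
```

Re-estimating T every frame is what keeps the rendered 3D object coherent with the camera's changing position and orientation.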

2 Methodology description

2.1 Motivation

The simplest approach to supporting assembly with AR is to place a marker in the
scene and show the user the virtual model of a part positioned in the already-
assembled group, providing a final check of the correct position and orientation.
According to the literature, text, animations and symbols are used to guide the
user in the task. In this case there is no feedback on the user's hand movements,
since the part's position in space is not tracked. In another approach to the assembly
procedure in AR, the position of the part is tracked and acquired, so that there is
more effective control of the assembly operations. In this latter case, several tech-
niques can be applied to estimate the position and orientation of the part in space.
One classical way is to add a marker to the surface of the component, or to use a
series of IR sources (LEDs) fixed to the object surface to track the position and
define an orientation in space.

The development of AR-based user manuals in an industrial environment presents
some critical aspects: the assembly sequence is often designed by an expert de-
signer and can be unfriendly for an unskilled user to follow; the preparation of the
step-by-step animations for the assembly sequence in AR can be a time-consuming
and painstaking activity, especially with complex groups made of a large number
of parts; the assembly sequence does not take into consideration the way in which
a user grabs the part to be assembled, but typically shows only the position of the
part itself; and an overly complex assembly sequence can bore the final user, with
the risk that he/she overlooks the virtual instructions and proceeds following
his/her own experience. Due to these limitations, a new methodology to design
assembly sequences has been developed and is described below.

2.2 Step 1: Assembly procedure design in CAD

In the first stage of the procedure, based on Virtual Reality techniques, the designer
of the assembly selects the parts sequence and defines the trajectory to follow to
add the components to the assembly in a CAD environment, using an HMD or a
similar VR device. This procedure immerses the designer in the environment in a
more realistic way than a traditional screen and is helpful in evaluating the relative
dimensions of the parts and the trajectories: the assembly designer records the
assembly sequence. Subsequently, the same task of defining the assembly sequence
is carried out by a final user who is unskilled in this operation.

2.3 Step 2: Detection of critical assembly phases

The sequence and assembly trajectories proposed by the end user are recorded and
compared with the designer's procedure. The differences in part sequence and
trajectory are identified and exploited to improve the procedure developed by the
skilled designer. In this way, the AR user manual developed later can effectively
help the final user at the most critical points of the procedure, providing more
information where needed and avoiding cluttering and confusing him/her in the
trivial parts of the assembly. A Knowledge Based Engineering system can also be
integrated to provide troubleshooting covering the most frequent errors the user
can make during the assembly sequence.

2.4 Step 3: Development of an AR based manual

In the third step of the procedure, an AR based User Manual for Assembly
(ARUMA) can now be produced, with some peculiarities conceived to support
users in complex tasks. The AR manual contains text and simple animations to be
added to the video stream of the assembly zone. Further details are given in
Section 3 of the paper.

2.5 Step 4: Assembly of parts by end user

The fourth step of the procedure covers the assembly of the group by the end user:
the AR-based user manual and the parts to be assembled are available to him/her.
The user starts the assembly operations by placing a marker near the zone in
which the model will be assembled: in this way, a virtual copy of the object to
assemble can be viewed at real scale to check the physical assembly at different
stages. A second marker can be placed to support the assembly of subparts or
groups. Moreover, the AR manual also provides hints on how to move the parts
from the boards to the growing assembly; instead of tracking the position and ori-
entation of the part to be assembled with markers glued to the part, or with pens
equipped with infrared sensors (Wiimote or similar), the tool we propose tracks
the position of the user's fingers through devices like the Leap Motion tracker
(https://www.leapmotion.com). This procedure avoids problems related to the
masking of the part to assemble (or the partial hiding of markers/IR sources, with
the loss of tracking); moreover, it is more user-oriented, since the hand motion is
strongly correlated with the positioning and orientation of the part in space. Ac-
cording to the literature, optical flow techniques (see [10] for a detailed description)
have proven effective in estimating the trajectories of objects in three-dimensional
space by comparing the positions of key points in images from a video stream, and
can be considered a mature technology (see [11]) which can be integrated in the
AR-guided assembly methodology. Figure 1 provides a graphical representation of
the methodology layout.
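As a toy illustration of the optical flow idea mentioned above, a single-window Lucas-Kanade estimate (a generic textbook scheme, not the authors' implementation) recovers a small translation between two frames:

```python
import numpy as np

# Single-window Lucas-Kanade: solve the brightness-constancy equation
#   Ix*u + Iy*v = -It
# in a least-squares sense over all pixels, yielding one global flow (u, v).
def lucas_kanade(I0, I1):
    Iy, Ix = np.gradient(I0.astype(float))    # spatial gradients (rows=y, cols=x)
    It = I1.astype(float) - I0.astype(float)  # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic test frame: a smooth blob shifted one pixel to the right
# should yield a flow estimate close to (1, 0).
x = np.arange(32.0)
X, Y = np.meshgrid(x, x)
frame0 = np.exp(-((X - 16) ** 2 + (Y - 16) ** 2) / 50.0)
frame1 = np.roll(frame0, 1, axis=1)
u, v = lucas_kanade(frame0, frame1)
```

Real trackers solve this per window around detected key points (and on an image pyramid for large motions), but the least-squares core is the same.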

Fig. 1. Methodology layout.

The methodology has been applied to some case studies to check its feasibility
and verify a possible implementation: one of these tests is presented in the next
section to better illustrate the procedure.

3 Case study

The case study we present in this paper deals with the assembly of a simple scale
model of a car produced by Tiger (http://www.tiger-stores.it/). Figure 2 shows the
car model features and the boards with plastic components to be assembled. As
Figure 3 shows, all the parts have been named and coded according to the board
they belong to. Figure 4 illustrates the first stages of the methodology: the
assembly sequence is developed within a CAD system by an expert designer and
by an unskilled end user, to evaluate whether there are differences between the
approaches to the assembly procedure and to highlight critical working phases.

Fig. 2. Scale model car parts and box.



Fig. 3. Boards with parts (arrows point to part labels).

Fig. 4. CAD simulation of the assembly procedure.

Figure 4 depicts, for instance, the CAD simulation of the assembly procedure for
the wheels, each made of a rim and a tyre. Subsequently, after detection of the
most critical phases, an AR-assisted user manual was implemented, as Figure 5
shows.

Fig. 5. AR based user manual.

With reference to Figure 5, two markers are placed on a flat surface: one L-
shaped, to reference the assembly, and another chessboard-shaped, to support the
assembly of sub-groups. A device equipped with a camera and a screen, like a
tablet or a PC, frames the assembly zone, and virtual text or objects are added to
the video stream. In frames (1) and (2) the user selects the A1 part from the "A"
board, according to the virtual instructions overlaid on the video stream of the
assembly zone. In (3), the A1 chassis is placed close to the L-shaped marker. The
finger position of the user grabbing the board is tracked, so that an arrow suggest-
ing the hand movement to follow appears. In frames (4), (5) and (6) the AR-based
user manual suggests selecting the front wheel rim and indicates the hand
movement required to place it close to the chessboard-shaped marker. This is
necessary to assemble the front wheel sub-model (coded H1), which is made of
two parts: the rim (coded A4) and the tyre (B2). Virtual text suggests the
operations to follow, and new instructions are projected once the end user con-
firms the end of a phase, as frames (7), (8) and (9) show. Once sub-group H1 has
been assembled, the AR-based user manual shows the procedure to assemble the
H1 group onto the main assembly (P1): frames (10), (11) and (12) show the virtual
text suggesting the operations to follow and the arrows representing the
movements of the end user's hand required to move the H1 assembly from the
chessboard marker to the P1 assembly. The step-by-step assembly sequence goes
on until the final model is obtained.

4 Evaluation of the tool

New digital devices have been introduced to support the design and sketching
[12] aiming to increase the capability provided by traditional drawing on paper
sheets. In a similar manner, the use of Augmented Reality based user manuals
technology will increase in future to support the maintenance of complex systems
in a more effective and exhaustive way respect to paper based supports. The sim-
ple case study herein presented should be considered as a test to validate a more
general methodology to be applied in more complex design scenarios. The tool
does not assure that the user selects the correct part needed for the assembly (it
would be in fact complex to detect single parts of a large assembly) but provides
hints on the movements of the hands assuring the assembly of the part in the cor-
rect way. Care should be given to the counter instinctive assembly phases, while it
could be confusing (and it needs also a long time to prepare Augmented Reality
animations) to support in a detailed way all the building phases of the assembly.
The use of more than one marker allows the user to prepare sub-groups or to temporarily
“park” components while operations are carried out on the main assembly. The main benefit
of this approach relates to the fact that no marker is needed on the parts to be
assembled: an optical flow based strategy can be exploited to estimate the position of
the parts by simply tracking the position of the user’s hand in space. On the
other hand, there is no guarantee that the user selects the correct part during the
assembly: in this “lean” approach the user knows the starting point of the part to
be assembled on the board, and its final position. Only the user’s hand trajectory is
suggested.
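As a rough illustration of such a tracking strategy, a minimal block-matching step can stand in for a true optical flow method [11]. The sketch below is illustrative only (synthetic frames, an assumed patch location, and a simplistic exhaustive search; a real system would use a proper optical flow estimator):

```python
import numpy as np

def track_patch(prev_frame, next_frame, top_left, size=8, search=4):
    """Estimate where a small grayscale patch (e.g. on the user's hand)
    moved between two frames via exhaustive block matching, a very basic
    stand-in for optical-flow tracking."""
    y, x = top_left
    patch = prev_frame[y:y + size, x:x + size].astype(float)
    best_cost, best_pos = np.inf, top_left
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ny, nx = y + dy, x + dx
            if ny < 0 or nx < 0 or ny + size > next_frame.shape[0] \
                    or nx + size > next_frame.shape[1]:
                continue  # candidate window falls outside the frame
            cand = next_frame[ny:ny + size, nx:nx + size].astype(float)
            cost = np.sum((patch - cand) ** 2)  # sum of squared differences
            if cost < best_cost:
                best_cost, best_pos = cost, (ny, nx)
    return best_pos

# Synthetic test: a bright 8x8 blob shifted by (2, 3) pixels between frames.
f0 = np.zeros((64, 64)); f0[20:28, 20:28] = 255
f1 = np.zeros((64, 64)); f1[22:30, 23:31] = 255
print(track_patch(f0, f1, (20, 20)))  # -> (22, 23)
```

Repeating this step frame after frame yields the hand trajectory that the AR manual compares against the suggested one.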

5 Conclusion

A new procedure to develop user manuals in Augmented Reality has been intro-
duced. It is based on a sequence of phases aiming to overcome some of the prob-
lems highlighted in the current literature. The comparison, in the CAD environment,
between the assembly sequences developed by an expert designer and by the end user
can help detect the most critical phases of the sequence. An innovative
AR-based user manual has been developed: markers are used to reference the vir-
tual model of the assembly (or of sub-groups) with respect to the real parts. Besides showing the
correct position of the partially assembled object, the AR environment is able to track
the end user’s hand motion. In this way, the AR manual suggests a trajectory to be
followed by the hand instead of simply suggesting a position or orientation of the
part, which can lead to counterintuitive movements. A case study showing some
phases of the assembly of a scale model of a car is presented to give the
reader an idea of the procedure’s features. The methodology has been preliminarily
applied to a simple case, but will be extended to more complex scenarios to better
assess its efficiency and effectiveness.

References

1. Azuma, R.T. A Survey of Augmented Reality, Presence: Teleoperators and Virtual
Environments, 1997, pp 355-385.
2. De Marchi, L., Ceruti, A., Marzani, A., Liverani, A. Augmented reality to support on-
field post-impact maintenance operations on thin structures, Journal of Sensors, 2013,
art. no. 619570.
3. Novak-Marcincin, J., Barna, J., Janak, M., Novakova-Marcincinova, L. Augmented
reality aided manufacturing (2013) Procedia Computer Science, 25, pp. 23-31.
4. Fiorentino, M., Uva, A.E., Gattullo, M., Debernardis, S., Monno, G. Augmented reali-
ty on large screen for interactive maintenance instructions, Computers in Industry, 65
(2), 2014, pp. 270-278.
5. Baird, K.M., Barfield, W., 1999, “Evaluating the effectiveness of augmented reality
displays for a manual assembly task”, Virtual Reality, 4(4), pp 250-259.
6. Yuan, M.L., Ong, S.K., Nee, A.Y.C. Augmented reality for assembly guidance using a
virtual interactive tool (2008) International Journal of Production Research, 46 (7), pp.
1745-1767.
7. Pang, Y., Nee, A.Y.C., Ong, S.K., Yuan, M.L. Assembly Feature Design in an Aug-
mented Reality Environment. Assembly Automation Journal 26(1), 34–43 (2006).
8. Kollatsch, C., Schumann, M., Klimant, P., Wittstock, V., Putz, M. Mobile augmented
reality based monitoring of assembly lines (2014) Procedia CIRP, 23 (C), pp. 246-251.
9. Webel, S., Bockholt, U., Keil, J. Design criteria for AR-based training of maintenance
and assembly tasks (2011) Lecture Notes in Computer Science (including subseries
Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 6773
LNCS (PART 1), pp. 123-132.
10. Liverani, A., Leali, F., and Pellicciari, M, Real-time 3D features reconstruction
through monocular vision, International Journal on Interactive Design and Manufac-
turing (IJIDeM), May 2010, Volume 4, Issue 2, pp 103-112.
11. Fleet, D.J. and Weiss, Y., Optical Flow Estimation, In Paragios; et al. Handbook of
Mathematical Models in Computer Vision, 2006. Springer. ISBN 0-387-26371-3.
12. Liverani, A., Ceruti, A., and Caligiana, G., Tablet-based 3D sketching and curve re-
verse modelling, International Journal of Computer Aided Engineering and Technolo-
gy (IJCAET), Vol. 5, No. 2/3, 2013.
Section 5.3
Geometric Modelling and Analysis
Design and analysis of a spiral bevel gear

Charly LAGRESLE1, Jean-Pierre de VAUJANY1* and Michèle GUINGAND1


1
Université de Lyon, CNRS, INSA-Lyon, LaMCoS UMR5229, F-69621, France, Bât. Jean
d’Alembert, 18-20 rue des Sciences, 69621 Villeurbanne Cedex
* Corresponding author. Tel.: +33 4 72 43 85 46; fax: +33 4 72 43 89 30. E-mail address:
jean-pierre.devaujany@insa-lyon.fr

Abstract Spiral bevel gears are used in power transmission systems with two
intersecting shafts. The topic of this paper is a model for assisting spiral bevel gear
design, depending on meshing parameters. The global parameters of the gear are
calculated with the standard ISO 23509 equations. A prediction of the profile
modifications has also been performed to center the load pattern on the flank and
to minimize the maximum contact pressure. A numerical simulation is achieved
with the ASLAN software, developed by the LaMCoS laboratory for calculating the
quasi-static load sharing of spiral bevel gears.

Keywords: Spiral bevel gear, design, profile modification, contact pressure

1 Introduction

Spiral bevel gears are used in power transmission systems with two intersecting
shafts. These gears carry large loads and generally operate at high rotational
speeds. In the history of bevel gears, aspects from design to production control
have been entrusted almost exclusively to the empirical methods provided by cutting ma-
chine manufacturers (Gleason Corporation and Klingelnberg) [1,2]. Few authors
have proposed an alternative to the conventional cutting process [3,4], mostly on 5-axis
milling machines. Research works have addressed the geometry and the
calculation of the load sharing between the teeth in contact [5-9]. This paper aims
to introduce a simple model to help and accelerate the design of spiral bevel gears
and their associated complex geometries.

In the second part of this paper, the behavior of the gear under load will be op-
timized by using profile modifications. On cutting machines, profile modifications
are often intimately linked to machine and tool parameters. For example, Artoni
et al. proposed a method to optimize the contact pattern of a hypoid gear by find-
ing the optimal ease-off topography and identifying the requirements in terms of

© Springer International Publishing AG 2017 739


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_74

machine-tool settings for the generation of this surface [10]. In their papers,
Mermoz et al. presented a script able to optimize the contact path and minimize
the quasi-static transmission error by controlling six CNC grinding machine-tool
parameters [11-12]. In our case, the goal is to correct the microgeometry of the
gear to provide a centered contact path with reduced contact pressures.

The software ASLAN [3], developed at the Contact and Structural Mechanics
Laboratory (LaMCoS) at INSA Lyon, is used for the simulation of the gear mesh-
ing. It is capable of generating fully analytical spiral bevel gear geometries. Based
on these, ASLAN determines potential contact lines and calculates the quasi-static
load sharing of the gear. The displacement compatibility equation is solved using
the method of the influence coefficients and the Boussinesq theories are used to
obtain the contact deformations [13]. Due to the high number of design parame-
ters, the generation of such geometries can be complex. A script enabling fast di-
mensioning of spiral bevel gears is presented in the next section.

2 Simplification of the process of gear creation

The geometric definition of a spiral bevel gear requires many parameters. In
order to simplify their generation, a numerical tool has been developed. Six parame-
ters are needed to define all the XX parameters of the gears: the applied torque,
the gear ratio, and the pressure, helix and deflection angles. The use of the functions
and abacuses of the standard ISO 23509 [14] permits this simplification and al-
lows the calculation, among other parameters, of the number of teeth, the width of
a tooth, multiple diameters, and the addendum and dedendum angles. Two algo-
rithms are provided for the calculation of the gear parameters. The first one is
based on the abacuses given in the ISO 23509 standard: the tooth width, the tooth
number and the external gear diameter are estimated with the curves. The other
gear parameters are calculated by using the functions provided by the standard
method. The second algorithm, which requires a few more parameters, computes
every geometric parameter with those functions. The developed calculation mod-
ule and its graphical user interface enable a simplified generation of spiral-bevel
gear geometries in a few seconds.
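The full ISO 23509 procedure derives many parameters from these inputs; as a flavour of the kind of relation involved, the classic pitch-cone angles follow directly from the tooth numbers and shaft angle. This is a hedged sketch using standard textbook formulas (the function name is illustrative, not the tool described above):

```python
import math

def pitch_cone_angles(z1, z2, shaft_angle_deg=90.0):
    """Basic bevel-gear pitch cone angles from tooth numbers and shaft
    angle: tan(delta1) = sin(sigma) / (u + cos(sigma)), with u = z2/z1.
    The actual ISO 23509 procedure computes many more parameters."""
    sigma = math.radians(shaft_angle_deg)
    u = z2 / z1  # gear ratio
    delta1 = math.atan2(math.sin(sigma), u + math.cos(sigma))  # pinion cone angle
    delta2 = sigma - delta1                                    # wheel cone angle
    return math.degrees(delta1), math.degrees(delta2)

# Example with the 32/47 tooth pair used later in the paper, 90 deg shafts:
d1, d2 = pitch_cone_angles(32, 47)
print(round(d1, 2), round(d2, 2))  # pinion ~34.25 deg, wheel ~55.75 deg
```

For a 90° shaft angle this reduces to delta1 = arctan(z1/z2), and the two cone angles always sum to the shaft angle.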

3 Optimization of the gears by using profile modifications

Once the desired gears are created, their behavior under load has to be improved. A
prediction model of the profile modifications has been developed to center the
load pattern on the flank and to minimize the maximum contact pressure. Its aim
is to predict the optimum locations and depths of profile modifications based on a
Design of Experiments (DoE). These parabolic profile modifications can be de-
fined by four parameters: XTan and YTan represent the position of the tangential
point of the curves on the gear flank, while Z1TB and Z2TB determine the
depth of the profile modifications on the tip and on the toe of the tooth,
respectively (Fig. 2.).

Fig. 2. Characteristics of parabolic profile modifications on a spiral bevel gear tooth flank

Table 1. Inputs and levels of the three-level factorial design


Inputs Low Level (-1) Middle Level (0) High Level (+1)
XTan 25% of total width 50% of total width 75% of total width
YTan 25% of total width 50% of total width 75% of total width
Z1TB 5μm 20μm 35μm
Z2TB 5μm 20μm 35μm
Torque 900Nm 1200Nm 1500Nm

The DoE allows the prediction of five outputs: the X and Y positions of the
center of gravity of the contact pressures (dot in Figs. 5 and 6), the maximum pressure
value on the gear flank, the transmission error amplitude (AET) and the contact
area size. Five parameters define the inputs of the DoE: the four profile modifica-
tion parameters (positions and depths) and the input torque. Each input has three
equivalent levels, coded as (-1, 0 and +1), as shown in Table 1. The three-level
factorial design is based on 5 inputs and requires a total of 243 simulations.
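The run count follows directly from the full factorial structure: every combination of three coded levels over five factors gives 3^5 = 243 simulations. A minimal sketch (factor names as in Table 1):

```python
from itertools import product

# Coded levels of the five DoE inputs (three levels each, as in Table 1).
factors = ["XTan", "YTan", "Z1TB", "Z2TB", "Torque"]
levels = (-1, 0, +1)

# Full three-level factorial design: every combination of coded levels.
design = [dict(zip(factors, combo))
          for combo in product(levels, repeat=len(factors))]

print(len(design))  # 3**5 = 243 simulation runs
print(design[0])    # first run: all factors at their low level (-1)
```

Each of the 243 coded rows is then decoded to physical values (e.g. Z1TB level -1 means 5 μm) and fed to one ASLAN simulation.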

4 Application

This optimization process is tested on a spiral-bevel gear with the following specifica-
tions: 32/47 teeth, pressure and spiral angles of 20° and 35° respectively, and a shaft
angle of 90°. The tooth width is 26 mm and the module is 2.256 mm. The model
simulates a gear rotation at 120 rpm under 1200 Nm. Those parameters are sufficient
to generate the desired gear geometries shown in Fig. 3. Boundary conditions are
added to the system for the upcoming simulations in ASLAN.

Fig. 3. Generated pinion (left) and wheel (right). The modelling of ball and friction bearings
(yellow area) is obtained by forcing their degrees of freedom (red axis). The fixation of the body
creates the resistive torque needed for the calculations.

5 Results of the design of experiments

Once the 243 simulations are completed, the results are analyzed. A prediction model is
obtained for each of the five outputs, which can now be described as a polynomial
function of the input parameters. The predicted output values are close to the sim-
ulation results, with the exception of the predicted values of low amplitudes of
the transmission error (Table 2. and Fig. 4).

Table 2. Relative prediction errors with the calculated results (ASLAN) as reference.

Outputs Centroid X and Y position | Maximum contact pressure | Contact area percentage | Amplitude of transmission error
Min Error +0.74% +10.81% +1.62% +42.06%
Max Error -0.61% -8.64% -2.28% -44.40%
RMSE 0.09% 17 MPa 0.36% 0.003 mrad

Fig. 4. Amplitude of the transmission error: predicted vs simulated values. The unfilled circles
show a relative error above the ± 15% limit.
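The paper does not name the DoE software or its exact fitting method; a common choice for such polynomial response surfaces is ordinary least squares on a model matrix with intercept, linear, and second-order (square and interaction) terms. A hedged sketch, with a synthetic response standing in for one of the 243 ASLAN outputs:

```python
import numpy as np
from itertools import product, combinations_with_replacement

# Coded settings of the 3^5 full factorial, as in the paper's DoE.
X = np.array(list(product((-1, 0, 1), repeat=5)), dtype=float)

# Synthetic response standing in for one simulated output (illustrative
# only): intercept 400, one linear effect, one interaction, plus noise.
rng = np.random.default_rng(0)
y = 400 + 30 * X[:, 2] + 10 * X[:, 0] * X[:, 2] + rng.normal(0, 1, len(X))

# Quadratic model matrix: intercept, the 5 linear terms, then all
# second-order terms (squares and pairwise interactions).
cols = [np.ones(len(X))] + [X[:, k] for k in range(5)]
for i, j in combinations_with_replacement(range(5), 2):
    cols.append(X[:, i] * X[:, j])
A = np.column_stack(cols)

# Least-squares fit: one polynomial prediction model per output.
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(round(float(coef[3]), 1))  # coefficient of the 3rd factor, near 30
```

With the fitted coefficients, each output can be predicted for any coded input combination without rerunning the quasi-static simulation, which is what makes the subsequent optimization cheap.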

Logically, the centroid position of the contact pressures is fully controlled by
the four profile modification parameters. XTan and Z1TB have a significant impact on
the vertical position of the centroid of contact pressures, while its horizontal
position is governed by the Z2TB and YTan profile modifications (Table 3).

Table 3. The five most influential factor combinations. (XXXX)

N° Centroid X Centroid Y Maximum Pressure Contact area AET


n°1 XTan YTan Z1TB XTan YTan
n°2 Z1TB Z2TB.YTan XTan XTan² Z2TB.XTan²
n°3 Z1TB.XTan XTan C YTan XTan.YTan²
n°4 XTan² YTan.C Z1TB².XTan Z2TB.YTan² Z2TB
n°5 Z1TB².XTan Z2TB².YTan Z1TB.XTan² YTan² Z1TB.XTan²
∑ 54.9% 56.5% 31.2% 29.0% 19.2%

These four profile modification parameters alone and their polynomial combi-
nations control more than 79% of the centroid position, and the first five factor
combinations about 55%. However, the prediction of the amplitude of the trans-
mission error and of the maximal contact pressure is more difficult: the five most in-
fluential input factor combinations account for less than 29% and 19% of the total pre-
diction, respectively. Finally, the prediction of the maximal contact pressure seems to be
controlled only by the vertical profile modifications and the applied torque.
In our case, optimizing the behavior under load means lowering the
maximal contact pressure and the AET and maximizing the contact area while center-
ing the centroid of pressure on the gear flank [14]. The DoE software proposes
multiple sets of input values meant to meet these criteria. The most promising set
is introduced in ASLAN and the load sharing calculation is carried out. The outputs
given by the prediction functions and the results of this simulation are compared (Ta-
ble 4.).

Table 4. Proposed set of inputs and related predicted outputs by DoE and calculated by ASLAN

Inputs Proposed by DoE Outputs Predicted Calculated Error


XTan 32% Centroid X 53.9% 54.1% 0.29%
YTan 61% Centroid Y 55.0% 54.9% 0.18%
Z1TB 8 μm Max. pressure 436 MPa 391 MPa 11.6%
Z2TB 31 μm Contact area 80.9% 79.2% 2.15%
Torque 900 Nm AET 0.0202 mrad 0.0135 mrad 49.6%

The predicted values of the position of the centroid are consistent with the sim-
ulation: the relative error made by the prediction is about 0.30%. The same conclusion
can be drawn regarding the contact area size prediction, with a relative error of
2.15%. The prediction of the maximal contact pressure shows a relative error of
11% (45 MPa), which is acceptable given that the range of pressures
in the experimental design is about 4100 MPa (tip side effect). The prediction and
the result of the transmission error amplitude are of the same order of magnitude.
Nevertheless, the relative error of this output reaches 50%. The simulated maxi-
mal contact pressure reaches 391 MPa, which is less than the predicted value of
436 MPa. Compared to the maximal contact pressure of the non-modified gear
(4500 MPa), this value is very satisfying.
Fig. 5 shows the non-optimized gear. The gear geometry is fully analytical,
based on the spherical involute theory: under those conditions, the surfaces are per-
fectly paired. This implies a contact area covering the entire tooth surface. At the
tip of the gear, the side effect generates high pressures, beyond 4.5 GPa. In order to
avoid those high pressures and the associated high bending deformation of the
tooth, the gear has been optimized (Fig. 6.). The general pattern observed is cor-
rect and mainly centered, though a slight high-pressure area is localized on the heel of the
gear, with reasonable contact pressures (Fig. 6.).


Fig. 5. Contact pressure on the gear flank without any profile modifications.


Fig. 6. Contact pressure on the gear flank with the optimum proposed profile modifications. The
maximal pressure reaches 391MPa.

Conclusion

Starting with only macroscopic parameters such as the torque and the gear ra-
tio, the model enables a fast and simplified design of spiral-bevel gears by using
the formulations or the abacuses of the ISO 23509 standard. Once the generated gear
is adapted to the operating conditions – by adding axes or designing thin rims – a
243-run Design of Experiments (DoE) is carried out. It allows the
prediction of various results and proposes multiple sets of profile modifications to
improve the gear behavior under load. One of those sets has been shown to be
effective in centering the contact area, reducing the maximal contact pressure and
the transmission error amplitude. This optimization process has also been tested
on other spiral-bevel gear geometries and it provides similar conclusions.

References

1. M. Lelkes, “Définition des engrenages Klingelnberg,” Institut National des Sciences Appli-
quées (INSA) de Lyon, 2002.
2. F. L. Litvin, W.-J. Tsung, and H. Lee, “Generation of Spiral Bevel Gears With Conjugate
Tooth Surfaces and Tooth Contact Analysis,” pp. 1–127, 1987.
3. J. Teixeira Alves, “Définition analytique des surfaces de denture et comportement sous
charge des engrenages spiro-coniques,” Institut National des Sciences Appliquées (INSA)
de Lyon, 2012.
4. Álvarez, A., et al. "Large Spiral Bevel Gears on Universal 5-axis Milling Machines: A
Complete Process." Procedia Engineering 132 : 397-404, 2015
5. J.-P. de Vaujany, M. Guingand, D. Remond, and Y. Icard, “Numerical and Experimental
Study of the Loaded Transmission Error of a Spiral Bevel Gear,” J. Mech. Des., vol. 129,
no. 2, p. 195, 2007
6. Simon V. V. Design and Manufacture of Spiral Bevel Gears with Reduced Transmission
Errors. Journal of Mechanical Design, Transactions of the ASME, Vol. 131, p. 041007,
11p, 2009.
7. Litvin F. L., Fuentes A., Hayasaka K. Design, Manufacture, Stress Analysis, and Experi-
mental Tests of Low-Noise High Endurance Spiral Bevel Gears. Mechanism and Machine
Theory, Vol. 41, N°18, p. 83-118, 2006
8. Bibel G. D., Kumar A., Reddy S., Handschuh. F. Contact stress analysis of spiral bevel
gears using finite element analysis. Journal of Mechanical Design, Transactions of the
ASME, Vol. 117(A), n° 2, p. 235- 240, 1995.
9. Astoul, J., Geneix, J., Mermoz, E., & Sartor, M. A simple and robust method for spiral
bevel gear generation and tooth contact analysis. International Journal on Interactive Design
and Manufacturing (IJIDeM), 7(1), 37-49, 2013.
10. Artoni A., Kolivand M., Kahraman A. An ease-off based optimization of the loaded trans-
mission error of hypoid gears. Journal of Mechanical Design, Transactions of the ASME,
Vol. 132, pp. 011010-9p, 2010.
11. Mermoz, E., Astoul, J., Sartor, M., Linares, J. M., & Bernard, A. A new methodology to
optimize spiral bevel gear topography. CIRP Annals-Manufacturing Technology, 62(1),
119-122, 2013.
12. Astoul, J., Mermoz, E., Sartor, M., Linares, J. M., & Bernard, A. New methodology to re-
duce the transmission error of the spiral bevel gears. CIRP Annals-Manufacturing Technol-
ogy, 63(1), 165-168, 2014.
13. Guingand M., De Vaujany J-P, Jacquin C. Y, ”Quasi-static analysis of a face gear under
torque. Computer methods in applied mechanics and engineering,”, Vol. 194, p. 4301-4318,
2005.
14. International Organization for Standardization, Bevel and hypoid gear geometry, Géométrie
des engrenages coniques et hypoides ISO/DIS 23509, 2005.
Three-dimensional face analysis via new
geometrical descriptors

Federica MARCOLIN1*, Maria Grazia VIOLANTE1 , Sandro MOOS1, Enrico


VEZZETTI1, Stefano TORNINCASA1, Nicole DAGNES1, and Domenico
SPERANZA2
1
Dipartimento di Ingegneria Gestionale e della Produzione, Politecnico di Torino, corso Duca
degli Abruzzi 24, Torino 10129, Italy
2
Dipartimento di Ingegneria Civile e Meccanica, Università degli Studi di Cassino e del
Lazio Meridionale, Viale dell'Università, Cassino 03043, Italy.
* Corresponding author. Tel.: +39-011-090-7205. E-mail address:
federica.marcolin@polito.it

Abstract The 3D face has recently been investigated for various applications, including
biometrics and diagnosis. Describing the facial surface, i.e. how it bends and which
kinds of patches it is composed of, is the aim of studies in Face Analysis, whose ul-
timate goal is to identify which features could be extracted from three-dimensional
faces depending on the application. In this study, we propose 54 novel geometrical
descriptors for Face Analysis. They are generated by composing primary geomet-
rical descriptors such as mean, Gaussian, and principal curvatures, shape index, curv-
edness, and the coefficients of the fundamental forms. The new descriptors were
mapped on 217 facial depth maps and analysed in terms of descriptiveness of fa-
cial shape and exploitability for localizing landmark points. Automatic landmark
extraction stands as the final aim of this analysis. Results showed that the newly
generated descriptors are suitable for 3D face description and for supporting landmark
localization procedures.

Keywords: 3D Face; Face Analysis; Landmarks; Geometry; Face Recognition.

1 Introduction

Face Analysis has been used in recent decades to support Face Recognition (FR),
Facial Expression Recognition (FER), and various medical applications such as cor-
rective surgery, diagnosis, and prenatal ultrasound. The third dimension was brought
into this research to improve accuracy and avoid issues such as lighting variations.
3D face data, which are often given as non-connected point clouds, i.e. depth
maps, allow the use of geometry and related geometrical entities in the description
© Springer International Publishing AG 2017 747


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_75

of the facial surface. The so-generated features make it possible to compare faces and,
more generally, to extract information relevant to the context of application.
The previous work of our research group has provided an application frame-
work for entities coming from the Differential Geometry context when adopted as fa-
cial descriptors [1]. These descriptors have been applied to different contexts and
have proven to be key 3D features for human faces at different ages [2, 3, 4, 5, 6,
7, 8, 9, 10, 11]. These descriptors are briefly described below in conceptual order.
The First and Second Fundamental Forms provide the first six descriptors of
the set. They are used to measure distances on surfaces and are defined by
E du^2 + 2F du dv + G dv^2 and e du^2 + 2f du dv + g dv^2 respectively, where E, F,
G, e, f and g are their Coefficients. Curvatures are used to measure how a regular
surface x bends in R^3. If dN is the differential of the Gauss map N of the sur-
face, then the determinant of dN is the product (-k1)(-k2) = k1 k2 of the Princi-
pal Curvatures, and the trace of dN is the negative -(k1 + k2) of the sum of the Prin-
cipal Curvatures. At a point P, the determinant of dN_P is the Gaussian Curvature K
of x at P. The negative of half of the trace of dN is called the Mean Curvature H
of x at P. In terms of the principal curvatures, Gaussian and mean curvatures can
be written as
K = k1 k2, (1)
H = (k1 + k2) / 2. (2)

These are the adopted descriptors and the relative forms implemented in the algo-
rithm:
E = 1 + h_x^2, (3)
F = h_x h_y, (4)
G = 1 + h_y^2, (5)
e = h_xx / sqrt(1 + h_x^2 + h_y^2), (6)
f = h_xy / sqrt(1 + h_x^2 + h_y^2), (7)
g = h_yy / sqrt(1 + h_x^2 + h_y^2), (8)
K = (h_xx h_yy - h_xy^2) / (1 + h_x^2 + h_y^2)^2, (9)
H = [(1 + h_x^2) h_yy - 2 h_x h_y h_xy + (1 + h_y^2) h_xx] / [2 (1 + h_x^2 + h_y^2)^(3/2)], (10)
k1 = H + sqrt(H^2 - K), (11)
k2 = H - sqrt(H^2 - K), (12)
where h is a differentiable function z = h(x, y) representing the three-dimensional
surface.
Shape and Curvedness Indexes S and C are defined this way:
S = -(2/π) arctan[(k1 + k2) / (k1 - k2)], S ∈ [-1, 1], k1 ≥ k2, (13)
C = sqrt[(k1^2 + k2^2) / 2]. (14)

The six coefficients of the fundamental form, mean and Gaussian curvatures,
principal curvatures, and shape and curvedness indexes are the 12 descriptors
adopted in the previous studies of our research group.
Some of them have been employed and mentioned in recent literature as well-
performing facial features. Mean, Gaussian, and principal curvatures, shape index, and
curvedness, evaluated with varying neighbourhood size and bin size, were adopted
as descriptors by Creusot et al. to support keypoint detection on 3D faces. LDA
was employed to define weights to combine matching score maps over a popula-
tion of neighbouring and non-neighbouring vertices, relative to the relevant land-
mark, and experiments were carried out on FRGC v2 [12] and BFM database [13].
Histograms of shape index (HoS) with 8 bins, the shape index itself, and principal
curvatures were used to develop a mesh-based method for 3D facial expression
recognition to be tested on BU3D-FE [14, 15, 16] and Bosphorus [17] databases.
Mean curvature, Gaussian-weighted curvature, and spin image correlation were
adopted as features by Li et al. [18] to detect 3D faces via graph models on IAIR-
3DFace and BU3D-FE databases. Shape index, calculated for each vertex on its
5x5 neighbourhood, was used by Zhang et al. [19] for a face recognition method
based on the adoption of six different scale invariant similarity measures. The test-
ing was performed on FRGC v2.0. An HK curvature-based approach was adopted
by Bagchi et al. [20], who developed a method for three-dimensional face detec-
tion to be applied on the FRAV3D database. HK indicates both mean and Gaus-
sian curvatures. Mean and Gaussian curvature, shape index and curvedness were
involved as features by Szeptycki et al. [21] for automatic nose tip localization on
3D faces with SVM classifier. The tested database was FRGC v2. Lanz et al. used
mean and Gaussian curvatures for landmark detection in the context of therapeutic
facial exercise recognition for patients suffering from dysfunction of facial move-
ments [22]. The Kinect was employed for acquiring 3D faces. The same descrip-
tors were adopted by Rabiu et al. for a 3D face HK segmentation method to be
tested on UPM-3DFE and Gavab databases [23]. Zeng et al. involved mean curva-
ture in a framework for facial expression recognition via conformal maps in sparse
representation [24]. Tests were performed on BU-3DFER database. Abbas et al.
tested geometrical features such as mean, Gaussian, and principal curvatures in
terms of descriptiveness of facial philtrum on the three-dimensional ALSPAC da-
tabase [25]. The purpose of this study was to investigate medical abnormalities.
The shape index was embedded in a facial landmark localization algorithm tested
on BU-3DFE, BU-4DFE, BP4D-Spontaneous, FRGC 2.0, and Eurecom Kinect
Face Dataset by Canavan et al.[26]. Different facial areas were classified accord-
ing to mean and Gaussian curvature values in a framework of 3D surface analysis
with no previous reconstruction (“one shot” technique) by Di Martino et al. [27].
The shape index was used as 3D local shape descriptor by Perakis et al. to be em-
bedded in a feature fusion-based facial landmark detection algorithm for biometric
applications [28].

2 New descriptors and landmarking

The 12 geometrical entities introduced in the previous section, from now on
named primary descriptors, have been used as the theoretical basis for design-
ing the composed geometrical entities presented in the following. The new
descriptors introduced here have never been adopted by the research
community; in this sense they are novel. Composed descriptors are created by
combining primary descriptors. These combinations are linear combinations, frac-
tions, products, and special products of primary descriptors. They also
include forms similar to those of the primary descriptors.
To test the soundness and the applicability to three-dimensional faces of the so-
generated formulas, the composed descriptors have been mapped point by point on
217 facial depth maps of different people aged 19-32, performing 7 expressions,
scanned via a Minolta Vivid 910 laser scanner. The images in the following section
regard only one person (female, aged 25, serious pose). The new descriptors and
their mappings on a face are reported in Figure 2. The way in which the formulas
of these descriptors were conceived and built relies on the structure of the original
formulas of the primary descriptors. The features of their design are listed and
explained below.
1. The denominator in Eden, Fden, Gden, Eden2, Fden2, Gden2, EeFfGgden,
EgFfGeden, EeFfGgden2, EgFfGeden2, EFGden, EFGden2, secondden,
and secondden2 is adopted in the same form as in primary descriptors (6),
(7), and (8). In other words, the idea of using this denominator is taken
from the formulas of e, f, and g.
2. The structure of the formulas of ellipsoid1, ellipsoid2, ellipsoidi,
ellipsoidii, EeFfGg, EgFfGe, EeFfGgden, EgFfGeden, EeFfGgden2,
and EgFfGeden2 is based on the standard equation of the ellipsoid
x^2/a^2 + y^2/b^2 + z^2/c^2 = 1.
3. The concept of descriptors pnbAA+, pnbAA-, pnbA+, pnbA-, pnbBB+,
pnbBB-, pnbB+, and pnbB- is based on the special product
(x ± y)^2 = x^2 ± 2xy + y^2.
4. The concept of descriptors pndpA and pndpAA is based on the special
product
(x - y)(x + y) = x^2 - y^2.
5. The structure of the formulas of newSI, newSII, Sfond1, and Sfond2 relies
on the form (13) of the shape index S. Similarly, newC, Cfond1, and
Cfond2 are based on the curvedness index (14). Descriptors newGaus-
sian and newMean rely on the Gaussian (1) and mean (2) curva-
ture forms, respectively.
The purpose of this study is to provide 3D Face Analysis research with new de-
scriptors to be embedded in automatic landmarking. In particular, many new de-
scriptors showed a local maximum or minimum behaviour on the locus of a land-
mark. Alternatively, some descriptors presented a typical negative or positive
trend. The behaviour of each descriptor in correspondence to a landmark point has
been examined and reported in Figure 2; each row is dedicated to one de-
scriptor and shows its trend at each landmark. The landmarks of Figure 1 are
taken into consideration for this analysis. The analysis performed and presented in
this table has the main aim of supporting automatic landmarking methods based
on geometry [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11].

Figure 1. Soft-tissue landmarks adopted in this study in a frontal view face acquired via 3D
scanner [29, 30, 31, 32].

3 Discussion

The qualitative data presented in Figure 2 have been quantified to determine
which descriptors could be considered more suitable for landmark extraction algo-
rithms. A value of 2 has been assigned to the green boxes, representing commonality
among faces and actual usability for extracting the landmark; a value of 0 was assigned to
white boxes, which are neutral in terms of descriptiveness of the landmark; a value of
-1 was assigned to orange boxes, meaning that different behaviours are shown by
these descriptors on different faces for the landmark under investigation. Thus, the
criterion of evaluation is based on the soundness and reliability of the descriptor
across all faces of our dataset. If a descriptor always keeps the same behaviour on
the locus of a landmark, the cell corresponding to that descriptor and that land-
mark will be green and its numerical value will be 2. By assigning
these numerical values, 'horizontal' sums are calculated to state which descriptors
are quantitatively more descriptive. Figure 3 shows the global marks. The sound-
est and most reliable descriptors resulted to be thecurvature, newMean, Gden2, Sfond1,
and Cfond2, with values in the range [8; 10].
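The scoring scheme above can be sketched in a few lines. The mini-table below is hypothetical, for illustration only; the real rows would come from the behaviours reported in Figure 2:

```python
# Green boxes score 2, white 0, orange -1; a descriptor's global mark
# is the row ("horizontal") sum over all landmarks.
SCORE = {"green": 2, "white": 0, "orange": -1}

# Hypothetical behaviour table: descriptor -> box colour per landmark.
table = {
    "newMean": ["green", "green", "white", "green", "orange", "green"],
    "Sfond1":  ["green", "white", "green", "green", "white", "white"],
    "EFGden":  ["orange", "white", "orange", "white", "white", "orange"],
}

marks = {name: sum(SCORE[colour] for colour in row)
         for name, row in table.items()}
print(marks)  # e.g. {'newMean': 7, 'Sfond1': 6, 'EFGden': -3}
```

Ranking the descriptors by these sums reproduces the kind of global comparison shown in Figure 3.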

Figure 2. Descriptors’ names (1st column), formulas (2nd), related facial maps (3rd). From the 4th
column onward, each cell shows the behaviour of the descriptor on the locus of the landmark.
Orange cells indicate a weak hint about the trend, meaning that slightly different behaviours are
shown on different faces. Green cells indicate a strong hint, i.e. a trend common to all faces,
which could actually be useful to extract the landmark. “sx” = left; “dx” = right.

4 Conclusion

Fifty-four novel geometrical descriptors for the 3D face have been presented and analysed here.

Figure 3. Global quantitative values assigned to each descriptor relying on the soundness of the
descriptors among different faces.

Facial descriptiveness is taken as the core objective of the descriptors’ usability and
innovativeness. In particular, completeness of description over the whole face and a
distinctive behaviour (max or min) at landmark points are sought as key indicators
of the descriptors’ soundness.
The application of the new descriptors to 217 facial depth maps acquired via
laser scanner by our research group has revealed that some of them are not only
suitable for 3D face description and landmark localization processes, but even more
accurate and clearer than their traditional predecessors.

References

1 Vezzetti, E.; Marcolin, F. Geometrical descriptors for human face morphological analysis and
recognition. Robotics and Autonomous Systems, vol. 60, no. 6, pp. 928-939, 2012.
2 Moos, S. et al. Cleft lip pathology diagnosis and foetal landmark extraction via 3D geometrical
analysis. International Journal on Interactive Design and Manufacturing, pp. 1-18, 2014.
3 Vezzetti, E.; Marcolin, F. 3D human face description: landmarks measures and geometrical
features. Image and Vision Computing, vol. 30, no. 10, pp. 698-712, 2012.
4 Vezzetti, E.; Marcolin, F. Geometry-based 3D face morphology analysis: soft-tissue landmark
formalization. Multimedia Tools and Applications, pp. 1-35, 2012.
5 Vezzetti, E.; Marcolin, F. 3D landmarking in multiexpression face analysis: a preliminary
study on eyebrows and mouth. Aesthetic Plastic Surgery, vol. 38, pp. 796-811, 2014.
6 Vezzetti, E.; Calignano, F.; Moos, S. Computer-aided morphological analysis for maxillo-facial
diagnostic: a preliminary study. Journal of Plastic, Reconstructive & Aesthetic Surgery, vol. 63,
no. 2, pp. 218-226, 2010.
7 Vezzetti, E.; Marcolin, F.; Fracastoro, G. 3D face recognition: an automatic strategy based on
geometrical descriptors and landmarks. Robotics and Autonomous Systems, vol. 62, no. 12,
pp. 1768-1776, 2014.
8 Vezzetti, E.; Marcolin, F.; Stola, V. 3D human face soft tissues landmarking method: an
advanced approach. Computers in Industry, vol. 64, no. 9, pp. 1326-1354, 2013.
9 Vezzetti, E.; Moos, S.; Marcolin, F. Three-dimensional human face analysis: soft tissue
morphometry. Proceedings of the InterSymp 2011, Baden-Baden, Germany, 2011.
10 Vezzetti, E. et al. A pose-independent method for 3D face landmark formalization.
Computer Methods and Programs in Biomedicine, vol. 198, no. 3, pp. 1078-1096, 2012.
11 Vezzetti, E. et al. Exploiting 3D ultrasound for fetal diagnosis purpose through facial
landmarking. Image Analysis & Stereology, vol. 33, no. 3, pp. 167-188, 2014.
12 Creusot, C.; Pears, N.; Austin, J. Automatic keypoint detection on 3D faces using a
dictionary of local shapes. International Conference on 3D Imaging, Modeling, Processing,
Visualization and Transmission (3DIMPVT), pp. 204-211, May 2011.

13 Creusot, C.; Pears, N.; Austin, J. 3D landmark model discovery from a registered set of
organic shapes. IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Workshops (CVPRW), pp. 57-64, June 2012.
14 Li, H.; Morvan, J. M.; Chen, L. 3D facial expression recognition based on histograms of
surface differential quantities. Advanced Concepts for Intelligent Vision Systems, pp. 483-494,
January 2011.
15 Yang, X. et al. Automatic 3D facial expression recognition using geometric scattering
representation. 11th IEEE International Conference and Workshops on Automatic Face and Gesture
Recognition (FG), vol. 1, pp. 1-6, May 2015.
16 Zhen, Q. et al. Muscular movement model based automatic 3D facial expression
recognition. Multimedia Modeling, pp. 522-533, January 2015.
17 Li, H. et al. An efficient multimodal 2D+3D feature-based approach to automatic facial
expression recognition. Computer Vision and Image Understanding, vol. 140, pp. 83-92, 2015.
18 Li, Y. et al. 3D facial mesh detection using geometric saliency of surface. IEEE International
Conference on Multimedia and Expo (ICME), pp. 1-4, July 2011.
19 Zhang, G.; Wang, Y. Robust 3D face recognition based on resolution invariant features.
Pattern Recognition Letters, vol. 32, no. 7, pp. 1009-1019, 2011.
20 Bagchi, P. et al. A novel approach to nose-tip and eye corners detection using HK
curvature analysis in case of 3D images. Third International Conference on Emerging Applications
of Information Technology (EAIT), pp. 311-315, November 2012.
21 Szeptycki, P.; Ardabilian, M.; Chen, L. Nose tip localization on 2.5D facial models using
differential geometry based point signatures and SVM classifier. BIOSIG - Proceedings of the
International Conference of the Biometrics Special Interest Group, pp. 1-12, September 2012.
22 Lanz, C. et al. Automated classification of therapeutic face exercises using the Kinect.
VISAPP, pp. 556-565, 2013.
23 Rabiu, H. et al. 3D-based face segmentation using adaptive radius. IEEE International
Conference on Signal and Image Processing Applications (ICSIPA), pp. 237-240, October 2013.
24 Zeng, W. et al. An automatic 3D expression recognition framework based on sparse
representation of conformal images. 10th IEEE International Conference and Workshops on
Automatic Face and Gesture Recognition, pp. 1-8, April 2013.
25 Abbas, H.; Hicks, Y.; Marshall, D. Automatic classification of facial morphology for medical
applications. Procedia Computer Science, pp. 1649-1658, 2015.
26 Canavan, S. et al. Landmark localization on 3D/4D range data using a shape index-based
statistical shape model with global and local constraints. Computer Vision and Image
Understanding, vol. 139, pp. 136-148, 2015.
27 Di Martino, J. M.; Fernandez, A.; Ferrari, J. 3D curvature analysis with a novel one-shot
technique. IEEE International Conference on Image Processing, pp. 3818-3822, October 2014.
28 Perakis, P.; Theoharis, T.; Kakadiaris, I. A. Feature fusion for facial landmark detection.
Pattern Recognition, vol. 47, no. 9, pp. 2783-2793, 2014.
29 Vezzetti, E. Adaptive sampling plan design methodology for reverse engineering
acquisition. The International Journal of Advanced Manufacturing Technology, vol. 42, no. 7-8,
pp. 780-792, 2009.
30 Vezzetti, E. Computer aided inspection: design of customer-oriented benchmark for
noncontact 3D scanner evaluation. The International Journal of Advanced Manufacturing
Technology, vol. 41, no. 11-12, pp. 1140-1151, 2009.
31 Galantucci, L. M.; Percoco, G.; Di Gioia, E. Low cost 3D face scanning based on landmarks
and photogrammetry. Intelligent Automation and Computer Engineering, pp. 93-106, 2009.
32 Sforza, C.; Ferrario, V. F. Soft-tissue facial anthropometry in three dimensions: from
anatomical landmarks to digital morphology in research, clinics and forensic anthropology.
Journal of Anthropological Sciences, vol. 84, pp. 97-124, 2006.
Agustin de Betancourt’s plunger lock:
Approach to its geometric modeling with
Autodesk Inventor Professional

José Ignacio ROJAS-SOLA1* and Eduardo DE LA MORENA-DE LA


FUENTE2
1
University of Jaén, Department of Engineering Graphics, Design and Projects, Campus de las
Lagunillas, Jaén 23071, Spain
2
University of Córdoba, PhD candidate, Campus de Rabanales, Córdoba 14071, Spain
* Corresponding author. Tel.: +34-953-212452; fax: +34-953-212334. E-mail address:
jirojas@ujaen.es

Abstract A geometric model of Agustin de Betancourt’s plunger lock has been
obtained with the parametric software Autodesk Inventor Professional 2016,
which has made it possible to obtain a simulation of movement, as well as
perspectives and exploded views. The geometric modeling process has followed
several steps. First, the only available graphical information was a couple of
unscaled sheets found in several documents. These are accompanied by a report
in which the parts of the plunger, the dimensions of some parts and the purpose of
the assembly are detailed, allowing a first approach to the reconstruction of the
plunger lock as conceived by Betancourt. Also, some dimensional hypotheses that
the engineer does not specify in the design have been made, such as the
transmission of motion to the shaft that raises and lowers the plunger through the
counterweight, and the dimensions of shafts and gears (with the gearing for
transmitting the motion of the crank shaft of the counterweight). The main
contribution is that, for the first time, the counterweight sub-system (gears,
pulleys, girder bridge, chains, mobile counterweight and tilting counterweights)
has been dimensioned to always achieve the equilibrium position between the
movable counterweight and the plunger, independently of the part which is
outside the water. This unprecedented research enhances the value of one of
Betancourt’s main contributions to civil engineering, which provided a real
solution to the problems of the navigation canals of the French rivers in the early
nineteenth century and was highly valued by the French Academy of Sciences.

Keywords: Agustin de Betancourt; plunger lock; geometric modeling; virtual


reconstruction; Autodesk Inventor Professional; cultural heritage.

© Springer International Publishing AG 2017 757


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_76

1 Introduction

Agustin de Betancourt y Molina (1758-1824) was a famous Spanish engineer of
the eighteenth and early nineteenth centuries, known for his great inventiveness
and for his many contributions to engineering, particularly to civil engineering.
He is considered the founder of the School of Civil Engineering in Madrid.
Many publications about the figure of Agustín de Betancourt y Molina cover his
biographical chronology and other aspects of his life [1-4], as well as his scientific
and technical work [5]. Given its importance, the Canary Foundation Orotava of
History of Science has spent many years and much effort collecting material on
the life and work of the brilliant engineer, making the Betancourt Digital Project
freely available [6].
This research shows the process followed during the documentation of the
engineer’s cultural heritage, in particular a new navigation system that included
the design of a plunger lock, presented in Paris at the National Institute of France
on September 21, 1807, after years of research on the theory of machines.
The main purpose of this communication is to obtain an accurate digital
restoration of the three-dimensional model of the invention, allowing its geometric
documentation as a precursor to engineering simulation and analysis through CAE
techniques, which will be carried out in future research. It is therefore
unprecedented research into one of the most important contributions of the great
engineer to civil engineering.

2 Plunger lock

The plunger lock, like other locks, is a hydraulic device that allows vessels to
overcome unevenness in navigation canals by raising or lowering boats, thus
facilitating inland navigation, with heights reaching up to 25 m. This limit is
due to the water needed to feed the canal, the navigation traffic, and the volume of
water necessary for the passage of a lock.
Usually, it is a slow operation, because the hydraulic levels have to be balanced:
first with the channel where the boat is placed and then with the other channel
where the boat will leave the lock. This maneuver produces considerable water
consumption.
The canal lock designed by Agustín de Betancourt [7] has the particularity of
being a completely original study that showed clear advantages over the models of
its time: there is no water consumption during the ascent and descent of the boats,
the process takes little time, the lock can be operated by only one person, and it is
simple to build. To achieve this goal, Betancourt proposes mounting a plunger on
a flooded chamber that communicates with the channel. The physical principle is
very simple: when the piston falls, the water level of the lock rises and, on the
contrary, raising the piston causes the water level to fall back to its initial position.
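This principle can be sketched with elementary hydrostatics: the water displaced by the descending plunger redistributes over the lock’s free surface. The areas and stroke below are illustrative assumptions, not Betancourt’s dimensions.

```python
# Illustrative hydrostatic sketch of the plunger principle: lowering the plunger
# into the flooded chamber displaces water into the lock, raising its level.
# All dimensions are assumed for illustration; they are not taken from the report.

def level_rise(plunger_area_m2, plunger_stroke_m, lock_surface_m2):
    """Rise of the lock's free surface when the plunger descends by one stroke."""
    displaced_volume = plunger_area_m2 * plunger_stroke_m  # m^3 of water pushed out
    return displaced_volume / lock_surface_m2              # m of level rise

# e.g. a 20 m^2 plunger descending 3 m under a 40 m^2 lock free surface
print(level_rise(20.0, 3.0, 40.0))  # 1.5 m
```

Raising the plunger reverses the sign of the displaced volume, returning the level to its initial position, as described above.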
Also, the dimensions of the lock were calculated for shallow canals (1.30 m)
and for vessels between 8 and 10 tons. The device would be useful for overcoming
unevenness of 2.6 m according to the project, although the author, in the project
report, stated that the lock would serve drops of more than 4.5 m. For greater
heights he proposes building two chambers together. This approach has a limit set
at a 9.74 m drop with three chambers in series. Above this height, Betancourt
proposes a different solution: an inclined plane along which the boats go up and
down mounted on carriages, through a system powered by the same piston
designed for the aforementioned lock.
One of the difficulties faced by the Canary engineer was that the French
Academy was in favor of using deep-water channels in continuity with the projected
Canal du Midi (1681), which allowed the navigation of boats of up to 80 tons.
However, Betancourt argued that transporting goods through shallow channels
was more advantageous because they needed a lower flow rate, labor costs were
much cheaper, and transportation was much faster.
Regarding the originality of the project, Betancourt claims to know the most
emblematic projects in England and Spain, and mentions the work of Robert
Fulton [8] of 1796, the patent of L.J. Huddleston [9] of 1800, and the inclined-plane
lock of Coalbrookdale (England) on the Severn river. The patented Huddleston
model, also based on a plunger, predated his model, but there was a fundamental
difference, beyond the fact that Betancourt designed his lock without knowing
Huddleston’s patent in detail: the plunger lock has the virtue of achieving a perfect
balance for any position of the plunger, so one person could move the whole
system up and down, while Huddleston’s demanded a system of pulleys and very
complex chains that required delicate maintenance and the intervention of several
people, which slowed down the operation.
For design purposes, the invention has been divided into four systems: the lock
canal, the chamber, the plunger and the counterweight.

3 Material and Methods

The initial information has been recovered from the Canary Foundation Orotava
of History of Science, which has spent years collecting sources on Agustin de
Betancourt y Molina. Specifically, a descriptive report and a drawing showing its
operation, the parts of the plunger lock, and its technical characteristics have been
provided.

From this information, we have obtained the geometric model, allowing a
3D virtual reconstruction with the aid of the parametric software Autodesk
Inventor Professional [10], which has made it possible to obtain a simulation of
movement, as well as different plans and exploded views.
The geometric modelling process has followed several steps. First, the only
available graphical information was a couple of unscaled sheets found in several
documents, the most important of which is held at the Institut National des Sciences
Physiques et Mathématiques [7]. These sheets are accompanied by an explanatory
report in which the parts of the plunger, the dimensions of some parts and the
purpose of the assembly are detailed, allowing a first approach to the reconstruction
of the plunger lock as conceived by Betancourt.
Figure 1 shows two main views (front and top) of the assembly, as well as a
section of the plunger seen from opposite directions, to appreciate the details of
the chamber, the plunger, and the mechanism and elements that make up the
counterweight system.

Fig. 1. Main views and section of the plunger lock. Image from “Fundación Canaria Orotava de
Historia de la Ciencia”. (A: lock canal; B: chamber; C: plunger; D: counterweight).

4 Results and Discussion

The three-dimensional modeling process has been quite complex due to the lack
of information, both graphic and descriptive. The only two sheets of the invention
are drawn unscaled, and proportionality has been respected by measuring on these
sheets in order to obtain an accurate 3D model.
Examining the report in detail, it can be observed that Betancourt does not
specify the dimensions of the plunger. However, he is very interested in showing
that there is a theoretical balance between the plunger and the movable
counterweight, independently of the part that is submerged. In addition, he does
not specify the number of teeth of the gears that move the counterweight, nor the
number of gears needed; he only indicates that for every quarter turn the
counterweight makes, the crank must make 16 turns.
Also, some dimensional hypotheses that the engineer does not specify in the
design have been made, such as the transmission of motion to the crank shaft that
raises and lowers the plunger through the counterweight, and the dimensions of
shafts and gears (with the gearing for transmitting the motion of the crank shaft of
the counterweight). As part of the results obtained, different perspectives are
shown: first, the lock assembly (Figure 2) and then exploded views of its main
systems and components (Figures 3-6).

Fig. 2. View of the whole plunger lock.



Figure 3 shows all the elements that form the counterweight system in detail.
On one side, the two tilting counterweights, made of timber, support the moving
iron counterweight. On the other, the whole system rotates about an axis attached
to the gear that determines the position of the plunger.

Fig. 3. Exploded view of counterweight system.

Another dimension to justify has been that of the counterweight system, both the
tilting and the mobile counterweights. Betancourt gives some brief indications for
calculating their weight and position, but does not discuss the materials for their
construction, nor does he perform calculations for the particular case presented to
the National Academy. This is because the position of the movable counterweight
depends on its weight, so Betancourt estimated that he could simply design a pair
of tilting balances large enough to achieve an equilibrium that would depend on
the weight of the movable counterweight.
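The equilibrium Betancourt sought can be sketched as a simple moment balance about the counterweight axis: the movable counterweight must offset the net (buoyancy-corrected) weight of the plunger at every immersion depth. All weights and lever arms below are illustrative assumptions, not values from the report.

```python
# Hedged sketch of the balance condition: the moment of the movable counterweight
# about the axis equals the moment of the plunger's net weight. As the plunger
# emerges from the water, buoyancy decreases and its net weight grows, so the
# counterweight must move outward along the tilting beams. Numbers are illustrative.

def counterweight_position(net_plunger_weight_n, plunger_arm_m, counterweight_n):
    """Lever arm at which the movable counterweight balances the plunger."""
    # Moment balance: counterweight_n * x = net_plunger_weight_n * plunger_arm_m
    return net_plunger_weight_n * plunger_arm_m / counterweight_n

for net_weight in (5000.0, 10000.0, 15000.0):  # N, growing as the plunger emerges
    print(counterweight_position(net_weight, 2.0, 20000.0))  # 0.5, 1.0, 1.5 m
```

Dimensioning the counterweight and beams so this balancing position always exists is precisely what allows a single operator to move the system, as noted earlier.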
Figure 4 shows the exploded view of the shaft supporting the two tilting
counterweights and the abovementioned gear. The drive shaft of the plunger can
be appreciated in detail. It is noteworthy that the gear system has been deduced
from the report of the invention and is designed so that a single person can move it.
The gear design starts from the initial requirement indicated by the engineer:
for every 16 turns of the crank, the axis rotates a quarter turn. Translated into an
iron spur gear system (commonly used at the time), this means that for every full
rotation of the shaft, the crank makes 64 turns. There is also a technical difficulty:
gears with fewer than 6 teeth could not be manufactured, so 6 teeth is taken as the
minimum.
If the crank pinion has 6 teeth and were directly connected to the shaft, a gear
64 times larger would be needed, that is, one of 384 teeth. To avoid this, an
intermediate shaft has been proposed, involving two small gears of 6 teeth and
two big gears of 64 teeth (of which only a quarter is used) and 144 teeth. This
solution is more functional and, with these gear ratios, the indication given by
Betancourt in his report is respected.
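The tooth-count reasoning above can be checked with a short sketch. Only the single-stage figure (a 6-tooth pinion needs a 6 × 64 = 384-tooth wheel for 64:1) comes from the text; the two-stage tooth counts in the example are illustrative assumptions, not Betancourt’s.

```python
# Sketch of the gear-train reasoning: with a 6-tooth minimum, a direct 64:1
# single-stage reduction forces an impractically large wheel; an intermediate
# shaft splits the reduction into two smaller stages.

def train_ratio(stages):
    """Overall ratio of a gear train given (driver, driven) tooth counts per mesh."""
    ratio = 1.0
    for driver_teeth, driven_teeth in stages:
        ratio *= driven_teeth / driver_teeth
    return ratio

# Single stage: a 6-tooth crank pinion meshing with a 384-tooth wheel.
print(train_ratio([(6, 384)]))          # 64.0
# Two stages reach the same overall ratio with far smaller wheels,
# e.g. 8:1 twice (illustrative tooth counts, not from the report):
print(train_ratio([(6, 48), (6, 48)]))  # 64.0
```

The product of stage ratios is what must match the 64:1 requirement; the intermediate shaft trades one enormous wheel for two manageable ones.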

Fig. 4. Exploded view of the drive shaft of the plunger.

Figure 5 shows how the girder bridge is arranged with the pulleys that position
the chains that move the plunger, showing how the movement is transmitted from
the gear to the counterweight and thence to the plunger.

Fig. 5. View of the girder bridge, counterweight assembly and the plunger.

The chain design has been particularly difficult. The “O” geometry and the high
number of links that make up each chain (Betancourt´s project includes four)
make the simulation complicated. The design of the link follows a regular pattern
through a simple technique of sweeping a circular section along a closed curve.
However, the contact between the surfaces of two links cannot be treated as a
simple surface contact, because the software cannot evaluate this kind of contact.
To solve this problem, two points on the surface of each link (one upper and one
lower) have been defined; to simulate the behavior of the chain, each top point of
a link is assumed to face the lower point of the next link, so that the modeling is
correct and, in a future static stress or dynamic analysis, behavior similar to that
of a chain is maintained (Figure 6).
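The point-pairing workaround described above amounts to generating one point-to-point constraint per pair of consecutive links. The sketch below only illustrates that pairing logic; the link and point names are hypothetical labels, not Autodesk Inventor API calls.

```python
# Sketch of the chain-contact workaround: instead of surface-to-surface contact
# (which the solver cannot evaluate here), each link carries an upper and a lower
# reference point, and consecutive links are tied point-to-point.

def chain_constraints(n_links):
    """Pair the top point of each link with the bottom point of the next one."""
    return [(f"link{i}.top", f"link{i + 1}.bottom") for i in range(1, n_links)]

for pair in chain_constraints(4):
    print(pair)  # ('link1.top', 'link2.bottom'), ('link2.top', 'link3.bottom'), ...
```

For a chain of n links this produces n - 1 constraints, which is enough to make the assembly move as a chain in later static or dynamic analyses.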

Fig. 6. Modeling for the simple contact between opposing links from points within the link.

5 Conclusions

This paper presents original research showing how the 3D modeling process of
Agustin de Betancourt y Molina´s plunger lock, one of his main contributions to
civil engineering, has been developed using the parametric software Autodesk
Inventor Professional 2016, and therein lies its importance.
The 3D model obtained respects the proportionality of the measurements
offered by the only two unscaled sheets available in the report of the invention,
seeking a reliable model. Also, given the absence of both graphical and descriptive
information, some dimensional assumptions have been made regarding the
transmission of motion from the gears to the axis that moves the tilting
counterweights and makes the plunger move up or down.
However, the main finding of this research is that the whole counterweight
system (gears, pulleys, girder bridge, chains, mobile counterweight and tilting
counterweights) has been dimensioned for the first time, so as to always reach the
equilibrium position.
In future research, an in-depth study will be carried out through CAE
(Computer-Aided Engineering) techniques on the CAD (Computer-Aided Design)
model obtained, including the analysis of the static equilibrium curve calculated
by Betancourt to show how great his invention was, verifying the state of perfect
balance for all positions of the plunger.

Acknowledgments This research has been developed within the research project entitled
"Agustin de Betancourt's historical heritage: a comprehensive study of contributions to the civil
engineering from the perspective of engineering graphics for its valorization and dissemination”
(HAR2015-63503-P), funded by the Spanish Ministry of Economic Affairs and Competitiveness
(MINECO), under the National Plan of Scientific and Technical Research and Innovation (2013-
2016), and the European Fund for Regional Development (FEDER). Also, we are very grateful to
the Fundación Canaria Orotava de Historia de la Ciencia for permission to use the material of
Project Betancourt available on their website. The authors also thank John Swanston for his help
with the translation.

References

1. Muñoz Bravo J. Biografía cronológica de Don Agustín de Betancourt y Molina en el 250


aniversario de su nacimiento, 2008 (Acciona Infraestructuras, Murcia).
2. Bogoliúbov A.N. Agustín de Betancourt: un héroe español del progreso, 1973 (Seminarios y
Ediciones, Madrid).
3. Martín Medina A. Agustín de Betancourt y Molina, 2006 (Dykinson, Madrid).
4. Padrón Acosta S. El ingeniero Agustín de Béthencourt y Molina, 1958 (Instituto de Estudios
Canarios, La Laguna de Tenerife).
5. Cioranescu A. Agustín de Betancourt: su obra técnica y científica, 1965 (Instituto de Estu-
dios Canarios, La Laguna de Tenerife).
6. Proyecto Digital Betancourt, 2016. Available at: http://fundacionorotava.es/betancourt
7. Betancourt A. Mémoire sur un nouveau système de navigation intérieure, 1808 (Institut Na-
tional de France, Paris). Available at:
http://fundacionorotava.es/pynakes/lise/betan_memoi_fr_01_1807
8. Fulton R.A. Treatise on the improvement of canal navigation; exhibiting the numerous ad-
vantages to be derived from small canals. And boats of two to five feet wide, containing from
two to five tons burthen, 1796. (Taylor, London).
9. Huddleston L.J. The repertory of arts and manufactures: consisting of original communica-
tions, specifications of patent inventions, and selections of useful practical papers. In Trans-
actions of the Philosophical Societies of All Nations, Vol. XV, London, 1801, pp.81-89.
10. Shih R.A. Parametric modeling with Autodesk Inventor 2016, 2015 (SDC Publications, Mis-
sion (Kansas, USA)).
Designing a Stirling engine prototype

Fernando Fadon1*, Enrique Ceron1, Delfin Silio1 and Laida Fadon1


1
University of Cantabria
* Corresponding author. Tel.: +34 942 201797; fax: +34 942 201790. E-mail address:
fadonf@unican.es

Abstract Stirling engines have attracted great interest in recent decades due to
their efficiency and sustainable energy production. This study develops one of the
first phases of the design of a beta-type Stirling engine - able to generate 1 kW of
power - including the specification of the basic components of the engine and a
thermodynamic simulation (by Computational Fluid Dynamics, CFD) with real
parameters. The approach comprises thermodynamic improvements of the engine,
which are the basis for an improved design. The methodology consists of various
consecutive steps such as design procedures, theoretical calculations, description
of the simulation process and analysis of the results obtained. The first phase
defines the model geometry, specifying the volume occupied by the working gas,
the equations that define the movements of the power piston and the displacer,
and the dimensions of all components that make up the engine. After defining the
inner volume, the mass of gas inside the engine is calculated, and hypothetical
boundary conditions are established. Based on these data and following the
theoretical ideal engine process, it is possible to obtain the theoretical P-V
diagram, as well as the engine power and efficiency, through a basic
thermodynamic analysis. This procedure is repeated as many times as needed in
order to get the desired results. By modifying different parameters, it is possible to
find a suitable design that combines geometry and boundary conditions, and
therefore approaches the design power.

Keywords: Stirling engine, CFD simulation, geometric design.

1 Introduction

In recent years, interest in new technologies for the exploitation of renewable
energy sources has increased. More efficient ways of energy consumption have
been developed, reducing as much as possible the environmental impact compared
to current energy sources such as oil, coal or natural gas.
In this context, Stirling cycle engines are considered able to exploit a great
variety of renewable energies such as geothermal, nuclear, biomass and solar.
These devices are thermal machines which work with external combustion, and
therefore admit different heat sources.

© Springer International Publishing AG 2017 767


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_77

Its high theoretical thermal efficiency is one of its main characteristics; in the
ideal case it reaches the efficiency of the Carnot machine. The engine is a
closed-cycle device with external combustion, so that the exhaust gases do not
blend with the working fluid. This leads to better control of toxic emissions, noise
and vibrations.
The thermodynamics of the Stirling engine has been studied previously by authors
like Thombare [1], who provides a detailed literature review of earlier developments
in terms of conceptual design. Other research works [2-5] analyze the effects and
losses of the engine components during operation. Likewise, some models have
been developed that include thermodynamic losses [6-8].

Fig. 1 Stirling model designed

This study aims to introduce a thermodynamic analysis, by CFD simulation
techniques, of a beta-type Stirling engine. The engine design starts from a
theoretical thermodynamic analysis calculated for a power output of about 1 kW.
After that, the mechanical and geometrical parameters of the model are defined,
based on the previous results. Finally, the CFD simulation is implemented,
providing more realistic information about the engine performance. The model
designed is shown in Fig. 1.

2 Thermodynamic Simulation

The process followed during the thermodynamic simulation involves three
consecutive phases: it starts with the geometry design, continues with the model
import into the CFD software, and ends with the simulation itself.

2.1. Geometry design.

The model geometry is simplified by removing some specific design elements (i.e.
holes, piston details…) which do not affect the result of the process. Moreover,
elements that only slightly affect the thermodynamic assessment, such as dipsticks
or connecting rods, are also omitted.
The components involved in the simulation are: the heater cap (fig. 2a); the
cooling fins and heat sink (fig. 2b); the displacer piston and regenerator (fig. 2c);
and the piston (fig. 2d). Apart from that, the working fluid needs to be modelled as
another piece (fig. 2e-f).
The space corresponding to the working fluid is divided into two cylinders
separated by the displacer and the regenerator. These parts belong to the heating
and cooling areas of the engine. The connection between both cylindrical spaces is
made by the regenerator, a porous medium which allows the fluid to pass from
one area to the other. The heat exchange is critical in this process.

Fig. 2. Components involved in the simulation. View of the external (g) and internal (h)
component assemblies.

The thermodynamic simulation is implemented by assembling the simplified
components into a whole engine (fig. 2g-h), which is placed in the position
established as the cycle start. Then, the model is imported into the CFD software
to create the mesh.

2.2. Meshing with CFD

Once the model is in the CFD software (ANSYS Fluent), each geometrical
component is classified by its physical state and tagged with an identifying name
(fig. 3, Table 1).

Table 1 Model components

Element Type
Cooling fins Solid
Cylinder Solid
Displacer Solid
Hot zone fluid Fluid
Cold zone fluid Fluid
Heater cap Solid
Piston Solid
Regenerator Fluid

Fig. 3 Edited model in CFD

Dynamic meshing is applied when a component involved in the simulation
varies its volume and has moving elements. This is the case of the working fluid.
During the engine cycle, the working fluid changes its volume considerably, due
to the alternating movement of the piston and the displacer. The meshing elements
have to be compatible with the variation of the geometrical form of the fluid
during the process. Therefore, the two elements representing the working fluid in
the simulation need to be dynamic. For that purpose, the meshing elements have
to warp and adapt themselves to the changing space. The surfaces enclosing the
working fluid must be defined as fixed boundaries or as rigid boundaries with a
specified movement.
The correct definition of the surrounding surfaces (rigid boundaries) and the fluid
areas (deformable boundaries) is essential in order to avoid geometrical conflicts and
interferences between surfaces.
When modelling, it is possible to work with tetrahedral and hexahedral mesh-
ing elements. The former are generally more efficient when working with flu-
ids, but only admit small deformations. Layering consists of adding or removing
layers of cells as the mesh stretches or shrinks, so that the shape is adjusted when
deformations are large; tetrahedral meshing does not allow layering. Since the
components representing the fluid undergo deformations too large for tetrahedral
elements, hexahedral elements are used for them. For the rest of the components,
the mesh that best adapts to their geometries is used.
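The layering behaviour described above can be sketched as follows. This is an illustrative model, not ANSYS code: the cell layer next to the moving boundary is split when it stretches beyond (1 + split factor) times the ideal layer height and merged with its neighbour when compressed below the collapse factor times that height (Fluent's dynamic layering exposes analogous factors); all numeric values here are assumptions.

```python
# Sketch (not ANSYS code) of a dynamic-layering rule: split the boundary
# layer when it over-stretches, merge it when it is over-compressed.
# h_ideal, alpha_split and alpha_collapse are assumed illustrative values.

def update_layers(n_layers, layer_h, dh,
                  h_ideal=1.0, alpha_split=0.4, alpha_collapse=0.2):
    """Move the boundary by dh and apply the split/merge rule once."""
    h = layer_h + dh
    if h > (1.0 + alpha_split) * h_ideal:                # stretched: add a layer
        n_layers, h = n_layers + 1, h - h_ideal
    elif h < alpha_collapse * h_ideal and n_layers > 1:  # squeezed: merge
        n_layers, h = n_layers - 1, h + h_ideal
    return n_layers, h

# Expansion stroke: the fluid column grows, so layers are added.
n, h = 10, 1.0
for _ in range(5):
    n, h = update_layers(n, h, dh=0.5)   # ends with n = 13 layers
```

During compression the same rule removes layers, keeping cell heights near the ideal value as the fluid volume shrinks.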
The fluid is separated into two cylinders (Fig. 4), corresponding to the gas volume
in the hot (Fig. 4a) and cold (Fig. 4b) areas, which are connected by the regenerator. In
both cases, the cylindrical walls shorten or stretch with the reciprocating movement
of the pistons.
The hot cylinder has three boundaries: its upper base is defined as a fixed
boundary, being in permanent contact with the top of the hot space exchanger; its
lower base is in contact with the displacer and follows its movement, so it becomes
a rigid boundary with movement; and the lateral surface, or cylindrical wall,
which stretches or shrinks with the relative movement of the bases, is a deform-
able boundary. The case of the cold cylinder (Fig. 4b) is similar, but now both
bases move: one in contact with the displacer and the other joined to the
piston.

Fig. 4 Dynamic mesh

The initial height of the two cylinders is determined by the position established
as the start of the engine cycle, the moment when the displacer is at top dead
center (TDC). The time step is fixed by setting the engine rotational speed and
a crankshaft angle increment. If the cell size is small, the time step must be small
as well. Fig. 5 shows the meshing of the geometrical model.
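The time-step rule above can be made concrete: the step is the time needed to sweep one crank-angle increment at the set speed. The 1000 rpm value appears in Table 2; the 1-degree increment is an assumed example.

```python
# Time step from rotational speed and crank-angle increment.
# 1000 rpm comes from Table 2; dtheta_deg = 1.0 is an assumed example.

def time_step(rpm, dtheta_deg):
    """Time (s) needed to sweep dtheta_deg of crank angle at a given rpm."""
    deg_per_second = rpm * 360.0 / 60.0   # crank angle swept per second
    return dtheta_deg / deg_per_second

dt = time_step(rpm=1000, dtheta_deg=1.0)  # 1/6000 s per step
```

Halving the angle increment halves the time step, which is why a finer mesh (requiring a smaller increment for stable remeshing) implies a smaller time step.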

Fig. 5 Representation of the geometrical model meshing

2.4. Simulation

Initial values are obtained while the components have no movements assigned.
The initial conditions of the analysis are a working fluid temperature of 300 K and
a pressure of 4 bar.
Starting from these values, the transient analysis is activated. The analysis
finishes when the monitored values are stable. In this case, the simulation
process stops after 75 revolutions.

The axial symmetry of the engine allows visualizing the process. Figure 6
shows the evolution of the transformations. Other variables, such as pressure and
fluid displacement speed, can also be observed.
The simulation has been performed by applying incoming heat flows of
2.5 kW and 3.5 kW.

Fig. 6 Temperature distribution on the external side of the engine

3 Results

The results obtained when applying 2.5 kW show the temperature distribution in the
cylinder and cooling fins, the pressure curve, the P-V diagram and the power calculation.

3.1. Temperature distribution.

The mean temperature of the heater remains almost constant along the cycle. It is
1035.89 K (762.89 °C) when the heat introduced from the external source is 2.5
kW and the temperature of the external air surrounding the engine is 293 K
(20 °C). The heater top reaches a maximum temperature of 1160 K (887 °C),
whereas the cooling fins do not exceed 343 K (70 °C).
The mean air temperature (Fig. 7) varies along one revolution of the en-
gine, in each of the expansion and compression spaces of the cylinder. The fluid in
the hot and cold spaces has an average temperature of approximately 870.2 K
(597.2 °C) and 387.4 K (114.4 °C), respectively.
Fig. 8 shows the movement of the working fluid in an engine cycle. The air travels
alternately between both heat exchangers. The temperature rises as the fluid is com-
pressed while moving to the hot end of the engine. Then, the fluid expands and
moves to the cold end, and the temperature decreases, as observed in Figures 7 and 8.
In other words, compression and expansion processes in the real cycle do not oc-
cur at a constant temperature. Moreover, turbulence occurs when the gas flow
enters the cold and hot spaces and hits the cylinder upper wall and the piston
head.

Fig. 7 Mean temperature in hot and cold space

Fig. 8 Working fluid movement in an engine cycle

3.2. Average absolute pressure of the fluid. Fluid Volume.

The average absolute pressure of the air mass is unevenly distributed between the
dead volume of the regenerator and the expansion and compression spaces (Fig.
9a). Inside the entire gas volume there are relative differences of pressure. Figures
9 and 10 indicate the distribution of the absolute pressure inside the interior vol-
ume of the engine.
A maximum absolute pressure of 12.5 bar occurs when the power piston
reaches the TDC (Fig. 9b). The pressure difference between the hot and cold areas
is 1.5 bar.
At the moment of maximum expansion (Fig. 9c), the pressure is 2.93 bar. The
lowest value occurs in the cold space, where the gas expands to occupy a larger vol-
ume than in the hot space. The pressure difference between both ends is 0.71
bar.
Regarding the distribution of the relative pressure along one of the cycles
(Fig. 10), red indicates the maximum relative pressure inside the engine,
whereas blue indicates the minimum.

The volume varies in a sinusoidal way (Fig. 11). The minimum volume is 112.06
cm3 and the maximum is 261.95 cm3. The compression ratio is 2.34.

Fig. 9 Mean absolute fluid pressure (a), maximum (b) and minimum (c) compression pressure

Fig. 10 Distribution of the relative pressure along one cycle

Fig. 11 Fluid Volume

3.4. P-V diagram. Power calculations.

The thermal efficiency calculated from the P-V diagram of Fig. 12 is 15.6%. Table 2
summarizes the most characteristic data obtained.

Another simulation has been performed by introducing 3.5 kW into the heat
source, which reaches 1300 K. The results obtained are the following:
Wnet = 56.475 J; output power = 941.25 W; thermal efficiency = 26.9%.
Consequently, both the power and the efficiency improve (Fig. 12b). Further re-
search about whether the engine can withstand these conditions would be of inter-
est. If this hypothesis were rejected, then the engine should be resized.

Fig. 12 P-V diagram for 2.5 kW input (a) and for 3.5 kW input (b)

Table 2. Simulation data.

Heat input 2.5 kW
Output power 387 W
Thermal efficiency 15.6%
Speed 1000 rpm
Working fluid mass 1 g
Mean heater temperature 1036 K
Cold space mean temp. 343 K
Max / min fluid temp. 870 K / 387 K
Max / min pressure 11.7 bar / 3.1 bar
Max / min volume 262 cm3 / 112 cm3
Compression ratio 2.34
Wexp/cycle 99.64 J
Wcomp/cycle 76.416 J
Wnet/cycle 23.224 J
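As a consistency check, the derived quantities in Table 2 can be recomputed from the tabulated work and volume values, assuming one thermodynamic cycle per crankshaft revolution:

```python
# Recompute the derived entries of Table 2 (2.5 kW case) from the tabulated
# data, assuming one thermodynamic cycle per crankshaft revolution.
w_exp, w_comp = 99.64, 76.416     # expansion / compression work per cycle (J)
rpm, q_in = 1000, 2500.0          # speed (rev/min) and heat input (W)
v_max, v_min = 261.95, 112.06     # extreme fluid volumes (cm3)

w_net = w_exp - w_comp            # 23.224 J per cycle
power = w_net * rpm / 60.0        # about 387 W of output power
efficiency = power / q_in         # about 0.155, i.e. the 15.6% in Table 2
ratio = v_max / v_min             # about 2.34
```

The recomputed values reproduce the table's output power, efficiency and compression ratio within rounding.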

4 Conclusions

The results obtained in the thermodynamic simulation are coherent with the initial
theoretical analysis. Expansion and compression are close to adiabatic processes.
One of the main differences between the initial thermal analysis and the simula-
tion is the work obtained in the expansion of the fluid, which is much lower. By
providing 2.5 kW of heat energy to the heat source, the engine theoretically generates an
expansion work per cycle of 150 J, which is 99.64 J in the simulation, so the heat
power actually absorbed is 1.66 kW. This means that heat transfer losses from the
heater walls to the working fluid are relevant. To improve this, the
exchanger efficiency should be enhanced with a different design, or by using an-
other kind of fluid with better characteristics than air. In the case of the compression
work there is good agreement, as both values are almost the same.
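The 1.66 kW figure follows from scaling the supplied heat by the ratio of simulated to theoretical expansion work:

```python
# Absorbed-heat estimate: the 2.5 kW input scaled by the ratio of simulated
# to theoretical expansion work per cycle.
q_supplied = 2.5          # kW supplied to the heat source
w_theoretical = 150.0     # J per cycle, from the initial thermal analysis
w_simulated = 99.64       # J per cycle, from the CFD simulation

q_absorbed = q_supplied * w_simulated / w_theoretical   # about 1.66 kW
```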

Acknowledgments The authors would also like to thank ANSYS, Inc. for the use of the ANSYS
Academic program.

References

[1] Thombare, D.G. and S.K. Verma, Technological development in the Stirling cycle engines.
Renewable and Sustainable Energy Reviews, 2008. 12(1): p. 1-38.
[2] Popescu, G., et al., Optimisation thermodynamique en temps fini du moteur de Stirling endo-
et exo-irréversible. Revue Générale de Thermique, 1996. 35(418–419): p. 656-661.
[3] Kaushik, S.C. and S. Kumar, Finite time thermodynamic analysis of endoreversible Stirling
heat engine with regenerative losses. Energy, 2000. 25(10): p. 989-1003.
[4] Kongtragool, B. and S. Wongwises, Thermodynamic analysis of a Stirling engine including
dead volumes of hot space, cold space and regenerator. Renewable Energy, 2006. 31(3): p.
345-359.
[5] Costea, M., S. Petrescu, and C. Harman, The effect of irreversibilities on solar Stirling engine
cycle performance. Energy Conversion and Management, 1999. 40(15–16): p. 1723-1731.
[6] Timoumi, Y., I. Tlili, and S. Ben Nasrallah, Performance optimization of Stirling engines.
Renewable Energy, 2008. 33(9): p. 2134-2144.
[7] Zhang, C., et al., Dynamic simulation of one-stage Oxford split-Stirling cryocooler and com-
parison with experiment. Cryogenics, 2002. 42(9): p. 577-585.
[8] Formosa, F. and G. Despesse, Analytical model for Stirling cycle machine design. Energy
Conversion and Management, 2010. 51(10): p. 1855-1863.
Design and analysis of tissue engineering
scaffolds based on open porous non-stochastic
cells

R. Ambu1* and A.E. Morabito2


1 Department of Mechanical, Chemical and Materials Engineering, University of Cagliari, via Marengo 2, 09123, Cagliari, Italy
2 Department of Engineering for Innovation, University of Salento, via per Monteroni, 73100 Lecce, Italy
* Corresponding author. Tel.: +39-070-675-5709; fax: +39-070-675-5717. E-mail address: ambu@unica.it

Abstract In orthopaedics, cellular structures can be used as three-dimensional po-
rous biomaterials that try to mimic the characteristics and function of the bone.
The progress in manufacturing techniques, mainly in the field of additive manu-
facturing, can potentially allow the production of highly controlled pore architec-
tures and customized implants that, however, need more sophisticated design
methodologies. In this paper, the design of porous biocompatible structures based
on mathematically defined surfaces (triply periodic minimal surfaces) is considered
and compared with the approach in which unit cells are entirely modelled in a
CAD environment. Two types of unit cell are considered here: the cubic cell
and the P-cell. The cubic cell is created by 3D CAD software from solid features that
are combined together. The P-cell is modelled using an implicit function that de-
scribes the outer surface of the cell. The P-cell has two design parameters,
thickness and radius, whose variation allows modifying the ar-
chitecture of the basic unit of the scaffold. The modification of the radius is car-
ried out by a procedure based on scaling and truncation operations. The thickness
of the cell is modified by thickening and closure operations on the P-isosurface.
The effect of these variations on the mechanical behaviour of the scaffold has
been numerically evaluated by estimating the stiffness of each structure
considered. The results demonstrate the great potential of the method, with
stiffness values compatible with those required for biomechanical applications.

Keywords: porous materials; bone implants; scaffolds; design.

© Springer International Publishing AG 2017 777


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_78
778 R. Ambu and A.E. Morabito

1 Introduction

Cellular materials are considered very attractive for applications in different
fields [1]. In tissue engineering, porous biocompatible materials can be used to ob-
tain three-dimensional structures that try to mimic the features and function of the
bone [2]. These porous structures can be used, for example, in joint replacement
surgery and bone grafting [3, 4]. These materials are considered particularly ap-
propriate because optimised structures allow better bone ingrowth and implant
fixation [5].
Typically, structures with open cells are required, and the pores should be suffi-
ciently large and form interconnected access channels that allow
for the penetration of osteogenic progenitor cells into the pores, vascularization
and the diffusion of nutrients [6].
The design of a functional scaffold depends on different parameters and, gener-
ally, is obtained by an optimal combination of the dimensional and geomet-
rical characteristics of the cells and the mechanical properties of the biocompatible
material. From a geometrical point of view, the size and characteristics of
the pores are important parameters for a functional implant; on the other hand, the
mechanical properties have to be similar to those of the bone so as to minimize the
effect known as "stress shielding". This effect is due to the mismatch of the me-
chanical properties between the bone and the implant and can affect the longevity
of the implants [7]. Besides the intrinsic mechanical characteristics, biocom-
patibility is a fundamental aspect to be taken into account. Titanium and its alloys
have shown optimal biocompatibility characteristics and are widely reported in
the literature [8, 9].
Additive manufacturing (AM) techniques, generally considered
very promising for manufacturing [10], offer many advantages for the production
of these biomedical implants. These methods use three-dimensional Computer
Aided Design (3D CAD) data to manufacture physical models, prototypes or func-
tional components. Therefore, the architecture of the component can be accurately
controlled during the design stage to build a structure with tailored design specifi-
cations. The use of these technologies potentially allows speeding up the fabrica-
tion of highly complex and custom-fitting medical devices [11].
Porous biocompatible scaffolds can be obtained by considering a stochastic dis-
tribution [12] of the open cells or by means of a non-stochastic architecture ob-
tained by replicating a unit cell along the three Cartesian directions, such as lat-
tice-based geometries [13]: cubic, diamond, rhombic dodecahedron, or similar.

The principle of replicating a unit cell to obtain a porous architecture is also
employed for designing porous biocompatible scaffolds starting from triply peri-
odic minimal surfaces (TPMS) [14]. The interest in TPMS, a
particular class of minimal surfaces, is justified by the several natural manifesta-
tions of these particular surfaces, in living and non-living forms. These
surfaces are periodic in three independent directions, extending infinitely and
without self-intersections. Since they are minimal, the area enclosed by any closed
curve lying on these surfaces is the minimum possible. The minimal surface also
divides the three-dimensional space into two equal volumes. Mathematically, the
minimal surfaces are defined as surfaces with zero mean curvature everywhere. The
mean curvature of a surface S at a point P is expressed by H = (k1 + k2)/2, where k1
and k2 are the principal curvatures of S at P, so that each point of a minimal surface
is a saddle point.
The class of minimal surfaces is today of great interest for the modelling
of highly porous biocompatible scaffolds. Applications of TPMS reported in the lit-
erature mainly concern biocompatible architectures manufactured from poly-
mers [15, 16, 17]; recently, the manufacturability and properties of titanium alloy
TPMS structures for tissue engineering obtained with AM have also been investi-
gated [18].
In this paper, the modelling of biocompatible structures based on TPMS has
been considered. In particular, the level surface known as the Schwarz Primitive
(P) minimal surface (hereafter named P-surface) has been considered.
This TPMS surface was used to obtain porous structures with different stiffnesses,
which were structurally evaluated by means of finite element (FE) analysis in com-
parison with more conventional unit-cell based biocompatible scaffolds entirely
modelled in a CAD environment.

2 Design of the unit-cell based models

In order to generate unit-cell based architectures, two different parametric
methods have been used. The first method was used to obtain a lattice structure
made of unit cubic cells, while the other procedure was applied for modelling
structures made of TPMS P-unit cells. Lattice structures, such as those made of
cubic cells, can be generated by using a CAD-based approach. By means of a 3D
CAD parametric modeller, a unit cell can be generated from solid features that are
combined together. Then, a 3D periodic array of the unit cell can be created
along three mutually perpendicular directions to obtain the final architecture.
To verify this procedure and assess the behaviour of lattice structures, a repre-
sentative model made of cubic unit cells was generated [19]. The geometric pa-
rameters that characterize this kind of structure are the pore size and the strut di-
ameter; by varying their values parametrically, the porosity and consequently the
mechanical stiffness can be adjusted. Fig. 1 shows the model obtained.

Fig. 1. Model of a cubic unit cell based structure

The modelled structure is characterized by a porosity of 78%, which was obtained
by adopting a pore size of 1260 μm and a strut diameter of 270 μm.
A model made of 3x3x3 cells was considered, since the modelling of large lattice
structures becomes prohibitive as the number of elements necessary for numerical
analysis becomes extremely large. However, it was shown [20] that the key me-
chanical properties of lattice structures can be estimated from one or a small
number of unit cells.
The other approach, which is implemented in this paper to model the shape of
the P-unit cell, is the implicit function method. This method uses a single mathe-
matical expression in implicit form to model the outer surface of the cell. The P-
cell can be described, to the first order of approximation, by the following nodal
equation:

cos x + cos y + cos z = 0 (1)

under the boundary conditions x ∈ [-π, π], y ∈ [-π, π] and z ∈ [-π, π]. Mathmod v4.0 [21]
has been used to generate tessellated models that describe the surface of the cell.
The P-surface, described by equation (1), divides the unit cell into two distinct
phases. Phase 1 and phase 2 are, respectively, the regions where f(x, y, z) < 0 and
f(x, y, z) > 0. It is easy to show that these phases have the same volume, so that the
P-cell identified by equation (1) has a porosity of 50% if the two phases re-
spectively identify a solid and a void region.
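The 50/50 phase split can be checked numerically by sampling equation (1) on a grid. In the sketch below, multiplying the argument by a scaling factor is used as one plausible reading of the scaling operation applied to the truncated cells: it enlarges the f < 0 phase, in the direction of the porosity increase reported for the scaled cells.

```python
# Numerical check of the phase split of the P-surface: sample
# f = cos x + cos y + cos z on a uniform grid over [-pi, pi]^3 and measure
# the fraction of the cell where f < 0. The scaled variant (argument
# multiplied by the factor) is an assumed reading of the scaling operation.
import numpy as np

def void_fraction(scale=1.0, n=96):
    u = np.linspace(-np.pi, np.pi, n, endpoint=False)
    x, y, z = np.meshgrid(u, u, u, indexing="ij")
    f = np.cos(scale * x) + np.cos(scale * y) + np.cos(scale * z)
    return float(np.mean(f < 0))

base = void_fraction()         # close to 0.5: the surface halves the cell
scaled = void_fraction(1.25)   # clearly above 0.5 for the scaled cell
```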
The radius of the circular openings of the P-cell can be modified using the
method shown in [14]. Since a scaled and clipped minimal surface is also a
minimal surface, unit P-cells can be scaled and subsequently intersected with the unit
cube to form scaled-truncated P-cells. The radius of these cells varies depend-
ing on the scaling factor used. Fig. 2 shows the result of this scaled truncation for a
scaling factor of 1.25. The effect of this modification on the unit P-cell geometry
is to increase the volume of the region where f(x, y, z) < 0. If this phase identifies
the void region of the cell, the resultant porosity rises to about 73%.
Fig. 2. The original P-cell and the scaled P-cell (scaling factor=1.25)

For manufacturing purposes, a solid model of the P-cell is obviously more use-
ful. This model can be obtained from the mesh surface data by a procedure im-
plemented in Mathmod. Firstly, an offsetting algorithm that thickens the P-
isosurface is applied (Fig. 3a). Then, a further algorithm is used to
close the thickened mesh (Fig. 3b). By varying the offset value, it is
possible to control another important geometric parameter of the solid P-cell: the
thickness.

Fig. 3: a) thickened P-cell; b) thickened and closed P-cell

In this analysis, three different unit P-cells were obtained by varying the offset
from the original P-cell surface while maintaining the same geometric radius. The
unit cells obtained are reported in Fig. 4.

Fig. 4. Unit P-cells obtained by the original P-surface for different offset values.

For each of these P-cells it is possible to define the porosity, or volume frac-
tion of the void phase. If V is the volume bounded by the external surface of the
cell and Vm is the volume occupied by the solid phase, the porosity is given by the
ratio (V - Vm)/V. The values obtained for the three unit cells are respec-
tively 29%, 49% and 62%.
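In code, the porosity definition reads as follows; the volume values are illustrative assumptions, not data from the paper.

```python
# Porosity as defined above: void volume fraction of a unit cell.
# The volumes passed in are illustrative, not values from the paper.
def porosity(v, v_m):
    """Return (V - Vm) / V for total volume v and solid volume v_m."""
    return (v - v_m) / v

p = porosity(v=1.0, v_m=0.38)   # 0.62, matching the most porous cell above
```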
Each unit cell was imported as a STereoLithography (STL) file into the CAD envi-
ronment, and different scaffolds were modelled by replicating the unit cell in three
orthogonal directions. Fig. 5 shows the model of the scaffold obtained for the unit
P-cell labelled B in the previous figure.

Fig. 5. Model obtained by replication of the unit-P-cell

Each model was made of 3x3x3 unit cells. A limited number of unit cells can
introduce some scale effects in the models, but it was shown [22] that in the range
from 3x3x3 to 5x5x5 unit cells the discrepancies are below 10%.

3 Assessment of the models

The modelled structures were numerically evaluated by means of structural FE
analysis. Each model was meshed using four-node tetrahedral elements.
Convergence testing was performed for all numerical models in order to minimize
the influence of mesh density on the results. The material chosen was a titanium
alloy (Ti-35Nb-4Sn), a recently developed material with biocompatible properties,
whose mechanical properties (elastic modulus E = 44 GPa and Poisson's ratio) are
taken from [23]. Uniaxial compression tests were performed by applying to the top
surface of the structure a uniform displacement, within the material elastic limit,
corresponding to 0.1% compressive strain, while the lower surface was fully
constrained.
The first model analysed was the cubic unit-cell model entirely obtained in the
CAD environment. Fig. 6 reports the iso-colour representation, expressed in MPa, of
the stress in the direction of the applied load.

Fig. 6. Compressive stress relative to the cubic cells model

This lattice structure shows straight edges and sharp transitions where the geo-
metric primitives intersect each other. Fig. 6 shows qualitatively how the load is
mainly sustained by the vertical struts, while the structure is less compliant in the
other directions; moreover, at higher stresses, areas of stress concentration could
be produced near the sharp turns.
The subsequent FE analyses concerned the models obtained by means of the P-
cells previously described. To reduce the computation time, a quarter of each
model made of P-cells was considered, and symmetry boundary conditions were
applied to simulate the whole structure. Two different unit-cell edge lengths were
chosen: 0.8 mm and 1.8 mm. Similarly to the cubic unit cell model,
compression tests were conducted on the structures considered, assuming as mate-
rial a Ti alloy with mechanical properties analogous to those previously considered.
Fig.7 shows the iso-colour representation, expressed in MPa, of the stress in the
direction of the applied load, relative to the model reported in Fig.5.

Fig. 7. Compressive stress relative to the P-cells model

This figure qualitatively highlights the trend of the compressive stress, showing
a more homogeneous distribution in the different directions compared with the
model obtained with the cubic cell previously analysed.
For each simulation performed, the effective elastic modulus (Eeff) was evalu-
ated. The reaction force was calculated by the FE solver, while the homogenized
stress was obtained by dividing the total reaction force by the total area of the
loading plane. Since the applied strain is known, Eeff can be calculated. Fig. 8
shows the results for the models considered.
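The Eeff evaluation just described can be sketched as follows. The reaction-force value is an assumed placeholder, not a result from the paper; the loading-plane area corresponds to a full 3x3x3 model of 1.8 mm cells.

```python
# Sketch of the effective-modulus calculation: homogenized stress divided by
# the applied strain. The reaction force is an assumed placeholder value.
def effective_modulus(reaction_n, area_mm2, strain=0.001):
    """Eeff in GPa from total reaction force (N) and loading area (mm^2)."""
    stress_mpa = reaction_n / area_mm2   # N/mm^2 = MPa
    return stress_mpa / strain / 1000.0  # MPa -> GPa

# Loading plane of a 3x3x3 model of 1.8 mm cells: (3 * 1.8)^2 mm^2.
e_eff = effective_modulus(reaction_n=437.4, area_mm2=(3 * 1.8) ** 2)  # 15 GPa
```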
[Bar chart: effective elastic modulus Eeff (GPa), on a 0-30 GPa scale, for models A, B and C at unit-cell edge lengths of 0.8 mm and 1.8 mm]
Fig. 8. Effective elastic modulus of the P-cells models

The figure shows that the difference between the values obtained for the two
unit-cell edge lengths is quite limited, with a maximum deviation within 6%. The val-
ues of the effective elastic modulus for the models analysed are in the range of the
values of the longitudinal elastic modulus of cortical bone [24]; closer
agreement is obtained for the two models with smaller thickness.
To further investigate the effect of the cell geometry on the stiffness of
the structure, a model obtained from a scaled unit P-cell was also considered. The
unit cell analysed, characterized by a volume fraction of 70%, is shown in Fig. 9.

Fig. 9. Unit P-cell with scaled offset

The model analysed consisted of 3x3x3 cells with a unit-cell edge length of 1.8
mm. The simulation for this P-cell geometry was similar to those previously con-
ducted on the other models. A value of 1.48 GPa was obtained, which is
in the range of the values of the elastic modulus of trabecular bone [24], further
confirming the suitability of these cells for biomedical scaffolds.

4 Conclusions

In this paper, the modelling and assessment of biocompatible porous structures
have been considered. These structures have a non-stochastic architecture obtained
by replicating a unit cell along the three Cartesian directions. The P-cell is ob-
tained by thickening and closing the Schwarz Primitive (P) minimal surface, which is
described by an implicit equation. The implicit surface modelling procedure im-
plemented makes it potentially possible to obtain structures with complex geometries,
different pore shapes and architectural features, including pore size gradients, that can
facilitate bone ingrowth and implant adhesion. Although CAD-based methods
provide a powerful tool for the modelling of 3D scaffold geometries, and repre-
sent the most widely employed design approach, they are limited by their poor
performance in reproducing structures resembling naturally occurring shapes.
The numerical simulations of the compression tests on the models based on the
P-cell showed how, by combining different cell thicknesses with adequate
material properties, it is possible to adjust the stiffness of the structure to values
useful for this kind of application. The numerical analysis of these porous struc-
tures, independently of the design methodology employed, requires considerable com-
putation effort, since in this kind of structure the number of elements is generally
extremely large. An accurate FE convergence analysis is thus required to obtain ac-
curate solutions while minimizing the computation time.

5 References

1. Banhart J., Manufacture, characterization and application of cellular materials and metal
foams, 2001, Progress in Materials Science, 46, 559-632
2. Levine B. A new era in porous metals: applications in orthopaedics, Advanced Engineering
Materials, 2008, 10(9), 788-792
3. Murr L.E., Amato K.N., Li S.J.,Tian Y.X, Cheng X.Y., Gaytan S.M, Martinez E., Shindo
P.W., Medina F. and Wicker R.B., Microstructure and mechanical properties of open-cellular
biomaterials prototypes for total knee replacement implants fabricated by electron beam melt-
ing, Journal of the Mechanical Behavior of Biomedical Materials, 2011, 4(7), 1396-1411
4. Dias M.R., Guedes J.M.,. Flanagan C. L., Hollister S. J. and Fernandes P.R. Optimization of
scaffold design for bone tissue engineering: A computational and experimental study, 2014,
Medical Engineering & Physics, 36, 448–457
5. Otsuki B., Takemoto M., Fujibayashia S., Neo Masashi, Kokubo T. and Nakamura T. Pore
throat size and connectivity determine bone and tissue ingrowth into porous implants: Three-
dimensional micro-CT based structural analyses of porous bioactive titanium implants, 2006,
Biomaterials 27, 5892–5900
6. Jones A. C., Arns C. H., Sheppard A.P., Hutmacher D. W., Milthorpe B. K.,. Knackstedt M.
A. Assessment of bone ingrowth into porous biomaterials using MICRO-CT, 2007, Biomate-
rials 28, 2491–2504
7. Noyama Y., Miura T., Ishimoto T., Itaya T., Niinomi M. and Takayoshi N., Bone loss and re-
duced bone quality of the human femur after total hip arthroplasty under stress-shielding ef-
fects by titanium-based implant, Materials Transactions, 2012, 53(3), 565-570
8. Spoerke E.D., Murray N. G., Li H., Brinson L. C., Dunand D. C. and Stupp S. I, A bioactive
titanium foam scaffold for bone repair, 2005, Acta Biomaterialia, 1, 523–533
9. Barbasa A., Bonneta A-S., Lipinskia P., Pescic R., Dubois G., Development and mechanical
characterization of porous titanium bone substitutes, Journal of the Mechanical behaviour of
Biomedical materials, 2012, 9(5), 34-44
10. Gaoa W., Zhang Y., Ramanujana D., Ramani K., Chenc Y.,. Williams C. B,. Wang C. C.L,
Yung Shina C., Zhang S. and Zavattieri P. D., The status, challenges, and future of additive
manufacturing in Engineering, 2015, Computer-Aided Design, 69, 65–89
11. Harrysson O.L.A., Cansizoglu O., Marcellin-Little D. J., Cormier D. R. and West II H. A,
Direct metal fabrication of titanium implants with tailored materials and mechanical proper-
ties using electron beam melting technology, 2008, Materials Science and Engineering C, 28,
366–373
12. Chen Y.J., Feng B., Zhu Y.P., Weng J., Wang J.X. and Lu X., Fabrication of porous titanium
implants with biomechanical compatibility, 2009, Materials Letters, 63, 2659–266
13. Ahmadi S.M., Yavari S.A., Wauthle R., Pouran B., Schrooten J., Weinans H. and Zadpoor
A.A., Additively manufactured open-cells porous biomaterials made from six different space-
filling unit cells: the mechanical and morphological properties, 2015, Materials, 8, 1871-1896
14. Rajagopalan S. and Robb R.A., Schwarz meets Schwann: design and fabrication of biomor-
phic and durataxic tissue engineering scaffolds, 2006, Medical Image Analysis, 10, 693–712
15. Melchels F.P.W., Bertoldi K., Gabbrielli R., Velders A.H. and Feijen J., Mathematically de-
fined tissue engineering scaffold architectures prepared by stereolithography, 2010, Biomate-
rials, 31, 6909-6916
16. Almeida H.A. and Bartolo P.J., Design of tissue engineering scaffolds based on hyperbolic
surfaces: structural numerical evaluation, 2014, Medical Engineering & Physics, 36, 1033-
1040
17. Shin J, Kim S., Jeong D.,Lee H.G., Lee D., Lim J.Y. and Junseok Kim, Finite element
analysis of Schwarz P surface pore geometries for tissue-engineered scaffolds, 2012, Mathe-
matical Problems in Engineering, 2012, 13 pages
18. Yan C., Hao L., Hussein A. and Young P., Ti-6Al-4V triply periodic minimal surface struc-
tures for bone implants fabricated via selective laser melting, 2015, Journal of the Mechanical
Behavior of Biomedical Materials, 51, 61-73
19. Kadkhodapour J., Montazerian H., Darabi A.Ch., Anaraki A.P., Ahmadi S.M., Zadpoor A.A.
and Shmauder S., Failure mechanisms of additively manufactured porous biomterials, 2015,
Journal of the mechanical behavior of biomedical materials, 50, 180-191
20. Smith M., Guan Z. and Cantwell W.J., Finite element modelling of the compressive response
of lattice structures manufactured using the selective laser melting technique, 2013, Interna-
tional Journal of Mechanical Sciences, 67, 28-41
21. http://k3dsurf.s4.bizhat.com
22. Coelho P.G., Hollister S.J., Flanagan C.L. and Fernandes P.R., Bioresorbable scaffolds for
bone tissue engineering: optimal design, fabrication, mechanical testing and scale-effect
analysis, 2015, Medical Engineering and Physics, 37, 287-296
23. Li Y., Yang C., Zhao H., Qu S., Li X. and Li Y., New developments of Ti-based alloys for
biomedical applications, 2014, Materials, 7, 1709-1800
24. Keaveny T.M., Morgan E.F. and Yeh O.C., Bone Mechanics, Chapter 8. Standard Handbook of Biomedical Engineering and Design, 2004 (McGraw-Hill)
Geometric Shape Optimization of Organic Solar
Cells for Efficiency Enhancement by Neural
Networks

Grazia LO SCIUTO1, Giacomo CAPIZZI1*, Salvatore COCO1 and Raphael SHIKLER2

1 Department of Electrical, Electronics and Informatics Engineering, University of Catania,
Catania, Italy
2 Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev,
Israel
* Corresponding author. E-mail address: gcapizzi@diees.unict.it

Abstract The complexity of the heterojunction organic solar cell stems from the
delicate balance that exists between the different properties of the materials used
and the geometric structure of the cell itself. Several parameters therefore affect
the solar cell conversion efficiency, and for this reason the literature offers a
large variety of optimization techniques to improve it. Often these optimization
techniques are complex and costly. In this paper, a back propagation neural
network is used to disclose the link between the device length and the maximum
power output of the device. The simulation results obtained show that the device
length has a great influence on the efficiency and therefore must be taken into
account in manufacturing processes.

Keywords: Geometric Shape Optimization, Organic Solar Cells, Neural Networks, Energy conversion efficiency.

1 Introduction

In the past, the high costs of silicon materials and manufacturing made photovoltaics less competitive than electricity generation from fossil fuels, even though it was already clear that it could meet today's energy demand.
The efficiency of today’s most efficient organic solar cells is primarily limited
by the ability of the active layer to absorb all the sunlight. While internal quantum
efficiencies exceeding 90% are common, the external quantum efficiency rarely
exceeds 70% [1].
The progress of the polymer solar cell is the result of research directed towards the study of new materials promising a high energy conversion efficiency, improving the electrical performance while maintaining low production costs [2]. Currently organic solar cells are less efficient than inorganic ones, but they can be produced at lower prices [3].

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_79

Fig. 1. Structure of the investigated OPVs.
In an organic solar cell the active layer is sandwiched in between two elec-
trodes with different work functions. One of the electrodes needs to be transparent
to be able to absorb light in the active layer of the solar cell. The transparent elec-
trode is often a conductive oxide that can be sputtered or solution processed from
a precursor material [4]. The other electrode is a metal that can easily be evapo-
rated on top of the active layer. This metal contact reflects the light that was not
absorbed, which increases the exciton generation in the active layer. Traditionally
the organic solar cells are fabricated on a glass substrate with patterned indium tin
oxide (ITO) electrodes. A typical organic solar cell contains a photoactive layer
sandwiched between two electrodes with different work functions (Figure 1). For the operation of the cells, the morphology of the photoactive layer, i.e. the spatial distribution of the complementary donor and acceptor semiconductors, and the device architecture, i.e. the choice of electrodes and charge transport layers, are important [5]. Many effects that reduce the efficiency of organic solar cells can be identified by observing simulations, measuring currents and voltages, and exploiting the physics of the devices under atmospheric conditions. Among them, the variation in the length of the devices is not negligible, as it affects the recombination of charge carriers in the organic solar cells [6-8].
In this paper, a back propagation neural network is used to disclose the link be-
tween device length and its maximum power output. Simulations show that the
device length influences the efficiency and therefore it must be taken into account
in manufacturing processes.

2 The organic solar cell structures

The typical architecture of a polymer solar cell is shown in Figure 1 and it consists
of different layers sandwiched between the anode and the cathode. The photoactive layer (e.g. P3HT as the polymer and PCBM as the fullerene) is sandwiched between two electrodes.

Fig. 2. Glove box in Lab of Ben-Gurion University in Beer-Sheva, Israel.

The solar cell is composed of the following layers: a glass substrate, a transparent conductive layer of Indium Tin Oxide (ITO), a conductive polymer layer (Pedot:PSS), a p-type poly(3-hexylthiophene) (P3HT) as the donor, an n-type organic fullerene [6,6]-phenyl-C61 butyric acid methyl ester (PCBM) as the acceptor, and a metal aluminum electrode as the top contact [9].
In general, indium tin oxide (ITO) is used as the highly conductive and transparent front contact; this electrode is deposited by sputtering onto the non-flexible, rigid glass substrate. ITO is chosen for its high work function (WF), that is, the energy required to move an electron from the Fermi level into vacuum. In other words, the effect of the electrodes is negligible only for ohmic contacts. The electrodes collect the free carriers: the anode (a high work function metal) collects the holes, while the second electrode, the cathode, collects the electrons and is formed from a low work function metal such as aluminum, with a WF of -4.3 eV, which is 0.55 eV lower than the energy level of the LUMO band of PCBM (-3.75 eV). The optically active part of the polymer solar cell employs PCBM and P3HT.

3 Experimental Setup

The polymer solar cells analyzed in this paper have been made and tested in the laboratory of organic semiconductor devices "The Michel Mamon Microelectronics Laboratory" at the Ben-Gurion University of the Negev in Beer Sheva.
The encapsulation of the solar cells has been processed at a low temperature (≈22 °C) compatible with the organic materials, because they are very sensitive and degrade very fast under normal air conditions.

Fig. 3. A schematic representation of the realized devices.

Fig. 4. The realized devices.

The electric characterization of the devices was therefore performed under a dry nitrogen atmosphere (inside the glove box shown in Figure 2) with an O2 concentration of 1.7 ppm and an H2O concentration smaller than 0.1 ppm.
Isolation glove boxes provide controlled environments that protect contamina-
tion-sensitive materials from ambient conditions. Containment glove boxes pro-
vide safe processing environments that protect operators from biohazards within
the glove box chamber. For controlled atmospheres, nitrogen dry boxes provide an
isolated work environment for processing samples or handling air-sensitive mate-
rials while maintaining an anaerobic or other gas specific environment within the
glove box.
The dual glovebox system for polymer electronics fabrication provides an inert atmosphere for spin coating, electrode or counter-electrode deposition and assembly of organic photovoltaic (OPV) and other flexible electronic devices. Integrated into the glovebox system are a high vacuum room with a mask transfer system for the thermal evaporative deposition of patterned electrodes and an atomic layer deposition (ALD) system for counter-electrode deposition.

Fig. 5. Current density vs voltage characteristics of some organic solar cells with different lengths under 1 sun (100 mW/cm2) simulated AM1.5 irradiation.
We have realized 9 devices, each of which contains 4 organic solar cells. All
devices have the same sizes (width and length of 12 mm), while the organic solar
cells have a width of 1.5 mm and a length ranging from 4.50 to 7.50 mm. Figure 3
shows a schematic representation of the devices, while Figure 4 the realized de-
vices.
The devices have:
• a metal electrode (cathode) in aluminum with a thickness of 80 nm;
• a PCBM:P3HT layer with a thickness of 200 nm;
• a Pedot:PSS layer with a thickness of 30 nm;
• a transparent ITO layer of dimensions 6 mm × 12 mm with a thickness of 90 nm, coated in the central part, having a resistance of 20 Ohm/m2;
• a glass substrate with a thickness of 0.7 mm.

The ITO is the anode with high transparency and conductivity but it is fragile
and susceptible to deterioration.
Finally, for each device the electric characterization was performed by measuring the current density versus voltage in the range from -1 to +1 Volt. The current density vs voltage characteristics of some organic solar cells with different lengths are shown in Figure 5.
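The maximum power output used later as the neural-network target can be extracted from such a J-V sweep. The sketch below is a minimal Python illustration on an ideal-diode curve; the parameters `vt`, `jsc` and `voc` are invented for the example, not the paper's measured values.

```python
import numpy as np

# Hypothetical J-V sweep of one cell under 1 sun, with an ideal-diode shape.
# vt, jsc and voc are invented parameters, not measured values.
vt, jsc, voc = 0.05, 8.5, 0.58        # thermal voltage [V], mA/cm2, V
v = np.linspace(-1.0, 1.0, 2001)      # the -1 .. +1 V sweep used in the paper
j0 = jsc / np.expm1(voc / vt)         # chosen so that J(voc) = 0
j = j0 * np.expm1(v / vt) - jsc       # J < 0 in the power-generating quadrant

# Power density delivered by the cell; maximum power point in 0 < V < Voc
p = -j * v
mask = (v > 0) & (v < voc)
i_mpp = np.argmax(p * mask)
v_mpp, p_max = v[i_mpp], p[i_mpp]
ff = p_max / (jsc * voc)              # fill factor of the curve
print(v_mpp, p_max, ff)
```

The same maximum-power extraction, repeated for each cell length, yields the target values for the network of Section 4.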


Fig. 6. The Matlab schematic of the neural network.

4 The proposed Neural Network for Geometric Shape Optimization

The selected neural network (see Figure 6) is a multilayer feed forward neural network composed of an input layer, an output layer and two hidden layers: the two neurons arranged in the first hidden layer have a hyperbolic tangent transfer function, as do the two neurons arranged in the second hidden layer, while a linear transfer function has been used for the output layer.
For the training we used the fast Levenberg-Marquardt algorithm and the well known gradient descent with adaptive learning rate back-propagation algorithm, with the related learning parameters. To prevent overfitting, the early stopping rule is used: the available data are separated into three subsets, namely the training, validation and testing sets [10-11].
The training set is used as the primary set of data that is provided as input to the
neural network for learning and adaptation.
The inputs to the neural network are the different lengths of the organic solar cells and the targets are the maximum power outputs associated to each solar cell.
To evaluate the performance of the network during the training we use the SSE (sum squared error); the best experiments conducted are reported in Figure 7.
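The architecture and training scheme described above can be sketched as follows. This is a minimal numpy illustration, not the authors' Matlab implementation: the length/power data are synthetic (peaking near 5.5 mm, roughly as in Figure 8), and plain gradient descent with early stopping on a validation SSE stands in for the Levenberg-Marquardt training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic length -> maximum power data (invented; the paper's measured
# values are not reproduced here).
L = rng.uniform(4.5, 7.5, 40)
P = np.exp(-((L - 5.5) / 0.6) ** 2) + 0.02 * rng.standard_normal(40)

x = (L - L.mean()) / L.std()                  # normalised input
idx = rng.permutation(40)
tr, va, te = idx[:24], idx[24:32], idx[32:]   # training/validation/test split

# 1-2-2-1 network: two tanh hidden layers of two neurons, linear output.
W1, b1 = rng.standard_normal((2, 1)), np.zeros((2, 1))
W2, b2 = rng.standard_normal((2, 2)), np.zeros((2, 1))
W3, b3 = rng.standard_normal((1, 2)), np.zeros((1, 1))

def forward(xs):
    a1 = np.tanh(W1 @ xs[None, :] + b1)
    a2 = np.tanh(W2 @ a1 + b2)
    return a1, a2, (W3 @ a2 + b3).ravel()

def sse(xs, ts):                              # the SSE performance index
    return float(np.sum((forward(xs)[2] - ts) ** 2))

lr, best, best_w, patience = 0.05, np.inf, None, 0
for epoch in range(5000):
    a1, a2, y = forward(x[tr])
    d3 = (y - P[tr])[None, :]                 # linear output layer
    d2 = (W3.T @ d3) * (1 - a2 ** 2)          # backprop through tanh layers
    d1 = (W2.T @ d2) * (1 - a1 ** 2)
    n = len(tr)
    W3 -= lr * (d3 @ a2.T) / n; b3 -= lr * d3.mean(1, keepdims=True)
    W2 -= lr * (d2 @ a1.T) / n; b2 -= lr * d2.mean(1, keepdims=True)
    W1 -= lr * (d1 @ x[tr][None, :].T) / n; b1 -= lr * d1.mean(1, keepdims=True)
    vloss = sse(x[va], P[va])                 # early stopping on validation set
    if vloss < best:
        best, best_w, patience = vloss, [w.copy() for w in (W1, b1, W2, b2, W3, b3)], 0
    else:
        patience += 1
        if patience > 200:
            break

W1, b1, W2, b2, W3, b3 = best_w               # restore the best weights
print("test SSE:", sse(x[te], P[te]))
```

The validation subset stops the training as soon as its SSE stops decreasing, while the test subset gives an unbiased estimate of the generalization error.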

5 Discussion

The geometry of the device determines which electrodes are appropriate for the back of the OPV. If the front electrode is to collect holes (normal cells), the back electrode must have a higher work function. If the front electrode is an electron collector (inverted cells), the back electrode must have a lower work function than the transparent conducting electrode at the front of the cell. This metal contact reflects the light that was not absorbed, which increases the exciton generation in the active layer (LiF).
Photocurrent generation is principally governed by three major factors, i.e., the number of absorbed photons, the number of carriers generated by exciton separation at the donor and acceptor interface, and the efficiency of charge collection. All these factors are strongly influenced by the LiF. The origin of the effect of the LiF is still unclear, but it effectively increases the efficiency of the organic solar cells with respect to an Al electrode.

Fig. 7. The neural network performance.

Fig. 8. The relationship between length and maximum power output of the organic solar cells obtained by the selected neural network.
This paper represents the first attempt to model the relationship between the
LiF (and thus the efficiency of a cell) and the geometry of the metallic contact. On
the other hand, this model can be useful for the understanding of the origin of the
effect of the LiF.

6 Experimental Results and Conclusion

In this paper we have investigated the relationship between the length and the maximum power output of organic solar cells, by means of a back propagation neural network, in order to improve their efficiency. Simulations show that the device length has a great influence on the efficiency (see Figure 8) and that satisfactory electric performances are obtained only for a limited subset of cell lengths (in our case, between 5.2 mm and 5.8 mm); therefore, the length must be taken into account in manufacturing processes.

References

1. Kim S. J., Margulis G. Y., Rim S. B., Brongersma, M. L., McGehee M. D and Peumans P.
Geometric light trapping with a V-trap for efficient organic solar cells. Optics express, 2013.
21(103), A305-A312.
2. Kalowekamo J., Baker E. Estimating the manufacturing cost of purely organic solar cells. So-
lar Energy, 2009, 82(8), 1224-1231.
3. Li G., Zhu R. and Yang Y., Polymer solar cells. Nature Photonics, 2012, 6, 153–161.
4. Hotovy J., Hüpkes J., Böttler W., Marins E., Spiess L., Kups T., Smirnov V., Hotovy and
Kováč J. Sputtered ITO for application in thin-film silicon solar cells: Relationship between
structural and electrical properties. Applied Surface Science, 2013, 269(15), 81-87.
5. Bube, R. Fundamentals of solar cells: photovoltaic solar energy conversion. Elsevier, 2012.
6. Tang Z., Tress W., Inganäs O. Light trapping in thin film organic solar cells. Materials Today, 2014, 17(8), 389-396.
7. Bonanno F., Capizzi G., Lo Sciuto G., Napoli C., Pappalardo G., Tramontana E. A Cascade
neural network architecture investigating surface plasmon polaritons propagation for thin
metals in OpenMP. Lecture Notes in Computer Science 8467 LNAI (PART 1), 2014, pp. 22-
33.
8. Bonanno F., Capizzi G., Coco, S., Napoli C., Laudani A., Lo Sciuto G. Optimal thicknesses
determination in a multilayer structure to improve the SPP efficiency for photovoltaic devices
by an hybrid FEM - Cascade Neural Network based approach International Symposium on
Power Electronics, Electrical Drives, Automation and Motion, SPEEDAM 2014, 2014, pp.
355-362.
9. Shuxuan Qu, Minghua Li, Lixin Xie, Xiao Huang, Jinguo Yang, Nan Wang, and Shangfeng
Yang. Noncovalent Functionalization of Graphene Attaching [6,6]-Phenyl-C61-butyric Acid
Methyl Ester (PCBM) and Application as Electron Extraction Layer of Polymer Solar Cells.
ACS Nano, 2013, 7 (5), 4070–4081.
10. Capizzi G., Bonanno F., Napoli C. Hybrid neural networks architectures for SOC and voltage
prediction of new generation batteries storage. 3rd International Conference on Clean Electri-
cal Power: Renewable Energy Resources Impact, ICCEP 2011, 2011, pp. 341-344.
11. Bonanno F., Capizzi G., Coco S., Laudani A., Lo Sciuto, G. A coupled design optimization
methodology for Li-ion batteries in electric vehicle applications based on FEM and neural
networks. International Symposium on Power Electronics, Electrical Drives, Automation and
Motion, SPEEDAM 2014, 2014, pp. 146-153.
Section 5.4
Reverse Engineering
A survey of methods to detect and represent the
human symmetry line from 3D scanned human
back

Nicola CAPPETTI* and Alessandro NADDEO

Department of Industrial Engineering, University of Salerno


* Corresponding author. Tel.: +39-089-96-4094. E-mail address: ncappetti@unisa.it

Abstract This paper proposes a review of the methods to detect and represent the human symmetry line. In the last years, the development of 3D scanners has allowed to replace the traditional techniques (marking based methods) with modern methodologies that, starting from a valid 3D discrete geometric model of the back, perform the posture and vertebral column detection based on a complex processing of the acquired data. The purpose of the paper is a critical discussion of the state of the art in order to highlight the real potentialities and the limitations still present in the most important methodologies proposed for human symmetry line detection.

Keywords: rasterstereography, back shape analysis, symmetry line, posture prediction, anatomical landmarks.

1 Introduction

The complex anatomy of the musculoskeletal system, characterized by a great number of degrees of freedom, makes it possible to assume, for the same workplace arrangement, alternative postures, not all of which are correct. Assuming incorrect postures for a long period may generate musculoskeletal injuries, which currently are the most prevalent work-related reason for absenteeism in many work sectors. The curvature of the spine is one of the most important characteristics for determining the quality of the posture and the intervertebral disc loads and stresses. Accordingly, for the last few years, many studies have focused on the analysis and prediction of postures for different worker typologies [1].
For the last decade, and especially thanks to the development of 3D acquisition techniques, the study of shape recognition and feature extraction from three-dimensional acquired objects in the biomedical field has received greater attention and interest.

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_80

Nowadays the traditional techniques for posture and vertebral column detection, based on cutaneous marking [2, 3, 4], can be replaced with modern
techniques based on 3D scanners. These techniques are non-invasive and perform accurate quantitative measurements with acceptable costs, making possible subject posture acquisitions which are not conditioned by the instrument; however, they require that the back be completely visible. In order to recognize the elements identifying the spatial configuration of the spine, the acquired point data set of the subject's back needs processing. Starting from a valid 3D discrete geometric model of the back, the spine configuration, which is based on external detection, is typically identified with the symmetry line, that is, the 3D curve passing through the external position of the vertebral apophyses. The automatic symmetry line detection is a difficult task to accomplish. It requires a complex elaboration of the acquired point cloud, so as to obtain the shape properties of the back's surface, and the application of a convenient fitting procedure [5]. Taking into account the thickness of the soft parts, the spinal midline can then be estimated from the symmetry line [6].
In this paper, a review of the most important methods presented in the literature for the detection and representation of the human symmetry line is proposed.

2. Symmetry line methods

All the methods discussed in the following start by acquiring the surface of the subject's back in its full extent by a 3D scanner. Then, the recognition, based on different rules, of the line which identifies the spinal column from the cloud of measured points is carried out. The methods presented in the literature for the detection of the symmetry line can be grouped in "Cutaneous marking based methods", "Parallel sections based methods" and "Adaptive sections based methods".

2.1. Cutaneous marking based methods

The first approach for the determination of the symmetry line based on the three-dimensional scanning of the back was proposed by Turner-Smith [7]. The method performs the acquisition of the 3D position of landmarks located, by means of manual palpation, on the vertebrae prominences (C7, or T1 if there was an ambiguity; between 4 and 10 spinous processes; and the posterior superior iliac spine dimples). The accuracy of the positioning of the landmarks was evaluated as 5 mm. The symmetry line is determined as the broken line which joins the barycentre of each marker.
Sotoca et al. in [8], starting from the back surface depth map, proposed a method for which the spine curve is obtained by applying cutaneous markers positioned on the back surface in correspondence of the vertebral spinous processes, from C7 to L5 (figure 1a), and projecting the scaled, rotated and translated x-ray images (figure 1b) on the topographic representation of the back surface (figure 1c).

Fig. 1. Clinical image of a patient with a severe thoracic scoliosis. The vertebral spinous processes are marked on the skin (a). A radiography of the same patient (b). Topographic representation of the back surface with the values of z displayed in millimeters (c) [8].

In order to make a 3D parametric description of the symmetry line C(u), the authors approximate the acquired locations of the markers with a polynomial curve:

x(u) = Σi=0..nx Px,i u^i ;  z(u) = Σi=0..nz Pz,i u^i   (1)

where nx and nz are the degrees of the polynomials and Px and Pz are the coefficients of the polynomials, computed by the least squares method.
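A least-squares polynomial approximation in the spirit of equation (1) can be sketched as follows; the marker coordinates are invented for illustration, and the vertical coordinate y is assumed to act as the curve parameter u.

```python
import numpy as np

# Hypothetical marker positions (mm) from C7 down to the sacrum; y is the
# vertical coordinate (used here as the curve parameter), x the lateral
# deviation and z the depth. All values are invented for illustration.
y = np.linspace(0.0, 450.0, 13)
x = 5.0 * np.sin(np.pi * y / 450.0)
z = 30.0 * np.cos(2 * np.pi * y / 450.0)

nx, nz = 3, 4                           # degrees of the two polynomials
Px = np.polyfit(y, x, nx)               # least-squares coefficients
Pz = np.polyfit(y, z, nz)

# Evaluate the fitted symmetry line C(u) on a dense parameter grid
u = np.linspace(y.min(), y.max(), 200)
curve = np.column_stack([np.polyval(Px, u), u, np.polyval(Pz, u)])
print(curve.shape)
```

`np.polyfit` solves exactly the least-squares problem mentioned in the text, one polynomial per coordinate.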

2.2. Parallel sections based methods

In normal subjects, the symmetry line is the line that divides the back into two mirror-image parts. From a geometric point of view, each part can be considered the involutory orthogonal transformation of the other [9, 10]. The symmetry of the spine can be distorted by some ailments such as scoliosis, kyphosis or lordosis, which produce a deviation of the spine from the sagittal plane or torsions. Even in the presence of these anomalies, the spine's location can be identified based on a local symmetry property. In a healthy subject, this symmetry can be recognised in the profile obtained by sectioning the back surface with a horizontal plane.

Drerup and Hierholzer [6] proposed the first methodology based on symmetry properties of the horizontal sections of the subject's back. A fixed coordinate system is associated to the subject's back, with the vertical axis defined by the prominence of the seventh cervical vertebra and the sacrum. The back surface is sliced by planes perpendicular to the vertical axis. For each slice curve Γ, the point representing the position of the spine is associated to the minimum value of the lateral asymmetry function A, defined as follows. Let p be a point belonging to a slice curve of the back; the lateral curvature asymmetry A, with respect to p(ξ), is:

A(p) = (1/L0) ∫u=0..L0/2 a(u) du   (2)

L0 is the length of reference, that is, the measure of the neighbourhood of p explored to check symmetry. The partial contributions a(u) are calculated at the points of each slice lying symmetrically to the left (sub-index l) and to the right (sub-index r) of p, according to equation (3):

a(u) = (Hl − Hr)² + Gl² − 2 Gl Gr cos(2Δφ) + Gr²   (3)

where:
• H = mean curvature; Hl = H(ξ−u); Hr = H(ξ+u);
• G = Gaussian curvature; Gl = G(ξ−u); Gr = G(ξ+u);
• ξ is the curvilinear abscissa of the point p;
• Δφ = φl − φr is the difference between the angular orientations of the principal directions in l and r (k1,l and k1,r).
The function A(p, L0) has value 0 for those points of the curve having a perfectly specular neighbourhood. The value of the length of reference L0 has an important role, especially if compared with the dimensions of the symmetric portion of the back. Varying the value of L0 involves different values of the symmetry index, since it analyses symmetry in a wider or narrower range of the investigated zone. The minimum value of A(p, L0) identifies the point that best represents the symmetry of the curve and, therefore, the position of the spine.
In order to avoid results that are not compatible with biomechanical constraints, Huysmans et al. [11, 12] integrate the lateral asymmetry function with other factors by taking into account curvatures, blending, torsions, and biomechanical constraints. The symmetry line is the result of the optimization of the following total cost function (Ctotal), that is, the sum of the contributions Ci of each slice curve:

Ctotal = Σi Ci   (4)

In the approach proposed by Santiesteban et al. [13], the directions of principal curvatures are used as local shape descriptors of the back surface. In particular, assuming that the surface is oriented so that the backbone is almost vertical, the most horizontal principal direction is used. Then the profile for the j-th cutting plane is defined as the set of centroids Mj = {PC1j, PC2j, …, PCmj} (of the points near to the cutting plane and projected on it) and the set of profile directions Vj = {v1j, v2j, …, vmj} (the most horizontal principal direction evaluated at each centroid). For each profile j, the following function is considered:

(5)

where G is the derivative Gaussian function and is the slope angle of . According to equation (5), PCkj is a concave (convex) region if ( ). The point locating the symmetry line in the j-th cutting plane is obtained by a criterion of symmetric position applied to the sign of .
Di Angelo et al. in [14] introduced the following symmetry index, based on the analysis of the symmetry in the orientation of the normal unit vectors of horizontal sections of the back surface:

wS = e^(−S(ν, L0))   (6)

This index is based on the following asymmetry function of equation (7):

S(ν, L0) = σNx² + σNy² + σNz²   (7)

where L0 is the length of reference, that is, the measure of the neighbourhood of p explored to check the symmetry, and:

σNα(ν, L0) = [ (1/L0) ∫u=0..L0/2 ( ΔNα(ν, u) − ΔN̄α )² du ]^0.5   for α = x, y, z   (8)

where

ΔN(ν, u) = ( n(ν+u) − n(ν−u) ) / ‖n(ν+u) − n(ν−u)‖ ;

ΔN̄(ν, L0) = (1/L0) ∫u=0..L0/2 ΔN(ν, u) du .

n(ν+u) and n(ν−u) are normal unit vectors at opposite points of the curve (p(ν+u) and p(ν−u)), which lie symmetrically with respect to the point p(ν). The maximum value of the symmetry index identifies the point which best represents the symmetry of the i-th slice profile (Γi) and which is therefore defined as the symmetric point. In figure 2 the definition of the symmetry index is illustrated.
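A discretised evaluation of the index wS = e^(−S) can be sketched as follows, on a synthetic planar profile whose normals are mirror-symmetric about its midpoint; the interface and sampling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def symmetry_index(normals, j, half_width):
    """Discretised wS = exp(-S) at sample j of one slice profile, given the
    unit normals along the profile: S sums, over x, y, z, the variance of
    the normalised normal differences dN across the explored neighbourhood."""
    dN = []
    for k in range(1, half_width + 1):
        d = normals[j + k] - normals[j - k]
        nrm = np.linalg.norm(d)
        dN.append(d / nrm if nrm > 1e-12 else np.zeros(3))
    dN = np.array(dN)
    S = np.sum(np.mean((dN - dN.mean(axis=0)) ** 2, axis=0))
    return float(np.exp(-S))

# Synthetic planar profile y = x^2: its normals are mirror-symmetric about
# x = 0, so the index should peak (wS = 1) at the middle sample.
x = np.linspace(-1.0, 1.0, 41)
normals = np.column_stack([-2 * x, np.ones_like(x), np.zeros_like(x)])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
wS = [symmetry_index(normals, j, 5) for j in range(5, 36)]
best = 5 + int(np.argmax(wS))
print(best, max(wS))
```

At the symmetric point the normalised differences ΔN are all parallel, so their variance, and hence S, vanishes and wS reaches its maximum of 1.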
Fig. 2: Definition of the symmetry index S(ν, L0) [14].

Fig. 3: Maximum of the symmetry function wS in some slices of the back [14].

The line which is associated to the spinal column is a weighted approximation of the symmetry points with the parametric curve C(t) = {x(t), y(t), z(t)}:

x(t) = Σi=0..3 ai t^i ;  y(t) = Σi=0..3 [ bi cos(ikt) + ci sin(ikt) ] ;  z(t) = t   (9)

The coefficients ai, bi and ci are calculated by a weighted least squares method.


In some cases, the position of the symmetry line could be associated to relative and not to absolute maximum values of the symmetry index (figure 3). The estimated symmetry line is the curve that minimizes the sum of its distances from the approximated points, which can be the absolute or relative maxima of the symmetry index.
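The weighted least squares step for the polynomial part of the curve C(t) can be sketched as follows; the symmetric points and their weights are invented, and only the x(t) = Σ ai t^i component of equation (9) is fitted.

```python
import numpy as np

# Symmetric points of the slices (z used as the parameter t) and their
# weights; all values are invented for illustration. Only the polynomial
# x(t) = a0 + a1 t + a2 t^2 + a3 t^3 component is fitted here.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 25)
x_pts = 4.0 * t * (1.0 - t) + 0.05 * rng.standard_normal(25)
w = rng.uniform(0.5, 1.0, 25)           # stand-in for the symmetry weights

# Weighted least squares: minimise sum_j w_j (x_j - V_j a)^2, i.e. solve
# the normal equations (V^T W V) a = V^T W x.
V = np.vander(t, 4, increasing=True)
W = np.diag(w)
a = np.linalg.solve(V.T @ W @ V, V.T @ W @ x_pts)
print(np.round(a, 3))
```

Weighting each point by its symmetry index lets well-defined symmetric points dominate the fit, while ambiguous slices (relative maxima only) contribute less.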

2.3. Adaptive sections based methods

In order to overcome some limitations of the previously mentioned methods, a new method (called NEPA: Non-Erected Posture Approach), based on an adaptive process, is presented in [15, 16]. The authors introduced a local reference system associated to the Frenet-Serret frame of the symmetry line (figure 4): {OL(t), ξL(t), ψL(t), ζL(t)}. The origin (OL) is a point of the symmetry line. ζL(t) is the local longitudinal axis, defined as the tangent of the symmetry line in OL(t). ψL(t) is the local sagittal axis, identified as the normal of the symmetry line in OL(t), and ξL(t), the local coronal axis, is perpendicular to the other two. So, ξL(t) and ψL(t) define the local transversal plane (Π(t)), ζL(t) and ψL(t) define the local sagittal plane, and ζL(t) and ξL(t) define the local coronal plane.
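A discrete construction of such local frames can be sketched as follows for a sampled symmetry line; the Gram-Schmidt step and the toy arc are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def local_frames(line):
    """Discrete {zetaL, psiL, xiL} frames along a sampled symmetry line
    (N x 3 array): zetaL is the unit tangent, psiL the unit normal obtained
    from the tangent derivative (with the tangential component removed),
    and xiL completes the triad."""
    t = np.gradient(line, axis=0)
    zeta = t / np.linalg.norm(t, axis=1, keepdims=True)
    dz = np.gradient(zeta, axis=0)
    dz -= (dz * zeta).sum(axis=1, keepdims=True) * zeta   # Gram-Schmidt step
    psi = dz / np.maximum(np.linalg.norm(dz, axis=1, keepdims=True), 1e-12)
    xi = np.cross(psi, zeta)
    return zeta, psi, xi

# Toy symmetry line: a gentle planar arc sampled at 50 points.
s = np.linspace(0.0, 1.0, 50)
line = np.column_stack([0.1 * np.sin(2.0 * s), np.zeros(50), s])
zeta, psi, xi = local_frames(line)
print(np.abs((zeta * psi).sum(axis=1)).max())   # near zero: orthogonal axes
```

The rows of `zeta` and `psi` span the local transversal and sagittal directions, and each plane Π(t) is the plane through a line sample with `zeta` as its normal.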


Figure 4: Local reference systems (colour figure online) [15].

Starting from the 3D acquisition of the subject's back, the method consists of the detection of a first-attempt symmetry line (C0) and a subsequent refinement estimation (Cf). The first-attempt estimation is performed by the parallel section method proposed in [14]. Once the initial estimation of the symmetry line C0 is obtained, the refinement algorithm is applied. This refinement algorithm can be considered as an optimization process that finds the set of planes Π(t) for which the obtained profiles Γ(t) have the maximum symmetry of the back. This process converges to the best symmetry line estimation under the following double hypothesis:
1. the symmetry line passes through the most symmetric points of the back;
2. Π(t) sections the back in the most symmetric profiles.
In order to perform a better evaluation of the symmetry line (SL) for asymmetric postures, Di Angelo et al. in [17] proposed a new refinement algorithm of the first estimation C0. This refinement algorithm is an autoregressive method that searches for the symmetry line by analyzing the profiles obtained by sectioning the back with planes Πk which are perpendicular to the SLk-1 (Ps*). These sections are the most important to analyze the symmetry of the back.
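The refinement loop common to these adaptive approaches can be sketched as follows; `symmetric_point` is a hypothetical user-supplied function (e.g. a maximiser of a symmetry index over the profile cut by a given plane), and the convergence test on successive estimates is an illustrative choice.

```python
import numpy as np

def refine_symmetry_line(points, line0, symmetric_point, n_iter=10, tol=1e-4):
    """Adaptive-sections refinement sketch: re-section the back with planes
    perpendicular to the current symmetry-line estimate and move each node
    to the most symmetric point of the new profile. `symmetric_point` is a
    hypothetical callback (points, origin, normal) -> point."""
    line = line0.copy()
    for _ in range(n_iter):
        tangents = np.gradient(line, axis=0)          # plane normals
        tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
        new_line = np.array([symmetric_point(points, o, t)
                             for o, t in zip(line, tangents)])
        if np.max(np.linalg.norm(new_line - line, axis=1)) < tol:
            return new_line                           # converged estimate
        line = new_line
    return line

# Toy check: a "symmetric point" that projects the origin onto the plane
# x = 0 makes the loop converge to the x = 0 axis.
pts = np.zeros((0, 3))
sp = lambda P, o, t: np.array([0.0, o[1], o[2]])
line0 = np.column_stack([np.linspace(1, 2, 5), np.zeros(5), np.linspace(0, 1, 5)])
refined = refine_symmetry_line(pts, line0, sp)
print(refined[:, 0])
```

Each pass makes the cutting planes follow the current estimate, which is what lets the adaptive methods track symmetry lines lying far outside the sagittal plane.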

6. Considerations and conclusion

By the employment of cutaneous marking based methods, we note that the main limitation is the need to manually identify the apophyses and apply markers.
Parallel sections based (PSB) methods are based on horizontal slicing, so they are specifically suited to evaluate the symmetry line for erect postures. Even more so, they do not seem to be adequate to analyze postures with torsion of the trunk, which may produce spine configurations lying outside the sagittal plane.
Comparing Drerup [6, 7] and Di Angelo [14], we note no substantial difference in accuracy in the analysis both of scoliotic patients and of healthy young people. Some differences exist in their applicability to different postures, in the implementability of biomechanical constraints and of information from previous measurements, in the stability and reliability of anatomical landmark location, and in computational complexity and algorithm implementability. The method proposed in [14] makes, with respect to that of [6], fewer errors in the lumbar and thoracic tracts, whereas it has some difficulties in estimating the symmetry line in the cervical tract.
All the above-mentioned indirect methods, based on horizontal slicing, are specifically suited to evaluate the symmetry line for symmetric postures, and they are useful in the analysis of spine health. However, they do not seem to be adequate to analyze asymmetric postures that may produce spine configurations lying far outside the sagittal plane, like the wider set of postures assumed by workers in their workplace. Since in ergonomics it is very important to know the correctness of the posture assumed by the workers in order to improve their workplace [18], adaptive sections based methods are needed. Only one such method is available in the literature, which was validated by analyzing and comparing with a PSB approach different cases of symmetric and asymmetric postures. The results obtained show that the new method performs a correct evaluation of the symmetry line also in the case of extreme asymmetric postures, such as those analysed, with an error reduction with respect to the first estimation of about 20%.
The results in the test cases used, although related to seated postures, confirm
the applicability of this method in the analysis of any kind of asymmetric postures
presenting identical crit ical aspects. The method functionalities are not investiga t-
ed in the postural analysis of subject with serious pathologies, but it could be an
interesting field of application. In figure 5 so me results obtained with the tradi-
tional and the refinement approaches are compared.
The most important residual limitation is due to the body morphology of the
subject: a region of a hemisoma may reveal gibbosities or alterations having no
correlation with the vertebral column deviations, which may be merely due to
inadequate or incorrect postural attitudes assumed by the subject, because of
pain or because of daily work and/or sports habits.
Figure 5: traditional and refinement approaches: results for some significant cases [17].
Panels (a)-(d) show, for each case, the markers, the slices and symmetry line of
the first attempt, and the final slices and final symmetry line.
808 N. Cappetti and A. Naddeo

In this paper, a review of the methods to detect and represent the human
symmetry line has been proposed. By analyzing the state of the art, it has been
verified that, in recent years, there has been a remarkable evolution of these
methods, which now also allow the analysis of postures whose symmetry line falls
outside the sagittal plane. Nevertheless, some limitations remain which are not
evinced by cutaneous marking.

References

1. Kothiyal K, Kayis B. 2001. Workplace layout for seated manual handling tasks: an electro-
myography study. Int. Journal of Industrial Ergonomics 27: 19-32.
2. Chiou, W.-K.; Lee, Y.-H.; Chen, W.-J.; Lee, M.-Y.; Lin, Y.-H.: A non-invasive protocol for
the determination of lumbar spine mobility, Clinical Biomechanics, 11 (8), 1996, 474-480.
3. Lee, Y.-H.; Chiou, W.-K.; Chen, W.-J.; Lee, M.-Y.; Lin, Y.-H.: Predictive model of in-
tersegmental mobility of lumbar spine in the sagittal plane from skin markers, Clinical Bio-
mechanics, 10 (8), 1995, 413-420.
4. Sicard, C.; Gagnon, M.: A geometric model of the lumbar spine in the sagittal plane, Spine,
18 (5), 1993, 646-658.
5. Furferi, R., Governi, L., Palai, M., Volpe, Y.: From unordered point cloud to weighted B-
spline - A novel PCA-based method, Applications of Mathematics and Computer Engineer-
ing - American Conference on Applied Mathematics, AMERICAN-MATH'11, 5th WSEAS
Int. Conf. on Computer Engineering and Applications, CEA'11, pp. 146-151 (2011).
6. Drerup, B.; Hierholzer, E.: Back shape measurement using video rasterstereography and 3-
dimensional reconstruction of spinal shape, Clinical Biomechanics, 9 (1), 1994, 28-36.
7. Turner-Smith, A.R.: A method for analysis of back shape in scoliosis. In J. Biomech. 21 (6),
497-509 (1988).
8. Sotoca JM, Buendia M, Inesta JM, Ferri FJ. 2003. Geometric properties of the 3D spine
curve. Lecture Notes in Computer Science 2652, Perales, FJ, et al., Springer-Verlag, 1003-
1011.
9. Di Angelo L., Di Stefano P., "Bilateral symmetry estimation of human face". Int. Journal on
Interactive Design and Manufacturing, vol. 7 (4), 2013, p. 217-225, ISSN: 1955-2513.
10. Di Angelo L., Di Stefano P., "A computational Method for Bilateral Symmetry Recognition
in Asymmetrically Scanned Human Faces". Computer-Aided Design and Applications, vol.
11 (3), 2014, p. 275-283.
11. Huysmans T, Haex B, De Wilde T, Van Audekercke R, Vander Sloten J, Van der Perre G.
2006. A 3D active shape model for the evaluation of the alignment of the spine during sleep-
ing. Gait & Posture 24: 54-61.
12. Huysmans T, Haex B, Van Audekercke R, Vander Sloten J, Van der Perre G. 2004. Three-
dimensional mathematical reconstruction of the spinal shape based on active contours. Jour-
nal of Biomechanics 37: 1793-1798.
13. Santiesteban Y, Sanchez JM, Sotoca JM. 2006. A method for detection and modelling of the
human spine based on principal curvature. In: Martinez-Trinidad JF, et al., editors. CIARP
2006, LNCS 4225. Berlin: Springer; p. 168-177.
14. Di Angelo L, Di Stefano P, Vinciguerra MG. 2011. Experimental validation of a new method
for symmetry line detection. Computer-Aided Design and Applications 8 (1): 71-86.

15. Di Angelo L., Di Stefano P., Spezzaneve A., "An iterative method to detect symmetry line
falling far outside the sagittal plane". International Journal on Interactive Design and Manu-
facturing, vol. 6 (4), 2012, p. 233-240, ISSN: 1955-2513.
16. Di Angelo L., Di Stefano P., Spezzaneve A., "Symmetry line detection for non-erected pos-
tures". International Journal on Interactive Design and Manufacturing, vol. 7 (4), 2013, p.
271-276, ISSN: 1955-2513.
17. Di Angelo L., Di Stefano P., Spezzaneve A., "A method for 3D symmetry line detection in
asymmetric postures". Computer Methods in Biomechanics and Biomedical Engineering, vol.
16 (11), 2013, p. 1213-1220, ISSN: 1025-5842.
18. Vallone M., Naddeo A., Cappetti N. and Califano R. Comfort Driven Redesign Methods: An
Application to Mattresses Production Systems, The Open Mechanical Engineering Journal,
2015, 9, 492-507.
Semiautomatic Surface Reconstruction in
Forging Dies

Rikardo Minguez1*, Olatz Etxaniz1, Agustin Arias1, Nestor Goikoetxea1, Inaki Zuazo1

1 Department of Graphic Design and Engineering Projects, Faculty of Engineering, University
of the Basque Country UPV/EHU, Urkixo zumarkalea z/g, 48013 Bilbao, Spain
* Corresponding author. Tel.: +34-94-601-7325. E-mail address: rikardo.minguez@ehu.eus

Abstract: The reuse of damaged stamps or forging dies is a key aspect of the
forging process. Whenever a forging die must be repaired, the damaged zones are
filled with welding material and later machined. In this process, some phases
can be optimized, such as the amount of welding material and, moreover, the
machining tool paths and parameters. With the introduction of new digitization
technologies, new possibilities in the automation of the machining arise. Thanks
to the application of reverse engineering techniques, good, smoothed contours
are extracted from the digitized geometry. The machining phase based on the
obtained contours is considerably reduced in time and does not involve any
significant problem. Obtaining these contours is the most complicated step of
the proposed methodology. Tests with diverse forging dies and mixed contours
have been performed, and the operations defined in this paper perform optimally
in all cases. The repairs are analyzed, and the times required by the actual and
the proposed processes are compared.

Keywords: Surface reconstruction, reverse engineering, computer inspection,
forging die.

1 Introduction and objective of the research work

Forging is a key industrial process in the manufacture of a great number of
pieces and, in particular, of those with demanding mechanical and fatigue
specifications. This manufacturing process uses expensive machines and tools of
great dimensions. This fact accounts for the need to incorporate any technical
improvement into the design of these tools as soon as possible.
A very important aspect of the forging process is the reuse of damaged stamps or
forging dies. Whenever a forging die must be repaired, the damaged forging die is

© Springer International Publishing AG 2017 811


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_81
812 R. Minguez et al.

cleaned locally and, by means of machining, prepared to be filled with welding
material in the affected locations. However, at present it is necessary to
machine along the entire tool path, which means that a great part of the
machining time is wasted cutting in no-load conditions. Thanks to the
contribution of welding material, the life of the forging die becomes virtually
limitless, and the efficiency and durability of the forging die increase with
the repairs [1].
The aim of this research work is to develop a semiautomatic process for the
repair of forging tools. More precisely, the project focuses on the recovery of
the stamp of an automobile crank. The stamps are of medium size (405 mm in
length), with a weight of approximately 130 kg. A more sustainable repair
process will thus be obtained, and the consumed resources and operation times
will be significantly reduced.
The present process, in which the machining of the welded forging dies is done
with a generic machining program valid for any type of welded zone, so that the
tool traverses the entire path, will be replaced by a semi-automated one that
uses a non-contact 3D digitization system to locate the welded zones in the
forging die and to machine solely the involved zones. At the moment, these
imperfections are detected visually, following the internal procedure of the
company.
The use of the scanner will be focused exclusively on the digitization of the
already prepared zones, with the material already added [2, 3]. Thus the new
methodology will be able to generate an optimum machining path for the tool in
these zones. In this way, the risk of tool breakage due to the irregularity of
the welded surfaces will be avoided, resulting in considerable savings for the
company in terms of time and energy [4].
As it is a semi-automated process, the company will be able to generate a
database in which the tracking of each stamp is recorded [5, 6]. This database
will consist of a complete list of failures, repairs, times between repairs,
types of failures, repaired zones, etc. It will also be possible to create a
preventive maintenance plan to reduce these failures [7].

2 Methodology for surface reconstruction

A semiautomatic solution to reduce the machining time in the repair of forging
tools is required. The position of the welded zone will be located by means of a
non-contact three-dimensional measurement system, which must be able to extract
guides or contours, so as to make it possible to machine in a localized way
[8, 9]. The proposed general repair process is summarized in Figure 1; the flow
diagram includes the digitization phase, which will be explained later.

Figure 1. Flow diagram of the proposed process

In the repair process, it is required that the tool does not break on contact
with the forging die and that all the added material is eliminated, so as to
reduce the final adjustments as much as possible.
The selection criteria for the optimal solution can be enumerated in terms of
saved time and money. The most significant are:
• Not to waste unnecessary time.
• To facilitate the operator's work and to avoid unnecessary tasks.
• To avoid tasks that can be carried out in a simpler way with a smaller error
that does not put the repair of the forging die at risk.
• To generate a database that helps the worker to identify the failures in the
forging die and to know how to repair them.

• To form an agile, visual and simple process.
• To avoid breakage of tools.
• To completely machine the repaired zones.
Once the forging die with the added material is received, it is scanned so that
an STL file is obtained [10, 11, 12]: that is, a 3D model of the forging die
with all its geometry, showing where the welds are located.
Knowing exactly the location of the weld and its thickness makes it possible to
approach the zone to be machined much more closely without risk of breaking the
tool [13]. Therefore, this point cloud model is compared with the original CAD
model to obtain the geometrical differences, which serve to optimize the
approaches and the path of the tool. In this phase, the welded zones are
identified and exported to the reverse engineering program for mesh
modification. Once the welded zones are extracted, their contours are obtained
in a CAD format [14, 15].
Those contours serve to generate the optimized machining operations in the CAM
program, and thus to generate CNC code ready to export to the machining centre
[16, 17].
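The STL-versus-CAD comparison at the core of this phase can be sketched in a few lines. The following pure-Python fragment is an illustrative sketch only (the function name, the flat nominal surface and the 0.5 mm tolerance are our assumptions, not the authors' GOM Inspect workflow): scanned points deviating from the nominal surface by more than a tolerance are flagged as added welding material.

```python
def segment_welded_points(scan_points, nominal_z, tol=0.5):
    """Flag scanned points lying more than `tol` (mm) above the nominal
    surface; these correspond to zones with added welding material.

    scan_points: list of (x, y, z) tuples from the digitized STL.
    nominal_z:   function (x, y) -> z of the original CAD surface.
    """
    welded = []
    for x, y, z in scan_points:
        deviation = z - nominal_z(x, y)  # signed point-to-surface distance
        if deviation > tol:
            welded.append((x, y, z, deviation))
    return welded

# Toy example: flat nominal die at z = 0 with a ~2 mm weld bead near x = 5.
nominal = lambda x, y: 0.0
scan = [(0, 0, 0.1), (5, 0, 2.0), (5.5, 0, 1.8), (10, 0, -0.05)]
zones = segment_welded_points(scan, nominal, tol=0.5)
```

In a real pipeline the nominal surface would be queried from the CAD model and the flagged points clustered into connected regions before contour extraction.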

3 Proposed process guidelines

The proposed process guideline follows an example of a forging die repair.
The case is the lower preliminary forging die of an automobile crank. This
forging die has completed 5000 cycles and has passed the preventive inspection.
The condition of the forging die has been evaluated, and it has been sent for
repair of the defective zones. Once the material has been added to the forging
die, it has been sent to the metrology laboratory for the generation of the
digitized model of the forging die to be repaired.
The proposed process automation mainly affects the machining phase of the
repair.
The equipment and the required software are:
• ATOS II™ (GOM mbH, Germany) scanner, including the processing software.
• Revolving table to facilitate the movement of the forging die, so that the
scanner does not have to be moved during digitization.
• GOM Inspect V8™ (GOM mbH, Germany) program to perform the comparative
inspection and the report.
• Geomagic Studio 2013™ (Geomagic, Inc., NC, USA) program to convert the STL
file into an IGES file readable by the CAM program, in order to create the tool
path to be machined.
• Catia V5R23™ (Dassault Systèmes, France) program, CAM module.
The forging die is received in the metrology laboratory with all the required
information, so that this information can be introduced into the database of forging

dies. These data sheets have been previously generated by the operator who
inspected the forging die and by the operator who is going to repair it.
A recommendation in this regard is to locate the scanning system close to the
machining zone of the forging dies, to avoid unnecessary movements of the dies.
The proposed process can be summarized as follows:
1. Positioning of the forging die in the measurement table
2. Scanning
3. Comparative inspection
4. Segregation of zones to be machined
5. Alignment of the two models in the mesh processing program
6. Getting contours and selection of areas to be machined
7. Generation of the machining tool path
Up to this point the steps are not automatic, but they are simple to execute.
From this step onwards, the process is automated, thus facilitating the work of
the CNC programmer. In this stage, the forging die is divided into zones.
Assigning the coordinate origin to the plate of the forging die, the parts to be
machined are listed [18, 19].
This step is essential for locating the welds automatically. It is important
because, depending on the repair zone, the machining tool path will be
different. These machining operations depend on multiple factors:
• Horizontality and verticality.
• Desired finish.
• Accessibility of the tool.
• Approaches of the tool.
Once the forging die to be repaired is open in the CAM program, the curves
obtained in the reverse engineering program are imported. Since, when the
comparison between the scanned STL model and the CAD model was made, the curves
were aligned and the coordinate system was the same, the resulting curves appear
perfectly aligned with the original CAD file when imported.
Contours: an offset operation on the concerned contours is performed to extend
them. The obtained offset contours are associated with the corresponding
machining operation in the operation tree.
Fronts: the contours located on the fronts are associated with already defined
roughing operations. Stocks must be generated with the concerned contours.
Stock generation: in the machining module of the program, a 'part operation' is
selected and the contour corresponding to the front is chosen.
Assignment of each contour to a machining operation: at this moment, all the
elements necessary to create the tool path of the machining program are defined.
The system will identify the welded zones and execute the tool paths
corresponding to each zone, so the technician must introduce the tool paths only
once.
Since the identified parts and the machining strategies are common to all of
them, these tool paths are reused for all the references. In order to perform
any type of machining on a forging die to be repaired, it is necessary to use
only 5 types of operations: roughing, Z-level, sweeping, spiral milling and, in
unusual cases, contour-driven. The possibility of human error in programming is
limited by automating the process. Also, considerable time is saved in the
generation of the CNC code.
In Figure 2, a general flow diagram of the digitization and machining process is
shown.

Figure 2. Flow diagram of the process: scanning of the forging die to obtain the
STL file; comparison to the original CAD to obtain the zones and the planes on
which to perform the machining; extraction of the zones to be machined;
extraction of the contours to be machined; placement of the contours in the
original CAD model; CNC code generation.

4 Time study

The time saved is substantial with respect to the actual process. It must be
remembered, as shown in Table 1, that since the material to be machined is
always the same, the improvement in times is made in the operations in no-load
conditions. But since the simulation counts no-load operations as approaches,
exits, etc., the table does not show this faithfully.

Table 1. Time comparison for one of the studied forging dies

Operation                 Actual Process    Proposed Process
Planning                  17 min.           2 min.
Front roughing            1 h.              28 min.
General roughing          47 min.           10 min.
Bottom roughing           5 min.            5 min.
General finishing         1 h. 17 min.      15 min.
Horiz. zones finishing    4 min.            4 min.
Bottom finishing          4 min.            4 min.
Total                     3 h. 34 min.      1 h. 9 min.
% saved time                                68%
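As a quick arithmetic check on the totals printed in Table 1 (the values below are transcribed from the table, not additional data):

```python
# Totals as printed in Table 1.
total_actual_min = 3 * 60 + 34    # 214 min
total_proposed_min = 1 * 60 + 9   # 69 min

saved_pct = 100 * (total_actual_min - total_proposed_min) / total_actual_min
print(round(saved_pct))  # -> 68, matching the "% saved time" row
```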

This reduction in machining time is very beneficial for the company. These are
machining centres with a very high production load, and the reduction of times
in these operations translates into greater production.

5 Conclusion and discussion

Thanks to the application of reverse engineering techniques, good, smoothed
contours are extracted from the digitized geometry. The machining phase based on
the obtained contours is considerably reduced in time and does not involve any
problem. Obtaining and processing these contours is the most complicated step of
the methodology.
Several tests with diverse forging dies and with diverse and mixed contours
have been performed. The operations defined in this paper perform optimally in
all cases. The possibility of automating the localized machining process becomes
real.
In the near future, this repair process for forging dies can be improved by
automating the processing of the contours with a programmed macro in the CAM
software.
Also, the automation of the digitization process by means of a small robot is
feasible, although the necessary investment is substantial and, currently, the
bottleneck of the process is the machining centre.
Another non-negligible aspect of implementing these improvements in the
industrial process is the reduction of the environmental impact attained by this
new methodology.

Acknowledgments The authors of this paper want to thank the University of the Basque Coun-
try UPV/EHU (GIU15/22) for financing this research project.

References

1. Kim, Y. and Choi, C. A Study on Life Estimation of Hot Forging Die. International Journal of
Precision Engineering and Manufacturing, Vol.10, No.3, 2009, pp. 105-113.
2. Kovács, I., Várady, T. and Salvi, P. Applying geometric constraints for perfecting CAD mod-
els in reverse engineering. Graphical Models, Vol.82, 2015, pp. 44-57.
3. Bi, Z.M. and Wang, L. Advances in 3D data acquisition and processing for industrial applica-
tions. Robotics and Computer-Integrated Manufacturing, Vol.26, 2010, pp. 403-413.
4. Wang, Q., Zissler, N. and Holden, R. Evaluate error sources and uncertainty in large scale
measurement systems. Robotics and Computer-Integrated Manufacturing, Vol.29, 2013, pp.
1-11.
5. Yilmaz, N.F. and Eyercioglu, O. Knowledge based reverse engineering tool for near net shape
axisymmetric forging die design. Mechanika, Vol.73, No.5, 2008, 65-73.
6. Kulon, J., Mynors, D.J. and Broomhead, P. A knowledge-based engineering design tool for
metal forging. Journal of Materials Processing Technology, Vol.177, 2006, pp. 331-335.
7. Germani, M., Mandorli, F., Mengoni, M. and Raffaeli, R. CAD-based environment to bridge
the gap between product design and tolerance control. Precision Engineering, Vol.34, 2010,
pp. 7-15.
8. Gao, J., Chen, X., Zheng, D., Yilmaz, O. and Gindy, N. Adaptive restoration of complex ge-
ometry parts through reverse engineering application. Advances in Engineering Software,
Vol.37, 2006, pp. 592-600.
9. Fayolle, P. A. and Pasko, A. An evolutionary approach to the extraction of object construction
trees from 3D point clouds, Computer-Aided Design, Vol.74, 2016, pp. 1-17.
10. Park, S.C. and Chang, M. Reverse engineering with a structured light system, Computers &
Industrial Engineering, Vol.57, 2009, pp. 1377-1384.
11. Popister, F., Popescu, D., Hurgoiu, D. and Racasan, R. Applying Reverse Engineering to
Manufacture the Molds for the Interior Decorations Industry. Journal of Automation, Mobile
Robotics & Intelligent Systems, Vol.6, No.3, 2012, pp. 61-64.
12. Iuliano, L. and Minetola, P. Enhancing moulds manufacturing by means of reverse engineer-
ing. International Journal of Advanced Manufacturing Technology, Vol.43, 2009, pp. 551-
562.
13. Mawussi, B. and Tapie, L. A knowledge base model for complex forging die machining.
Computers & Industrial Engineering, Vol.61, 2011, pp. 84-97.
14. Minetola, P., Iuliano, L. and Calignano, F. A customer oriented methodology for reverse en-
gineering software selection in the computer aided inspection scenario, Computers in Indus-
try, Vol.67, 2015, pp. 54-71.
15. Urbanic, R.J. A design and inspection based methodology for form-function reverse engi-
neering of mechanical components, International Journal of Advanced Manufacturing Tech-
nology, Vol.81, 2015, pp. 1539-1562.
16. Masel, D.T., Young II, W.A. and Judd, R.P. A rule-based approach to predict forging volume
for cost estimation during product design. International Journal of Advanced Manufacturing
Technology, Vol.46, 2010, pp. 31-41.
17. Rauch, M., Laguionie, R., Hascoet, J.Y. and Suh, S.H. An advanced STEP-NC controller for
intelligent machining processes. Robotics and Computer-Integrated Manufacturing, Vol.28,
2012, pp. 375-384.
18. Várady, T., Facello, M. and Terék, Z. Automatic extraction of surface structures in digital
shape reconstruction, Computer-Aided Design, Vol. 39, 2007, pp. 379-388.
19. Tapie, L., Mawussi, B. and Bernard, A. Topological model for machining of parts with com-
plex shapes, Computers in Industry, Vol.63, 2012, pp. 528-541.
A RGB-D based instant body-scanning solution
for compact box installation

Rocco FURFERI1, Lapo GOVERNI1, Francesca UCCHEDDU1* and Yary VOLPE1

1 Department of Industrial Engineering, via di Santa Marta, 3, 50139 Firenze (Italy)
* Corresponding author. Tel.: +39-055-2758741 ; fax: +39-055-2758755. E-mail address:
francesca.uccheddu@unifi.it

Abstract Body scanning presents unique value in delivering the first digital
asset of a human body, thus constituting a fundamental device for a range of
applications dealing with health, fashion and fitness. Although several body
scanners are on the market, depth cameras such as the Microsoft Kinect® have
recently attracted the 3D community; compared with conventional 3D scanning
systems, these sensors are able to capture depth and RGB data at video rate and,
even if their quality and depth resolution are not optimal for this kind of
application, the major benefit comes from the overall acquisition speed and from
the IR pattern that allows optimal acquisition in poor lighting conditions. When
dealing with non-rigid bodies, unfortunately, the use of a single depth camera
may lead to inconsistent results, mainly caused by wrong surface registration.
With the aim of improving existing systems based on low-resolution depth
cameras, the present paper describes a novel scanning system for capturing 3D
full human body models by using multiple Kinect® devices in a compact setup. The
system consists of an instantaneous scanning system using eight depth cameras,
appropriately arranged in a compact wireframe. To validate the effectiveness of
the proposed architecture, a comparison of the obtained 3D body model with the
one obtained using a professional Konica Minolta Range Seven 3D scanner is also
presented, and possible drawbacks are hinted at.

Keywords: Body scanning; 3D modelling; custom avatar; depth camera; model
fitting.

1 Introduction

We are living in a world of extremely personalized products and services, in
which each person is unique and deserves customized solutions for improving the
quality of his or her life. Body scanning presents unique value in delivering

© Springer International Publishing AG 2017 819

B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_82
820 R. Furferi et al.

the first digital asset of a human body. As a result, body scanners are finally
reaching mass consumers through, among others, health, fashion and fitness.
Unfortunately, even for rigid objects, there are no cheap commercial solutions
which can provide good-quality, high-resolution distance information in real
time [1]. Scanning devices based on structured light [2] or laser scanning
guarantee very high quality but are expensive, since their performance can be
comparable to that of CMMs [3]. Other solutions, based on stereoscopic vision,
are economical but characterised by low performance [4].
Although a number of computer-aided methods could, in principle, be used to
perform 3D acquisition and/or reconstruction of human bodies [5-8], the main
approaches currently employed in depth camera technologies are the two presented
below. The first is based on the time-of-flight (ToF) principle [9], measuring
the time delay between the emission of a light pulse and the reception of its
reflection.
The second approach is based on light coding: a known near-infrared pattern is
projected onto the scene and depth is determined from the pattern's deformation.
The cost of such devices is much lower than that of ToF-based scanners. Among
the most popular devices of this kind, named RGB-D sensors, i.e. camera systems
able to provide both a color image and a dense depth image, is the Microsoft®
Kinect™ [10]. These devices have recently attracted the 3D community; compared
with conventional 3D scanning systems, they are able to capture depth and RGB
data at video rate. Even if the RGB quality and the depth resolution are
limited, the major benefit comes from the overall acquisition speed and from the
near-IR pattern that allows acquisition in poor lighting conditions as well as
on dark surfaces. Moreover, such a device is cheaper than conventional 3D
scanning devices, which has led to its increasing use in several applications.
Indeed, the use of such devices has grown rapidly, with particular reference to
human motion analysis [11], but also for creating low-cost body scanners. The
first publications on using the Kinect™ as a 3D whole-body scanner appeared in
2012, and several publications show that the devised systems work properly
[12-14]. Despite the excellent results obtained in the above-mentioned
literature, common issues remain in developing full body scanners based on
RGB-D sensors.
In fact, the captured depth data is of very low quality (low X/Y resolution and
depth accuracy), especially when dealing with non-rigid bodies (as living human
bodies are) that have to be measured instantaneously [12]. Moreover, these
devices require relatively long computational times to reconstruct a complete
model from the scan data, and the obtained model is often unreliable. Finally,
to scan a full human shape with only one Kinect sensor, the body should be
positioned around 3 meters away from the sensor, and the obtained resolution
would be extremely low.
With the aim of improving existing systems based on low-resolution depth
cameras, the present paper describes a novel scanning system
(hardware + software) designed by the authors for instantaneously capturing 3D
full human body models in a compact setup by using multiple Kinect® devices.

2 System Setup Overview

The devised system consists of the instantaneous and compact setup of Figure 1,
using eight depth cameras appropriately arranged in a wireframe sized
2 m × 1.5 m × 2 m, that can deliver performant body scanning.

Fig. 1. a) Overall structure (hardware) of the implemented system. b) Software architecture.

In detail, the body scanner consists of a scanning station where the cameras are
arranged so as to simultaneously acquire the working volume independently of the
body size (i.e. the cameras allow the acquisition of persons with heights from
1 m to 1.90 m). Data acquired by the depth cameras are then processed with
software based on the open-source Point Cloud Library (PCL) [15]. A real-time
mesh grabber is implemented, together with a parameter feedback loop able to
perform both an initial rough alignment of the 8 point clouds and a finer
alignment based on optimization algorithms. The hardware and software setup are
detailed in the following sections.

2.1 Hardware setup

The scanner structure was realized with the purpose of capturing an entire
human body, i.e. a vertical captured area from 1 m to 1.9 m. The point cloud
acquisition is carried out by the eight acquisition devices: the scanning
station was built taking into consideration the specifics of the chosen device,
aiming to reduce the overall space requirements and to minimize the interference
phenomena caused by the simultaneous acquisition of the cameras.
The scanning station height is 2 m, and on each side two devices are positioned
at a distance of ~1 m from one another, as shown in Figures 2 and 3.
The depth is set equal to 1.5 m in order to ensure the possibility of capturing
the subject from the four angles and to guarantee the minimum distance needed to
see the full body height.

Fig. 2: With an angle of view of 58°, using one sensor, a distance of 2.5-3 m is
needed to fully capture the body (left). By positioning two sensors as in the
figure (right), it is possible to come closer, thus decreasing the overall space
requirements.
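The standoff distances in the caption follow from simple pinhole geometry. A minimal sketch, assuming the Kinect's nominal vertical field of view of about 43° (a figure taken from the sensor's public specifications, not stated in this paper):

```python
import math

def min_standoff(capture_height_m, fov_deg):
    """Distance at which a camera with vertical field of view fov_deg
    (degrees) just frames a target of height capture_height_m."""
    return capture_height_m / (2 * math.tan(math.radians(fov_deg / 2)))

# A ~1.9 m body with a ~43 deg vertical FOV needs roughly 2.4 m of
# standoff, consistent with the 2.5-3 m single-sensor distance above;
# splitting the height between two stacked sensors halves it.
d = min_standoff(1.9, 43.0)
```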

Fig. 3: Layout of the body scanning station.

2.2 Software setup

In order to allow an instantaneous human body acquisition, the Kinect devices
operate in parallel, which makes the acquisition system prone to interference
phenomena, in which the projected patterns of two or more cameras overlap,
causing wrong depth reconstruction. To cope with this issue, the cameras are
positioned so as to obtain only small overlapping regions, as shown in Figure 3.
However, a minimum overlap among the point clouds (i.e., sets of 3D points) is
needed to align them correctly with each other and thus obtain a full 3D model
of the entire acquired scene. We therefore developed a custom registration
algorithm, based on a hierarchical approach and on the use of a known 3D target,
which improves the measurement accuracy by leveraging the registration
algorithms available in the PCL.
3D registration is the problem of consistently aligning two or more point clouds
acquired from different viewpoints [16]. Registration finds the relative pose
(position and orientation) between views in a global coordinate frame, such that
the overlapping areas between the point clouds match as well

as possible. The overall objective of registration is to align the individual
point clouds and fuse them into a single point cloud.
Point cloud registration is usually carried out by means of one of the several
variants of the Iterative Closest Point (ICP) algorithm [17, 18]. Due to the
non-convexity of the optimization, ICP-based approaches require initialization
with a rough initial transformation in order to increase the chance of ending up
with a successful alignment. The initial pose can be found by manually selecting
a small set of common points (at least 3). A good initialization also speeds up
convergence. To make the registration process automatic, a two-step procedure
can be adopted for a correct unsupervised point cloud alignment: a first coarse
alignment is performed by matching shape descriptors, and the refinement is
performed by executing the ICP algorithm.
The computational steps for two point clouds are straightforward:
• identify interest points (i.e., keypoints) that best represent the scene in both datasets;
• for each keypoint, compute a feature descriptor;
• from the computed feature descriptors together with their 3D positions in the two datasets, estimate a set of correspondences, based on the similarities between features and positions;
• given that the data is assumed to be noisy, not all correspondences are valid, so reject the bad correspondences that contribute negatively to the registration process;
• from the remaining set of good correspondences, estimate a motion transformation.
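The correspondence estimation and rejection steps above can be sketched with a simple mutual-nearest-neighbour test on descriptors. This is only an illustrative stand-in for the more elaborate rejection schemes available in PCL; the 2-D descriptors below are invented toy data.

```python
import numpy as np

def match_mutual(desc_a, desc_b):
    """Estimate correspondences by nearest neighbour in descriptor space
    and reject non-reciprocal pairs (a simple form of the rejection step
    described in the text)."""
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)   # best match in B for each descriptor of A
    b_to_a = d.argmin(axis=0)   # best match in A for each descriptor of B
    # Keep pair (i, j) only if the match is mutual.
    return [(i, int(j)) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

# Invented toy descriptors: two near-duplicates plus one outlier in B.
desc_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
desc_b = np.array([[0.1, 0.0], [0.9, 0.1], [5.0, 5.0]])
print(match_mutual(desc_a, desc_b))
```

Here the third keypoint of the first set finds no reciprocal partner and is dropped; PCL provides more elaborate rejectors (e.g. RANSAC-based correspondence rejection) for the same purpose.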
The proposed stand-alone scanning station has been implemented using the PCL library, which provides various tools to process depth maps and 3D data. In particular, the devised procedure first registers the position of the eight devices with respect to a given reference system; this step shall be performed each time the body scanner cabin is installed (cabin registration). Then, a second, faster registration procedure is accomplished any time a new object/subject is to be acquired using the 8 acquisition devices (pre-acquisition registration).

Step 1- Cabin registration


To overcome the limitations imposed by the small overlapping regions in the point clouds, a known target is used to assess the initial sensor poses. The target, i.e. a rigid body previously measured with a professional high-quality laser scanner (Konica Minolta Range 7), is located inside the cabin so as to be visible to all eight devices. This model acts as a high-resolution 3D reference for the subsequent point clouds obtained with the Kinect devices, so that the relative position of the devices is calibrated directly using the RGB-D sensor (the relative position of the sensors could also be retrieved using a standard calibration board; however, such a calibration would be inaccurate due to the shift between the RGB-D sensor position and the other camera, which works in visible wavelengths).

Let P_HD be the point cloud acquired using the above-mentioned professional scanner, and let P_i^LD be the point cloud acquired using the i-th (i = 1..8) Kinect device (see Figure 4).

Fig. 4: Cabin registration procedure to register the eight depth maps: each device output is
registered with the accurate 3D model obtained with a professional laser scanner and the eight
roto-translation matrices are saved.

The i-th point cloud is linked to P_HD by the following roto-translation equation (applicable only to pairs of points detected both in P_HD and in P_i^LD) [17]:

P_HD = R_i · P_i^LD + t_i    (1)

where R_i and t_i are, respectively, the i-th rotation matrix and translation vector. Since in Eq. (1) both the R_i matrices and the t_i vectors are unknown, a proper procedure for determining them is required in order to bring the eight Kinect point clouds into the same reference system provided by the professional scanner (defined as the global reference system). Although this registration phase can be performed using either a descriptor-based approach [19] or the interactive method proposed in [20], in the present work we preferred the latter; the final result consists in a semi-automatic registration of the eight depth maps on the high-resolution model, i.e. in determining the 8 matrices R_i and the 8 vectors t_i that align any scanned data to the global reference system.
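The paper does not detail how Eq. (1) is solved once point pairs are available. One standard closed-form choice, given paired points, is the SVD-based Kabsch/Umeyama method; the sketch below uses invented toy data and is illustrative only, not the authors' implementation.

```python
import numpy as np

def rigid_fit(p_ld, p_hd):
    """Least-squares R, t such that p_hd ~= R @ p_ld + t (cf. Eq. 1),
    via the SVD-based Kabsch method on paired points."""
    c_ld, c_hd = p_ld.mean(axis=0), p_hd.mean(axis=0)
    h = (p_ld - c_ld).T @ (p_hd - c_hd)       # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = c_hd - r @ c_ld
    return r, t

# Toy check: rotate a tetrahedron by 30 degrees about z and shift it.
a = np.radians(30.0)
r_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, -1.0, 2.0])
p_ld = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
p_hd = p_ld @ r_true.T + t_true
r, t = rigid_fit(p_ld, p_hd)
print(np.allclose(r, r_true), np.allclose(t, t_true))
```

With noise-free pairs, as here, the fit recovers the exact roto-translation; with noisy pairs it returns the least-squares estimate.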

Step 2- Pre-acquisition registration


Once the position of the eight acquisition devices is known with reference to the selected global coordinate system, the body scanner is able to acquire any object/subject positioned in the scanning area. In fact, the simultaneous acquisition performed by the body scanner produces 8 point clouds P'_i^LD that can be registered according to the roto-translations defined in Step 1:

P_i^LD(registered) = R_i · P'_i^LD + t_i    (2)

The entire set of 8 registered point clouds P_i^LD(registered) constitutes a point cloud P^LD with overlapping regions.
Since, as previously mentioned, the acquisition devices are positioned so that the overlapping regions are small, no significant interference phenomena subsist in these areas; accordingly, a fine global registration, based on the Iterative Closest Point (ICP) method, can be safely applied to eventually obtain a more accurate registration.
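The fine registration stage can be sketched as a plain ICP loop: nearest-neighbour pairing followed by the closed-form rigid update. The NumPy toy below uses a brute-force neighbour search on invented data; a production system would instead rely on PCL's ICP implementation with k-d tree search.

```python
import numpy as np

def icp(src, dst, iters=30):
    """Minimal ICP: repeatedly pair each source point with its nearest
    destination point, then solve the rigid update in closed form."""
    cur = src.copy()
    r_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for toy-sized clouds).
        nn = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        pairs = dst[nn]
        # Closed-form rigid step (SVD/Kabsch) on the current pairing.
        c_s, c_d = cur.mean(axis=0), pairs.mean(axis=0)
        u, _, vt = np.linalg.svd((cur - c_s).T @ (pairs - c_d))
        d = np.sign(np.linalg.det(vt.T @ u.T))
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        t = c_d - r @ c_s
        cur = cur @ r.T + t
        r_tot, t_tot = r @ r_tot, r @ t_tot + t
    return r_tot, t_tot

# Toy check: a slightly rotated and shifted copy of a small cloud.
rng = np.random.default_rng(0)
dst = rng.normal(size=(60, 3))
ang = np.radians(3.0)
r0 = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ r0.T + np.array([0.03, -0.02, 0.01])
r, t = icp(src, dst)
aligned = src @ r.T + t
print(float(np.abs(aligned - dst).max()))
```

Because ICP only refines, it is run here (as in the text) after the coarse alignment has already brought the clouds close together.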

3 Results

Since no accurate ground truth of the human subject is available, we evaluate the performance of the overall acquisition on a static mannequin. The accuracy check is performed with reference to a professional laser scanner; in particular, the previously mentioned Konica Minolta Range 7 was used to scan the mannequin with an accuracy better than 0.5 mm. The resulting point cloud is used as a "reference" to assess the performance of the proposed scanning system. A comparison was carried out by overlapping the (registered) point clouds acquired with the RGB-D based scanning system onto the reference one and then evaluating the error map between them (see Figure 5).
The error map, evaluated using a commercial software package (Geomagic Studio), shows that the average error between the two models is 2.78 mm with a standard deviation of 3.38 mm. Accordingly, the accuracy of our biometric measurement is similar to the results reported in [12] and [21], demonstrating the potential of the proposed body scanner to produce good and reliable full 3D body models. Of course, further studies have to be performed to confirm the effectiveness of the system, especially when dealing with "real-life" acquisitions.
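The error-map statistics above were computed with Geomagic Studio; an equivalent cloud-to-reference comparison amounts to nearest-neighbour distances, sketched below in NumPy with brute force on invented toy data (units taken as mm).

```python
import numpy as np

def error_map(cloud, reference):
    """Distance from each scanned point to its nearest reference point,
    i.e. the per-point entries of a cloud-to-reference error map."""
    d = np.linalg.norm(cloud[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1)

# Toy data: a noisy copy of a small reference cloud (values in mm).
rng = np.random.default_rng(1)
reference = rng.uniform(0.0, 100.0, size=(200, 3))
cloud = reference + rng.normal(0.0, 2.0, size=reference.shape)
errors = error_map(cloud, reference)
print(round(float(errors.mean()), 2), round(float(errors.std()), 2))
```

The mean and standard deviation of this vector are the two summary figures quoted in the text; for real scans a k-d tree would replace the quadratic distance matrix.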
From a qualitative point of view, Figure 6 depicts an example of a female human subject acquired using the proposed body scanner.

Figure 5: Comparison between the mannequin model acquired with the laser scanner Konica Minolta Range 7 (reference point cloud) and the one obtained using the proposed scanning system.

From the image it can be noticed that geometric details such as face, dress and
hairstyle [21] are well captured in the 3D model.

Figure 6: Example of the obtained point cloud of a female human subject and the corresponding reconstructed mesh.

Accordingly, the retrievable 3D models can be effectively used for applications such as virtual try-on or personalised avatars for video games, online shopping, human-computer interaction applications and so on. On the other hand, for some specific biomedical applications the quality of the model could prove too low, and further improvements will have to be addressed in the near future.

Conclusions

In this paper, we have presented a novel scanning system for capturing full 3D human body models by using multiple Kinect® devices in a compact setup. The system performs an instantaneous scan using eight depth cameras, appropriately arranged in a compact wireframe. A fully automatic procedure has been devised to find the sensor poses and the correct relative alignment. The hardware layout has been designed to reduce the overlapping regions seen by each sensor and to cover the full human shape (subjects up to 2 m tall) in a compact cabin (2 m × 1 m × 2 m). The overall accuracy of the system, developed using PCL and validated against a professional laser scanner for the rigid-object case, demonstrates the effectiveness of the proposed system for a range of practical applications involving full body models such as, for instance, virtual try-on, online shopping and human-computer interaction. Further studies are required before stating the effectiveness of the system for biomedical applications. Therefore, future work will address testing the performance of the system in real-case scenarios and improving the resolution of the system by using other kinds of RGB-D devices.

References

1. Tong, J., Zhou, J., Liu, L., Pan, Z., & Yan, H. (2012). Scanning 3D full human bodies using
kinects. Visualization and Computer Graphics, IEEE Transactions on, 18(4), 643-650.
2. Barone S., Paoli A., and Razionale A.V. Multiple alignments of range maps by active stereo
imaging and global marker framing. Optics and Lasers in Engineering, 2013, 51(2), 116-127.
3. F. Leali, F. Pini and M. Ansaloni: Integration of CAM off-line programming in robot high-
accuracy machining. 2013 IEEE/SICE International Symposium on System Integration, SII
2013 6776741, pp. 580-585
4. Liverani, A., Leali, F., Pellicciari, M., Real-time 3D features reconstruction through monocu-
lar vision, International Journal on Interactive Design and Manufacturing, Volume 4, Issue 2,
May 2010, Pages 103-112
5. Governi, L., Furferi, R., Puggelli, L., Volpe, Y. Improving surface reconstruction in shape
from shading using easy-to-set boundary conditions (2013) International Journal of Computa-
tional Vision and Robotics, 3 (3), pp. 225-247.
6. Barone S., Paoli A., and Razionale A.V. Computer-aided modelling of three-dimensional
maxillofacial tissues through multi-modal imaging. Proceedings of the Institution of Mechan-
ical Engineers, Part H: Journal of Engineering in Medicine, 2013, 227(2), 89-104.
7. Governi, L., Furferi, R., Palai, M., Volpe, Y. 3D geometry reconstruction from orthographic
views: A method based on 3D image processing and data fitting (2013) Computers in Indus-
try, 64 (9), pp. 1290-1300.
8. Apostolico, A., Cappetti, N., D'Oria, C., Naddeo, A., Sestri, M., 2014, Postural comfort evalu-
ation: Experimental identification of Range of Rest Posture for human articular joints , Inter-
national Journal on Interactive Design and Manufacturing, 8 (2), pp. 109-120. doi:
10.1007/s12008-013-0186-z

9. Andreas Kolb, Erhardt Barth, Reinhard Koch, and Rasmus Larsen. Time of-flight sensors in
computer graphics. In M. Pauly and G. Greiner, editors, Eurographics 2009 - State of the Art
Reports, pages 119–134. Eurographics, 2009.
10. https://dev.windows.com/en-us/kinect . Accessed 06/04/2016.
11. Chen, Lulu, Hong Wei, and James Ferryman. A survey of human motion analysis using
depth imagery. Pattern Recognition Letters 34.15 (2013): 1995-2006.
12. J. Tong, J. Zhou, L. Liu, Z. Pan, H. Yan Scanning 3D full human bodies using kinects IEEE
Trans. Visual. Comput. Graphics, 18 (2012), pp. 643–650.
13. S. Song, S. Yu, W. Xu Study on 3D body scanning, reconstruction and measurement tech-
niques based on Kinect J. Tianjin Polytech. Univ., 31 (2012) 34–37.
14. R. Wang, J. Choi, G. Medioni, Accurate full body scanning from a single fixed 3D camera,
in: Proc. – Jt. 3DIM/3DPVT Conf.: 3D Imaging, Model., Process., Vis. Transm., 3DIMPVT,
2012, pp. 432–439.
15. http://pointclouds.org/ Accessed 06/04/2016.
16. Jost, T., & Hügli, H. (2003, October). A multi-resolution ICP with heuristic closest point
search for fast and robust 3D registration of range images. In 3-D Digital Imaging and Mod-
eling, 2003. 3DIM 2003. Proceedings. Fourth International Conference on (pp. 427-433).
IEEE.
17. Besl, P.J., McKay, N.D.: A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2) (1992) 239–256; Allen, B., Curless, B., Popovic, Z.: The space of human body shapes: reconstruction and parameterization from range scans. ACM Transactions on Graphics 22(3) (2003) 587–594.
18. Furferi, R., Governi, L., Volpe, Y., Carfagni, M. Design and assessment of a machine vision
system for automatic vehicle wheel alignment (2013) International Journal of Advanced Ro-
botic Systems, 10, art. no. 242
19. Gelfand, N., Mitra, N. J., Guibas, L. J., & Pottmann, H. (2005, July). Robust global registra-
tion. In Symposium on geometry processing (Vol. 2, No. 3, p. 5).
20. Buonamici, F., Furferi, R., Governi, L., Volpe, Y. Making blind people autonomous in the
exploration of tactile models: A feasibility study (2015) Lecture Notes in Computer Science
(including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinfor-
matics), 9176, pp. 82-93.
21. Alexander Weiss, David Hirshberg, and Michael J. Black. Home 3d body scans from noisy
image and range data. In 13th International Conference on Computer Vision , 2011.
Machine Learning Techniques to address
classification issues in Reverse Engineering.

Jonathan Dekhtiar1*, Alexandre Durupt1, Dimitris Kiritsis2,


Matthieu Bricogne1, Harvey Rowson3, and Benoit Eynard1
1 Université de Technologie de Compiègne, Department of Mechanical Systems Engineering, UMR UTC/CNRS 7337 Roberval, CS 60319, 60203 Compiègne Cedex, France
2 Swiss Federal Institute of Technology at Lausanne (EPFL), STI-IPR-LICP, CH-1015 Lausanne, Switzerland
3 DeltaCAD, 795 Rue des Longues Raies, 60610 La Croix-Saint-Ouen, France

* Corresponding author. Tel.: +33-770-411-384 & fax: +33-344-234-971.


E-mail Address: contact@jonathandekhtiar.eu & jonathan.dekhtiar@utc.fr

Abstract This paper aims to provide a road map for future work in the field of reverse engineering. Reverse engineering, in a mechanical context, refers to any process working in a bottom-up fashion, i.e. going from a lower-level concept or product (closer to the final product) to a higher-level one (closer to the ideation step). Nowadays, the manufacturing industry is facing an unprecedented increase in data exchange and data warehousing. This comes with new issues that our work will not explore, such as "how to store these data in an efficient manner?", "what should be stored?" and so on. Nonetheless, this trend also creates new opportunities if we manage to integrate these data into the expertise workflows. In this paper we cover the possibilities offered by machine learning to succeed in this challenge. We also present a first and major step in our road map towards achieving our research goals: we plan to design a metric to quantify how well and how precisely specific reverse engineering tasks, such as detection, segmentation and classification of mechanical parts in imagery data, can be performed. We aspire to open this metric and make it freely and widely available to researchers and industry, so that the effectiveness, robustness and precision of existing and future approaches can be compared.

Keywords: Reverse Engineering; Knowledge Extraction; Data Analysis; Data


Management; Machine Learning; Computer Vision; Big Data; Deep Learning;
Text-Mining.

© Springer International Publishing AG 2017 829


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_83
830 J. Dekhtiar et al.

1. Introduction

Mechanical reverse engineering has attracted growing interest from industry over the last decades. Its root goals are to rebuild the Digital Mock-Up (DMU) in order to re-manufacture [1], to manage an existing product (e.g. maintenance) or to improve it. Its concepts can also be applied to enable rapid manufacturing and re-documenting during any part of the product lifecycle.

One of the recurring issues in reverse engineering challenges is the ability to integrate and use heterogeneous data and knowledge efficiently. By data integration, we mean all the processes aiming to move, combine and consolidate data for further analysis and exploitation.

Previously, many approaches focussed on integrating and using prior data or knowledge coming from the same company [2]. However, manufacturers nowadays face a fast-growing amount of data stored and exchanged by their information systems. These systems increasingly become part of a global system in which their internal information is linked together by software solutions such as Enterprise Resource Planning (ERP) systems and Electronic Data Interchange (EDI) messaging systems. Moreover, the concept of an "internal" system has become blurry over the years with the massive use of cloud computing and of everything-as-a-Service (SaaS for Software, PaaS for Platform, and so on).

This observation is especially true in the manufacturing industry context. New data is generated every day, such as manufacturing data, technical reports, design documents (e.g. CAD files, technical drawings, etc.), quality procedures, etc. The emergence and democratisation of technologies such as RFID/NFC or the Internet of Things (IoT) reinforces this trend.

Nowadays, many approaches allow us to integrate and use these data. Nonetheless, these processes work with a limited number of data types and data sources.

This leads us to consider not only what we could call the "internal knowledge base and system", but also any kind of data coming from other businesses or systems (e.g. suppliers' documents, shared knowledge databases, mechanical part libraries, etc.). Unfortunately, very few methods, if any, can accept external resources as input for reverse engineering processes. This consideration leads us to rethink the expertise processes and the way we execute certain tasks.

Within this framework, our study will address the DMU reconstruction problem. More specifically, we will focus on the classification of technical documents and imagery data, which is a key problem in performing a successful reverse engineering workflow.

With this goal in mind, we look to computer science solutions in the field of data analysis, more specifically supervised machine learning and computer vision.

Our short-term goals are to build a robust metric to compare the different approaches for reverse engineering a Bill of Materials (BOM) from an existing assembly, and also to classify and associate part-related technical documents with the BOM.

2. The Reverse Engineering Framework

In order to place our study in a global perspective, we first return to some low-level domain definitions.
First of all, manufacturing engineering activities can be described as a set of different tasks such as designing, manufacturing, assembling, maintaining, etc.
Secondly, one can distinguish two major types of engineering: Forward Engineering (FE) [3] and Reverse Engineering (RE). FE processes focus on implementing high-level concepts and ideas into lower-level ones, or even into an intermediate or final product. Conversely, RE workflows work in a bottom-up fashion and focus on re-obtaining the higher-level ideas and concepts from lower-level concepts, ideas or products.

Classical reverse engineering activities include remanufacturing, re-documenting, redefining the design intents, digitising, feature extraction, remodelling, and maintaining a living version of the Digital Mock-Up (DMU) called the DMU as-maintained (in comparison to the "as-designed" and "as-built" DMUs) [4].

In order to carry out such operations, we need to quantify, describe, analyse and harness the physical artefact in order to rebuild any virtual or numerical asset (e.g. 3D model, documentation, disposal procedure, etc.).

Why is reverse engineering a burning issue in an industrial manufacturing context? Let us explore a few situations, highlighted in the book by Raja et al. [3], that justify the necessity of efficient reverse engineering workflows:

• The original manufacturer has ceased activities or no longer produces the product, yet there is still a need for it. This situation may arise for long-life products, infrastructure or machines (e.g. nuclear power plants, aeroplanes, boats, submarines, cars, etc.).

• The original documentation has been lost, never existed, or is completely outdated because the product has evolved considerably.

• No CAD file for the part is available, because it may have been handmade, produced in a very limited run, or has become obsolete.

• Competitor analysis: the need to understand a product's features.

These are just a few examples, but many more reasons justify the necessity of carrying out reverse engineering activities. However, reverse engineering is a wide domain and, depending on the situation, it may not require reverse engineering the whole numerical product. How can we effectively decide which content areas the reverse engineering expert should focus on? Which data have the highest added value for the required tasks? How can we adapt our processes to each specific data type?

3. Heterogeneous data, limitations or opportunities?

3.1 Definition of “Engineering Data”

In this context, companies are moreover facing an unparalleled growth of their information systems. Design and technical teams deal with more data every day and generate still more that needs to be stored. The generalised use of cloud computing reinforces this trend: large industrial groups (e.g. the well-known automotive supplier Valeo [5]) tend to externalise their computing infrastructure and assets. The key points that led Valeo to make this business decision can be found in the work of Peiris et al. [6]. To mention just a few, cloud computing can help to drastically reduce operating costs and to improve workflow efficiency by ensuring flexibility, simplicity and performance. Many more reasons can be found in the previously cited publication.

All this leads us to an ever-growing amount of data or, more accurately, engineering data. These specific data are distinguished, among other things, by a strong and implicit embedded business sense, proprietary formats, etc. Most of them conform to standards or specific formats, which have been thought out and designed to address specific engineering issues. Thus, by analysing not only the data but also their structure and format, one can extract meaningful knowledge that is valuable for a company.

These data can be found in the company's internal information systems and their subsystems, such as databases, ERPs, Product Data Management (PDM) systems, Network Attached Storage (NAS) / Storage Area Network (SAN) systems, etc. As stated before, the data can increasingly also be found in external systems such as cloud services or shared systems, for example an On-Board Diagnostics Parameter IDs (OBD-II PID) knowledge base in the automotive sector.

We reckon it is necessary to differentiate data that reside inside the company's internal information system from the rest, for many reasons. First of all, companies may not have easy access to external data or any control over their structure. How reliable are these data? Where do they come from? Exploiting them may also come at a non-negligible cost: a robust network may be necessary, and transfer costs could become expensive over the long run.

3.2 Structured vs Unstructured Data

When it comes to mining data in order to extract meaningful insights, an important consideration should be highlighted: analysing structured data is fundamentally different from analysing unstructured data.

A simple way to define structured data is data that can be laid out in a spreadsheet. In other words, structured data is a set of variables called "features" (not to be confused with mechanical and geometrical features) such as the following metadata: weight, length, width, depth, diameter, colour, material, cost, price, manufacturers, clients and so on.

Conversely, unstructured data cannot easily be decomposed or fully described with a set of features.

Structured Data                        Unstructured Data

Database data     Spreadsheet files    Photos
ERP data          Machine logs         Videos
PDM/PLM data      Computer logs        Point clouds
MPM data          Server logs          Text documents
KBE data          Bills of materials   CAD files

Table 1. Examples of structured data and unstructured data.

From a certain point of view, it would be entirely correct to sort everything into the structured-data category. For instance, a JPEG photo conforms to a specific format and holds colour-related information. The structure allows us to clearly identify any specific pixel in the photo, and this is enough to define a complete, perfectly valid JPEG photo.

Nevertheless, have we not lost some information about the photo? A JPEG photo cannot make its content explicit by itself (aside from manually inputted metadata). Therefore, by considering a photo as a mere set of pixels, we clearly lose a valuable amount of information, and this information is clearly not structured: there is no data structure allowing us to write simple computer code to extract this knowledge. This justifies our distinction: by reducing data to their data structure, we may lose meaningful sense that was embedded in the data considered as a whole rather than in their structure.

3.3 Heterogeneous Data and Classification Challenges

Engineering data are heterogeneous by nature; by heterogeneous we mean different data types and data formats, but also different natures of the expert knowledge embedded in these data. With heterogeneous data come some inherent problems: we need to develop different approaches depending on the file type, its format and the point of view from which we analyse the data. Indeed, as we analyse raw data, we need to define the set of desired results in order to make sense of these data.

A quick example supports this point: suppose we have at our disposal 100,000 text documents related to a cutting-edge aeroplane; in order to capitalise on its cutting-edge eco-design, we would like to identify and bring together all the documents that have some impact on this aspect.

Two solutions then appear: we pay a team of experts to manually review each of the 100,000 text documents, or we find a way to extract meaningful patterns from the documents and flag them as potentially related to our topic.

Such examples can also be found with photos, videos, plans, point clouds, CAD models, and so on. We could, to some extent, also consider badly formatted data as unstructured data, which considerably increases the diversity of data we can analyse.

Nevertheless, the above example does not say how we should extract those meaningful patterns. It seems obvious that for every different question "we might ask our data", there will be a need to fully redefine new patterns identifying the correct data that answer the question. In other words, we shift the issue to a repetitive pattern-identification task that, again, can only be done by a domain expert.

4. DMU-Net – A Mechanical Engineering Oriented 3D-Model Dataset for Computer Vision Applications

4.1 Machine Learning to Automate Pattern Discovery

Machine learning has been a burning subject for a few decades. At the junction between computer science and statistical analysis, it provides tools and methods to bring artificial intelligence into classical engineering workflows. Basically, machine learning focusses on setting up algorithms that learn from examples or experience instead of relying on hard-coded rules. Most of the time these rules are hidden or not well defined, and it is thus fairly hard, even for a domain expert, to list them all.

Let us say that we want an algorithm able to sort a set of mechanical parts according to whether each part is made of plastic or metal. The expert needs to analyse the situation and might give a few rules, such as: "Measure the density; if it is higher than a threshold then it is a metal part, and if it is lower, then it is a plastic part." He could also say: "You should also measure the light-reflectance index; a high value would indicate a metal part." Nevertheless, how should the thresholds be defined? What happens if one metal alloy lies in between? Basically, it is really hard to build a rule-based model for which no counter-example breaking the rules can be found (e.g. a metallic microlattice is far lighter than any existing plastic: about 0.9 mg/cm3 [7]). And what would happen in a much more difficult situation (e.g. recycling facilities need to sort out many kinds of plastics, metals, glass and cardboard at the same time)? Is it still possible to give accurate rules?
It is also important to notice that the whole expert work needs to be done again if we add any new category to classify (e.g. composite materials) or if we radically change the situation to another problem. The real industrial world is messy, and straightforward rules tend to break in complex situations.
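The contrast between hand-written rules and learning can be made concrete with a one-feature decision stump: instead of the expert fixing a density threshold, the threshold is chosen to best separate labelled examples. The densities and labels below are invented purely for illustration.

```python
def learn_threshold(samples):
    """Learn a density cut-off from labelled examples instead of
    hard-coding it: try every midpoint between consecutive sorted
    values and keep the one that classifies the training data best
    (interior cut-offs only, which suffices for this toy set)."""
    samples = sorted(samples)                  # (density, label) pairs
    best_t, best_acc = None, -1.0
    for (d1, _), (d2, _) in zip(samples, samples[1:]):
        t = (d1 + d2) / 2.0
        acc = sum((d > t) == (lab == "metal") for d, lab in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Invented training data: densities in g/cm^3 with material labels.
data = [(0.9, "plastic"), (1.2, "plastic"), (1.4, "plastic"),
        (2.7, "metal"), (7.8, "metal"), (8.9, "metal")]
threshold, accuracy = learn_threshold(data)
print(threshold, accuracy)
```

If new categories or unusual materials are added, only the training data changes; the same code re-learns the boundary, which is exactly the point made above about limited human intervention.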

Machine learning specifically addresses such situations by creating an algorithm that learns from examples. This kind of algorithm tries to find hidden patterns in the data in order to create a classification or regression model. Finding correlations in the data is fairly easy for a computer; if the problem or the situation changes, the algorithm re-analyses the input data and rebuilds its model with limited human intervention. We will not investigate how this kind of algorithm works in this paper; detailed explanations and reviews can be found in the scientific literature [8–10]. Different kinds of learning algorithms exist; the most common types can be classified as Supervised Learning [11], Unsupervised Learning [12] and Reinforcement Learning [13, 14].

4.2 Computer Vision – A Focus on Imagery Data Analysis

Computer vision is a field of computer science highly correlated with and linked to machine learning. Basically, it aims at analysing imagery data (e.g. photos, videos, plans, point clouds, etc.) by tracking and identifying objects or people, or even at understanding the content of such data from a global perspective (e.g. "a boy is riding a bike"). Nowadays, these processes mainly rely on algorithmic structures that mimic the behaviour of the human brain, hence called Neural Networks (NN) and Deep Neural Networks (DNN). The "deep" qualifier refers to the number of neural layers contained in the network: the more complex the task, the deeper the neural network needs to be. Again, we will not explore these concepts in depth; they can be found in the scientific literature [15–19]. We think that such techniques could represent a pillar for any process aiming at BOM reconstruction from imagery data.
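As a minimal illustration of the "depth" notion, the sketch below runs a forward pass through a small stack of fully connected layers. The weights are random and untrained, chosen only to show the shapes involved; a real DNN would learn them by backpropagation on labelled data.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Forward pass through a stack of fully connected layers; the
    number of entries in `layers` is the network's depth."""
    for w, b in layers[:-1]:
        x = relu(x @ w + b)          # hidden layers with ReLU activation
    w, b = layers[-1]
    logits = x @ w + b               # final linear layer
    e = np.exp(logits - logits.max())
    return e / e.sum()               # softmax over the output classes

rng = np.random.default_rng(42)
# A toy 3-layer network: 8 input features -> 16 -> 16 -> 4 classes.
sizes = [8, 16, 16, 4]
layers = [(rng.normal(scale=0.5, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]
probs = forward(rng.normal(size=8), layers)
print(probs.shape, round(float(probs.sum()), 6))
```

Adding entries to `sizes` makes the network deeper, matching the remark that more complex tasks call for deeper networks.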

Since 2012, computer vision techniques have mainly focussed on deep learning solutions. Deep learning is a subfield of machine learning focussed on deep neural networks. With classical machine learning techniques, the data scientist needs to provide a feature vector (i.e. a set of variables) and the algorithm then computes the correlations. We could say that the intelligence is shifted into measuring and defining the features: the expert needs to understand what to measure and what the important concepts for classifying the dataset are.
For instance, if we would like to classify trucks, cars and motorbikes, efficient features would be the number of wheels or the total glass surface in square metres. But what happens with more complex data, or if we cannot easily define such features? Deep learning takes on the challenge that, with massive training datasets, it can discover such features automatically. It is precisely this capability that excites us the most: if we provide enough training data to such an algorithm, we could, in theory, elaborate an accurate classifier for mechanical parts with minimal expert knowledge beside the labelling task.

4.3 DMU-Net – A necessary step to ensure result repeatability.

DMU-Net is the dataset we aspire to create and share with the research community. Inspired by the ImageNet Large Scale Visual Recognition Challenge [20], the ShapeNet project, an information-rich 3D model repository [21], the Pascal Visual Object Classes (VOC) Challenge [22, 23], the Aim@Shape project [24] and many others, we would like to create an expert-oriented CAD model dataset of mechanical components in the industrial domain. Following the lead of these other projects, we will call it DMU-Net.

The goals are manifold. Firstly, we would like to provide a framework enabling collaboration between computer science, machine learning, computer vision and mechanical engineering; we believe it essential to build tools and data that attract more computer science researchers and practitioners to our problem. Secondly, such a dataset could provide an accurate and extremely robust validation tool for every research work focussing on the recognition/identification of mechanical objects, parts and models. It will provide a sufficiently large number of CAD models to ensure the scientific validity and better reproducibility of the results. In addition, this work will provide a systematic method to measure state-of-the-art performance by establishing a metric, so that algorithms and approaches can finally be compared side by side.

5. Conclusion

To conclude, the first experiments and tests we have had the opportunity to run lead us to be optimistic about the possibilities offered by the deep learning and computer vision fields. However, these techniques are highly computing-intensive and require very fine hyper-parameter tuning. A relatively complex model performing object recognition in photographs can contain millions of parameters, which considerably hardens the training task.

To address the previously mentioned challenges, we will first focus on acquiring a massive CAD model dataset and publishing it to the community.
Then, we would like to screen the different possible approaches in data analysis and compare them with state-of-the-art performances, in order to demonstrate the relevance of an automated, data-oriented approach to the DMU reconstruction problem from heterogeneous data carrying expert meaning, as compared to geometric and topological methods.

However, even if we actually try to build algorithms that use minimal expert knowledge, it is fundamental for us to design expertise-oriented solutions with strong context inclusion. Our approach is embedded in a computational mechanics context.

References

1. Pal, D.K., Bhargava, L.S., Ravi, B., Chandrasekhar, U.: Computer-aided
Reverse Engineering for Rapid Replacement of Parts. Defence Science
Journal. vol. 56, 225–238 (2006).
2. Durupt, A.: Définition d’un processus de rétro-conception de produit par
intégration des connaissances de son cycle de vie,
http://www.theses.fr/2010TROY0009, (2010).

3. Raja, V., Fernandes, K.J.: Reverse Engineering: An Industrial Perspective.
Springer Science & Business Media (2007).
4. Motavalli, S.: Review of reverse engineering approaches. Computers &
Industrial Engineering. vol. 35, 25–28 (1998).
5. Leena Rao: Google Adds 30,000 App Users in Biggest Enterprise Deal to
Date, http://seekingalpha.com/article/137500-google-adds-30000-app-
users-in-biggest-enterprise-deal-to-date.
6. Peiris, C., Sharma, D., Balachandran, B.: Validating and Designing a
Service Centric View for C2TP: Cloud Computing Tipping Point Model.
In: Phillips-Wren, G., Jain, L.C., Nakamatsu, K., and Howlett, R.J. (eds.)
Advances in Intelligent Decision Technologies: Proceedings of the Second
KES International Symposium IDT 2010. pp. 423–433. Springer Berlin
Heidelberg (2010).
7. Schaedler, T.A., Jacobsen, A.J., Torrents, A., Sorensen, A.E., Lian, J.,
Greer, J.R., Valdevit, L., Carter, W.B.: Ultralight metallic microlattices.
Science (New York, N.Y.). vol. 334, 962–5 (2011).
8. Han, J., Kamber, M.: Data Mining: Concepts and Techniques. (2000).
9. Bishop, C.M.: Pattern Recognition and Machine Learning. Springer-
Verlag New York (2006).
10. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools
and Techniques, Second Edition. Morgan Kaufmann Publishers Inc.
(2005).
11. Kotsiantis, S.B., Zaharakis, I.D., Pintelas, P.E.: Machine learning: a
review of classification and combining techniques. Artificial Intelligence
Review. vol. 26, 159–190 (2007).
12. Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM
Computing Surveys. vol. 31, 264–323 (1999).
13. Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement Learning: A
Survey. Journal of Artificial Intelligence Research. vol. 4, 237–285
(1996).
14. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction.
(1998).
15. Morris, D.: Computer Vision and Image Processing. Palgrave Macmillan
(2003).
16. Jähne, B., Haußecker, H.: Computer vision and applications: a guide for
students and practitioners. Academic Press, Inc. (2000).
17. Stockman, G., Shapiro, L.G.: Computer Vision (1st ed.). Prentice Hall
PTR (2001).
18. Srinivas, S., Sarvadevabhatla, R.K., Mopuri, K.R., Prabhu, N.,
Kruthiventi, S.S.S., Babu, R.V.: A Taxonomy of Deep Convolutional
Neural Nets for Computer Vision. Frontiers in Robotics and AI. vol. 2,
(2016).
19. Esposito, F., Malerba, D.: Machine learning in computer vision. Applied
Artificial Intelligence. vol. 15, 693–705 (2001).
20. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S.,
Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei,
L.: ImageNet Large Scale Visual Recognition Challenge. 43 (2014).


21. Chang, A.X., Funkhouser, T., Guibas, L., Hanrahan, P., Huang, Q., Li, Z.,
Savarese, S., Savva, M., Song, S., Su, H., Xiao, J., Yi, L., Yu, F.:
ShapeNet: An Information-Rich 3D Model Repository. (2015).
22. Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman,
A.: The Pascal Visual Object Classes (VOC) Challenge. International
Journal of Computer Vision. vol. 88, 303–338 (2010).
23. Everingham, M., Eslami, S.M.A., Van Gool, L., Williams, C.K.I., Winn,
J., Zisserman, A.: The Pascal Visual Object Classes Challenge: A
Retrospective. International Journal of Computer Vision. vol. 111, 98–136
(2014).
24. Falcidieno, B.: Special session AIM@SHAPE project presentation. In:
Proceedings Shape Modeling Applications, 2004. pp. 329–329. IEEE.

Recent strategies for 3D reconstruction using
Reverse Engineering: a bird’s eye view

Francesco Buonamici1, Monica Carfagni1* and Yary Volpe1

1 Department of Industrial Engineering, University of Florence (Italy), Via di Santa Marta 3,
Florence, Italy
* Corresponding author. Tel.: +39-0552758731; fax: +39-0552758755. E-mail address:
monica.carfagni@unifi.it

Abstract This paper presents a brief review of recent methods and tools available to designers to perform reverse engineering of CAD models starting from 3D scanned data (mesh/points). Initially, the basic RE framework, shared by the vast majority of techniques, is sketched out. Two main RE strategies are subsequently identified and discussed: automatic approaches and user-guided ones.

Keywords: Reverse Engineering; CAD reconstruction; Constrained Fitting.

1 Introduction

CAD models are nowadays fundamental in a great number of engineering fields and applications. Within the design and fabrication processes of any mechanical part there are a number of steps where CAD models prove to be essential (e.g. sketching, 3D drawing, structural analysis). In the most advanced engineering companies and mechanical design studios, CAD models have become, in fact, practically indispensable, due to their great benefits. In this respect, noticeable examples are the reduction of the design phase duration and related costs, higher control of the project, and the access to a series of computer-aided tools (e.g. FEM, CAM) that allow a level of precision and efficiency otherwise impossible to reach.
In other words, CAD models have significantly shaped modern engineering and the whole product development process; therefore, many problems arise whenever the CAD model of a physical part that needs to be re-engineered is not available.
Reverse Engineering (RE) aims at the retrieval/generation of the CAD model
of a mechanical part, starting from 3D data directly measured on the physical
object. The measurement can be done by means, for example, of a 3D scanner or a Coordinate-Measuring Machine (CMM), and typically results in a set of points or a mesh describing the object.

© Springer International Publishing AG 2017 841


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_84

The measured data implicitly contain information about the part's geometry, its surfaces and geometric features (e.g. planes, cylinders, spheres, etc. that may compose the object). The obtained data, however, are not useful for the designer as they stand, due to multiple factors: 1) measured data are always affected by an error that generally cannot be overlooked; 2) acquired points provide a discrete, not explicit, representation of the original geometry; 3) the physical part has been fabricated with an imperfect process and has, therefore, inevitably diverged to a certain degree from its original design.
Summing up, acquired data are influenced by all the non-idealities of the fabrication and measurement processes and necessarily need to be elaborated to obtain a usable result; to serve the designer's needs, the obtained information must be channelled into an ideal mathematical representation of the object (a CAD model), attempting to retrieve the original design intent.
Finding a "good" geometrical representation, incorporating as closely as possible the original part design intent, is the ultimate result of the whole RE process. This topic has been recently addressed by a number of studies, and has received the attention of several software companies, which have released advanced RE-oriented CAD tools.
On the basis of the above considerations, the present work aims at identifying the latest trends, innovations and limits of the RE process. To better understand the CAD reconstruction issue, a description of the basic RE framework is provided in section 2. In sections 3 and 4, two different RE approaches/tools are reviewed; due to their importance for designers, section 4 will be particularly focused on RE commercial software packages.

2 Basic RE Framework

In this section, the steps composing the basic RE framework (Fig. 1), usually
shared by the vast majority of approaches and techniques, will be illustrated.
As previously hinted at, the typical RE process starts with the acquisition of 3D data describing the shape and dimensions of the physical object. This is typically achieved thanks to a 3D acquisition system (e.g. a 3D scanner); other strategies [1, 2, 3] exploit a set of orthographic views or 2D images to reconstruct the 3D information of the object in an alternative way.
The acquired data (usually a set of point clouds) are then processed to generate a mesh; during this step, additional operations, like point cloud filtering and merging, are usually performed. The mesh, composed of a number of triangles, is subsequently segmented into multiple isolated regions. Each separate set of triangles is later classified: its geometrical properties are analysed, and the region is associated with a geometric feature (e.g. cylinder, plane, torus, etc.). The information obtained in the classification step determines the choice of a set of mathematical features that are fitted to the mesh in a subsequent step, minimizing a fitting error. This is the key operation of the whole RE process, and the way it is carried out heavily influences the final CAD model reconstruction.

Lastly, post-processing operations are performed to stitch together the generated surfaces, and the final CAD model is created.

Fig. 1. RE Framework. a) Physical part; b) 3D data acquisition; c) mesh segmentation; d) classification of segmented regions; e) reconstructed CAD model.

The above-mentioned steps are shared, to a certain degree, by all existing RE approaches; therefore, a number of procedures and algorithms responsible for each step have already been deeply studied and tested in the literature. Here, a few basic considerations regarding the most important aspects of the first phases of the RE process are provided. With respect to the acquisition phase, the user typically needs to acquire multiple point clouds to successfully describe objects with elaborate shapes. A single point cloud (necessary in later steps) is usually obtained thanks to registration algorithms, the most famous one being the Iterative Closest Point (ICP) algorithm. Typical problems that need to be taken into account in this step are the non-uniformity of points and the asymmetry of the scan data; both these problems can be dealt with using an appropriately defined minimizing function, such as the one presented in [4]. Registration is usually followed by sampling, which aims at obtaining a single point cloud with a uniform distribution; if reconstruction strategies exploiting curves directly derived from the point cloud are applied, as in [5, 6], this step can be ignored.
The subsequent step, called "triangulation" [7], is usually carried out with well-known and established techniques, such as the Delaunay algorithm. Considering the segmentation step, which subdivides the mesh into groups of triangles with similar geometric features, a review of techniques is presented by Di Angelo and Di Stefano in [8]; in the article, a number of segmentation methods are applied to meshes obtained by scanning real parts. It is worth mentioning that the literature describing point cloud and mesh operations (i.e. acquisition, sampling, segmentation, classification) has been developed rather extensively and, as previously suggested, each step is usually executed with well-known gold-standard techniques (a detailed review of the most important methods can be found in [9]). As a consequence, in the following sections, this article will particularly focus on the description of the so-called "fitting" step; this phase is arguably the most important of the RE process, its execution directly conditioning the accuracy

of the final CAD model. Moreover, the fitting step is arguably the least explored and the most recently studied [10].
In detail, two main categories of RE strategies are available to designers, which together define the current state of the art. A first class of approaches comprises methods and algorithms that, starting from the acquired 3D data, try to obtain the final CAD model automatically or semi-automatically. This approach, although fast and convenient in many respects, generally leads to imperfect CAD models, usually expressed in formats not directly usable in the most common CAD software packages.
A second approach to the problem is represented by RE processes that rely mostly on user-guided tools. These strategies exploit the designer's knowledge and engineering skills in order to ensure a more controlled process and, possibly, a model closer to its original design. On the other hand, they require competent and trained users involved in rather long and complicated processes. Moreover, to be efficient, these approaches heavily rely on a CAD-like modelling environment; therefore, user-guided RE tools are principally available in well-known commercial RE software packages.
Recently, significant improvements have been made in both of the presented areas, aiming at algorithms, methods and tools for an effective RE process. To identify the limits and trends of the most recent RE tools available to designers, a study of the state of the art of both automatic and user-guided approaches is therefore presented.

3 Automatic and Semi-Automatic RE Strategies

A number of approaches, subsequently described, have been proposed in the scientific literature to perform automatic reconstruction of a CAD model starting from the acquired 3D data; notable differences among the approaches usually lie in the previously described fitting step. In this respect, the most direct and easy-to-implement method exploits the information obtained in the classification step (i.e. the recognized surfaces that should make up the mathematical model) to perform a direct and separate fitting of each analytical surface to the corresponding scanned data; the final CAD model is obtained by finding the set of surfaces that minimizes a defined fitting error. A notable example of this technique is provided in [11] by Bénière et al.; in the article, a method to perform the reconstruction of B-REP models starting from a 3D mesh not affected by error is presented. The strategy relies on the fitting of independent geometric primitives (i.e. plane, sphere, cylinder and cone) to segmented data. The approach covers three steps: 1) mesh segmentation and primitive extraction (based on differential geometry operators); 2) reconstruction of relationships between primitives; and 3) B-REP creation.
This strategy, although rather straightforward, presents some limits: CAD models generated with this approach are usually affected by defects, due to errors introduced during the acquisition and the reconstruction; typically, models obtained with this technique are poor representations of the object's original design.
In fact, in practically every engineering application it is essential to flawlessly reconstruct at least a subset of the geometrical features and dimensions that are directly responsible for the functioning or fitting of the part. Even though the presented strategy (i.e. separate fitting of analytical surfaces) does provide the best possible mathematical representation of the scanned data, it is generally more meaningful to use a method that provides a result that somewhat diverges from the measurements, in order to retrieve a higher level of design intent.
The constrained fitting technique [12, 13, 14, 15] is one of the reconstruction approaches that partially sacrifices the adherence between surfaces and scanned data in favour of a closer representation of the ideal design of the part. This is achieved by imposing constraints in the fitting step, actually transforming the previously described unconstrained minimization into a constrained one. This approach allows the reconstruction of a model dimensionally faithful to the scanned data as well as respectful of a set of significant constraints. Typical constraints are geometric relations between features (e.g. parallelism of planes, orthogonality between axes, symmetries or pattern regularities of features, etc.) or significant dimensions.
Different algorithms performing constrained fitting can be found in the literature. In some cases, the constraints are detected automatically by a dedicated procedure [14], which analyses the relations between the identified surfaces, measuring parameters and comparing them with a set of threshold values. As an example, two planar surfaces forming an angle of 89.9° could be detected as orthogonal and their relation would be imposed. Other methods rely on the user to impose conditions and constraints in the fitting step [12], using point-and-click graphical interfaces or other types of UIs.
Once the constraints are defined, an optimization is automatically performed, and the final set of surface parameters, minimizing the fitting error under the conditions imposed by the constraints, is identified.
Werghi et al. present in [12] one of the first approaches to constrained fitting; in their work, a CAD model is reconstructed starting from segmented range data. The authors manually impose geometric constraints as non-linear equations in the fitting of a set of analytical surfaces; a Levenberg-Marquardt algorithm is responsible for the minimization of an objective function, which is defined by two contributions: 1) the fitting error between surfaces/data, expressed as a function of the surface parameters, and 2) a weighted sum of the imposed constraints. It is important to highlight that, due to the presence of the set of weights, the constraints are imposed only up to a certain tolerance in this method; this is rather common in constrained fitting techniques, due to the numerical problems and complexity that a constrained optimization with perfectly imposed constraints introduces.
Wang et al. discuss in [13] an extension of the method presented in [12]; the authors perform a constrained optimization, based on [12], to fit a set of quadratic surfaces to segmented 3D data. Moreover, their method includes a feature-based reconstruction step, which recognises CAD features (i.e. extrude, revolve, sweep and loft modelling operations) in the segmented mesh and evaluates a set of parameters to fit the identified solids to the 3D data. An example of a CAD model obtainable with this approach is represented in Fig. 2.

Fig. 2. Model reconstruction of a mechanical part by Wang et al. [13]. a) Original point cloud; b)
segmentation result; c) reconstructed model.

Benko et al. presented in [14] a slightly different constrained fitting approach, applied to both 2D and 3D applications. The method assumes a prior automatic recognition of the constraints to be imposed, with a methodology similar to [16]; the optimization is carried out in a manner similar to the one previously described. Both these methods [12, 14] attempt to perform an automatic reconstruction of the CAD model and generally achieve a good representation of the original design intent. The authors state that the introduction of constraints surely improves the retrieval of the ideal geometry of the studied part; furthermore, in a number of tests, the dimensions also result closer to their original values.
As previously described, constraints are usually imposed within a tolerance in the fitting of the surfaces to the scanned data and are, therefore, only approximately enforced, even if the designer is certain of their presence in the CAD model and of their significance. In practice, the errors and imperfections introduced by this approximation have a partial influence on the dimensions of the model, in some cases being practically non-existent, but they can compromise the model's usability in subsequent applications.
This is a central problem in engineering applications, particularly important since all the mentioned techniques (and generally all the automatic and semi-automatic approaches) usually produce non-parametric CAD models (e.g. STEP, IGES, B-REP representations) that do not contain any information about the part's modelling history. Hence, an automatic feature recognition step is usually performed by the designer, within a chosen CAD environment, to convert the non-parametric model into a parametric one. Unfortunately, the efficacy of the automatic feature recognition step is negatively influenced by the imperfections originated in the previous constrained fitting step, especially by loosely imposed constraints. As a consequence, this additional step heavily inhibits the general efficiency of the presented framework and has limited the applicability of automatic and semi-automatic RE approaches.
Summing up, an accurate recognition of features, the definition of an exact modelling tree, and the achievement of a meaningful final CAD model, although fundamental elements in engineering applications, are rather difficult to obtain, especially with an automatic approach, despite the number of scientific works dealing with this topic.

4 User-Guided RE Tools

User-guided approaches represent, nowadays, the category of RE methods most used by designers. This is due to the limitations of the automatic approaches described in the previous section, which make user guidance the most reliable way to reach a satisfying result (i.e. a parametric CAD model as faithful as possible to the original design intent). In this framework, in fact, the designer is in control of the whole process and can build a model that reflects his/her knowledge of the part's function, fitting and original design.
It is important to note that the efficacy and usability of user-guided tools highly depend on the "hosting" environment in which they are implemented; in particular, the most effective RE tools require a proper CAD-like software environment, provided with 1) parametric solid modelling capabilities and 2) scanned data/mesh handling features. In view of that, advanced RE tools can be found mainly in a limited number of well-known commercial RE systems. Chang and Chen presented in [10] a description of the state of the art of commercial RE systems; their study covers several software packages and focuses on parametric modelling. The present work, using their conclusions as an input, provides an up-to-date description of the current research.
CAD and RE modelling tools are generally comparable: on a basic level, both allow the user to generate analytical surfaces; RE tools, however, usually make it possible to extract useful information from the scanned data so as to generate guided surfaces and geometric features.
Basic RE functionalities, available in most systems, are limited to the fitting of a geometric feature/surface to the scanned/segmented data. In this case, a subsequent manual stitching of adjoining surfaces is performed to generate the CAD model. In most low-level systems, the resulting model is not parametric and, therefore, cannot be modified once the reconstruction process is finished. Another limit of these systems is represented by the available types of surfaces/features: the majority of systems permit the creation of simple primitives (e.g. cylinders, spheres, etc.) or NURBS patches, particularly useful for the reconstruction of freeform surfaces. More advanced systems allow the fitting of more refined surfaces, such as extrusion and revolution surfaces.
Full-parametric tools, on the other hand, are available only in the most advanced RE software packages; parametric sketches and geometric features, modelling feature trees, and the possibility of drawing sketches directly using information provided by the scanned data are among the most advanced RE tools currently available to designers.
In advanced RE systems, the reconstruction follows the steps outlined in section 2; the scanned mesh is imported into the modelling environment and segmented, and the various mesh regions are classified; afterwards, a convenient reference frame, aligned according to the mesh's most significant geometric features, is identified and used to guide all subsequent modelling steps. An important feature, recognizable and dimensionally relevant, is usually chosen to begin the reconstruction; all subsequent features are generated using the reference frame and the main feature as landmarks (Fig. 3).
The most interesting capabilities and tools are offered by specialized RE software packages; in this respect, notable examples are Rapidworks® (a Nextengine proprietary version of Geomagic Design X®, formerly Rapidform XOR3®) and Polyworks®.
Rapidworks®, in particular, provides a full-parametric modelling environment and offers all the functionalities previously mentioned, the most important being: 1) fitting of primitives, revolution or extrusion surfaces and NURBS patches to the scanned data; 2) loft/sweep surface fitting; 3) the possibility of imposing a single geometrical constraint in the creation (i.e. in the fitting step) of some types of surfaces and features (e.g. the axis direction in the revolution/extrusion wizard); 4) 2D parametric sketching guided by mesh sections; 5) a solid modelling environment directly linked to a traditional CAD environment (i.e. Solidworks®), allowing for fully editable and directly usable models.
Polyworks® offers similar functionalities: the software, for example, provides parametric sketches that can be drawn upon the mesh data; the system, however, does not offer 3D parametric solid modelling capabilities, and the sketches must be exported into an external CAD system in order to perform subsequent 3D modelling operations.
In addition to proper RE systems, a number of commercial CAD software packages also offer useful tools to perform CAD reconstruction. Their functionalities, even if not specifically RE-oriented and generally not the most advanced, are to some extent comparable to those previously described. Although their modelling tools perform flawlessly, these systems are usually noticeably limited in the interaction with mesh/scanned data; as an example, mesh-guided sketches are not available.
Among the CAD systems tested by the authors, Siemens NX® and Leios2® prove to be the best equipped to fulfil RE needs. Both these systems provide a parametric 3D modelling environment and allow the fitting of a set of simple primitives to the scanned data. Moreover, for certain primitives it is possible to enforce a single geometrical constraint (e.g. a relation of orthogonality or parallelism between axes) during the fitting; this feature, also available in Rapidworks® as previously mentioned, allows the exact imposition of known constraints in the reconstruction, generating more meaningful models. Regrettably, advanced geometrical constraints cannot be imposed with these tools and their usefulness is, therefore, limited.
Summing up, user-guided RE tools, mostly available in commercial RE and CAD software systems, permit the reconstruction of CAD models by means of parametric modelling tools; the introduction of parametric models increases the final CAD model's usability and its usefulness for the designer, making the previously mentioned "automatic feature recognition" step redundant. The creation of the model is generally achieved by means of a series of independent fitting steps: in this framework (Fig. 3), every feature previously identified is manually and individually generated, one feature at a time. Regrettably, the established feature creation chain imposes higher uncertainties on the last features generated, which are negatively affected by previous errors and wrong choices of the designer.

Fig. 3. Sequential steps of the reconstruction process of an electrical socket adapter

Regarding the level of design intent retrievable with this approach, geometrical constraints can generally be imposed in the process, in a limited number and up to a certain level of complexity, allowing for models more faithful to the original design intent of the part. Sadly, this framework, although highly reliable and assuring an overall better result with respect to automatic RE approaches, relies on a competent user and a time-consuming process.

5 Conclusions

In this paper, a series of practical approaches to CAD reconstruction were briefly reviewed. Two main RE strategies were identified and discussed: the automatic/semi-automatic approach and the user-guided approach. Both approaches rely on the same underlying framework and therefore exploit similar "ingredients" to perform the reconstruction. Automatic methods generally provide a fast and easy-to-perform solution to the RE problem; the obtained models, however, are affected to a certain degree by imperfections and are usually non-parametric representations. For these reasons their usefulness to designers is rather limited. User-guided methods, on the other hand, require a competent and trained user, but allow the retrieval of a CAD model faithful to the original design intent in terms of geometry and dimensions.

The user-guided approach is, arguably, the most used by designers nowadays,
mostly due to the possibility of obtaining a parametric model directly at the end of
the process. Future research, addressed to the development of a streamlined process
and of tools capable of imposing advanced geometrical constraints, could further
increase the benefits of user-guided methods.

References

1. Governi, L., Furferi, R., Palai, M., and Volpe, Y. 3D Geometry Reconstruction from Orthographic Views: A Method Based on 3D Image Processing and Data Fitting. Computers in Industry, 2013, 64(9), pp. 1290-1300.
2. Furferi, R., Governi, L., Volpe, Y., Puggelli, L., Vanni, N., and Carfagni, M. From 2D to 2.5D, i.e. from Painting to Tactile Model. Graphical Models, 2014, 76(6), pp. 706-723.
3. Furferi, R., Governi, L., Palai, M., and Volpe, Y. 3D Model Retrieval from Mechanical Drawings Analysis. International Journal of Mechanics, 2011, 5(2), pp. 91-99.
4. Di Angelo, L. and Di Stefano, P. A Computational Method for Bilateral Symmetry Recognition in Asymmetrically Scanned Human Faces. Computer-Aided Design and Applications, 2014, 11(3), pp. 275-283.
5. Furferi, R., Governi, L., Palai, M., and Volpe, Y. Multiple Incident Splines (MISs) Algorithm for Topological Reconstruction of 2D Unordered Point Clouds. International Journal of Mathematics and Computers in Simulation, 2011, 5(2), pp. 171-179.
6. Demarsin, K., Vanderstraeten, D., Volodine, T., and Roose, D. Detection of Closed Sharp Edges in Point Clouds Using Normal Estimation and Graph Theory. Computer-Aided Design, 2007, 39(4), pp. 276-283.
7. Di Angelo, L., Di Stefano, P., and Giaccari, L. A New Mesh-Growing Algorithm for Fast Surface Reconstruction. Computer-Aided Design, 2011, 43(6), pp. 639-650.
8. Di Angelo, L. and Di Stefano, P. Geometric Segmentation of 3D Scanned Surfaces. Computer-Aided Design, 2015, 62, pp. 44-56.
9. Bi, Z. M. and Wang, L. Advances in 3D Data Acquisition and Processing for Industrial Applications. Robotics and Computer-Integrated Manufacturing, 2010, 26(5), pp. 403-413.
10. Chang, K.-H. and Chen, C. 3D Shape Engineering and Design Parameterization. Computer-Aided Design and Applications, 2011, 8(5), pp. 681-692.
11. Bénière, R., Subsol, G., Gesquière, G., Le Breton, F., and Puech, W. A Comprehensive Process of Reverse Engineering from 3D Meshes to CAD Models. Computer-Aided Design, 2013, 45, pp. 1382-1393.
12. Werghi, W., Fisher, R., Robertson, C., and Ashbrook, A. Object Reconstruction by Incorporating Geometric Constraints in Reverse Engineering. Computer-Aided Design, 1999, 31(6), pp. 363-399.
13. Wang, J., Gu, D., Yu, Z., Tan, C., and Zhou, L. A Framework for 3D Model Reconstruction in Reverse Engineering. Computers & Industrial Engineering, 2012, 63, pp. 1189-1200.
14. Benkő, P., Kós, G., Várady, T., Andor, L., and Martin, R. Constrained Fitting in Reverse Engineering. Computer Aided Geometric Design, 2002, 19(3), pp. 173-205.
15. Fisher, R. B. Applying Knowledge to Reverse Engineering Problems. Computer-Aided Design, 2004, 36, pp. 501-510.
16. Langbein, F. C., Mills, B. I., Marshall, A. D., and Martin, R. R. Recognizing Geometric Patterns for Beautification of Reconstructed Solid Models. Proc. Int. Conf. on Solid Modelling and Applications, IEEE Computer Society Press, 2001, pp. 10-19.
Section 5.5
Product Data Exchange and Management
Data aggregation architecture “Smart-Hub” for
heterogeneous systems in industrial
environment

Ahmed AHMED¹*, Lionel ROUCOULES¹, Rémy GAUDY² and Bertrand LARAT²

¹ Arts et Métiers ParisTech, CNRS, LSIS, 2 cours des Arts et Métiers, 13697 Aix-en-Provence,
France
² Sogeti High Tech – Groupe Capgemini, 2-10 rue Marceau – CS70400, 92136 Issy-les-
Moulineaux, France
* Corresponding author. Tel.: +33-6-66-31-33-50; E-mail address: ahmed.ahmed@ensam.eu

Abstract Distributed systems are widespread in industrial environments. One of
the key challenges is the exchange and aggregation of data between these systems.
Although standards play an important role in solving data interoperability issues
between systems, they do not completely address existing industrial problems. In
fact, it cannot be taken for granted that an industrial environment complies with a
unique standard; therefore, ad-hoc solutions are used to solve this issue. In this
article, the authors propose a generic architecture to address the interoperability
between systems. This architecture is developed on model-based techniques and
principles. Moreover, it reduces the need for human intervention and time: the
building blocks of the architecture are developed once and then reused. Finally,
the architecture is described through its application to a case study.

Keywords: interoperability; data aggregation; smart systems; model-based engineering; supervision

1 Introduction

In industrial enterprise environments, distributed systems are widely deployed to
perform useful operations independently. The networking and integration of these
heterogeneous systems to achieve a common goal leads to a larger complex
system, usually defined as a System of Systems (SoS) [1]. These systems include
emerging smart systems such as smart grids, smart gas networks, smart cities, etc. [2, 3]. As
shown in figure 1, they are widely found at the control level and the enterprise level.

© Springer International Publishing AG 2017 853


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_85

On the one hand, the control level depicts the vertical data exchange. It includes
the full data acquisition chain from the control module of devices (sensors
and actuators) up to the acquisition servers through Machine-To-Machine (M2M)
communication standards. Afterward, the acquired data are monitored on a human
machine interface (HMI). This HMI not only assists in reading the
measurements provided by the sensors but also in driving the actuators. This chain
for monitoring and driving the physical devices is referred to as the Supervisory
Control and Data Acquisition (SCADA) architecture [4]. Various SCADA
systems may co-exist in a given environment, each with its own data format, such
as OPCUA¹, MQTT², or Sigfox³; this creates the need to use the appropriate
standard and mechanism to access the data. In this SCADA architecture, we
are interested in the acquisition server for exchanging the data with other systems.
On the other hand, the enterprise level represents all the other existing
information systems in the industrial environment, e.g. maintenance systems (Computerized
Maintenance Management System - CMMS), Geographic Information Systems
(GIS), decision support systems, forecasting systems, logistic systems, etc.
Therefore, the data need to be shared horizontally between the acquisition servers
at the control level and the systems at the enterprise level. Examples from figure 1: a
CMMS information system (data producer - DP) produces data in CSV format for
the SCADA server (data consumer - DC), which itself uses the OPCUA standard; a GIS
system (DC) displays alarms originating from the OPCUA acquisition server (DP)
and the Sigfox backend (DP), while also using data from the CMMS (DP).
The heterogeneity of these systems, due to different data formats and semantics (OPCUA,
Sigfox 12-byte payloads, etc.), makes it very challenging to guarantee the
interoperable exchange of data between them. This paper therefore addresses the
problem of syntactic and semantic interoperability in order to guarantee the interoperable
exchange of data between heterogeneous systems. To this end, the authors propose a
generic interoperability architecture, intended to replace ad-hoc solutions, that relies upon
model-based engineering principles to deal with this issue.
The remainder of the paper is organized as follows: section 2 discusses the related
work; section 3 is devoted to the proposal of the interoperable architecture;
section 4 illustrates the technical aspects of implementing the proposed architecture
and its application to a case study, and finishes with a discussion; finally, section
5 concludes with a summary and some insights into ongoing and future work.

1 https://opcfoundation.org/
2 http://mqtt.org/
3 http://www.sigfox.com/

Fig. 1. Data Flow and exchange between heterogeneous systems

2 Related Works

The integration and automatic exchange of data among heterogeneous systems
lead to the essential requirement of interoperability. Some workgroups and
organizations steer their efforts toward decomposing interoperability issues by defining
reference models in architectural approaches, such as the GridWise Architecture
Council [5], the Smart Grid Architecture Model for the electricity domain [3] and
the Reference Architecture Model for Industrie 4.0 (RAMI 4.0) [6]. However, in all
these reference models, information interoperability in terms of syntax and
semantics is defined by domain-specific standards. Several standards have been developed
for various domains; examples include: the Common Information Model (IEC
61970, 61968 and 62325) [7] in the electricity domain, which defines the components
of electrical power systems and their relationships; the standards ISA-95 [8],
B2MML (Business to Manufacturing Markup Language), MIMOSA [9], ISO
15926 [10], and PRODML [11] in the oil and gas industry; and ISO 10303 [12] in
automation systems and product data exchange.
Furthermore, there exist technical standards such as OPCUA [13] for data
integration platforms. In spite of these standards, a fully end-to-end interoperable
environment cannot be taken for granted [14]. For instance, it is unrealistic to expect all
information systems to use OPCUA. Therefore, some work has been done on
mapping between standards, such as CIM to OPCUA in the electricity domain
[15]. Other work proposed the use of a model-based integration framework using
PRODML as the central metamodel for the oil and gas industry [16]. The major
limitation of these approaches is that they apply to specific standards and domains
only. Similarly, to the best of our knowledge and based on our industrial partners'
feedback, this issue is usually solved by developing ad-hoc mediator solutions.
This approach may not be practical in all situations because it involves numerous
manual human interventions that are time-consuming, error-prone, and lack both
flexibility and generality. Thus, the previous solutions are domain-dependent.
Therefore, in this work, a domain-independent interoperability framework for the
aggregation of data among systems is proposed.

3 Conceptual proposal: “Smart-Hub Architecture”

As shown in figure 1, the data originate from various sources, i.e. data
producers, and are consumed by other systems, i.e. data consumers. As a result, the data
consumer must interpret the received data. This leads us to decompose the
problem as follows:
1. Syntactic interoperability, due to different data syntaxes (formats) between
systems (e.g. XML, DB, CSV, etc.)
2. Semantic interoperability, due to the different interpretation of data
between systems (e.g. the data attribute "temperature" in system A is "temp"
in system B)
3. Various interfacing mechanisms to read data from data producers and write
data to data consumers (e.g. file system, request-reply, publish-subscribe,
SQL queries, web services, etc.)
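The first two problems can be made concrete with a small sketch; the attribute names ("temp", "temperature") and the CSV payload are made up for illustration:

```python
import csv, io, json

# Semantic mapping: the producer calls the attribute "temp", the shared
# vocabulary used by the consumer is "temperature" (hypothetical names).
SEMANTIC_MAP = {"temp": "temperature"}

def normalize_semantics(record):
    """Semantic interoperability: rename attributes to the shared vocabulary."""
    return {SEMANTIC_MAP.get(k, k): v for k, v in record.items()}

def csv_to_records(text):
    """Syntactic interoperability: parse one concrete format (CSV) into a
    format-neutral list of records (values stay as strings here)."""
    return [normalize_semantics(row) for row in csv.DictReader(io.StringIO(text))]

def records_to_json(records):
    """...and serialize the neutral records into another format (JSON)."""
    return json.dumps(records)

producer_output = "sensor,temp\nS1,21.5\nS2,22.0\n"
consumer_input = records_to_json(csv_to_records(producer_output))
```

Only the semantic map and the two converters are specific to the pair of systems; in the architecture proposed below they would correspond to replaceable repository entries rather than hard-coded functions.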
This work introduces a 4-layered generic interoperability architectural solution to
address the above-mentioned problems. This architecture, dubbed "Smart-Hub",
acts as a hub between all heterogeneous systems at all levels of an industrial
environment. It aggregates data originating from various data producers, manipulates
them in terms of syntax and semantics, and finally communicates them to data consumers.
The four layers are as follows:
• Communication & services layer: This layer manages the various protocols
and mechanisms needed to establish communication between the data
producer and consumer systems. It includes a repository of data
connectors, such as file system connectors, publish/subscribe
connectors, request/reply connectors, etc. It also supports the essential
services to read (consume) and write (produce) data from and to
the various systems, respectively.
• Extensible layer: This layer handles the projection (conversion) of data
between the DP/DC formats and the modeling environment. It includes
a repository of the supported data formats and syntax projection rules.
• Integration layer: This layer aggregates the model-based data in a global
data model (GDM) repository and fetches specific model-based datasets
from the GDM. It relies on a number of operational and transformation
rules and deals with the semantics of the data.
• Configuration layer: This layer focuses on the internal configuration
of the smart-hub. It is shared between all the previous layers to guarantee
the integration and the functionality of the smart-hub with the
surrounding interconnected systems.
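As an illustration only (all class and method names below are ours, not the authors' implementation), the four layers can be sketched as composable components:

```python
class CommunicationLayer:
    """Communication & services layer: a repository of connectors keyed by
    interfacing mechanism (file system, publish/subscribe, request/reply...)."""
    def __init__(self):
        self.connectors = {}

    def register(self, name, read_fn):
        self.connectors[name] = read_fn

    def read(self, name):
        return self.connectors[name]()


class ExtensibleLayer:
    """Extensible layer: a repository of projection rules between each
    supported syntax and the neutral modeling environment."""
    def __init__(self, inject_rules):
        self.inject_rules = inject_rules  # format name -> parser function

    def inject(self, fmt, raw):
        return self.inject_rules[fmt](raw)


class IntegrationLayer:
    """Integration layer: aggregates injected data into a global data model
    (GDM) and fetches specific datasets from it."""
    def __init__(self):
        self.gdm = []

    def aggregate(self, records):
        self.gdm.extend(records)

    def fetch(self, predicate):
        return [r for r in self.gdm if predicate(r)]


# Configuration layer, reduced here to a plain dict shared by the other layers.
config = {"scada.connector": "request_reply", "scada.format": "csv"}

comm = CommunicationLayer()
comm.register("request_reply", lambda: "S1,21.5")           # fake SCADA read
ext = ExtensibleLayer({"csv": lambda raw: [raw.split(",")]})
integ = IntegrationLayer()
integ.aggregate(ext.inject(config["scada.format"],
                           comm.read(config["scada.connector"])))
```

The point of the sketch is the separation of concerns: supporting a new producer means registering one connector and one projection rule, while the integration layer remains untouched.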
Figure 2 illustrates the smart-hub in the context of the example given previously
in figure 1. At this stage, we evaluate the architecture at a macro level; the
inner details and structure of each layer are beyond the scope of this paper.

Fig. 2. Smart-hub layered architecture

4 Technical proposals

In this section, the authors propose the use of Model-Based Engineering (MBE)
techniques and operations to implement the layers of the smart-hub interoperability
architecture.

4.1 Modeling fundamentals

MBE [17] helps to handle semantic and syntactic interoperability between standards
and languages. In MBE, everything is a simplified representation of a certain
reality, i.e. a model. The OMG proposes a 3-layer architecture whose main notions
are model, metamodel and meta-metamodel, as shown in figure 3 [18]. Each lower
notion conforms to the notion above it, i.e. to its modeling language.
MBE supports model transformation, i.e. the generation of a model Mb from a
model Ma by a transformation Mt. The heterogeneous systems themselves are not
natively interoperable with MBE environments, which results in the requirement for
projection phases. Projection is the generation of a model in the chosen modeling
environment from structured data in the technical space (TS) of the system, and vice
versa; these operations are called injection and extraction, respectively.
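A toy illustration of these notions, with plain data structures of our own invention standing in for MOF/ECORE:

```python
# M3 (meta-metamodel): the single construct used to define modeling languages.
metametamodel = {"Class": {"has": "attributes"}}

# M2 (metamodel): a small sensor-description language, expressed with M3's
# "Class" construct (hypothetical example).
metamodel = {"Sensor": {"attributes": ["id", "temperature"]}}

# M1 (model): concrete elements that must conform to the metamodel.
model = [{"type": "Sensor", "id": "S1", "temperature": 21.5}]

def conforms(model, metamodel):
    """A lower notion conforms to the upper one: every model element must
    instantiate a metamodel class and carry exactly its declared attributes."""
    for elem in model:
        cls = metamodel.get(elem["type"])
        if cls is None or set(elem) - {"type"} != set(cls["attributes"]):
            return False
    return True

def transform(ma):
    """Mt: generate a model Mb from a model Ma (here, Sensor -> Measurement)."""
    return [{"type": "Measurement", "value": e["temperature"]} for e in ma]
```

A real MBE toolchain would express the metamodels in ECORE and the transformation in a rule language such as ATL; the sketch only fixes the vocabulary used in the rest of the section.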

Fig. 3. Model based Engineering and projection main concepts



4.2 Architecture Implementation

Many tools support system modeling, such as Papyrus⁴, Uml2⁵, and the Eclipse
Modeling Framework (EMF)⁶. Based on the laboratory's expertise, we have
chosen the Eclipse Modeling Framework (EMF) to support the implementation of
the smart-hub solution. EMF supports model and data interchange via the XML
Metadata Interchange (XMI) format. Metamodels are defined using the ECORE
language. The extensible layer for projection has been realized using a
combination of the Acceleo tool⁷ and EMF's built-in XSD/XML support. The
integration layer, for aggregation and fetching rules, has been implemented
using a combination of the rule-based ATL⁸ tool [19] and direct Java manipulation.
The communication and services layer has been implemented
through extensible Eclipse plugins. The evaluation and selection of the various
available tools are under investigation with regard to different case studies.
Figure 4 illustrates the aggregation of data from the data producers
(SCADA server and Sigfox backend server) and its preparation for the data consumer
(myHMI), which displays the aggregated data. The steps for aggregating the data
from the producers to the consumer are as follows:
• As a preliminary step, the smart-hub is configured, via the configuration
layer, with all the information required to establish communication with
the SCADA server (using the OPCUA connector) and the Sigfox backend.
Furthermore, the repositories in the extensible and integration
layers must be configured with the mapping rules for the projection and
the transformation rules.
• Through the communication & services layer, the smart-hub issues a
request for the information from the SCADA server and subscribes
to the data of the Sigfox backend server as well.
• Using the projection repository of the extensible layer, the data of both
the SCADA server (A) and the backend server (B) are injected into the
modeling environment.
• With the integration layer functionalities, the data (A&B) are aggregated
and fetched (creation of C=A+B). This includes operations such as
avg, sum, semantic mapping, etc.
• With the extensible layer, the data (C) are then formatted according to the
consumer component (myHMI).
• Finally, the data (C) are published to myHMI through the communication
& services layer.
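The steps above can be condensed into a small end-to-end sketch; the payloads, decoders and myHMI output syntax are all assumptions for illustration:

```python
def inject_scada(raw):
    """Injection of the SCADA data (A); the dict payload stands in for an
    OPCUA read and is entirely made up."""
    return [{"source": "scada", "sensor": k, "temperature": v}
            for k, v in raw.items()]

def inject_sigfox(frames):
    """Injection of the backend data (B); real Sigfox frames are 12-byte
    payloads, reduced here to already-decoded (device, value) tuples."""
    return [{"source": "sigfox", "sensor": d, "temperature": v}
            for d, v in frames]

def aggregate(a, b):
    """Integration layer: creation of C = A + B, plus a derived avg operation."""
    c = a + b
    avg = sum(r["temperature"] for r in c) / len(c)
    return c, avg

def format_for_hmi(c, avg):
    """Extensible layer (extraction): render C in myHMI's assumed syntax."""
    lines = ["{}: {} °C".format(r["sensor"], r["temperature"]) for r in c]
    lines.append("average: {:.1f} °C".format(avg))
    return "\n".join(lines)

a = inject_scada({"S1": 21.0, "S2": 23.0})
b = inject_sigfox([("D7", 20.0)])
hmi_view = format_for_hmi(*aggregate(a, b))
```

In the actual implementation the two injections and the extraction are generated from repository rules (Acceleo, XSD/XML) and the aggregation from ATL rules, rather than hand-written as here.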

4 https://eclipse.org/papyrus/
5 https://wiki.eclipse.org/MDT-UML2
6 https://eclipse.org/modeling/emf/
7 https://eclipse.org/acceleo/
8 https://eclipse.org/atl/

Fig. 4. Aggregating data from OPCUA server and backend server using smart-hub framework

4.3 Discussion

The solutions existing today (see the related work) require the development of
ad-hoc, domain-dependent solutions. They are time-consuming, error-prone and
lack both flexibility and generality due to numerous human interventions: for
every exchange between systems, a separate ad-hoc solution is required. The
originality of our solution, with respect to the related work, lies in the fact that the
smart-hub is developed as domain-independent and extensible blocks in each layer
of the architecture. These blocks may require more investment in terms of human
intervention and development time, but once developed, they can be reused many
times. The end user saves time because he just needs to reuse and set up these
pre-built blocks to accomplish the required interoperability exchange.

5 Conclusion and future work

In this paper, we have studied interoperability solutions between heterogeneous
systems in an industrial environment. We proposed a model-based framework to
handle various formats and standards. The added value of this proposal is the
reduction of human intervention and time: the user merely configures and reuses
the building blocks of the smart-hub. Currently, the work focuses on defining the
repository models for the layers of the architecture. In future work, we intend to
fully describe and formalize this modeling architecture. This proposal is still in its
early phases of development and several practical and theoretical issues remain to
be addressed, including hub data persistence and historization, the generality of the
aggregation/fetching rules, performance, security, etc.

Acknowledgments This research work takes place under a French National project, named
Gontrand, for the real-time management of the smart gas network.

References

1. M. Jamshidi, Systems of Systems Engineering: Principles and Applications, Boca Raton:
CRC Press, 2008.
2. K. Su, J. Li and H. Fu, "Smart city and the applications," in International Conference on
Electronics, Communications and Control (ICECC), Zhejiang, September 2011.
3. CEN-CENELEC-ETSI and Smart Grid Coordination, "Smart Grid Reference Architecture,"
European Committee for Standardization: Brussels, Belgium, p. 2437, 2012.
4. P. Zhang, Advanced Industrial Control Technology, Oxford: William Andrew Publishing,
2010.
5. The GridWise Architecture Council, "GridWise Interoperability Context-Setting
Framework," 2008.
6. D.-I. P. Adolphs, "RAMI 4.0 An architectural Model for Industrie 4.0," 2015. [Online].
Available: http://www.omg.org/news/meetings/tc/berlin-15/special-events/mfg-
presentations/adolphs.pdf. [Accessed 18 03 2016].
7. M. Uslar, M. Specht, S. Rohjans, J. Trefke and M. Specht, The Common Information Model
CIM: IEC 61968/61970 and 62325 - A Practical Introduction to the CIM, Springer-Verlag
Berlin Heidelberg, 2012.
8. Enterprise-control system integration, IEC 62264 Standard, 2000.
9. "MIMOSA, An Operations and Maintenance Information Open System Alliance," [Online].
Available: http://www.mimosa.org/.
10. Industrial automation systems and integration—Integration of life-cycle data for process
plants including oil and gas production facilities, ISO 15926, 2003.
11. Energistics, "PRODML Standards," [Online]. Available:
http://www.energistics.org/production/prodml-standards.
12. "Industrial automation systems and integration -- Product data representation," [Online].
Available:
http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=20579.
13. W. Mahnke, S.-H. Leitner and M. Damm, OPC Unified Architecture, Berlin: Springer
Science & Business Media, 2009.
14. G. A. Lewis, E. Morris, S. Simanta and L. Wrage, "Why Standards Are Not Enough to
Guarantee End-to-End Interoperability," Seventh International Conference on Composition-
Based Software Systems (ICCBSS 2008), pp. 164-173, 2008.
15. S. Rohjans, K. Piech and S. Lehnhoff, "UML-based modeling of OPC UA address spaces for
power systems," in Intelligent Energy Systems (IWIES), 2013 IEEE International Workshop
on, November 2013.
16. V. Veyber, A. Kudinov and N. Markov, "Model-driven Platform for Oil and Gas Enterprise
Data Integration," International Journal of Computer Applications (0975 - 8887), vol. 49, no.
5, 2012.
17. V. G. Daz, Advances and Applications in Model-Driven Engineering, IGI Global, August
2013.
18. OMG. Meta Object Facility (MOF), version 2.4.1, OMG Document.
19. F. Jouault and I. Kurtev, "Transforming models with ATL," in Proceedings of the 2005
international conference on Satellite Events at the MoDELS, Montego Bay, Jamaica, 2005.
Preparation of CAD model for collaborative
design meetings: proposition of a CAD add-on

Ahmad AL KHATIB¹*, Damien FLECHE¹, Morad MAHDJOUB¹, Jean-Bernard
BLUNTZER¹ and Jean-Claude SAGOT¹

¹ University of Technology of Belfort-Montbéliard, IRTES-SeT Laboratory
* Corresponding author. Tel.: +33-3-84-58-37-43; fax: +33-3-84-58-33-42. E-mail address:
ahmad.al-khatib@laposte.net

Abstract: In the New Product Design process, the CAD model is used in design
meetings to help design actors collaborate. However, it is important to prepare
digital mock-ups so that design meetings are more effective. Thus, we introduce
in this paper an approach to prepare the CAD model for collaborative design meetings,
supported by a self-developed CAD add-on. The proposed approach is based on
parametric modelling of the CAD model and involves two key steps: a) automatic
transfer of Functional Requirements to the CAD model and b) preparation of design
configurations and validation reports for collaborative design meetings. Our approach
was implemented through the development of the CAD add-on in a commercial
CAD system environment.

Keywords: Product design; Collaborative design meeting; CAD model; Design
requirements

1 Introduction

In a design meeting, different project stakeholders work together to achieve
precise goals. The meeting is also used to determine whether the product meets the
needs, to identify problems and to define the actions to conduct in order to fix these
problems [1]. Thus, it is necessary to improve communication between stakeholders
and to ease decision making during these collaborative phases. The CAD model is one
of the most used representations of the product during design meetings [2, 3]. CAD
models are known as intermediary objects whose main objective is to facilitate
communication and decision making [4]. In this context, design stakeholders
communicate, exchange about and validate the CAD model based on their
requirements and constraints. However, most of the time, the CAD model may integrate
different design choices (different values, alternatives, shapes, etc.) that
stakeholders must take decisions about. Furthermore, the entire CAD model is not
© Springer International Publishing AG 2017 861


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_86

always necessary during the whole design meeting. Depending on the design meeting
objectives, stakeholders may need to discuss one part of the CAD model or another.
In the same way, a detailed level of definition of the CAD model is not always a
requirement either. For instance, a design meeting can be based
on simplified CAD models, such as a skeleton model [5]. In addition, the
decisions taken during a design meeting must be checked against the design
requirements that represent the expected performance of the product. Unfortunately,
CAD models are not always suitable for this objective. It is crucial to propose
tools that help design stakeholders to better prepare the CAD model beforehand,
so that it can be used in an effective way during collaborative design meetings. This
should improve communication, decision making and the effectiveness of design
meetings.
Thus, the aim of this paper is to propose an approach and the associated tools to
help in preparing CAD models for design meetings. The proposed approach and the
support tool are based on a parametric CAD modeling approach with two principal
pillars: a) an automatic transfer of Functional Requirements as Design Parameters
in the CAD model and b) the prior definition of the design configurations of the CAD
model for the design meeting. These two steps are based on already existing
functionalities of a commercial CAD system.

2 State of the art

2.1 Digital Mock-Ups in collaborative design meetings

To assist project stakeholders in collaborative design meetings, the field of
CSCW (Computer Supported Cooperative Work) deals with how collaborative
activities and their coordination can be supported by means of computer systems
[6]. As a result of this field, many groupware tools and group support systems (GSS)
have been presented in the literature to support the productivity of design teams
and the collaboration of stakeholders engaged in design meetings. In these design
meetings, the participants are in the same place or geographically distributed, and
the computer support provides a medium for communication. Different works [7,
8] proposed immersive virtual reality tools to facilitate collaboration between
design actors and to evaluate the use value of the product in design reviews. These
tools make it possible to visualise and interact with the product representation at
scale 1 before the realisation of a physical prototype. Moreover, tabletop systems
are multi-user horizontal interfaces for interactive shared displays [9]. They
implement around-the-table interaction metaphors allowing co-located collaboration
and face-to-face conversation in a social setting [10]. Recent tools and applications
based on Web 2.0 technology and knowledge-based engineering have been
proposed in order to increase flexibility in operation and to reduce costs in
companies [11, 12]. For instance, these tools are used to help in group decision
making [11] and learning [13]. These tools and systems help stakeholders in
design meetings through interaction with the product representation or design
information.
Today, most design meetings use Digital Mock-Ups (DMU) to help design
stakeholders in their collaboration. A DMU is defined as an extended numerical
representation of the product; it is used as a platform for developing the
product/process, communicating and validating during the different steps of the design
process [14]. In this work, we focus on CAD models, which are the most
used representation of the product in design meetings and in the NPD process.
Indeed, any design and drawings of consumer-oriented products are based and
referenced in the 3D world, throughout the company and the supply chain. Therefore,
any iterations, changes and modifications are done first in the 3D world
and then fed to other applications and processed accordingly. The originality of
our work is to propose an approach to prepare the CAD model in advance in order
to facilitate design meetings. Indeed, when we analyze today's design meetings,
a lot of time is wasted in the decision process. Each stakeholder proposes
modifications on the fly, and the CAD actors try to implement them in the CAD
models; hence a lot of time is spent waiting for the modifications to be performed
before decisions can be taken. The research question exposed in this paper is
therefore based on this assessment: how can we be more efficient during collaborative
meetings? In this paper we choose to study this question, which could be considered
as an effective way to improve design meetings.

2.2 Design requirements and product models

Designers in the product design process apply their expert knowledge to find
solutions to design problems, and then to optimize those solutions within the design
requirements and the constraints defined by material, technological, economic,
legal, environmental and human-related considerations [15]. Customer
Requirements (CRs) are expressed by the consumer in response to changes in the
market environment. The CRs are transformed into Functional Requirements (FRs)
that describe the performance the product itself should have in order to satisfy
the CRs [16]. FRs are satisfied by defining or selecting Design Parameters (DPs).
Finally, Constraints (Cs) exist in the physical domain and must be satisfied. If we
study the decision-making process in design meetings, we can point out that it
is controlled by the FRs. So, stakeholders need to better integrate FRs into the
parametric CAD model definition, and to better follow the link between FRs and DPs.
Moreover, depending on stakeholders' objectives, decisions are needed about
several variations of DPs and design alternatives during design meetings. For this
reason, we propose an approach, supported by a specific tool, that allows FRs to be
integrated and the CAD model to be prepared for design meetings. The main objective
is to improve effectiveness and decision making during design meetings.
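The CR → FR → DP chain can be sketched as follows; the dataclasses, field names, example values and the surrogate `simulate` function are all hypothetical, not the format used by the paper's CAD add-on:

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalRequirement:
    name: str        # e.g. "resist handle load"
    criterion: str   # measurable performance criterion
    target: float    # acceptable limit for that criterion
    unit: str

@dataclass
class DesignParameter:
    name: str
    value: float
    satisfies: list = field(default_factory=list)  # traceability link to FRs

fr = FunctionalRequirement("resist handle load", "max deflection", 0.5, "mm")
dp = DesignParameter("handle_thickness", 4.0, satisfies=[fr.name])

def check_against_frs(dp, frs, simulate):
    """A decision on a DP value taken during a meeting can be checked against
    the FRs it must satisfy; `simulate` is an assumed surrogate model mapping
    the DP value to the FR criterion."""
    return all(simulate(dp.value) <= fr.target for fr in frs)

# Assumed behavior: deflection decreases with thickness.
ok = check_against_frs(dp, [fr], simulate=lambda t: 1.6 / t)
```

The explicit `satisfies` list is what makes the FR-to-DP link followable: when a stakeholder changes a DP during a meeting, the affected FRs can be re-checked immediately instead of being rediscovered later.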

The question of integrating other information into the CAD model is studied
through generic representations of products called product models. Product models
were initially studied in the works of Tichkiewitch et al. [17, 18, 19], which
highlighted that the geometric information of the product in the CAD model is not
sufficient. Indeed, design actors need to share more information about the product
from various disciplines (e.g. fabrication, forging, etc.). Tichkiewitch et al. also
proposed the multi-views model that integrates the views of design actors (or
professions) about the product and the links between these views, using the concepts
of component, link and relation [20]. The Function-Behavior-Structure (FBS)
paradigm is a global framework that describes the product by its functions, structure
and behavior [21]. The FBS-PPR model (FBS Product-Process-Resources) is an
extension of the FBS model for enterprise process modeling [22]. The PPO
(Product-Process-Organisation) model also extends the FBS model [23]: it describes
the link between the product model, the development process and the organisation
model. The product model part of the PPO model describes the links between
function, structure and behavior entities. The Core Product Model (CPM) [24] is
defined by a conceptual Entity-Relationship data model; this abstract model is based
on a representation of the function, behavior and structure paradigms. Finally,
SysML [25] is proposed as a Systems Modeling Language extended from the
Unified Modeling Language (UML).
The objective of our paper is to propose an approach that helps prepare CAD models for design meetings. To do this, we use the parameter and constraint functionalities already existing in CATIA. The objective is not to investigate a new product model; our work is rather based on an already existing CAD system, which has its own product model with its structure, functions, parameters and constraints. This approach and the self-developed add-on tool facilitate the automatic transfer of FRs as DPs and Cs into the CAD model and the definition of design configurations for collaborative design meetings.

3 Proposed approach

Figure 1 shows the different steps of the proposed approach.
The first step of this approach is the definition of the FRs. This step is based on the traditional phases of the design process [15]. It includes the importation of the requested FRs of the future product. For instance, these requirements could come from different professions such as use, aesthetics, mechanical requirements, etc. In this step, the FRs are provided as functions which are themselves divided into criteria (length, width, color, number, type, state, etc.).
After defining the FRs, the second step consists of the creation or the importation of the product CAD model.
Preparation of CAD model for collaborative design meetings … 865

The third step consists in transforming the FRs' criteria into DPs and Cs (as expert checks). The Cs are here the rules that define the admissible values of each parameter. For instance, a DP could be the height of a table (ergonomic-height-table) linked with the FR "to be ergonomic". So, the C related to this parameter could be {C1: 700 mm ≤ ergonomic-height-table ≤ 1000 mm}. Clearly, not all criteria of the FRs should be integrated in the CAD model as DPs and Cs; this depends on the stakeholders' needs for the specific design meeting.

Fig. 1. Flowchart of the proposed approach

In the fourth step, the parametric CAD modeling is performed in accordance with the DPs and Cs. This process allows the stakeholder to integrate FRs from the early steps of the CAD modeling process. During this step, the stakeholder can return to the previous step to create new DPs and Cs, which depend on the solutions developed during the modeling process.
In the fifth step, before the design meeting, the stakeholder prepares it. The preparation concerns the definition of the CAD model's configurations that should be validated during the meeting. Indeed, most of the time, the stakeholder may not be sure about his choices during the design process. Defining several configurations of the CAD model therefore makes it possible to compare the different options with the other stakeholders and to take decisions during the design meeting. A configuration of the CAD model is defined as the shape or arrangement of the CAD model's parts and the parameters' values that the stakeholder is not sure about. It can
be also named a product variant. More precisely, we define in this paper a configuration as the combination of the following elements:
a) The values of the parameters in the CAD model. A DP could take multiple values, and the stakeholder may not be sure about which one to choose. With this approach, he can make several configurations and give a different value to the DP in each configuration.
b) The property-states of the CAD model's parts. In fact, each part carries different properties such as alternative shapes, different colors, or an activated/deactivated state depending on the mechanism, etc. The stakeholder can demonstrate these alternatives with different configurations in order to take decisions with the other stakeholders during the design meetings.
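A configuration as defined in (a) and (b) can be represented as a simple data structure. The sketch below is illustrative only; the parameter names, parts and values are invented for the example and are not taken from the authors' tool.

```python
# A design configuration combines (a) the DP values and (b) the part
# property-states that are still open for decision in the meeting.
config_a = {
    "parameters": {"body-diameter": 14.0, "body-length": 120.0},
    "part_states": {"clip": {"activated": True, "color": "blue"}},
}
config_b = {
    "parameters": {"body-diameter": 18.0, "body-length": 110.0},
    "part_states": {"clip": {"activated": False, "color": "red"}},
}
# All alternatives prepared for the meeting, keyed by configuration name.
configurations = {"config-A": config_a, "config-B": config_b}
```

Comparing two alternatives then amounts to comparing two such records side by side during the meeting.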
With the CAD model and the different configurations, the stakeholder can also automatically generate a report about the validation of the FRs for each design configuration. It shows whether each DP is validated in the proposed CAD model (or configuration). With this report, we assume that the preparation of the design meeting is simplified and the decisions are anticipated. Indeed, based on the report, each participant in the design meeting can better understand the level of validation of the CAD model.
Finally, in the last step (step 7), the design meeting is held using the FRs report and the CAD model's configurations. The designers communicate together around the DMU and can take decisions in order to satisfy the FRs. To do this, each designer can propose modifications of the CAD model and of the DP values. At the end of the design meeting, a new version of the CAD model is consequently defined.

4 Implementation of the approach

This section proposes an implementation of our approach in a CAD system. To this end, we developed a CAD add-on. The chosen CAD system is CATIA V5 and the add-on is developed in Visual Basic. The development of the add-on follows the same steps as the proposed approach. In order to present the CAD add-on, a case study based on the development of a pen is used. This application is only a demonstrator intended to show the viability of the approach and to expose the CAD add-on.
The first step (1), linked to the interface shown in figure 2, is to upload and pick up the FRs and their criteria. By clicking on the corresponding button, the CAD add-on automatically opens an Excel spreadsheet containing the FRs. In this spreadsheet, it is possible to define the FRs of the product and all the criteria related to these functions. Indeed, classifying the design criteria into FRs helps stakeholders understand the meaning of each criterion and its impact on the design of the product.
Once the FRs are uploaded, the second step (2) corresponds to the creation or the importation of the CAD model.
The third step (3) concerns the transformation of the FRs' criteria into DPs and Cs. This step is performed during the CAD modeling process, and it allows new DPs and Cs to be added depending on the developed solutions. To add DPs and Cs, the Excel spreadsheet is opened again and the stakeholder adds the DPs and Cs in their specific columns. We assume in this paper that a C specifies a minimum and a maximum value for a DP depending on the FRs' criteria. Moreover, a column holds the initial value of the DP, which can be changed during the CAD modeling process. A flexibility level is also defined for each DP; it determines how strictly the associated C must be respected.

Fig. 2. First interface of the developed macro

In our ballpoint pen example, FR1 is "permit comfortable maintaining in the hand". This FR is described by criteria based on the anthropometrical characteristics of the hand. These criteria are then transformed into DPs and Cs such as the body diameter of the pen, the button diameter, the body length, etc. The value of the body diameter has to be between 10 mm and 20 mm, with an initial value of 14 mm.
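The spreadsheet-row-to-(DP, C) transformation of the third step can be sketched as follows, using the pen values above. This is an illustrative Python sketch; the column names and the flexibility scale are assumptions, not the actual schema of the authors' spreadsheet.

```python
# One row per FR criterion, as filled in by the stakeholder in the spreadsheet.
rows = [
    {"FR": "permit comfortable maintaining in the hand",
     "criterion": "body-diameter", "min": 10.0, "max": 20.0,
     "initial": 14.0, "flexibility": "low"},
]

def rows_to_parameters(rows):
    """Turn each spreadsheet row into a DP record carrying its C and metadata."""
    params = {}
    for r in rows:
        params[r["criterion"]] = {
            "value": r["initial"],            # initial DP value, editable later
            "constraint": (r["min"], r["max"]),
            "fr": r["FR"],                    # keeps the DP grouped by its FR
            "flexibility": r["flexibility"],  # how strictly the C must hold
        }
    return params

params = rows_to_parameters(rows)
```

Grouping each record under its FR is what later allows the parameters to be presented in functional families.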
Once the third step is performed, the macro automatically implements each DP and C (through an expert check relation) into the CAD model (figure 3). To make each design parameter easy to understand, they are grouped by FR. Thus, using these DPs and Cs, the project stakeholder can design and modify the CAD model automatically. As shown in figure 3-b, the Cs are represented in the CSG tree of the CAD model. As defined earlier, each constraint (C) needs to be checked. More precisely, if the related DP's value lies between the minimum and the maximum value, the constraint (C) is respected and a green light is generated in the CSG tree. If the related DP's value is outside these bounds, the constraint (C) is not respected and a red light is generated in the CSG tree.
In parallel, a second interface has been developed that allows the designer to prepare the design meeting. Using this interface, the stakeholder automatically saves the needed CAD model configurations. As a reminder, each configuration contains the validations he has to perform with the other stakeholders during the design meeting. Therefore, each configuration of the CAD model is defined by the values of the parameters and the property-states of the CAD model's parts. All defined configurations are listed as shown in figure 4. Practically, each configuration is saved in an Excel table as parameters. This table is thereafter used to apply these configurations automatically during the design meetings.
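Applying a saved configuration back onto the parametric model, as the macro does from its Excel table, can be sketched as below. A plain dict stands in for the CATIA parameter set; all names and values are hypothetical.

```python
# Current state of the (simplified) parametric model.
model = {"parameters": {"body-diameter": 14.0},
         "parts": {"clip": {"activated": True}}}

def apply_configuration(model, configuration):
    """Overwrite the DP values and part property-states with the saved ones."""
    model["parameters"].update(configuration["parameters"])
    for part, state in configuration["part_states"].items():
        model["parts"][part].update(state)
    return model

# A configuration previously saved for the meeting.
saved = {"parameters": {"body-diameter": 18.0},
         "part_states": {"clip": {"activated": False}}}
apply_configuration(model, saved)
```

Switching between alternatives during the meeting is then just a matter of applying a different saved record.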

Fig. 3. Implementation of design parameters DPs (a) and the constraints Cs (b) in CAD model

Fig. 4. Second interface of the developed macro

Finally, the interface allows the CAD model to be saved and the design meeting report to be generated automatically. For each configuration, the report shows the values of the DPs and their level of validation with respect to the Cs. During the design meeting, the stakeholders compare the different design configurations and take decisions about the DPs.
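The automatically generated report can be sketched as one row per (configuration, DP) pair, each carrying the value and whether its constraint holds. Illustrative only; the column names and example values are assumptions.

```python
def validation_report(configurations, constraints):
    """Build the meeting report: DP value and validation status per configuration."""
    rows = []
    for cname, params in configurations.items():
        for dp, value in params.items():
            lo, hi = constraints[dp]
            rows.append({"configuration": cname, "dp": dp,
                         "value": value, "valid": lo <= value <= hi})
    return rows

# Two pen configurations checked against the body-diameter constraint [10, 20] mm.
report = validation_report(
    {"config-A": {"body-diameter": 14.0},
     "config-B": {"body-diameter": 22.0}},
    {"body-diameter": (10.0, 20.0)},
)
```

In this example config-A validates the constraint while config-B does not, which is exactly the comparison the stakeholders make during the meeting.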

5 Discussion

The proposed approach and tool provide four ways to be more efficient in design meetings. The first is to work on more design alternatives in the same period of time: preparing the modifications before the meeting, rather than performing them on the fly during it, allows the stakeholders to check more alternatives. The second way is to focus largely on the
essential, i.e. the FRs. In fact, it is always necessary to keep the objectives in mind: it is easy to get lost in details that do not impact the FRs. With this approach, the modifications are always seen in their functional context. The third way is increased communication between stakeholders: giving stakeholders a new intermediary object such as the report opens a new way to discuss. The last way concerns reporting to the management board. In this context, it becomes easier to distribute the work at the end of the meeting. Focusing on the main objective, which is to answer the FRs, the management board can for example propose action plans to reorient the work in case of wrong solutions.
To improve the proposed approach, it could be possible to gather the proposed tool with further related developments into one CATIA module that helps prepare the CAD model for design meetings. Moreover, associating a configuration with functions is a promising idea; in our approach this is straightforward because the design parameters are classified in functional families, so the functions and design parameters related to each configuration can be highlighted. Furthermore, the FRs and DPs could be classified according to viewpoints of the product lifecycle. Finally, we propose to demonstrate the efficiency of our approach in future work, through an experiment in which design groups use the proposed approach in design reviews.

6 Conclusion

Collaborative design meetings are steps of the design process in which different stakeholders check the compliance of the product with the design requirements and evaluate the product to identify drawbacks. They compare different choices and alternatives according to their requirements and then take decisions. The CAD model is used to represent the future product and the different design alternatives in design meetings. It is therefore necessary to prepare this CAD model in advance in order to be more effective and to facilitate collaboration in the design meeting. In this paper, we introduced an approach to prepare CAD models for collaborative design meetings in order to be more efficient. It is based on parametric modelling and on defining design configurations. The concept of configuration is presented as a combination of design parameters. A CAD add-on has been developed that makes it easy to transfer design requirements into the CAD model and to save design configurations.

References

1. Dieter, G. Engineering design, 1991 (McGraw-Hill Inc., USA).
2. Hou, M., Barone, A., Magee, L., and Greenley, M. Comparison of Collaborative Display
Technologies for Team Design Review. In Proceedings of the Human Factors and Ergo-
nomics Society Annual Meeting, Vol. 48, No. 23, September 2004, pp. 2642-2646.
3. Fleche, D., Bluntzer, J. B., Mahdjoub, M., and Sagot, J. C. First Step Toward a Quantitative
Approach to Evaluate Collaborative Tools. Procedia CIRP, 2014, 21, 288-293.
4. Boujut, J. F., and Laureillard, P. A co-operation framework for product–process integration
in engineering design. Design studies, 2002, 23(6), 497-513.
5. Bluntzer, J. B., Ostrosi, E., and Niez, J. Design For Materials: A new integrated approach in
Computer Aided Design. In 26th CIRP Design Conference, 2016.
6. Greenberg, S. Computer-supported cooperative work and groupware, 1991 (Academic Press
Ltd).
7. Mahdjoub, M., Al Khatib, A., Bluntzer, J. B., and Sagot, J. C. Multidisciplinary conver-
gence about “product-use” couple: Intermediary object’s structure. In DS 75-5: Proceedings
of the 19th International Conference on Engineering Design (ICED13) Design For Harmo-
nies, Vol. 5: Design for X, Design to X, Seoul, Korea, 2013.
8. Al Khatib, A., Mahdjoub, M., Bluntzer, J. B., and Sagot, J. C. A Tool Proposition to Sup-
port Multidisciplinary Convergence in Immersive Virtual Environment: Virtusketches. In
Smart Product Engineering, 2013, pp. 795-804 (Springer Berlin Heidelberg).
9. Buisine, S., Besacier, G., Aoussat, A., and Vernier, F. How do interactive tabletop systems
influence collaboration? Computers in Human Behavior, 2012, 28(1), 49-59.
10. Shen, H., Ryall, K., Forlines, C., Esenther, A., Vernier, F. D., Everitt, K., and Tse, E. In-
forming the design of direct-touch tabletops. Computer Graphics and Applications, IEEE,
2006, 26(5), 36-46.
11. Turban, E., Liang, T. P., and Wu, S. P. A framework for adopting collaboration 2.0 tools for
virtual group decision making. Group decision and negotiation, 2011, 20(2), 137-154.
12. McAfee, A. P. Enterprise 2.0: The dawn of emergent collaboration. MIT Sloan management
review, 2006, 47(3), 21.
13. Bower, M., Hedberg, J. G., and Kuswara, A. A framework for Web 2.0 learning design. Ed-
ucational Media International, 2010, 47(3), 177-198.
14. Dolezal, W. R. Success factors for digital mock-ups (DMU) in complex aerospace product
development. Technische Universität München, Genehmigten Dissertation, Germany, 2008.
15. Pahl, G., and Beitz, W. Engineering design: a systematic approach, 2013 (Springer Science
& Business Media).
16. Suh, N.P. Axiomatic Design: Advances and Applications, 2001 (The Oxford Series on Ad-
vanced Manufacturing).
17. Roucoules, L., and Tichkiewitch, S. CoDE: a cooperative design environment—a new gen-
eration of CAD systems. Concurrent engineering, 2000, 8(4), 263-280.
18. Brissaud, D., and Tichkiewitch, S. Product models for life-cycle. CIRP Annals-
Manufacturing Technology, 2001, 50(1), 105-108.
19. Tichkiewitch S., De la CFAO à la conception intégrée, International Journal of CAD/CAM
and Computer Graphics, 1994, 9(5), 609-621.
20. Chapa Kasusky, E. C. Outils et structure pour la coopération formelle et informelle dans un
contexte de conception holonique, 1997 (Doctoral dissertation).
21. G. Thimm, S.G. Lee, Y.-S. Ma, Towards unified modelling of product lifecycles, Com-
puters in Industry, 2006, 57 (4), 331–341.
22. M. Labrousse, and A. Bernard, Modèle FBS enrichi pour la modélisation des processus
d’entreprise, in: Proceedings of the CPI Conference, Marocco, 2003.
23. Noël, F., and Roucoules, L. The PPO design model with respect to digital enterprise tech-
nologies among product life cycle. International Journal of Computer Integrated Manufac-
turing, 2008, 21(2), 139-145.
24. Sudarsan, R., Fenves, S. J., Sriram, R. D., and Wang, F. A product information modeling
framework for product lifecycle management. Computer-aided design, 2005, 37(13), pp.
1399-1411.
25. Friedenthal, S., Moore, A., and Steiner, R. A practical guide to SysML: the systems model-
ing language, 2014 (Morgan Kaufmann).
Applying PLM approach for supporting
collaborations in medical sector: case of
prosthesis implantation

Thanh-Nghi Ngo1*, Farouk Belkadi1, Alain Bernard1
1 Ecole Centrale de Nantes, IRCCyN UMR CNRS 6597, France
* E-mail: thanh-nghi.ngo@irccyn.ec-nantes.fr

Abstract: The medical sector is a wide and complex field that needs continuous
improvements in its efficiency. This paper deals with the problem of collaboration
and the data sharing in the medical field. The focus is on the case of treatment
processes requiring prosthesis implant. Several actors from both medical and in-
dustrial sectors are involved in these processes. They need to collaborate during
the whole process of prosthesis creation and implantation. From this perspective,
this paper presents a discussion about the advantages of current PLM-based approaches to deal with these issues. A first version of the conceptual approach is also proposed.

Keywords: Product Lifecycle Management, Prosthesis, Medical sector, Collaboration, Information sharing.

1. Introduction

Nowadays, as the quality of life has improved, the medical sector attracts growing public attention, and how to improve efficiency in the medical field has become a critical issue. In general, the medical sector is a wide and complex field. This research work, however, focuses on patient treatment processes requiring a prosthesis implant.
The development of a prosthesis follows the main classical steps of the design and manufacturing processes of mechanical products. However, some particularities should be pointed out in this context. First, the product (prosthesis) is completely linked to its final user (patient): generally, any prosthesis is unique and its geometry depends on the morphology of the patient. Second, all decisions taken about the prosthesis lifecycle depend on the concerned patient's disease data. The disease data is stored and managed in the patient health folder, whereas the prosthesis data is stored in various technical documents.

© Springer International Publishing AG 2017 871


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_87

Thus, the prosthesis lifecycle management is strongly linked to the patient disease lifecycle, and several interactions between heterogeneous experts coming from both the medical and industrial sectors are required.
This paper describes the first results of a study that aims to develop a new PLM approach for the management of the prosthesis lifecycle. The main idea is to adapt approaches currently used in the industrial domain by taking into account the specificities of the product and its lifecycle described above. The paper indicates some limitations of the patient treatment processes in terms of data sharing and collaboration between the different actors involved in the treatment process. Based on this analysis, it discusses the main advantages of the PLM approach with respect to these issues. The main conceptual pillars of the PLM approach are then proposed.
The first part of this paper gives a general description of the medical sector considered in this research work and indicates the need for collaboration and information sharing. The second part focuses on the definitions and some applications of PLM in industry and in the medical sector. The third part proposes the patient treatment process and the relative actors; it also indicates the problems in this context and proposes the PLM approach to solve them. Finally, the paper exposes the conclusions and future work.

2. Collaboration and data sharing issues in the medical sector

The medical sector is a wide field that can be divided into four main areas: diagnostic services, treatment services, healthcare services and medicine. This research focuses on patient treatment processes requiring a prosthesis implant.
In the specific case when a prosthesis implant is needed, the treatment process of a patient includes many sub-processes, such as patient data acquisition, prosthesis design, manufacturing, surgery and recovery. The accuracy and completion time of these sub-processes have a strong influence on the success of the treatment process. For instance, a better-quality prosthesis reduces the incidents that may occur during the surgery process. Post-treatment and its related costs can also be reduced if the geometry of the prosthesis is completely adapted to the patient's morphology; the patient then recovers quickly after surgery [1, 2]. Therefore, enhancing the accuracy of all sub-processes of the patient treatment process and shortening the development time of the prosthesis are urgent problems that can find new solutions in the new generation of ICT (Information and Communications Technology).
Many factors influence the accuracy and completion time of the patient treatment process, among them the collaboration, information sharing and data integration between all actors and sub-processes during the patient treatment process. However, there is no optimal method to achieve efficient collaboration between actors. They often use ad-hoc networks or email
to share data, or the data is even archived on a CD/DVD and then supplied to the related sub-processes. These data sharing methods usually lead to potential errors and delays [3].
Therefore, the main issue in this research work is to propose new solutions based on the PLM concept to improve the collaboration and information sharing between all actors, including doctors, radiologists, design engineers, manufacturing engineers, surgeons, administration services, etc.
The main assumption is that the concepts of the PLM approach can bring several advantages to this research question. The main question is then how to adapt these concepts to deal with the specificity of the studied domain. The next section presents an overview of the concept of PLM.

3. Some PLM applications in industry and in medical sector

PLM is defined as a strategic business approach that applies a consistent set of business solutions to support the collaborative creation, management, dissemination, and use of product definition information across the extended enterprise from concept to end of life, integrating people, processes, and information [4]. It covers and manages all product information, engineering processes and applications along the different phases of the product lifecycle [5, 6].
In general, PLM is used in companies of all sizes, ranging from large corporations to small and medium enterprises. It is applied across a wide range of industrial sectors such as the automotive and aerospace industries, as well as in research, education, medical, military and other governmental organizations.
The first generation of PLM was primarily used in the automotive and aerospace industries, because the products of these areas have long lifecycles, high complexity and nearly no possibility of physical prototyping [7-9].
Several applications of the PLM approach can be found in the literature. The authors of [10, 11] presented a modelling framework to support a new PLM approach improving information sharing in collaborative design at the conceptual level. In another case, Tang and Qian [12] proposed a PLM framework to connect an automotive OEM and its suppliers in collaboration.
Regarding the application of PLM in SMEs (small and medium-sized enterprises), the authors of [13, 14] proposed a meta-model that can be applied to SMEs sharing the same product data problematic, although there are many difficulties in implementing PLM systems in SMEs [15].
Besides the automotive and aerospace industries, PLM has now been widely adopted across the manufacturing world, including the pharmaceutical domain. Nevertheless, it has not been widely set up in the medical sector, except for prosthesis design, manufacturing and healthcare companies [9].
One of the rare applications is the proposition of Allanic et al. [9, 16], who constructed a meta-model for PLM in bio-medical imaging (BMI). The authors used the
workflow module to manage the processing of BMI data between lifecycle phases. Workflows are also used to launch the data check processes and data calculations, and to follow how data has been computed. The authors proposed a BMI data model that can store and manage all BMI data from the specification study phase to the published result phase. This data model has been implemented in a PLM software tool to verify the results.

4. Towards a PLM approach

The main issue in this research work is to improve the collaboration and information sharing between all actors, including radiologists, medical doctors, design engineers, manufacturing engineers, surgeons, etc.
PLM has proven its large capabilities to deal with this kind of problematic. The implementation of any PLM approach requires a crucial step of business process analysis and role identification in order to identify the main data to be connected and the workflows to be automated. The next sub-section proposes a global description of the patient treatment process in the case of a prosthesis implant.

4.1. The patient treatment process

The patient treatment process requiring a prosthesis implant is a complex process that includes many sub-processes with different actors [17]. It can be summarized as follows.
After interviewing the patient and conducting some initial checks, the radiologist scans the patient using 3D computed tomography (CT) or magnetic resonance imaging (MRI) [18, 19]. The scanning data is stored and sent to the medical doctor, who makes the diagnosis and the treatment plan. The scanning data is then exported in DICOM format so as to be suitable as input for the MIMICS software, which converts it into the highest possible quality STL data files. The STL files are then imported into the CAD software [20, 21]. At this stage, the surgery plan is simulated using the software tools. Once all specifications are satisfied, the design engineer begins to design the prosthesis.
The final design model is then exported to a high-quality STL file for the prosthesis manufacturing process. Depending on the kind of prosthesis, it can be manufactured with different technologies (3D printing, CNC…). Before going into surgery, the prosthesis must be approved by the medical doctors and surgeons [18]. Then the surgical process is carried out on the patient. The final process is rehabilitation and recovery.
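The sub-processes above form an ordered workflow, which is the kind of process model a PLM workflow module would track. The sketch below is illustrative; the stage names follow the description above, while the ordering check itself is an assumption about how such a workflow could be enforced.

```python
# Ordered stages of the treatment process, as described in the text.
STAGES = ["data acquisition", "diagnosis", "surgery planning",
          "prosthesis design", "manufacturing", "approval",
          "surgery", "rehabilitation"]

def advance(progress, stage):
    """Append a completed stage, enforcing that stages finish in order."""
    expected = STAGES[len(progress)]
    if stage != expected:
        raise ValueError(f"expected {expected!r}, got {stage!r}")
    return progress + [stage]

progress = advance([], "data acquisition")
progress = advance(progress, "diagnosis")
```

Attempting to mark, say, "manufacturing" as done at this point would raise an error, since the intermediate stages have not been completed.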
4.2. Need for lifecycle integration

After the prosthesis is implanted in the patient, three cases may occur:
• Case 1: The patient adapts well to the new prosthesis. The prosthesis will remain in place for the rest of the patient's lifetime.
• Case 2: This case concerns temporary implants used by the doctors to fix some problems. In this work, such implants are considered as a particular kind of temporary prosthesis. After recovery, the surgeons perform another surgery to remove the prosthesis, which is then recycled at its last life stage.
• Case 3: During the recovery process, some problems may occur: the patient has an accident in daily life, the prosthesis is unsuitable for the patient's physical condition or has a defective design, or the prosthesis is replaced according to a predefined calendar. In this case, the patient is treated again with a new prosthesis.
Of the three cases above, the patient treatment process repeats only in case 3. In this case, two lifecycles exist in parallel: the patient disease lifecycle and the prosthesis lifecycle (Fig. 1).

Fig. 1. Linking between the patient disease lifecycle and the prosthesis lifecycle

This research presents the global approach described at the conceptual level. It proposes a management solution in which the prosthesis lifecycle is integrated with the patient disease lifecycle through the identification of different links between these two lifecycles (Fig. 1). These links are used to guide the identification of all collaborative situations, data exchanges, processes, PLM functionalities, etc.
Fig. 2 shows an example clarifying the links between two sub-processes: prosthesis implantation and the manufacturing process. Link 1 is used in non-emergency situations: the date of surgery is given to the manufacturing process.
The prosthesis order is then handled by the supplier with a high priority. On the contrary, link 2 applies in emergency situations: the prosthesis is manufactured as soon as possible, so that the date of delivery is considered in the definition of the surgery date.
Fig. 2. The links between prosthesis implantation and the manufacturing process.
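The two scheduling links can be sketched as a small decision rule. This is an illustrative assumption about how the links could be operationalized (here with day offsets from today), not a mechanism described by the authors.

```python
def surgery_date(requested_day, delivery_day, emergency):
    """Pick the surgery day according to the two links of Fig. 2.

    Link (1), non-emergency: the chosen surgery date drives a
    high-priority prosthesis order, so it stands as requested.
    Link (2), emergency: the prosthesis is made as soon as possible,
    and the earliest delivery day drives the surgery date.
    """
    if emergency:
        return max(requested_day, delivery_day)
    return requested_day
```

For example, a non-emergency surgery planned for day 30 keeps its date even if delivery could happen on day 10, whereas an emergency surgery wanted on day 5 must wait for a delivery on day 10.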

4.3. Proposal of PLM Hub

Fig. 3. Connecting the actors, patient treatment data and prosthesis data through PLM
The previous sections indicated that collaboration and data sharing play an important role not only in enhancing the accuracy and quality but also in shortening the completion time of the whole process.
Given the problems above, this paper proposes a PLM approach to enhance the efficiency of collaboration and information sharing between the various business sub-processes and the different actors in the treatment process. All actors, patient treatment data and prosthesis data will be connected through the PLM hub shown in Fig. 3.
The PLM approach will propose a common repository associated with several collaborative functionalities such as notification, lightweight model viewing, automatic update of connected information, etc. The figure illustrates the need to connect different business processes and heterogeneous tools used in different business sectors. This implies complex interoperability issues to be resolved as a part of this research work.
In order to implement the model proposed in Fig. 3, another very important issue to be considered is data access management. Depending on the role of each actor, the system will supply different data access rights: viewing data, validating data, tracking progress, modifying data, or only giving comments.
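Such role-based access rights can be sketched as a simple mapping. The roles and the right names follow the text; the particular role-to-rights assignment below is an illustrative assumption, not a policy defined by the authors.

```python
# Which rights each role holds in the PLM hub (illustrative mapping).
ACCESS_RIGHTS = {
    "radiologist":     {"view", "modify", "track"},
    "medical doctor":  {"view", "validate", "comment", "track"},
    "design engineer": {"view", "modify", "track"},
    "surgeon":         {"view", "validate", "comment"},
}

def can(role, right):
    """Return True if the given role is granted the given access right."""
    return right in ACCESS_RIGHTS.get(role, set())
```

The hub would consult such a table before serving each request, so that, for instance, a surgeon can validate a prosthesis model but not modify its geometry.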

5. Conclusion

The paper pointed out a critical problem that can strongly impact process performance in the medical sector: the information exchange and collaboration between actors. It also identified the main problems to be solved in developing an appropriate PLM approach. A collaborative model based on the PLM approach was then presented, connecting all actors and processes through a PLM hub. All actors can send and receive information and data through this hub, which minimizes errors during data exchange, improves prosthesis quality and shortens the duration of the whole process. Besides that, actors can monitor the progress of other processes and can therefore plan their own work proactively. In addition, the paper also proposes links between the patient treatment lifecycle and the prosthesis lifecycle.
Future research work will continue to develop this initial proposition. We will analyse in detail the data exchange tools and the collaboration in the treatment processes. Knowledge models will be built to describe, in an organized way, all the concepts and constraints involved in the relationships between the actors and sub-processes. Specific processes based on the meta-modeling framework will then be developed to demonstrate the communication between the different processes and actors.
878 T.-N. Ngo et al.

Section 5.6
Surveying, Mapping and GIS Techniques
3D Coastal Monitoring from very dense UAV-Based Photogrammetric Point Clouds

Fernando J. AGUILAR 1,*, Ismael FERNÁNDEZ 2, Juan A. CASANOVA 3, Francisco J. RAMOS 3, Manuel A. AGUILAR 1, José L. BLANCO 1 and José C. MORENO 4

1 University of Almería, Department of Engineering, Ctra. de Sacramento s/n, La Cañada de San Urbano, Almería 04120, Spain
2 Servicios Técnicos NADIR S.L., C/ Cortina del Muelle 5, Málaga 29015, Spain
3 SANDO S.A., Avda Ortega y Gasset 112, Málaga 29006, Spain
4 University of Almería, Department of Informatics, Ctra. de Sacramento s/n, La Cañada de San Urbano, Almería 04120, Spain
* Corresponding author. Tel.: +34-950-015339; fax: +34-950-015491. E-mail address: faguilar@ual.es

Abstract In the present study, the potential use of unmanned aerial vehicles (UAVs) as a platform to flexibly obtain sequences of images along coastal areas, from which high quality SfM-MVS based geospatial data can be produced, is tested. A flight campaign was conducted over a coastal test site covering an area of around 4 ha near Málaga (Spain). Images were taken on 1st December 2015 at a height above the ground ranging from 113.5 to 118 meters using a Sony α6000® consumer camera mounted on a UFOCAM XXL v2® octocopter. 40 RTK-GPS surveyed ground points were evenly distributed over the whole working area. Furthermore, a very dense and accurate point cloud was collected using a FARO Focus 3D X-130 terrestrial laser scanner (TLS). The photogrammetric block was computed using two widely known SfM-MVS commercial software implementations, Inpho UASMaster® and PhotoScan Professional®. PhotoScan provided a highly accurate bundle adjustment with errors of 1.5 cm, 1.5 cm and 6.1 cm along the X, Y and Z axes respectively. The triangulation errors computed from UASMaster turned out to be slightly poorer along the Z axis. Accordingly, the very high resolution surface model built up from the corresponding photogrammetric point cloud depicted higher Z-differences with respect to the reference TLS-derived surface model in the case of the UASMaster workflow. Summing up, the high degree of automation and efficient data acquisition provided by UAV-based digital photogrammetry makes this approach competitive and useful for high resolution 3D coastal mapping.

Keywords: SfM-MVS; UAV; Photogrammetric point cloud; Coastal monitoring; 3D coastal mapping.

© Springer International Publishing AG 2017 881


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_88
882 F.J. Aguilar et al.

1 Introduction

Mediterranean coastal areas are being progressively degraded, mainly because they withstand high pressure linked to an increasing economic activity that provides large profits from the tourist industry. This process is causing the emergence of new infrastructures (harbors, roads, urbanizations, engineered structures, etc.) which seriously affect the coastal environment [1]. In this sense, it is worth noting that urban development of coastal areas and resource use conflicts spawn environmental degradation and increasing hazard vulnerability [2]. As a result, some specific programs have been developed for the Mediterranean Sea (e.g. the United Nations Environment Programme/Mediterranean Action Plan) in order to study the degradation and conservation processes along Mediterranean coastal areas.
Among coastal environments, sandy beaches constitute the most dynamic natural system as well as the most exposed to morphological variations. Furthermore, they are usually under a large anthropic influence. Sandy beaches behave differently depending on the spatial and temporal scales considered. On one hand, seasonal changes along shoreline cross-profiles may be observed from winter to summer, and also due to specific atmospheric events such as storm surges [3]. On the other hand, the general trend of coastal evolution can be assessed by means of a long-term evolution study [4].
Historically, the shoreline has been used as the main indicator of coastal dynamics [4, 5]. Hence the geomatics techniques used for its extraction have been extended to a wide set of fields such as research, engineering, management, land-use planning and environmental issues. Notice that long coastlines and dynamic processes make the application of traditional surveying difficult, but recent advances in the geomatics discipline allow more effective methodologies to be investigated.
The development of techniques that make it possible to efficiently obtain high accuracy Digital Surface Models (DSM), such as digital aerial photogrammetry or airborne LiDAR technology, has pointed to datum-coordinated shorelines, based on either tidal or vertical reference datums, as the most suitable shoreline indicator [5], since a shoreline extracted from a stable vertical datum can be treated as a reference shoreline and used to accurately compute shoreline change rates at local scale, thus allowing the reliable simulation of coastal erosion. Moreover, coastal geomorphology requires accurate topographic data of the so-called beach systems to carry out a number of simulations related to flooding phenomena and assessment of the coastal sediment budget [6]. In this regard, the emergence of Structure from Motion (SfM) with Multi-View Stereo (MVS) in recent years has revolutionized 3D topographic surveys by significantly boosting efficiency in collecting and processing data. Coming from the fields of computer vision and photogrammetry, SfM-MVS can produce, under certain conditions, very dense and accurate 3D point clouds of comparable quality to existing laser-based methods (e.g. Airborne Laser Scanning (ALS) or even Terrestrial Laser Scanning (TLS)) [7]. In fact, the image-based approach [8], supported by recent developments in computer vision [9], is helping to provide additional automated methods for both image orientation and 3D reconstruction at different scales [10].
The main goal of the present study was to test the potential use of unmanned aerial vehicles (UAVs) as a platform to flexibly obtain sequences of images along coastal areas from which high quality, dense SfM-MVS based geospatial data can be efficiently produced.

2 Study site and dataset

A UAV-based flight campaign was conducted over a coastal test site covering a target area of around 4 ha near the city of Málaga, in the south of Spain (Figure 1). The working area consisted of a sandy Mediterranean beach (foreshore) and a shrubland zone (backshore). The flight pattern consisted of three strips comprising 97 vertical (nadir) images with forward and side overlaps of 90% and 50% respectively.

Fig 1. Location of the coastal study site.

Images were taken on 1st December 2015 at a height above the ground ranging from 113.5 to 118 meters using a Sony α6000® consumer camera mounted on a UFOCAM XXL v2® octocopter whose acquisition rate was automatically set to one shot per second. The highly redundant set of images acquired facilitated the application of the SfM-MVS approach. The camera, equipped with a 24.3 megapixel APS-C CMOS sensor (4.19 μm/pixel), was set at 30 mm focal length, therefore capturing RGB images at approximately 1.5 cm Ground Sample Distance (GSD). 40 ground points, consisting of different targets (marks painted on the ground, artificial targets, etc.), were evenly distributed over the whole working area and surveyed by applying the Real Time Kinematic (RTK) GPS technique with a pair of Trimble R6 base and rover receivers (Figure 2).
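The reported GSD follows from the pinhole scale relation GSD = pixel size × flying height / focal length; a minimal check using the camera parameters quoted above:

```python
def ground_sample_distance(pixel_size_m: float, focal_m: float, height_m: float) -> float:
    """GSD (m/pixel) from the pinhole scale relation: GSD = pixel_size * H / f."""
    return pixel_size_m * height_m / focal_m

# Setup reported in the paper: 4.19 um pixels, 30 mm focal length,
# flying heights between 113.5 m and 118 m above the ground.
gsd_low = ground_sample_distance(4.19e-6, 0.030, 113.5)
gsd_high = ground_sample_distance(4.19e-6, 0.030, 118.0)
print(f"GSD range: {gsd_low * 100:.2f} - {gsd_high * 100:.2f} cm/pixel")
```

This yields roughly 1.6 cm/pixel, in line with the approximately 1.5 cm GSD reported in the text.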

Fig 2. Orthomosaic (GSD = 2 cm) depicting ground point distribution and artificial targets. 8 (top), 24 (bottom-left) and 16 (bottom-right) GCP configurations.

A range of 8 to 24 ground points (Figure 2) were employed as Ground Control Points (GCPs), manually marked on the digital images, to carry out the triangulation and bundle adjustment process and obtain the external orientation parameters of each photogram belonging to the photogrammetric block. This allowed the transformation of the structure-from-motion point cloud into real world coordinates (UTM 30N ETRS89 and orthometric heights EGM08 REDNAP). They were also used to carry out a camera self-calibration process, excluding the focal length (fixed at 30 mm), aimed at optimizing the camera model (principal point offset and radial and tangential distortion). The remaining ground points were used as independent check points (ICPs) for assessing the 3D accuracy of the bundle adjustment. A very dense and accurate point cloud, representing a proper ground truth for comparison purposes, was also obtained by co-registering four independent point clouds collected with a FARO Focus 3D X-130 terrestrial laser scanner (TLS). This reference TLS point cloud was georeferenced by applying a 3D conformal coordinate transformation based on the surveyed RTK-GPS world coordinates of 6 spheres evenly located over the whole working area. Finally, the photogrammetric block was computed in the same way with each of the two SfM-MVS commercial software implementations, Inpho UASMaster® and PhotoScan Professional®.

3 Results and discussion

Regarding the photogrammetric triangulation accuracy assessment results, computed at ICPs and using only 8 GCPs evenly distributed over the test site (corners and sides of the photogrammetric block; see Figure 2, top), the PhotoScan software provided a highly accurate bundle adjustment with errors (measured as root mean square error, rmse) of 1.5 cm, 1.5 cm and 6.1 cm along the X, Y and Z axes respectively. The triangulation errors computed from the UASMaster bundle adjustment turned out to be slightly poorer along the Z axis, with rmse_z = 10.6 cm, while performing very similarly in terms of planimetric accuracy (rmse_x = 1.3 cm and rmse_y = 1.6 cm). Table 1 shows that the residuals of the bundle adjustment transformation measured at ICPs decreased when increasing the number of GCPs. The accuracy results at this first stage (SfM phase) for PhotoScan and UASMaster turned out to be quite similar only when the number of GCPs was equal to or higher than 16. Note that the output of the SfM stage is a sparse and unscaled 3D point cloud in arbitrary units, along with camera models and poses. At least three GCPs with XYZ coordinates are needed to scale and georeference the SfM-derived point cloud by means of a seven-parameter linear similarity transformation [11]. Therefore, and unlike conventional photogrammetry, each photograph does not need to contain visible GCPs. Nowadays, it is even possible to undertake so-called “direct” georeferencing, thus avoiding photogrammetric ground control, through known attitude and camera positions given by RTK-GPS measurements and an Inertial Measurement Unit [12].
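The seven-parameter similarity transformation mentioned above (scale, rotation, translation) can be estimated in closed form from three or more non-collinear GCPs. A minimal sketch using the Horn/Umeyama SVD-based solution follows; this is a generic implementation for illustration, not the routine used by the software packages tested:

```python
import numpy as np

def similarity_transform(src: np.ndarray, dst: np.ndarray):
    """Estimate scale s, rotation R and translation t minimizing
    sum ||dst_i - (s * R @ src_i + t)||^2 (Horn/Umeyama closed form).
    src, dst: (N, 3) arrays of matching points, N >= 3, not collinear."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    H = B.T @ A / len(src)                 # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:          # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) * len(src) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

The same closed form also applies to the 3D conformal transformation used to georeference the TLS point cloud from the six surveyed spheres.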

Table 1. Accuracy results computed at the ground points not used in the photogrammetric bundle adjustment computation (32, 24 and 16 independent check points respectively).

                          rmse_x (cm)   rmse_y (cm)   rmse_z (cm)
8 GCPs     PhotoScan®         1.5           1.5           6.1
           UASMaster®         1.3           1.6          10.6
16 GCPs    PhotoScan®         1.2           1.2           3.7
           UASMaster®         1.1           0.9           3.9
24 GCPs    PhotoScan®         1.2           1.0           3.4
           UASMaster®         1.2           0.9           3.8
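The per-axis rmse figures of Table 1 are standard summaries of the 3D residuals measured at the independent check points; a minimal sketch of the computation:

```python
import math

def rmse_per_axis(residuals):
    """Per-axis RMSE from (dx, dy, dz) residuals at the check points.

    `residuals` is a sequence of 3-tuples (same unit as the result);
    returns (rmse_x, rmse_y, rmse_z)."""
    n = len(residuals)
    return tuple(
        math.sqrt(sum(r[axis] ** 2 for r in residuals) / n)
        for axis in range(3)
    )
```

With the ICP residuals of each software/GCP configuration as input, this reproduces the entries of Table 1.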

With regard to the second stage, i.e. point cloud densification or the MVS phase, which is closely related to 3D surface reconstruction, MVS algorithms usually increase the density of the initially sparse point cloud by at least two orders of magnitude applying, among other approaches, depth-map merging methods [7].

Table 2. Vertical accuracy results computed at N = 2,041,997 points for the digital surface models (5 cm grid spacing) provided by the SfM-MVS commercial software implementations tested.

                                 Z-differences (cm)   Z-differences (cm)
                                 TLS – PhotoScan      TLS – UASMaster
8 GCPs     Mean error                  -1.5                -11.0
           Median                      -1.5                 -8.8
           Standard deviation           4.1                  9.0
16 GCPs    Mean error                  -3.3                 -1.0
           Median                      -3.1                  1.5
           Standard deviation           3.1                  8.8
24 GCPs    Mean error                  -2.7                 -1.6
           Median                      -2.3                  0.7
           Standard deviation           3.1                  8.8
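The statistics of Table 2 are plain summaries of the per-node Z-differences between the reference TLS surface and each photogrammetric DSM. A minimal sketch, assuming the two grids have already been co-located so that node i of one surface corresponds to node i of the other:

```python
import statistics

def dsm_difference_stats(z_ref, z_test):
    """Mean, median and sample standard deviation of Z-differences
    (reference minus test surface) at co-located grid nodes."""
    diffs = [a - b for a, b in zip(z_ref, z_test)]
    return (statistics.mean(diffs),     # systematic error
            statistics.median(diffs),
            statistics.stdev(diffs))    # random error
```

The mean captures the systematic (bias) component discussed below, while the standard deviation captures the random component.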

The very high resolution DSM built up from the corresponding photogrammetric point cloud (5 cm grid spacing), strictly computed over the sandy beach land cover and using only eight GCPs, depicted higher Z-differences with respect to the reference TLS-derived surface model (ground truth) in the case of the UASMaster workflow, yielding a non-negligible systematic error of -11 cm (given as the mean of the Z_TLS – Z_UASMaster differences) and a random error, measured as standard deviation, of 9 cm (Table 2). Notice that, on the contrary, the mean error computed for the Z-differences between the TLS data and the PhotoScan-derived DSM took a value of -1.5 cm, also presenting a standard deviation of 4.1 cm, significantly lower than that calculated in the case of UASMaster (Table 2). Furthermore, in the case of the UASMaster approach, the standard deviation of the Z-differences remained fairly steady when increasing the number of GCPs on which the photogrammetric block was computed (SfM stage). In this sense, there was little improvement in random error from adding more GCPs to compute the photogrammetric triangulation and the initial sparse point cloud. Yet, the systematic error was clearly lowered after increasing the number of GCPs constraining the photogrammetric block. The PhotoScan software performed better and showed a more stable vertical error regardless of the number of GCPs used to adjust the photogrammetric block.
The corresponding histograms of Z-differences, given as TLS minus the photogrammetrically derived point cloud, both for PhotoScan and UASMaster with eight GCPs, are shown in Figure 3. It is important to highlight that the PhotoScan vertical residuals fitted a Gaussian distribution (overlaid on the histogram) better than the UASMaster ones, pointing to the better performance of the MVS algorithm implemented in the PhotoScan software. This finding was clearly corroborated by plotting the 2D spatial distribution of residuals (Figure 4), where a sharp mosaicking effect can be made out in the case of the UASMaster-derived DSM.

Fig 3. Histograms of Z-differences within the sandy beach land cover, given as TLS point cloud minus PhotoScan-derived DSM (left) and UASMaster-derived DSM (right) (5 cm grid spacing).

Fig 4. Z-differences (meters) within the sandy beach land cover given as PhotoScan (left) and
UASMaster (right) point cloud (both based on eight GCPs) minus TLS point cloud.

4 Conclusions

The approach proposed in this work integrates Structure-from-Motion (SfM) and Multi-View Stereo (MVS) algorithms and, in contrast to traditional photogrammetry techniques, requires little expertise and few control measurements; moreover, processing is practically automated. Although the absolute accuracy of TLS point clouds is superior to SfM-MVS photogrammetry when working over a range of several meters, the high degree of automation and efficient data acquisition provided by UAV-based SfM-MVS digital photogrammetry makes this approach extremely competitive and useful for high resolution 3D coastal mapping. It is worth noting that the quasi-flat surface surveyed in this work would not be the best 3D surface on which to perform camera auto-calibration. In this sense, a pre-flight camera calibration would be required in order to improve the current results by accurately modeling the inner camera geometry.
Among other added-value geospatial products provided by SfM-MVS, we can highlight the 3D photorealistic models made up of high quality 3D textured triangular meshes, ready to be 3D printed or inserted in 3D immersive environments.

Acknowledgments The research work reported here was made possible through the research project 3DCOAST, funded by “Corporación Tecnológica de Andalucía” (Regional Government of Andalusia) and the Spanish company SANDO S.A. It forms part of the International Campus CEIMAR and the 3DLAB research lines (UNAM13-1E-1191, Spanish Government).

References

1. Suárez de Vivero J.L. and Rodríguez Mateos J.C. Coastal crisis: The failure of coastal management in the Spanish Mediterranean region. Coastal Management, 2005, 33(2), 197-214.
2. Mills J.P., Buckley S.J., Mitchell H.L., Clarke P.J. and Edwards S.J. A geomatics data integration technique for coastal change monitoring. Earth Surface Processes and Landforms, 2005, 30(6), 651-664.
3. Hernández L., Alonso I., Sánchez-Pérez I., Alcántara-Carrió J. and Montesdeoca I. Shortage of sediments in the Maspalomas dune field (Gran Canaria, Canary Islands) deduced from analysis of aerial photographs, foraminiferal content, and sediment transport trends. Journal of Coastal Research, 2007, 23(4), 993-999.
4. Douglas B.C. and Crowell M. Long-term shoreline position prediction and error propagation. Journal of Coastal Research, 2000, 16(1), 145-152.
4. Moore L.J. Shoreline mapping techniques. Journal of Coastal Research, 2000, 16(1), 111-124.
5. Fernández I., Aguilar F.J., Aguilar M.A., Pérez J. and Arenas A. A new, robust, and accurate method to extract tide-coordinated shorelines from coastal elevation models. Journal of Coastal Research, 2012, 28(3), 683-699.
6. Mancini F., Dubbini M., Gattelli M., Stecchi F., Fabbri S. and Gabbianelli G. Using unmanned aerial vehicles (UAV) for high-resolution reconstruction of topography: the structure from motion approach on coastal environments. Remote Sensing, 2013, 5, 6880-6898.
7. Smith M.W., Carrivick J.L. and Quincey D.J. Structure from motion photogrammetry in physical geography. Progress in Physical Geography, 2016, 40(2), 247-275.
8. Remondino F. and El-Hakim S. Image-based 3D modelling: a review. Photogrammetric Record, 2006, 21(115), 269-291.
9. Furukawa Y. and Ponce J. Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(8), 1362-1376.
10. Remondino F., Grazia M., Nocerino E., Menna F. and Nex F. State of the art in high density image matching. Photogrammetric Record, 2014, 29(146), 144-166.
11. James M.R. and Robson S. Straightforward reconstruction of 3D surfaces and topography with a camera: Accuracy and geoscience application. Journal of Geophysical Research: Earth Surface, 2012, 117(F03017), doi:10.1029/2011JF002289.
12. Colomina I. and Molina P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 2014, 92, 79-97.
Section 5.7
Building Information Modelling
BiMov: BIM-Based Indoor Path Planning
Ahmed HAMIEH*, Dominique DENEUX, Christian TAHON

LAMIH, UMR CNRS 8201, University of Valenciennes and Hainaut-Cambrésis, Le Mont-Houy, 59313 Valenciennes Cedex 9, France
* Corresponding author. Tel.: +33 (06) 60125892;
E-mail address: ahmed.hamieh@etu.univ-valenciennes.fr

Abstract: Indoor path planning in a building means determining a short practicable route between two distant inner spaces, through other spaces and passages such as doors or stairs, while avoiding collisions with obstacles like walls or equipment. This paper presents an original indoor path planning method called BiMov, based on a BIM (Building Information Model). The analysis process involves several phases. First, all the possible indoor paths across the containers (spaces) are algorithmically determined based on a BIM represented by an IFC (Industry Foundation Classes) file, which characterizes a rather stable situation. In a second phase, the number of paths is potentially reduced depending on the kind of MOoP (Mobile Object or Person) considered, which can be a person, disabled or not, a handling machine, a mobile robot, or a piece of bulky equipment. In a third phase, the paths within the spaces are possibly refined depending on their contents, which may be affected by the presence of machinery or restricted areas. In a fourth phase, the number of paths is again optionally reduced, depending on the real-time or planned status of the building's spaces and passages, i.e. whether they are conjuncturally accessible or not. The paper emphasizes phases one and two.
Keywords: BIM, IFC, Indoor Path Planning, Building, Navigation.

1 Introduction
Automatic determination of paths inside a building is of major interest for various Architecture, Engineering and Construction (AEC) applications [1]: in architectural design, to help study and validate emergency evacuation scenarios [2]; during the construction phase, to guide workers, equipment and trucks on the worksite [3], [4]; during the exploitation phase, to inform visitors about their environment in real time [5] or to provide path planning support to maintenance engineers [6]. To prepare navigation, it is necessary to consider a model representing the structural elements of the building (bounded spaces, columns, gates and doors) and to assess their contribution to the navigation possibilities (space: public/private; obstacle: fixed/mobile; etc.). Compared to traditional CAD technology, a BIM model is capable of providing both geometric and rich
© Springer International Publishing AG 2017 891


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_89
892 A. Hamieh et al.

semantic information of building components, as well as their mutual relationships, to support lifecycle data sharing. As a major data exchange standard for BIM, the IFC standard, led by buildingSMART [7], plays a crucial role in this process. The IFC format models all types of AEC project information, from tangible components such as walls, doors, beams and slabs, to abstract concepts such as space, geometry and material. It also includes project costs, schedules and organizations. The IFC standard thus defines a common format on which path planning can be based [7].
The paper first justifies the interest of automated indoor path planning and provides a short review of this challenging topic. After that, a general overview of the different phases of the BiMov analysis process is given. The paper then focuses on the two steps developed so far, namely container analysis and MOoP (Mobile Object or Person) analysis.

2 Related work
Many path planning methods have been developed in computer science and
robotics in the past 30 years [1]. Today, with the growing development of BIM in
the AEC industry, several approaches can be found that are based on a BIM model
to generate an indoor or outdoor navigation model. Choi et al. [2] developed an
automated system called InSightBIM-Evacuation, to help designers and owners
check the evacuation regulation compliance of BIM data. Li et al. [8] proposed an
“environment aware beacon deployment” algorithm coupled with BIM, to support
a sequence based localization schema for locating first responders and trapped
occupants in a building fire emergency scene, where BIM is integrated to provide
the geometric information of the sensing area and evaluate the sensor deployment
effort. Lin et al. [1] developed a method to cope with path planning in 3D indoor
spaces based on an IFC file, consisting of three main steps: (1) extracting both
geometric and semantic information of the building components from the IFC
data; (2) sampling and mapping the extracted information into a planar grid; (3)
applying FMM (Fast Marching Method) to find the shortest path in the grid. In
this approach, the authors proposed new Psets (property sets) to extend the IFC
schema and enhance its semantics. Taneja et al. [9] developed algorithms to
automatically generate navigation models from IFC data. They identified 3 types
of navigation models: one based on the median of each space (centerline network),
a second one based on the dimensions of building components (metric network)
and a third one based on a 2D grid of each storey. Koch et al. [6] developed an
application of augmented reality based on BIM and indoor markers like exit signs,
fire extinguishers, etc. Their application aims to help the maintenance workers
find a possible path. Some authors attempted to combine and extend BIM and
CityGML. This 3D city modeling standard, in its finest level of detail (LOD4), makes it possible to represent the interior of buildings with their geometry, semantics, topology and appearance. Isikdag et al. [10] developed a BIM Indoor-Oriented Data Model (BO-MI). They transferred a BIM model into a geo-located environment (based on ArcGIS), preserving all the geometric and semantic information needed to study indoor navigation. The pattern of the enriched model is similar to the IFC model. Tashakkori et al. [5] developed an Indoor Emergency Spatial Model (IESM) based on IFC, dynamic information and outdoor information. The system is dedicated to indoor or outdoor navigation in case of disasters. Although the above studies [1], [2], [5], [6], [8]–[10] demonstrate valuable attempts to combine BIM with path planning, they suffer from several limitations: (1) path planning is almost always addressed statically, without taking into account the variable status of the building's spaces and passages, such as rerouting due to hazardous maintenance works or temporary modification of the building's operating conditions; (2) path planning is always focused on human navigation, without taking into account other types of mobiles, such as wheelchairs or handling machines; (3) the approaches implicitly consider residential buildings, where no particular restrictions of circulation apply, unlike, for example, industrial facilities where machines and people often move side by side, but each using specific lanes. The following section describes the outline of the BiMov system, which aims to overcome these limitations.

3 BIM-based path planning in BiMov


BiMov is intended to be used during the construction or exploitation phases of a building [11], especially in a complex building characterized by many spaces, alternative routes, restricted navigation and dynamic behavior. The system should provide answers to path planning problems relevant to a variety of MOoPs. The initial and rather stable stage consists in analyzing an IFC model either in the construction phase of the building (As-designed model combined with a 4D construction status: 3D plus time) or in the exploitation phase (As-built model). In both cases, this initial phase aims to generate a maximal navigation graph, likely to be reduced or densified according to different criteria. In the remainder of this paper, BiMov is considered to be applied to an IFC architecture model (As-built). The main phases and resulting architecture of BiMov are depicted in Figure 1.

Figure 1: BiMov architecture. Starting from the IFC file, four successive steps are chained: (1) Container Analysis (algorithm-based identification of spaces, transitions and neighborhoods), (2) MOoP Analysis (rule-based restriction to accessible spaces and crossable transitions), (3) Contents Analysis (rule-based restriction to accessible sub-spaces), and (4) Indoor path planning over the authorized sub-spaces and allowed transitions, under real-time or planned conditions.



This sequence is justified by the fact that analyses focused on stable elements are
likely to be conducted less frequently than those focused on quickly changing
situations.
1. Analyze the containers of the IFC model, i.e.: identify all the spaces and
possible transitions between these spaces, by making explicit the notion of
neighborhood and connectivity between spaces.
2. Take into account the characteristics of the MOoP and apply the navigation
rules associated with it, so as to identify the subset of accessible spaces and
traversable transitions, which may reduce the initial navigation graph.
3. Analyze the contents of spaces (machinery, hazardous areas, fixed furniture,
HVAC equipment, etc.) and discretize spaces into subspaces, accessible or not.
4. Take into account the planned or real-time circulation conditions across
the remaining spaces and transitions (further reducing the graph).
The current paper particularly highlights the first and second phases, i.e.,
BIM-based container analysis for creating the initial graph (section 3.1) and
the first reduction of the graph based on MOoP characteristics analysis (section 3.2).

3.1 Phase 1: Container Analysis


In this phase, only the relevant concepts in the sense of navigation are extracted,
namely the horizontal transitions (such as doors represented by IfcDoor), vertical
transitions (such as stairs, elevators, ramps respectively represented by IfcStair,
IfcTransportElement type Elevator, and IfcRamp) and obstacles (such as walls or
columns, respectively represented by IfcWall and IfcColumn) [7].
For horizontal navigation, the system identifies common walls with openings and
common doors between pairs of spaces. Non-closed openings, or openings filled
with a door, suggest creating a connection between every pair of spaces exposing
such a common transition (non-filled opening or door). For this purpose, BiMov
exploits the IfcRelSpaceBoundary relation, which provides information about the
physical or virtual boundaries of every space and is useful to reveal the notion
of neighborhood.
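This container-analysis step can be sketched in a few lines of pure Python. The sketch below is illustrative only: the `boundaries` pairs and the `transitions` set are hypothetical stand-ins for the data that would actually be extracted from IfcRelSpaceBoundary relations, not the BiMov implementation.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical extraction result: (space, boundary element) pairs, as they
# could be read from IfcRelSpaceBoundary relations of an IFC model.
boundaries = [
    ("hall", "door1"), ("office", "door1"),     # hall and office share door1
    ("hall", "door2"), ("meeting", "door2"),
    ("office", "wall7"), ("meeting", "wall7"),  # a plain wall: no transition
]
transitions = {"door1", "door2"}  # doors or non-filled openings

def horizontal_edges(boundaries, transitions):
    """Connect every pair of spaces exposing a common crossable transition."""
    spaces_of = defaultdict(set)
    for space, elem in boundaries:
        if elem in transitions:
            spaces_of[elem].add(space)
    edges = set()
    for elem, spaces in spaces_of.items():
        for a, b in combinations(sorted(spaces), 2):
            edges.add((a, b, elem))
    return edges

print(sorted(horizontal_edges(boundaries, transitions)))
# → [('hall', 'meeting', 'door2'), ('hall', 'office', 'door1')]
```

Note that the wall shared by office and meeting produces no edge: only elements listed as crossable transitions create connections.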
For vertical navigation, however, no such direct relationship between the spaces and
the vertical transitions connecting them can be extracted from the IFC schema. A
lower-level approach, based on the geometric coordinates of the bounding box of
every item, can be used to determine the list of spaces that share a vertical
transition with another space (upward or downward).
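This lower-level geometric reasoning can be illustrated with a simple axis-aligned bounding-box overlap test. This is an assumed minimal sketch: real IFC geometry is not axis-aligned in general, and the stair and space boxes below (as (xmin, ymin, zmin, xmax, ymax, zmax) tuples) are invented for the example.

```python
def boxes_intersect(a, b):
    """Axis-aligned bounding-box overlap test; boxes are
    (xmin, ymin, zmin, xmax, ymax, zmax) tuples."""
    return all(a[i] <= b[i + 3] and b[i] <= a[i + 3] for i in range(3))

def spaces_touching(transition_box, space_boxes):
    """Candidate spaces connected by a vertical transition: those whose
    bounding box intersects the transition's bounding box."""
    return [name for name, box in space_boxes.items()
            if boxes_intersect(transition_box, box)]

# Hypothetical stair volume spanning the ground floor and the first floor.
stair = (0, 0, 0, 2, 4, 6)
spaces = {"hall_gf": (0, 0, 0, 10, 10, 3),
          "hall_f1": (0, 0, 3, 10, 10, 6),
          "far_room": (20, 0, 0, 30, 10, 3)}
print(spaces_touching(stair, spaces))  # → ['hall_gf', 'hall_f1']
```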
Vertical and horizontal unit connections are then assembled so as to generate the
initial navigation graph (i.e., without taking into account any kind of restriction).
Additional details about the algorithms used in the first phase can be found in [11].
The following section presents the method used in BiMov to take the
characteristics of the MOoP into account.
BiMov: BIM-Based Indoor Path Planning 895

3.2 Phase 2: MOoP analysis


A MOoP is a Mobile Object or Person requiring path planning so as to move itself
or to be moved in the building. The navigation graph suitable for a given MOoP
will be at most equal to the initial graph generated in the first phase (Container
Analysis), but will generally be a reduced version of this initial graph,
insofar as some nodes or arcs must be invalidated (space or passage prohibited
or incompatible with the MOoP's features). Invalidation may be due to physical
properties (size, reduced mobility), administrative reasons (status or
authorization), or may be imposed by standards, as suggested in enterprise
circulation guides such as [12].
So as to address the problems described above, several authors have attempted to
characterize and model different types of MOoPs. Tashakkori et al. [5]
developed a navigation system dedicated to human navigation. A person is
represented by a 3D point (neglecting the bulk). In this approach, the average
speeds of horizontal and vertical human movements, respectively 75 ft/min and
40 ft/min according to the authors, are used to compute an evacuation delay in
the absence of congestion. Lee et al. [13] defined a computational method for
measuring walking distances within a building, based on a length-weighted graph
structure for a given building model; it has been implemented as a plug-in
extension of Solibri Model Checker. The concept is to create links between doors
(door-to-door) and between vertical transitions (stair-to-stair). So as to
consider the characteristics of MOoPs, the authors use buffered boundary
polygons, populated from the space object boundary polygons, to narrow the
circulation space. The buffer distance is the distance between a wall and the
top-centre of a person; in this approach, it is half the width between the
shoulders of a person (1' 0''). Han et al. [14] developed a specific approach
for analyzing the accessibility of a wheelchair inside buildings, based on the
American accessibility code (ADAAG). The authors identified several scenarios of
wheelchair movement: moving along a straight line, crossing doors, entering
toilets, and turning in T- and U-shaped spaces. In this work, the MOoP
(wheelchair) is represented in a top view with its real dimensions. Lin et al. [1]
developed two path-planning methods. The first one is based on the traditional
FMM (Fast Marching Method), where the walking object is represented by a point;
its drawback is that, if there is a narrow opening in the building, the computed
shortest path may go directly through it. The second method, based on an
enhanced FMM, considers a size factor (R) to offset the size of the MOoP; the
value of R can be determined from the size range of humans and their walking
habits. The approaches described above all attempt to represent some specific
MOoPs for indoor navigation purposes. The objective of BiMov requires the ability
to analyze, with a common approach, the navigation capabilities of various types
of MOoPs inside the same building. The overall dimensions should not be
considered only from a top view for planar navigation, but in every dimension
(length, width, height) for 3D navigation (as for a drone). Additional
characteristics should also be taken into account, such as the ability or not to
perform longitudinal displacement (Tx), lateral displacement (Ty) or even vertical
displacement (Tz). Inspired by the wheelchair problem, it is also required to
represent the ability or not to rotate around the vertical axis during planar
navigation (yaw), but also to characterize the minimal turning radius in such
movement (Rz). Considering the general case of 3D navigation, the corresponding
characteristics of pitch (Ry) and roll (Rx) should be represented too. As there may
be some navigation restriction related to the weight (or more generally to the
pressure exerted on a floor by the MOoP), the mass properties (M) belong to the
important characteristics to model. For vertical displacements, the ability or not to
use stairs (S) and the maximum allowable slope along a ramp (P) should be
represented. As far as administrative authorizations are concerned, a vector of
Boolean authorizations (Ha [1 .. n]), with respect to a given reference list, can
be valuable. These features are synthesized in Table 1.

Table 1: Relevant features of MOoP


Symbol Type Description
L Length [mm] Longitudinal dimension of the bounding box (along the X axis)
l Length [mm] Transversal dimension of the bounding box (along the Y axis)
H Length [mm] Vertical dimension of the bounding box (along the Z axis)
Tx Boolean Ability to move along the X axis of the MOoP (T/F)
Ty Boolean Ability to move along the Y axis of the MOoP (T/F)
Tz Boolean Ability to move along the Z axis of the MOoP (T/F)
Rz Length [mm] Yaw radius, or -1 if the rotation is blocked
Rx Length [mm] Roll radius, or -1 if the rotation is blocked
Ry Length [mm] Pitch radius, or -1 if the rotation is blocked
M Mass [kg] Mass of MOoP
P Angle [deg] The maximum allowable slope
S Boolean Ability to climb stairs (T/F)
Ha[] Booleans List of MOoP authorizations, with respect to a reference list
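The feature set of Table 1 maps naturally onto a small record type. The sketch below is illustrative (the class and field layout are ours, not BiMov's); it also instantiates the standard European mannequin whose values are quoted later in this section.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MOoP:
    L: float    # longitudinal bounding-box dimension [mm]
    l: float    # transversal bounding-box dimension [mm]
    H: float    # vertical bounding-box dimension [mm]
    Tx: bool    # ability to move along the X axis
    Ty: bool    # ability to move along the Y axis
    Tz: bool    # ability to move along the Z axis
    Rz: float   # yaw radius [mm], -1 if the rotation is blocked
    Rx: float = -1.0   # roll radius [mm], -1 if blocked
    Ry: float = -1.0   # pitch radius [mm], -1 if blocked
    M: float = 0.0     # mass [kg]
    P: float = 0.0     # maximum allowable slope [deg]
    S: bool = False    # ability to climb stairs
    Ha: List[bool] = field(default_factory=list)  # authorizations

# The standard European mannequin quoted later in this section:
mannequin = MOoP(L=346, l=630, H=1763, Tx=True, Ty=True, Tz=False,
                 Rz=0, Rx=-1, Ry=-1, M=83, P=15, S=True)
```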

The reference point of a MOoP can be placed at the centre of gravity of the
bounding-box envelope. So as to represent the MOoP's bulk in BiMov, the
boundary offset approach is used. An offset distance (Doff), depending on the bulk
of the MOoP in every dimension has to be determined. Doff corresponds to Dxy
in case of a rotation around the Z axis (yaw), to Dxz in case of a rotation around
the Y axis (pitch) and to Dyz in case of rotation around the X axis (roll). In this
phase, only the structural elements of the building are considered (mainly walls,
openings in walls, door frames or columns), without taking into consideration the
contents (equipment, machinery, etc.) that will be treated in a subsequent phase
(phase 3: contents analysis). So far, there is no noticeable difference between
different types of buildings (private or public, residential or industrial), since
hazardous machinery (contents) is not taken into account yet. In the case of
horizontal movements, the buffer distance (Doff=Dxy) can be determined based
on the longitudinal (L) and transverse (l) dimensions, and on the minimal yaw
radius of the MOoP. If the rotation of the MOoP is free in the XY plane, Dxy
corresponds to half of the swept width (Sw) when the MOoP turns with the
minimal radius permitted by its yaw ability. So, Dxy = Sw / 2 (in mm), where:

Dxy = Sw / 2 = (Rex - Rin) / 2

Rin = Rb - l / 2

Rex = sqrt( (Rb + l/2)^2 + (L/2)^2 )

Figure 2: Swept width during a yaw
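Numerically, the buffer distance can be computed directly from these formulas. A minimal sketch, assuming Rb denotes the minimal yaw turning radius measured at the MOoP reference point (the trolley dimensions in the example are invented):

```python
import math

def yaw_buffer_distance(L, l, Rb):
    """Dxy = Sw/2 = (Rex - Rin)/2 for a MOoP of length L and width l
    turning with minimal yaw radius Rb (all lengths in mm)."""
    Rin = Rb - l / 2                     # inner radius of the swept annulus
    Rex = math.hypot(Rb + l / 2, L / 2)  # outer radius (corner trajectory)
    return (Rex - Rin) / 2

# Example: a 1200 x 600 mm trolley turning with a 1000 mm yaw radius.
print(round(yaw_buffer_distance(1200, 600, 1000), 1))  # → 365.9
```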


According to the model exposed in Table 1, the main characteristics of a standard
European mannequin are as follows: L=346 mm, l=630 mm, H=1763 mm,
Tx=Ty=”T”, Tz=”F”, Rx=Ry=-1 mm, Rz=0 mm, M=83 kg, P=15 deg, S=”T”.
Based on the previously defined characteristics, taking into account the MOoP in
BiMov results in determining whether each space, each wall opening, each door
frame, each stair, … can be navigated through by the MOoP. In the case of
horizontal navigation, a planar cross-section is applied to each storey, at an
elevation relevant for the MOoP (e.g., at the Z value of its center of gravity). In each
section, spaces are bounded by polygons. These polygons can be open or closed.
Openings (if any) correspond to doors between spaces or gates between open
spaces. Closed polygons correspond to spaces that can only be accessed via stairs
or other vertical paths (like a cellar). Different offset transformations are applied
to the polygons and to the openings. Polygons are offset inwards, into the
space, by the distance (Dxy) calculated as indicated previously (Figure 2).

Figure 3: Narrowed space boundaries after offsetting the walls inward
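For a general space polygon, the inward offset requires a proper polygon-offsetting algorithm; for an axis-aligned rectangular space, the operation reduces to the following sketch (a simplified illustration, not the BiMov implementation):

```python
def inward_offset_rect(xmin, ymin, xmax, ymax, dxy):
    """Offset a rectangular space boundary inwards by dxy. Returns the
    narrowed rectangle where the MOoP reference point may lie, or None
    if the offset polygon is void (space not navigable for this MOoP)."""
    x1, y1, x2, y2 = xmin + dxy, ymin + dxy, xmax - dxy, ymax - dxy
    if x1 >= x2 or y1 >= y2:
        return None  # the space would be removed from the navigation graph
    return (x1, y1, x2, y2)

# A 4000 x 3000 mm room with a 366 mm buffer distance.
print(inward_offset_rect(0, 0, 4000, 3000, 366))  # → (366, 366, 3634, 2634)
```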


A bulky MOoP will be able to move within a space if the corresponding closed
polygon is non-void. In Figure 3, three closed polygons are highlighted (in green).
They represent the areas where the center of mass of the MOoP is allowed to be
placed. The different polygons in a storey, however, are disjoint, like the green
areas in Figure 3. So as to be allowed to move from one space to another, a
MOoP should temporarily move across a transitional practicable area
characterizing the opening. The management of openings differs from that of
space boundaries. Considering an opening as a (small) space would only suggest
reducing the practicable space through it. It is, however, necessary to expand this
small space (or rather the inner edge of this space, representing the real gate, or
border, between two spaces) so as to generate a continuous path to and from each
neighboring space connected by the opening. The creation process of this
transitional area is explained in Figure 4. First, the extreme points (in blue) of the
virtual segment representing the passage width are offset towards the middle of
this segment by the same distance Dxy. Two points (in green) are created; the
MOoP's center of mass is allowed to lie between them. These points are then
projected onto the two neighboring polygons, so as to create four new
points (in red). The closed (rectangular) polygon based on these four points
represents the transitional practicable area between the two spaces.

Figure 4: Creation of a practicable area for an opening in a wall (door or gate)
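The endpoint-offset step for an opening can be sketched as follows (assumed 2D points as (x, y) tuples in mm; the projection onto the neighboring polygons is omitted):

```python
import math

def opening_interval(p1, p2, dxy):
    """Offset the two extreme points of an opening segment towards its
    middle by dxy; returns the sub-segment where the MOoP's center of
    mass may lie, or None if the opening is too narrow to be crossed."""
    (x1, y1), (x2, y2) = p1, p2
    length = math.hypot(x2 - x1, y2 - y1)
    if length <= 2 * dxy:
        return None  # opening removed from the navigation graph
    ux, uy = (x2 - x1) / length, (y2 - y1) / length
    g1 = (x1 + ux * dxy, y1 + uy * dxy)  # first "green" point
    g2 = (x2 - ux * dxy, y2 - uy * dxy)  # second "green" point
    return g1, g2

# A 900 mm door with a 300 mm buffer leaves a 300 mm practicable interval.
print(opening_interval((0, 0), (900, 0), 300))
# → ((300.0, 0.0), (600.0, 0.0))
```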


After an offset operation, if the remaining space is void (which can happen to
polygons or openings), it means that these items cannot be traversed at all by the
MOoP. So, the corresponding spaces or doors (or both) should be removed from
the initial navigation graph. Some special cases must also be considered. In the
case of a stairway, it must be checked whether the MOoP can use the stair
(S=”T”). If so, the boundaries of the staircase should be offset similarly for further
investigation. For elevators, it must be checked whether the mass of the MOoP
remains below the capacity (M <= capacity) of the cabin. If so, the boundaries of
the elevator (inner space and door frame) should be offset too. In the case of a
ramp, the slope should remain compatible with the skills of the MOoP (P).
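These special-case rules can be gathered in a single admissibility check. A hedged sketch (the MOoP is represented here as a plain dictionary keyed by the feature symbols of Table 1; the function name is ours):

```python
def vertical_transition_ok(kind, moop, *, capacity=None, slope=None):
    """Special-case admissibility checks for vertical transitions."""
    if kind == "stair":
        return moop["S"]              # ability to climb stairs (S = "T")
    if kind == "elevator":
        return moop["M"] <= capacity  # mass within the cabin capacity
    if kind == "ramp":
        return slope <= moop["P"]     # slope within the MOoP's skill
    raise ValueError(f"unknown transition kind: {kind}")

mannequin = {"S": True, "M": 83, "P": 15}
print(vertical_transition_ok("elevator", mannequin, capacity=630))  # → True
print(vertical_transition_ok("ramp", mannequin, slope=20))          # → False
```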

4 Developments

An experimental development was undertaken to demonstrate the ability to
automatically generate a full (unrestricted) navigation graph from an IFC
architectural model and to validate the BiMov approach. For this, an
environment able to load and browse a standard IFC file, extract geometry,
perform geometric reasoning and visualize this geometry in 3D was needed. These
functionalities are available in the IfcOpenShell (ifcopenshell.org) open-source
environment. The latter includes libraries for reading an IFC file, selecting
building elements according to their type and converting their geometry into the
OpenCascade (opencascade.com) 3D geometric modeler; the results are stored in a
spreadsheet as a list of space-transition-space triples. Additional code is developed in
Python language. The container analysis phase based on the "As-built" building
model has been developed to construct the initial navigation graph. The results of
this processing, applied to a complex residential building, are shown in Figure 5.
Current developments are now focused on the consideration of different types of
MOoPs according to the method exposed above.

Figure 5: Initial navigation graph within a 4 storey building

Conclusion

Indoor path planning represents a major interest for the AEC industry, but also for
future mobile applications. It is relevant for many purposes throughout the life cycle of
a building. This paper introduces a new approach called BiMov for indoor navigation
support, based on the standard IFC schema. BiMov automatically extracts and manages
geometric and semantic information about the building components and spaces, in order
to define an initial navigation graph. This graph can then be reduced according to various
criteria, such as the type of MOoP (Mobile Object or Person) considered. The first
phase of BiMov (container analysis) has been demonstrated in the scope of a
software prototype. Some limitations exist: the algorithm currently fails to manage
open spaces and external staircases. The current developments are dedicated, on the
one hand, to resolving these limitations (i.e., making the generation of the initial graph
more robust) and, on the other hand, to implementing the proposed common method for MOoP
consideration, whether it deals with standard mannequins visiting spaces across
doors and stairs, wheelchairs moving across spaces and elevators, etc. The subsequent phases
of BiMov, dedicated to analyzing the contents of the spaces and managing the real-time
status of spaces and transitions, currently belong to the research perspectives.

References
[1] Y.-H. Lin, Y.-S. Liu, G. Gao, X.-G. Han, C.-Y. Lai, and M. Gu, “The IFC-based path planning
for 3D indoor spaces,” Adv. Eng. Inform., vol. 27, no. 2, pp. 189–205, Apr. 2013.
[2] J. Choi, J. Choi, and I. Kim, “Development of BIM-based evacuation regulation checking system
for high-rise and complex buildings,” Autom. Constr., vol. 46, pp. 38–49, Oct. 2014.
[3] S. Kang and E. Miranda, “Planning and visualization for automated robotic crane erection
processes in construction,” Autom. Constr., vol. 15, no. 4, pp. 398–414, Jul. 2006.
[4] A. R. Soltani, H. Tawfik, J. Y. Goulermas, and T. Fernando, “Path planning in construction sites:
performance evaluation of the Dijkstra, A*, and GA search algorithms,” Adv. Eng. Inform., vol.
16, no. 4, pp. 291–303, Oct. 2002.
[5] H. Tashakkori, A. Rajabifard, and M. Kalantari, “A new 3D indoor/outdoor spatial model for
indoor emergency response facilitation,” Build. Environ., vol. 89, pp. 170–182, Jul. 2015.
[6] C. Koch, M. Neges, M. König, and M. Abramovici, “Natural markers for augmented reality-based
indoor navigation and facility maintenance,” Autom. Constr., vol. 48, pp. 18–30, Dec. 2014.
[7] BuildingSmart, “Industry Foundation Classes (IFC), http://www.buildingsmart-
tech.org/ifc/IFC4/final/html/schema/ifcproductextension/lexical/ifcspace.htm.” 2013.
[8] N. Li, B. Becerik-Gerber, B. Krishnamachari, and L. Soibelman, “A BIM centered indoor
localization algorithm to support building fire emergency response operations,” Autom. Constr.,
vol. 42, pp. 78–89, Jun. 2014.
[9] S. Taneja, B. Akinci, J. H. Garrett Jr., and L. Soibelman, “Algorithms for automated generation
of navigation models from building information models to support indoor map-matching,”
Autom. Constr., vol. 61, pp. 24–41, Jan. 2016.
[10] U. Isikdag, S. Zlatanova, and J. Underwood, “A BIM-Oriented Model for supporting indoor
navigation requirements,” Comput. Environ. Urban Syst., vol. 41, pp. 112–123, Sep. 2013.
[11] A. Hamieh, D. Deneux, and C. Tahon, “BiMov : vers une analyse dynamique de navigabilité
dans les bâtiments” [BiMov: towards a dynamic navigability analysis in buildings], CPI
International Conference, Tangiers, Morocco, Dec. 2015.
[12] INRS, “Le guide de la circulation en entreprise” [The guide to circulation in companies].
Institut National de Recherche et de Sécurité, 2010.
[13] J.-K. Lee, C. M. Eastman, J. Lee, M. Kannala, and Y.-S. Jeong, “Computing walking distances
within buildings using the universal circulation network,” Environ. Plan. B Plan. Des., vol. 37,
no. 4, pp. 628–645, Aug. 2010.
[14] C. S. Han, K. H. Law, J.-C. Latombe, and J. C. Kunz, “A performance-based approach to
wheelchair accessible route analysis,” Adv. Eng. Inform., vol. 16, no. 1, pp. 53–71, Jan. 2002.
Part VI
Education and Representation
Techniques

The track of Education and Representation Techniques has been divided into three
topics. The first one, focused on “Teaching Engineering Drawing”, consists of
three contributions. A first paper reports a collaborative experience carried out by
three universities that have created a tool called TDEG (Technical Drawing
Evaluation Grid) whose success has led to the development of several learning aids
for undergraduate-level students of technical drawing. In the second contribution,
the authors approach the ‘learning by playing’ paradigm by introducing in the class-
room an experience of “gamification”, with results mainly focused on the level of
interest, satisfaction and involvement of the participating students. A third contribu-
tion reports on a comparative study of the most recent CAD software tools, with a
main interest in those having open access.
The second group reports experiences on “Teaching of product design and
drawing history”. A first work shows a case study of product design in assistive
technology, where an interdisciplinary teaching situation of rapid prototyping is
detailed. A second work is motivated by the aim of rescuing and recovering a par-
ticular part of Malaga's industrial heritage and poses a challenge to undergraduate
students: several historical machines are to be analysed and graphically
modelled. A third experience is reported in which the students' skills are improved
by means of the development of real projects, where professionals of the design
market provide the functional specifications for which the final product must be
obtained. A final work intends to integrate marketing activities in the design pro-
cess, particularly in the mechanical field. The paper suggests reducing the time
spent on training in CAD tools in order to make room for broader knowledge of
industrial design techniques. This is carried out in some classes where a lecturer
specialised in marketing teaches the students how to conduct interviews with cli-
ents in order to identify their needs.
902 Education and Representation Techniques

A third block shows results of “Representation Techniques”. Two strongly re-


lated contributions report on a new geometric locus, a new intrinsic curve and a
new demonstration, all of them related to the axonometric projection trihedron and
Polke's theorem.
Philippe Girard - Univ. Bordeaux

César Otero - Univ. Cantabria

Domenico Speranza - Univ. Cassino


Section 6.1
Teaching Engineering Drawing
Best practices in teaching technical drawing:
experiences of collaboration in three Italian
Universities

Domenico SPERANZA¹, Gabriele BARONIO², Barbara MOTYL³, Stefano


FILIPPI³ and Valerio VILLA²

¹ DICEM Dept. – University of Cassino and Southern Lazio, Cassino, Italy

² DIMI Dept. – Università degli Studi di Brescia, Brescia, Italy

³ DPIA Dept. – Università degli Studi di Udine, Udine, Italy

* Domenico Speranza. Tel.: +39-776-239-3988. E-mail address: d.speranza@unicas.it

Abstract This work presents some best-practice cases in teaching technical draw-
ing carried out by three Italian universities: Brescia, Udine, and Cassino and Southern
Lazio. The intention to innovate and improve the basic technical drawing courses
offered by these three universities started in 2014. The objective of this collabora-
tion was the development of some tools to help students understand the
fundamental concepts of technical drawing. The first tool developed, in order of
time, was the Technical Drawing Evaluation Grid – TDEG. Starting from this
tool, other learning aids were developed for the undergraduate engineering stu-
dents. Some of them are: an online test for students' self-assessment of technical
drawing knowledge; a questionnaire to collect students' opinions on different
technical drawing and engineering design topics; a method for the improvement of
students' motivation to study; and a self-learning tool for teaching manufacturing
dimensioning. The preliminary results of these different practices are presented
and discussed in the following, laying the basis for the definition of some best-
practice methods that can be used for the improvement of the teaching and learn-
ing of the basic concepts of technical drawing for engineering students.

Keywords: technical drawing, teaching & learning tools, best practices, collabora-
tion.

1 Introduction

Since 2014, the research groups of the Universities of Brescia, Udine and
Cassino started a collaboration on technical drawing education. This collaboration

© Springer International Publishing AG 2017 905


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_90
906 D. Speranza et al.

aimed at the improvement of the quality of teaching and of the acquisition of spe-
cific technical skills by engineering students [1, 2, 3, 4]. One of the particular ob-
jectives of this collaboration was the development of some tools to help the stu-
dents in understanding the fundamental concepts of technical drawing, such as:
projection methods, use of standards, and dimensioning and tolerancing – GPS/
GD&T. For these reasons, different kinds of educational tools were developed,
applied and tested during the technical drawing courses offered by these three uni-
versities.
Usually, technical drawing courses involve engineering students enrolled in the
first year (Brescia and Cassino) and in the second year (Udine) of the Bachelor
degrees in Management and Industrial Engineering. The courses' contents and the
number of hours dedicated to in-class lessons and exercises are similar and
comparable: 56 hours in Brescia and Cassino, and 60 hours in Udine. All three
courses are basic courses on Technical Drawing – TD; moreover, they are at-
tended by a large number of students.
Generally, a large portion of the enrolled students does not have any a priori
knowledge of the proposed topics because of their school of origin. It is therefore
important to highlight that TD is the basic language for these future engineers,
used not only in the other following engineering courses but, especially, in their
future employment. Conversely, the experience and skills acquired during
subsequent courses, such as mechanical technology and engineering design,
allow a better understanding and application of the rules of TD in general.
In this context, the TD instructor is forced to anticipate, in a qualitative way,
some topics related to these courses to allow the students a better understanding
and application of certain standardized rules and conditions. Thus, the main pur-
pose of TD courses is to standardize the level of knowledge of TD and engineer-
ing graphics principles, focusing on different drawing tools such as orthogonal and
isometric projections, sectional views, dimensioning rules and practices, and the
introduction of related drawing and international technical standards (ISO, EN, or
UNI). Starting from these considerations, the Brescia, Udine, and Cassino Universi-
ties have established a collaboration based on the use of the Technical Drawing
Evaluation Grid – TDEG – developed by the University of Brescia [5]. The
TDEG represents a proposal for an assessment grid of the learning levels for TD,
both in academic and in industrial contexts, and it is based on the European Quali-
fications Framework – EQF. The basic idea of TDEG is to consider TD as a lan-
guage: just as we can use a reference grid for the different levels of lan-
guage knowledge, we can use a reference grid for the evaluation of the knowledge
and skills acquired in TD. Thus, TDEG may be used as a reference framework for
TD education in national and international contexts and in formal, non-
formal, or informal learning and education initiatives. Moreover, TDEG may con-
stitute a guideline for learning paths in engineering design, and it may be a useful
tool to recognize skills and abilities in corporate contexts and represent a refer-
ence point for the design of engineers' curriculums. The structure of TDEG
follows the original eight levels of EQF, which have been split into two sub-levels
Best practices in teaching technical drawing … 907

A and B; for an example see Figure 1. These two sub-levels represent skills and
competences that a person may acquire at each level. This was made in analogy
with the “comprehension” and “production” levels used in foreign-language assessment
grids. The A sub-level relates to TD used as a pure communication language,
with special reference to TD understanding capability. The B sub-level relates
to the capability to produce correct technical drawings aimed at design synthe-
sis [5].

Fig. 1. An example of TDEG levels: Level 1, extract from [5].

In the next section, some of the best-practice experiences developed are de-
scribed.

2 Examples of best practice experiences in teaching Technical Drawing

2.1. Online test for self-assessment on Technical Drawing

During the 2014-15 and 2015-16 academic years, the students of the Brescia,
Udine and Cassino Universities were asked to participate in an online test for self-
assessment on TD. The main goals of this test are to provide students with a tool
for self-assessing their knowledge of the basic TD rules and principal standards,
and to give teachers some feedback on the different TD topics.
The test is designed to be managed and administered through the Moodle platform,
customized by CINECA [6, 7], and it is available online to the regularly enrolled
students through the University students' administration system “ESSE3”, using
their student account. The test is composed of 15 multiple-choice closed
questions, related to the first six levels of knowledge of the TDEG. In
particular, there are four questions for Level 1, three for Levels 2 and 4, two
for Levels 3 and 5, and one for Level 6. Each question has six answer options: five
are predefined responses, of which only one is correct, the others being
completely wrong or inaccurate. The sixth option considers the
possibility of not answering the question. Some of the questions are only in text
format; others are illustrated using technical drawings (see Figure 2).

Fig. 2. a) An example of a text question; b) an example of an illustrated question.

The students have 15 minutes to complete the test and, during its execution,
they are guided by a quiz navigation toolbar where the progressive numbers of the
questions and a timer are highlighted. At the end of the test, students can
immediately view the results and can navigate through the questions to see the correct
and the wrong answers. Each correct answer is assigned a value of +1; each
wrong answer, a value of -0.25; if a student does not answer a
question, the value assigned is 0. This way, a student who responds correctly to all
questions reaches the maximum of 15 points, which is then converted into a
vote equal to 30/30. To pass the test, a student must obtain at least 9 points,
corresponding to a vote equal to 18/30.
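The marking rule described above (+1 per correct answer, -0.25 per wrong answer, 0 for a blank, with a linear conversion of the 15 points to a vote out of 30 and a pass threshold of 9 points) can be sketched as:

```python
def grade(answers):
    """Score one attempt of the 15-question test and convert it to a
    vote out of 30 (15 points -> 30/30; pass threshold 9 -> 18/30)."""
    points = sum(1 if a == "correct" else (-0.25 if a == "wrong" else 0)
                 for a in answers)
    return points, points * 2, points >= 9

# 10 correct, 3 wrong, 2 blank -> 9.25 points, vote 18.5/30, passed.
print(grade(["correct"] * 10 + ["wrong"] * 3 + [None] * 2))
# → (9.25, 18.5, True)
```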
The test is not anonymous; a maximum of 10 attempts was set for each
student, and the fifteen questions are shuffled at each attempt, drawn
from a DB of about 250 questions.
This test was developed by the Brescia research unit and was adopted for Brescia,
Udine and Cassino students. So far, nearly 300 students have participated in the
test; preliminary results are reported in Table 1. This experience has proved to be
very positive and was appreciated by the students of the three seats. The
percentage of participation in the test was significant: more than 50% of the
students have taken the test using this self-learning tool, both in the initial and
in the final phases of the course, especially as a tool for checking their
knowledge before the final exam; students can also use the test during the
course to self-assess their preparation on a specific course topic.

Table 1. Significant data of online test students' participation.

University | N° participating in the test, a.y. 2014-15 (% of enrolled) | N° participating in the test, a.y. 2015-16 (% of enrolled) | N° of students' attempts, a.y. 2014-15 | N° of students' attempts, a.y. 2015-16 | N° of passed tests in at least 5 attempts | Mean score of passed attempts
Brescia | 61/120 (51%) | - | 272 | - | 46/61 | 23.75
Cassino | - | 124/150 (80%) | - | 445 | 60/124 | 22.00
Udine | 102/160 (64%) | - | 171 | - | 54/102 | 21.89

Similarly, it is a very useful tool for teachers too, because they can monitor the
students' progress and identify potentially difficult topics which need
further clarification. The statistical analysis of the data collected by the three
seats is now in progress, and the data of the current academic year are not complete
because only one exam session has been held so far.

2.2. Online questionnaire on technical drawing

Still during the academic year 2015-16, in all three universities, students of
TD courses were asked to answer an online qualitative questionnaire on TD.
The main goal of this survey was to collect the students' opinions on some topics
related to TD skills and competences, and on other issues, so as to have a
picture of the current situation in terms of what students expect from
a traditional TD course and what could be used for possible improvements [8, 9].
In particular, the questionnaire is composed of twenty-three questions; sixteen
of them are closed questions related to skills and competences which students may
acquire during TD courses. Four further questions are related to the “digital
behavior” of the students in relation to digital technologies and their practice, or
to the innovative 3D-printing theme. Finally, there is one closed question
about the students' expectations of the TD course in terms of knowledge improve-
ment, and two open questions: one is related to the possibility of participating in a
design competition, and the other asks the students' personal opinion on TD in gen-
eral. Some examples of the questions used are reported in Table 2.
910 D. Speranza et al.

Table 2: Some examples of questions used in the questionnaire, Italian and English translation.

Q1. "Quale ritieni sia il livello d'importanza da attribuire alla capacità di saper schizzare a mano dei disegni tecnici?" / "What do you think is the level of importance needed for the ability of freehand sketching of technical drawings?"
Answer: Da/From 1 = "Per niente importante" / "Not important at all" A/To 5 = "Molto importante" / "Very important".

Q5. "Quale ritieni sia il livello d'importanza da attribuire alla capacità di saper disegnare, utilizzando strumenti software CAD (Computer-Aided Design) bidimensionali?" / "What do you think is the level of importance needed for the ability of drawing using a 2D CAD system?"
Answer: same 1-5 scale as Q1.

Q6. "Quale ritieni sia il livello d'importanza da attribuire alla capacità di saper costruire, utilizzando opportuni strumenti software, modelli CAD (Computer-Aided Design) tridimensionali?" / "What do you think is the level of importance needed for the ability of drawing using a 3D CAD system?"
Answer: same 1-5 scale as Q1.

Q15. "Quale ritieni sia il livello d'importanza da attribuire alla capacità di poter interagire con il docente, anche durante la lezione, attraverso l'utilizzo di dispositivi digitali personali (smartphone, tablet, pc)?" / "What do you think is the level of importance attributed to the ability of interacting with the teacher, even during the lessons, with personal digital devices (e.g. smartphones, tablets, PCs)?"
Answer: same 1-5 scale as Q1.

Q17. "Quali tra i seguenti dispositivi digitali possiedi: computer fisso o portatile; smartphone; tablet; lettore e-book; lettore MP3; console di videogiochi; altro…?" / "Which of the following digital devices do you own: desktop or laptop computer; smartphone; tablet; e-book reader; MP3 player; video game console; other…?"
Answer: "Indicare uno o più dispositivi o altro…" / "Indicate one or more devices or other…".

Q20. "Hai mai sentito parlare di: Realtà virtuale / Virtual Reality; Realtà aumentata / Augmented Reality; Realtà mixata / Mixed Reality; Prototipazione rapida / Rapid Prototyping; Stampa 3D / 3D Printing; FABLAB?" / "Have you ever heard of: Virtual Reality; Augmented Reality; Mixed Reality; Rapid Prototyping; 3D Printing; FABLAB?"
Answer: "Sì spesso; Sì talvolta; Di rado; Molto raramente; Mai" / "Yes, often; Yes, sometimes; Seldom; Very rarely; Never" (more than one may be indicated).

As can be seen in Table 3, the number of students participating in the survey is relevant, more than 50%. The preliminary results of this work highlight the interest of students in TD in general, and in both freehand TD techniques and 3D CAD modelling. Moreover, some interesting aspects concerning the "digital behavior" of the students emerged. In particular, the students' relationship with digital devices and their level of knowledge of innovative topics such as 3D printing may be interesting for future research and teaching improvements.

Table 3. Preliminary results of the online questionnaire.

University | Participants in the questionnaire (a.y. 2015-16) | % of participation
Brescia    | 52/102                                           | 51%
Cassino    | 116/150                                          | 77%
Udine      | 77/133                                           | 58%

2.3. Interactive self-learning tool for teaching manufacturing dimensioning

The Technical Drawing Learning Tool – Level 2 (TDLT-L2) is an interactive self-learning tool designed to teach dimensioning criteria of mechanical features related to elementary machining processes [10]. TDLT-L2 is based on videos and drawing animations that link real machining processes with the dimensioning of the corresponding workpiece represented in technical drawings (see Figure 3a).

Fig. 3. a) Screenshot sequence of the video of milling a keyway type A; b) a picture of the tablet application.

In particular, this tool was developed to improve the knowledge of which elements to dimension, considering a manufacturing or technical point of view, and to provide students with greater awareness of the strong connection between machining operations and the design and drawing of the corresponding workpieces. For these reasons, the TDLT-L2 tool is addressed to students enrolled in the first years of courses in Mechanical and Management Engineering, and it is implemented as a standalone application for PCs and tablets (Figure 3b).
This tool allows students to better understand the implications of different machining processes on the dimensioning of specific geometrical features of a mechanical object. The tool aims at improving the students' abilities in TD; it is not intended for investigating the principles of machining or explaining the dynamics of those processes. Moreover, the focus of TDLT-L2 is to highlight the different steps of the machining processes performed on the workpiece without considering the particular kind of machine used (traditional, numerical control, machining center, etc.). In this way, the videos and animations of about twenty elementary machining processes, belonging to basic drilling, milling and turning operations, were realized [10]. To evaluate the effectiveness of the tool, a preliminary validation test was conducted with some students of Udine University during the course lessons. First, students were asked to complete two dimensioning exercises using only the knowledge and competences acquired during previous lessons or their personal background. Only after the test was the new TDLT-L2 tool presented, and all the videos of the different machining processes were shown to the students. Then, the students were asked to repeat the two dimensioning exercises, trying to apply the concepts seen in the videos. The total marks obtained in the first run of the exercise (before the videos) and in the second (after the videos) were analyzed statistically using a t-test. As a result, the average mark obtained by all the students after the videos was about 8.8% higher than the average mark obtained before, and the t-test evidenced that this difference between the means of the two populations was statistically significant at the 95% confidence level. The t-test was also applied separately to evaluate the difference in behavior between the Management and the Mechanical Engineering students involved in the dimensioning exercises. The t-test was significant in the case of the Management Engineering students but not in the case of the Mechanical Engineering students, probably due to their previous background knowledge [10]. In this way, the TDLT-L2 tool can be considered a valuable aid for self-learning and for knowledge improvement of the students. For the following academic years, the authors would like to use the tool as a didactic aid for the students and as a support for the improvement of the TD lectures.
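The before/after comparison described in this section amounts to a paired t-test on the two runs of the same exercises (same students, two conditions). A minimal sketch of such an analysis, using invented scores rather than the study's actual data, could look as follows:

```python
# Hypothetical sketch of the paired t-test used to compare marks
# before and after the TDLT-L2 videos. The score lists below are
# invented for illustration; they are NOT the study's data.
import math

before = [21.0, 18.5, 24.0, 19.0, 22.5, 20.0, 17.5, 23.0]
after  = [23.0, 20.0, 25.5, 21.5, 24.0, 21.0, 19.5, 24.5]

# Paired design: test the per-student differences against a zero mean.
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)

improvement = mean_d / (sum(before) / n) * 100
print(f"mean improvement: {improvement:.1f}%")
# Compare |t| with the critical value t(0.975, n-1) = 2.365 for n = 8:
# a larger |t| means significance at the 95% confidence level.
print(f"t = {t_stat:.2f} (critical value 2.365)")
```

With these invented scores the paired statistic is far above the critical value, illustrating how a modest mean improvement can still be highly significant when each student's two marks are strongly correlated.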

2.4. Other practices

In each location, other best teaching practices are also used. In particular, at Brescia University an extended version of the online test, with thirty questions, is currently used as the written part of the final exam for TD courses. The data collected were also used for an extensive discussion, better described in [11], on the influence of the a-priori knowledge of TD topics, as a function of the students' secondary school of origin, on their performance in the final examination. Another possible use of the online test, to be verified in the next period, is as a standard placement test for TD.
At Udine University, an entrance test was also introduced, based on simple graphical questions, to assess the level of knowledge of basic TD concepts, such as axonometric and orthogonal projection, and the sketching skills held by the students before attending the course.

3 Conclusions and future development

In this paper, some examples of best teaching practices, implemented and regularly used at the Brescia, Udine and Cassino Universities, have been presented. These experiences were particularly interesting for both the teachers and the students who participated. The learning tools developed are performing well, the preliminary results of their use are encouraging, and some of them have already been published. Currently, the intention is to maintain the collaboration between the three research groups and to extend it to other interested partners. The preliminary results of these experiences may be used to improve the quality of teaching and learning of different TD topics. The ongoing analysis of the data collected from the use of the online test tool shows the great interest and significant participation of students and consolidates the evolution of the learning activities from teacher-centered to student-centered. Moreover, the evaluation of the effectiveness of these kinds of tools will contribute to the improvement of the basic concepts belonging to the TDEG and to the development of new assessment and self-assessment tools to be used in basic TD courses. In addition, other initiatives aimed at improving the quality of teaching and the support to study, such as the use of self-evaluation materials, are currently under development.

References

1. Lemus-Zúñiga, L.G., Montañana, J.M., Buendia-García, F., Poza-Luján, J.L., Posadas-Yagüe, J.L., Benlloch-Dualde, J.V. Computer-assisted method based on continuous feedback to improve the academic achievements of first-year students on computer engineering. Computer Applications in Engineering Education, 2015, 23(4), 610-620.
2. Cerra, P.P., González, J.M.S., Parra, B.B., Ortiz, D.R. and Peñín, P.I.Á. Can Interactive Web-based CAD Tools Improve the Learning of Engineering Drawing? A Case Study. Journal of Science Education and Technology, 2014, 23(3), 398-411.
3. Carnegie Mellon University. Principles of teaching. www.cmu.edu/teaching/principles/teaching.html accessed 21/04/2016
4. Violante, M.G., Vezzetti, E. Design of web-based interactive 3D concept maps: A preliminary study for an engineering drawing course. Computer Applications in Engineering Education, 2015, 23(3), 403-411.
5. Metraglia, R., Baronio, G., Villa, V. Learning Levels in Technical Drawing Education: Proposal for an Assessment Grid Based on the European Qualifications Framework (EQF). International Conference on Engineering Design, ICED 11, Vol. 8, Lyngby/Copenhagen, Denmark, August 2011, pp. 161-172.
6. MOODLE: https://moodle.org/?lang=it accessed 21/04/2016
7. CINECA: http://www.cineca.it/it/content/e-learning accessed 21/04/2016
8. Barr, R.E. The current status of graphical communication in engineering education. Frontiers in Education, FIE 2004, Vol. 3, Savannah, GA, October 2004, pp. S1D/8.
9. Barr, R. Engineering Graphics Educational Outcomes for the Global Engineer: An Update. Engineering Design Graphics Journal, 2012, 76(3), 8-12.
10. Baronio, G., Motyl, B., Paderno, D. Technical Drawing Learning Tool - Level 2: An interactive self-learning tool for teaching manufacturing dimensioning. Computer Applications in Engineering Education. DOI: 10.1002/cae.21728
11. Metraglia, R., Villa, V., Baronio, G. and Adamini, R. High school graphics experience influencing the self-efficacy of first-year engineering students in an introductory engineering graphics course. Engineering Design Graphics Journal, 2015, 79(3).
Gamification in a Graphical Engineering course
- Learning by playing

Valentín GÓMEZ-JÁUREGUI1*, Cristina MANCHADO1, César OTERO1


1
EGICAD Research Group, Universidad de Cantabria, ETSIIT, Escalera C, Planta -2, Avda.
Los Castros s/n, 39005 Santander (Spain)
* Corresponding author. Tel.: +34-942-20 67 57. E-mail address:
valen.gomez.jauregui@unican.es

Abstract Even though engineering is a vocational degree, students need to be


stimulated so that they are more participative and active. New pedagogical meth-
odologies have been proposed to inspire and motivate students while training to
become professionals in the engineering field. One of them, gamification, is based
on the use of game design elements in non-game contexts, like learning environ-
ments. Several studies have dealt with the use of gamification in engineering
courses, 87% of them reporting that their implementation had some degree of pos-
itive outcome. This paper shows the experience of game-based learning applied to
a Graphical Engineering course in the second year of the Mechanical Engineering
curriculum at a Spanish university. Several activities were developed during the
academic course, in which students had to face different kinds of tasks: competitions, simulations, research tournaments, pokes, social forums, survey games, etc.
As participation was voluntary in most cases, the current study is focused on the
individual perception of the students, rather than on the overall learning outcomes
of the group. The results of this experience based on “Learning by playing” have
been positive, according to the lecturers' and students' perception. The number of students considering the game-based activities as beneficial or entertaining is between 3 and 10 times higher than the number who think the opposite. The main advantages of gamification have thus been increasing the attendance ratio of the class, enhancing interest in the topics of the lectures, bringing fun and joy to the classroom and

Keywords: Teaching; Graphical Engineering; Gamification; Drawing; Represen-


tation.

© Springer International Publishing AG 2017 915


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_91
916 V. Gómez-Jáuregui et al.

1 Introduction

One of the main challenges of engineering schools is how to teach students to ap-
proach the identification and solution of the types of problems that they will expe-
rience as practicing professionals. As an opposite approach to the traditional mas-
ter lectures, there is a need to engage students to be more participative and more
active. Even though engineering is a vocational degree, it is essential to stimulate
and encourage students to train themselves, in order to be prepared to solve the
problems that they will face in real world situations.
In the last decades, new pedagogical methodologies have been proposed to in-
spire and motivate students while training them to become professionals in the en-
gineering field: flipping classroom, problem based learning, online courses, mo-
bile platforms, bite-sized learning, etc. One of these new trends, which is actually
not so new, is gamification; it can be synthesized as “Learning by playing”.
Gamification is considered to be the use of game design elements in non-game
contexts, like learning environments. We can define a game as a voluntary play
that is structured by a set of rules, where players may make choices that can influ-
ence the actions of other players and the overall outcome [1]. Therefore, they are
distinguished from simulations or simple plays. Application of gamification is
widely spread in many fields and sectors: organizational marketing [2], recruit-
ment and management of human resources [3], ERP systems [4], physical fitness
[5], traffic regulation [6], participation in electoral processes [7], learning, etc.
There are many interesting and illustrative examples of the practical utility of
gamification, not only in education. For instance, by means of solving scientific
puzzles, players identified the structure of a retrovirus enzyme [8]. By playing
FoldIt, a multiplayer online game, players solved the problem in 3 weeks, while
scientists had not been able to solve it for 15 years. In the field of engineering training, a free trial of the software Autodesk 3D Max was gamified by creating a captivating storyline. It created a healthy reward scheme to encourage users to finish
the trial and understand the program completely. In fact, the amount of experienc-
es reported in bibliographic databases about the use of games (mainly digital ra-
ther than non-digital) has grown exponentially over the last decade, especially in
the computer, mechanical, and electrical engineering disciplines.
Related to undergraduate engineering courses, several studies have dealt with
the use of gamification in the classroom. Bodnar et al. [1] identified and analyzed
191 of those studies, focusing especially on 62 of them that performed an assess-
ment of student learning outcomes. The main conclusions were that both student
attitudes and learning are improved by game-based activities (87% of papers stat-
ed that gamification had positive results in different aspects of the courses). How-
ever, studies of game-based learning usually lack accurate evaluation and empiri-
cal evidence, which does not help to assure the learning benefits for the students
beyond the stimulating factors.
Gamification in a Graphical Engineering … 917

In the specific area of graphical engineering, there have been some experiences
that have dealt predominantly with the improvement of visual spatial skills. Crown
[9] proposed and analyzed a methodology for improving visualization skills of en-
gineering graphics students using simple Javascript web based games. The author
concluded, based on performance in exams and positive feedback on surveys, that
game-based learning had a positive effect on students’ understanding and devel-
opment of visualization skills. Chen et al. [10] showed a case where mobile learn-
ing games were applied to the engineering drawing lessons, which enhanced stu-
dents’ self-learning capability and spatial imagination ability.
The potential of gaming for learning is enormous, as has been proved in the last
decades, mainly thanks to the development of the software industry [11]. Even
though there is an evident fascination of students for video-games and consoles,
this attraction is manifested towards any kind of game, including the non-digital.
However, there are more benefits apart from technical learning, such as im-
proving student retention, developing student abilities in teamwork and enhancing
communication skills. These skills are not always achieved in traditional lecture
classes, but they could be acquired in a multiplayer game environment. Addition-
ally, gamification in education can boost experimentation and creative problem
solving by developing situated understanding [12].
The main criticism of gamification in education has been the lack of empirical
and scientific evidence about the direct relationship between games and effectiveness in technical learning. Some authors [1, 13] have stated that most of
the studies and cases have focused more on engagement benefits than on actual
learning benefits. Additionally, it is also stated that games in education are an extrinsic motivation, which should be avoided because it could reduce students' intrinsic motivation for learning. This may not help in post-collegiate professional learning, where there may not be such an extrinsic and attractive incentive.

2 Gamification experience in Graphical Engineering

This paper shows the experience of gamification applied to teaching Graphical


Engineering, a subject related to drawing and representation, included in the se-
cond year of the Mechanical Engineering curriculum at the University of Canta-
bria (Spain). The approach of this paper focuses more on the subjective perspective of the students and lecturers, from a pedagogical point of view, than on the objective learning outcomes. In fact, the objective of this case study is not to find out whether game-based implementations increase student learning, but to assess participants' satisfaction in terms of usefulness and motivation.
Several activities are developed during the academic course, in which students
have to face different kinds of tasks: competitions, simulations, research tourna-
ments, pokes, social forums, survey games, etc. One of the voluntary games pro-
posed in the classroom is the Championship of Limits and Fits, conceived as a
play-off tournament to stimulate the students to practice calculations of the ISO


system of limits and fits. This game is an analogy to the traditional “Battleship”, a
guessing game where the players have to find the secret settings of the opponent.
In “Battleship”, the player arranges ships and records the shots by the opponent,
while in our game the player arranges 10 preferred fits and records the guesses of
the other player. In each round, each player takes a turn to announce 5 targets of
the opponent's list of preferred fits. The opponent announces whether or not any of
the fits have been correctly guessed; the winner is the player who has guessed the most fits and passes to the next round.
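The round-scoring rule described above can be sketched in a few lines. This is an illustration only: the fit designations, the `round_score` helper and the sample lists are invented for the example, not taken from the actual classroom material.

```python
# Hypothetical sketch of one round of the Championship of Limits and
# Fits (Battleship-style). All names and lists are invented.

def round_score(secret_fits, guesses):
    """Count how many guesses hit the opponent's secret list of fits."""
    return len(set(secret_fits) & set(guesses))

# Each player secretly arranges 10 preferred ISO fits...
player_a = ["H7/g6", "H7/h6", "H7/k6", "H7/p6", "H8/f7",
            "H7/n6", "H7/s6", "H11/c11", "H8/h7", "H7/f7"]
player_b = ["H7/g6", "H7/js6", "H7/m6", "H7/p6", "H9/d9",
            "H7/h6", "H8/e8", "H11/h11", "H7/u6", "H8/f7"]

# ...and in each round announces 5 targets from the opponent's list.
a_guesses = ["H7/g6", "H7/m6", "H9/d9", "H7/r6", "H8/e8"]
b_guesses = ["H7/k6", "H7/h6", "H8/f7", "H7/e7", "H6/g5"]

score_a = round_score(player_b, a_guesses)  # hits scored by player A
score_b = round_score(player_a, b_guesses)  # hits scored by player B
winner = "A" if score_a > score_b else "B" if score_b > score_a else "tie"
print(score_a, score_b, winner)  # → 4 3 A
```

The player with more correct guesses wins the round and advances, exactly as in the play-off format described above.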
There is also a Tournament of Video Search of Fabrication Processes, where
students must find interesting videos (in YouTube or anywhere else) showing fab-
rication processes (e.g. milling, extrusion, 3D printing, forging, etc.). A leader-
board is created depending on the number of different videos, different processes,
non-repeated links, singular procedures, etc.
In the Forum of Doubts of the learning management system (Blackboard) stu-
dents send posts with questions that would usually be addressed to the lecturers.
The rest of the students have the chance of replying to those questions, while their
answers are moderated and assessed by the teachers. The students with good scores obtain a positive rise in their final marks.
Students can also participate in the forum of Proposals of Trivial Questions.
There is a discussion list where anyone can upload interesting and complete mul-
tiple-choice questions (along with their wrong and correct answers) related to the
different topics of the course. Some of the most remarkable questions are chosen
and included in the tests of the final evaluation exams.

Fig. 1. 3D model in Autodesk Inventor by the winner of the Contest of 3D Modelling of 2015
(left). ABS plastic mock-up, produced and assembled by the student, before painting (right).

During the whole semester there is an open Contest of 3D Modelling, where


students can submit any mechanism or assembly modelled and assembled with the
CAD software used in the course (i.e. Autodesk Inventor). The last week of the
course, all the enrolled students can vote anonymously for the best and nicest pro-
posal. The winner of the contest receives as a prize the 3D-printed version of
his/her model, being responsible for the correct fabrication process and assembly
of the mock-up (Figure 1).
There have been several previous experiences where gamification was focused
on finding mistakes, errors or bugs, e.g. Microsoft launched a game-like activity to
identify mistakes in Windows OS [8]. In our case, we proposed to search for mis-
takes and errors in the bibliography and lecture notes, giving recognition to the
students that helped significantly in the task.
The use of mobile smartphones in the classroom has been a vexing problem since the emergence of this technology in recent years. However, many experiences have shown that the situation is worse if they are banned than if they are included as an additional pedagogical tool in the classroom. In this experience, the
use of personal mobile phones is required and necessary, not only at home but also
in the classroom. For instance, their devices let them explore, by means of 3D
models represented in augmented reality (AR), the mechanical components and
mechanisms explained during the lectures [14]. AR serves as a gamified technolo-
gy gateway used by students to familiarize themselves with this technology.
Another useful tool, used also by means of their smartphones, is the application
Socrative, which helps the lecturer to visualize the students’ understanding. In this
case, the game consists of trying to guess the right answer to a question given by
the teacher; students reply on their phones using the app, anonymously or not, and
then check if their answer was correct or not and compare it with those of the oth-
er students. At the end of every lecture, there is a winner depending on the number
of questions correctly answered.
All of these activities have different kinds of rewards: some of them are aca-
demic incentives, materialized in better marks; some others are material, e.g. Con-
test of 3D Modelling; and some others are intangible, e.g. public recognition for having done the best work of the class.
Participation in the games proposed during the course is not always compulso-
ry, although it is always very advisable, as it is an empirical manner of evaluating
the interest and engagement of the students. According to some authors [15], mak-
ing a game compulsory could be considered as a contradiction to the factors that
make the participation entertaining. However, in the context of an engineering
course, it is not always possible to leave the decision of taking part in the activities up to the students, as that may notably influence their learning process.

3 Survey

The study has been conducted with second year students of the Graphic Engineer-
ing subject, belonging to the Mechanical Engineering Degree at Universidad de
Cantabria (Spain). The total population consisted of 55 students. Controls or com-
parison groups were not used, because it was considered that none of the students should be excluded from enjoying the playful time of the activities. The satisfaction
questionnaire was completed by a total of 51 students (93% of the 55 active students of the course); 22% of the students were women and 59% of the students were familiar with games or videogames.
The questions of the survey were grouped in pairs, two for each different game-
based activity. For instance, Q1 asked if the student had participated actively and
with interest in the Championship of Limits and Fits; Q2 asked whether practicing exercises through that Championship was more useful and enjoyable than doing them conventionally, and to what level (1: Strongly disagree – 5: Strongly agree); Q3 and
Q4 were analogous but related to the Tournament of Video Search of Manufactur-
ing Processes; Q5 and Q6 for the Forum of Doubts; Q7 and Q8 for the Proposals
of Trivial Questions; and Q9 and Q10 for the Contest of 3D Modelling.

4 Results

The results of the user satisfaction questionnaire are shown in the following figures. As can be seen in Figure 2, there are far fewer students displeased with the gamified experience than students who consider it either useful or enjoyable. In fact, Figure 3 shows that the number of students considering the game-based activities beneficial or entertaining is between 3 and 10 times higher than the number who think the opposite. It should be remarked that there is no particular relationship between the participation ratio of the voluntary game-based activities and the students' degree of satisfaction. As can be seen in Figure 4, a range of 43-67% of students consider the proposed tasks useful or very useful, while their participation varies between 18% and 94%.

Fig. 2. Answers to student’s satisfaction questionnaire about usefulness and enjoyment.

There is no relationship between the students who do not play games or video games and their perception of the usefulness of these game-based activities. It is worth noting that 82% of the women in this group (9 out of 11) do not usually play any game or videogame. However, the difference between the satisfaction ratios for men and women is not very significant: 3.6/5 for men and 3.3/5 for women.
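Figures such as the positive-to-negative ratio and the mean satisfaction score reduce to simple aggregation of the 1-5 Likert answers. A minimal sketch, using invented answer counts rather than the survey's actual data:

```python
# Hypothetical aggregation of 1-5 Likert answers for one question.
# The counts are invented for illustration, NOT the survey's data.
counts = {1: 2, 2: 3, 3: 11, 4: 22, 5: 13}  # answers of 51 students

n = sum(counts.values())
mean = sum(score * k for score, k in counts.items()) / n

negative = counts[1] + counts[2]  # "disagree" side of the scale
positive = counts[4] + counts[5]  # "agree" side of the scale
ratio = positive / negative

print(f"mean satisfaction: {mean:.1f}/5")
print(f"positive/negative ratio: {ratio:.1f}")
```

With these invented counts the mean is 3.8/5 and positive answers outnumber negative ones 7 to 1, the same kind of summary reported for the real questionnaire.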

Fig. 3. Comparison of answers to student’s satisfaction questionnaire about usefulness and en-
joyment.

Fig. 4. Answers to student’s satisfaction questionnaire: relationship between participation and


opinion about usefulness and enjoyment.

5 Conclusions

The results of this experience based on “Learning by playing”, developed gradually over the last academic years, have been very positive in general terms. Benefits have been observed not only in the students' perception of their own performance, but also in the teaching methodologies used by the lecturers. The main advantages of gamification have thus been increasing the attendance ratio of the class, enhancing interest in the topics of the lectures, bringing fun and joy to the classroom and giving the initiative to the students. In conclusion, this paper
explores a novel approach that can be very relevant for similar courses related to technical drawing and graphical representation in any other engineering degree.
In future analysis about game-based learning, it would be interesting to in-
crease sample sizes and focus not only on students’ attitudes and perceptions but
also on learning outcomes. In order to do so, further research will take into ac-
count final students' marks in years with and without gamification. An alternative,
discarded in the current experiment, would be using control groups, although it
would deprive some students of enjoying the benefits of this methodology.

References

1. Bodnar, C. A., Anastasio, D., Enszer, J. A. and Burkey, D. D. (2016), Engineers at Play:
Games as Teaching Tools for Undergraduate Engineering Students. Journal of Engineering
Education, 105: 147–200. doi: 10.1002/jee.20106
2. Zichermann, G. and Cunningham, C. Gamification by Design: Implementing Game Me-
chanics in Web and Mobile Apps. O'Reilly Media, Inc., 2011. ISBN 1449315399.
3. Herger, M. Gamification in Human Resources. Enterprise Gamification Vol. 3. 2014.
(EGC Media) ISBN 1500567140.
4. Herzig, P., Strahringer, S. and Ameling, M. Gamification of ERP Systems - Exploring
Gamification Effects on User Acceptance Constructs. Multikonferenz
Wirtschaftsinformatik 2012 (MKWI'12).
5. Hamari, J. and Koivisto, J. Working out for likes: An empirical study on social influence
in exercise gamification. Computers in Human Behavior, 50, 2015, pp. 333–347.
6. Schroeter, R., Oxtoby, J. and Johnson, D. 2014. AR and Gamification Concepts to Reduce
Driver Boredom and Risk Taking Behaviours. In Proceedings of the 6th International Con-
ference on Automotive User Interfaces and Interactive Vehicular Applications, 1–8. ACM.
7. Zichermann, G. Rethinking Elections With Gamification. The Huffington Post. 20th No-
vember 2012.
8. Park, H. J., & Bae, J. H. Study and Research of Gamification Design. International Journal
of Software Engineering & Its Applications, 2014, 8(8). 19-27.
9. Crown, S.W. Improving visualization skills of engineering graphics students using simple
JavaScript web based games. Journal of Engineering Education, 2001, 90(3), 347–355.
http://dx.doi.org/10.1002/j.2168-9830.2001.tb00613.x
10. Chen, H., Chen, L., Chen, J., and Xu, J. Research on mobile learning games in engineering
graphics education. Lecture Notes in Electrical Engineering, 2013, 269, pp. 2981–2986.
http://dx.doi.org/10.1007/978-94-007-7618-0_379
11. Sitzmann, T. A meta-analytic examination of the instructional effectiveness of computer-based simulation games. Personnel Psychology, 2011, 64(2), pp. 489–528. http://dx.doi.org/10.1111/j.1744-6570.2011.01190.x
12. Shaffer, D. W., Halverson, R., Squire, K. R., and Gee, J. P. Video games and the future of
learning. Working Paper No. 2005–04. Wisconsin Center for Education Research. 2005.
13. Gee, J. P. Reflections on empirical evidence on games and learning. Sigmund Tobias and
J. D. Fletcher (Eds.). Computer games and instruction, 2011, pp. 223–32.
14. Gómez-Jáuregui, V., Manchado, C. and Otero, C. An Experiment with Augmented Reality applied to Education in Graphic Engineering. Proceedings of the XXV International Conference On Graphic Engineering. 17–19 June 2015, San Sebastián.
15. Salen, K. and Zimmerman, E. Rules of play: Game design fundamentals. Cambridge, MA:
MIT Press. 2003.
Reliable low-cost alternative for modeling and
rendering 3D Objects in Engineering Graphics
Education.

Santamaría-Peña, J.1*; Benito-Martín, M.A.1; Sanz-Adán, F.1; Arancón, D.1; Martinez-Calvo, M.A.1
1 Departamento de Ingeniería Mecánica. Universidad de La Rioja (Spain)
* Corresponding author. Tel.: +034-941299530; fax: +034-941299794.
E-mail address: jacinto.santamaria@unirioja.es

Abstract

In engineering schools there is a growing concern to offer students the latest 3D CAD modeling software responding to advanced design needs. The latest software is increasingly comprehensive and complex. However, the time available for teaching such software is scarce, and students rarely get to know more than 20% of its potential for a particular area. Nevertheless, other 3D CAD solutions are on the market: low cost, very friendly, with easy-to-learn interfaces and great potential which, without having all the features of more advanced software, can fully meet the teaching requirements of this area. Since the introduction of the EHEA (European Higher Education Area), the target of teaching in an engineering school in the field of 3D CAD modeling should be to develop in students, in the time available, the maximum capabilities and skills in three-dimensional geometric design. And this, when well planned, can be achieved with the proper use of low-cost software. The authors have analyzed the potential of various freeware or shareware packages, well suited to the typical subjects of Engineering Design, in the areas of 3D CAD design and rendering.

Keywords: 3D CAD models; Engineering Graphics Software; 3D Modeling; 3D Rendering

1 Introduction

The teaching programs of subjects related to engineering design at Spanish universities include the use of software to develop their practical side. These software packages are usually geared largely to the world of manufacturing and/or simulation.
The general trend is to use, for teaching purposes, large software packages that respond to the learning needs arising from the different subjects [1,2].

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_92

This decision has its advantages, as it focuses students on the knowledge and mastery of a few design packages, which they come to master to some extent. However, the decision to focus on learning a few design packages, even powerful ones, means that students do not get to know other solutions and other methodologies that are equally interesting [3,4].
For basic subjects of 2D and 3D design, the use of software like AutoCAD® and MicroStation® is widespread. For somewhat more specialized subjects, other powerful packages based on the previous ones (Inventor®, Architecture®, Mechanical®, Civil3D®, ...) are used, all following the "imagine, design, create" philosophy. Other powerful software packages typically used in our universities rely on parametric design and are basically oriented to manufacturing, such as PTC Creo®, Catia® and SolidWorks®. In general, all of these packages are based on a first phase of 3D design of the components, followed by assembly of the elements and the use of simulation tools for validation. In addition, these packages usually have built-in surface rendering utilities, more or less powerful, to give more realism to designs.
In this paper, we intend to highlight other 3D object modeling solutions existing on the market, at zero or low cost, which can be used for teaching in the university environment. These solutions are often technically limited in many ways, but offer students very fast learning curves. They generally incorporate very friendly environments, simplifying the design and rendering process of objects as much as possible.
The study will focus on four applications oriented to object design (2D and 3D) and three oriented to surface rendering, seeking to highlight their potentialities and describe their limitations:
- OBJECT DESIGN: FreeCAD, Rhinoceros, nanoCAD and SketchUp.
- RENDERING: Blender, KeyShot and Kerkythea.

In addition to describing the advantages and disadvantages of each of the tools, an example of a three-dimensional assembly generated with each of them in different areas is included, together with a 3D model that the reader can manipulate on the image itself.

2 Design Software.

2.1 FreeCAD [5,6]

It is a cross-platform (Windows, Mac and Linux) 3D parametric modeling software, free, open source and strongly oriented to mechanical design. It has a large community of users and programmers behind it that keeps it constantly updated, and it does not require high-end computers. It presents ease of use, a good interface, quick learning and many scripts with additional functionality. The import and export capabilities to other formats are high.
As disadvantages might be cited: the great difficulty of animating objects easily (only via Python programming) and the scarcity of simulation tools.
This software has complementary modules for dimensioning, dependency graphs and animation.

Fig. 1. Screw M8 on FreeCAD. Fig. 2. Screw M8 as 3D model.

2.2 NanoCAD

It is basically a free 2D CAD software (free up to version 5.0, build 2000), with an environment very similar to AutoCAD, although it is much less powerful in all its tools. Managed by the Russian company Nanosoft, it has a user and developer community of medium size. The 3D functionality requires a fee, and the software is only translated into English and Russian.

Fig. 3. Crane hook on 2D NanoCAD. Fig. 4. Floor/Front views on NanoCAD.


926 J. Santamaría-Peña et al.

2.3 Google SketchUp [7,8,9]

Although SketchUp is not designed as engineering software, it has very powerful tools for geometric design in general and particularly for architecture. It is very easy to use and learn, with a very simple interface, and its free version is very powerful. It has a large network of users who provide an important library of ready-made 3D models. It is not open source and has paid versions. Animations can be made with the created models by setting the camera path, and videos can be recorded.

Fig. 5. Outside house view on Sketchup. Fig. 6. Outside house view 3D Model.

2.4 Rhinoceros [10]

It is not free, but it is a low-cost software. It is oriented to general CAD design and specifically to surface handling. It has large import and export format capabilities and high-quality tools for rendering and animation. It is not open source. The interface is unfriendly and leaves much room for improvement. It has a great capacity to generate animations, but only by acquiring paid add-ons.

Fig. 7. Slider-Crank on Rhino. Fig. 8. 3D Model of Slider-Crank.



3 Rendering Software.

3.1 Blender [11]

It is a free and open source software, with NURBS-based 3D modeling and a very powerful rendering engine, "Cycles Render". A multitude of simulations with real physics can be performed, with great precision and quality. Blender has many external add-ons, both free and paid. It is a somewhat heavy software, but with a very fast learning curve. A dynamic rendering view can be set, and multiple simulations can be run simultaneously, interacting with each other. It offers a variety of simulations: rigid or deformable solids, particle simulations, fluids, etc.
As disadvantages might be cited: the impossibility of geometrically precise design, as in CAD software, and the need for a high-end computer even for mid-level simulations. It has high RAM, processor and hard-drive requirements for high-quality fluid simulations.

Fig. 9. Rendering collision with "Cycles Render". Fig. 10. 3D Model of simulation.

3.2 KeyShot [12]

It is not a free software, but it generates high-quality rendering results in a short time. It has good performance, and the computing power used can be controlled. It also has high-quality textures, and it is very easy to create animations. Files are directly imported from the main CAD software, and a dynamic rendering preview is displayed. As disadvantages can be cited: it can generate a few compatibility issues with Windows 10, and it is not compatible with hardware acceleration, so it only uses the CPU, which means that to reach maximum performance the processor must always work at 100%.

Fig. 11. Rendering of ARDUINO UNO. Fig. 12. 3D Model of ARDUINO UNO [13].

3.3 Kerkythea [12]

It is a free rendering software, with very good quality finishes, and it is simple to use. It has advanced tools for creating new textures and lighting. The computing power used during rendering can be controlled. It has a very basic interface and is closely related to SketchUp.
As main disadvantages, it could be mentioned that it has no dynamic render view and it is not compatible with hardware acceleration. There is an improved paid version, called TheaRender.

Fig. 13. Indoor rendering with external light. Fig. 14. Indoor rendering with interior light.

4 Analysis and discussion.

Regarding the 3D modeling and design software studied, FreeCAD is noteworthy for alternative teaching purposes, because it really is very similar to the typical 2D/3D CAD software used in the subjects of the first years of engineering. In the field of Architecture and Civil Engineering, it would be interesting to incorporate SketchUp, because it provides some really interesting design and rendering tools for 3D objects and environments.
The nanoCAD and Rhinoceros software are discarded, the first for being too basic and the second for having a price similar to traditional solutions, which already offer powerful and affordable educational licenses.
Regarding the rendering and simulation software analyzed, we must emphasize and advise Blender, as it has many possibilities for teaching advanced-level courses. Its possibilities for simulations endowing objects with real physical properties, and the quality achieved with them, make it a necessary software in the field of graphic engineering and simulation.
These graphic solutions may not replace traditional teaching software at university level, but they can be a stimulus for students to supplement their training. It would be very interesting to incorporate them into university training programs, even if they are used only as alternatives to the basic software.
Two alternatives appear:
1. To continue as currently designed, studying a single software in depth.
2. To study different software and different tools, although in less depth.

5 Conclusions.

The main objective of this paper has been to seek viable alternatives for 3D object modeling and rendering in the field of university teaching, and to analyze their possibilities of inclusion in the teaching program.
Compared with the software packages commonly used in both engineering and architecture studies, there are low-cost solutions that can bring another vision and even other useful tools in these fields.
We must recognize the difficulty of incorporating into teaching graphic design software other than that traditionally used, basically due to time constraints and the rigidity of subject schedules. But such software should not be ruled out as additional training or as individual or group diversification tasks. Free tools such as FreeCAD, SketchUp and Blender can clearly fulfill this purpose.
Our educational experience at the first levels of Graphic Engineering is that students enjoy getting to know several types of software more, and that this achieves a better adaptation to the challenges of the future.

References

1. M. X. Luo, "Thinking of Engineering Graphics Teaching Management Model and Teacher Team Building", Advanced Materials Research, Vols. 591–593, pp. 2190–2193, 2012.
2. Wang B. An Approach for Engineering Graphics Education Reform in Modern Information Technology. In Advances in Computer Science, Environment, Ecoinformatics, and Education, 2011 Aug 21 (pp. 42–46). Springer Berlin Heidelberg.
3. Hong S, Fei G, Xia C. Discussion on the teaching of many-class-hour mechanical drawing course with 3D CAD [J]. Journal of Graphics. 2012;1:019.
4. Marunic, Gordana, and Vladimir Glazar. "Spatial ability through engineering graphics education." International Journal of Technology and Design Education 23.3 (2013): 703–715.
5. Obijuan Academy. Ed. Dr. Juan González Gómez. Published 14 July 2014. Web. 10 Nov. 2015. <http://www.iearobotics.com/wiki/index.php?title=Obijuan_Academy>.
6. Collette, Brad, and Daniel Falck. FreeCAD [how-to]: solid modeling with the power of Python. Birmingham: Packt Pub, 2012.
7. Bonnie Roskes, 2015. SketchUp 2015 Hands-On: Student Coursebook. 8th Edition. 3DVinci.
8. Google SketchUp and Kerkythea – fast start 4Architects :: SketchUp 3D Rendering Tutorials by SketchUpArtists. [ONLINE] Available at: http://www.sketchupartists.org/tutorials/sketchup-and-kerkythea/sketchup-and-kerkythea-fast-start-4architects/. [Accessed 20 December 2015].
9. Bonnie Roskes, 2015. SketchUp 2015 Hands-On: Basic and Advanced Exercises. 8th Edition. 3DVinci.
10. Kley, Michiel. Working with Rhinoceros 5.0. Tilburg, the Netherlands: Rhinoacademie, 2013.
11. Simonds, Ben, and Thomas Dinges. Blender master class: a hands-on guide to modeling, sculpting, materials, and rendering. San Francisco, Calif: No Starch Press, 2013.
12. Jei Lee Jo, 2012. KeyShot 3D Rendering. Edition 1. Packt Publishing.
13. Arduino Uno. Andrew Whitham. Published 3 January 2014. Web. 25 Apr. 2016. <https://grabcad.com/library/arduino-uno-r3-1>.
Section 6.2
Teaching Product Design and Drawing History
How to teach interdisciplinary: case study for
Product Design in Assistive Technology

Guillaume THOMANN1,2*, Fabio MORAIS3 and Christine WERBA3


1 Univ. Grenoble Alpes, G-SCOP, F-38000 Grenoble, France
2 CNRS, G-SCOP, F-38000 Grenoble, France
3 Authors two and three affiliation
* Corresponding author. Tel.: +33 4 76 82 70 24; fax: +33 4 76 57 46 95. E-mail address: guillaume.thomann@grenoble-inp.fr

Abstract

In the medical field, Assistive Technology (AT) is one of the most dynamic areas, due to the evolution of the population (elderly and disabled people). Dedicated products are complex to design and to manufacture because of the end users' specificities and particularities. The integration of multiple competencies in the design process is necessary to be able to define a complete list of requirements. This collaborative work, with the involvement of the end user, necessitates a reflection about the design method. Statistics on product abandonment illustrate the difficulty for companies to create a favorable working environment taking into account multiple parameters from various fields of expertise.
Based on these difficulties, the present study aims to develop an experimental interdisciplinary teaching situation focused on the use of Rapid Prototyping. The AT field has been chosen to develop this design process teaching. The favorable pedagogic context takes place in the Industrial Department of the Federal University of Paraiba. The proposal was to involve several departments from this university in organizing a complete course with the objective of teaching interdisciplinarity at graduation level: the initial idea was to give all necessary resources to the groups of students (two types of progressive workshops during the course), who chose the ones necessary to design and prototype something adapted to the user's requirements.

Keywords: Interdisciplinarity, Assistive Technology, User Centered Design, Design Methodology, Teaching Product Design

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_93

1 Introduction

In more and more cases, products have to be developed in collaboration between actors from several professional domains. In these cases, we speak of interdisciplinarity. The medical field is a typical context where various actors have to work together to elaborate a common list of requirements. Around the final user or a group of users, the aim of the design team is to create a favorable collaborative environment for the development of the complex product or system. The collaboration between actors from different disciplines can be facilitated by the use of specific tools and collaborative management strategies. One difficulty along the whole development process is to maintain the motivation of all the actors; another is to combine the different objectives into one unique requirements list. These daily issues constitute challenges for professionals in industry. The role of universities is to prepare future managers for this complex and realistic situation. Students in universities are increasingly interested in the medical field and the innovations linked to this area. Surgical robotics, new technologies for rehabilitation, smart materials and products like MEMS (Micro Electro Mechanical Systems) or Lab-on-Chip, connected devices, etc. are some examples that prove the growth of the medical market and justify the students' motivation. The objective of the paper is to share experience about the organization of an interdisciplinary course proposed in the university. It allows students to meet professors and health professionals from many disciplines in a design process context. The proposed study aims to imagine and propose a design process that motivates students from various engineering departments to experiment with tools, methodologies and user-oriented practices.

1.1 Design Methodologies

The field explored in the present case study is design methodology for products in assistive technology (AT). Especially in this specific domain, final products have to match the user requirements perfectly. They give several benefits to people in difficult situations, but research studies have shown many problems with the abandonment of AT products [1]. The most common is the problem of accepting one's own limitations. The second reason concerns the product itself. The characteristics identified concerning the product are cost to purchase and to maintain, durability, reliability, ease of use and transport, safety, efficiency and aesthetics. And lastly, researchers listed four other factors: (1) change in the needs of the user, (2) ease of purchase, (3) device performance and (4) consideration of user opinion in the selection process. Other researchers have analyzed seventeen projects carried out by students designing for and with disabled children [2]. In this situation, a coding scheme was built based on a review of the literature. This was then improved through direct observation of the design reports. After analysis of the reports, three groups of difficulties were sorted: managing interactions with the disabled children, difficulties with respect to evolving user needs, and identifying the children's relevant abilities.
On the one hand, the products never fully satisfy disabled users; on the other hand, the design process is not easy to understand and to apply. The user-centered approach has the objective of improving the usability of the product as a quality factor for disabled users. The User Centred Design (UCD) methodology provides five points the design project has to take into account: knowledge of end users (tasks, environments), active participation of users (needs and requirements), the appropriate distribution of functions between users and technology, an iterative approach to design, and the intervention of a multidisciplinary team [3]. The user-centered design cycle is broken down into six steps. It is an iterative process which ends when the design solution meets the requirements of the end user [4]. Prototypes have to be used to create interactions between the students' groups and the users.

1.2 Design and rapid prototyping

The complexity of current problems requires that design processes adapt to current demands. Multidisciplinary and participatory design is a trend without return. The top-down approach, in which designers created their solutions and delivered them to users, is being replaced by the bottom-up approach: users should be involved from the beginning of the design process, not just in the testing phase. As quoted in [5], "The design is a participatory process in which the designer makes a partnership with problem owners (consumer, customer, user, etc.)."
Experts often work disconnectedly, with different goals, on timelines far apart, following logics that are not properly concatenated, and without communicating with each other. "Every expert has a limited object-world, with assumptions, rules and particular goals. They see the design object in different ways, according to the pragmatic core of their discipline" [6]. The same author mentions that "the object-worlds divide the design in different but not unrelated kinds of efforts." This implies that, whatever the object to be designed, it cannot be thought of as a simple overlap of technical systems. It is necessary to integrate the different parts. For this, the process actors should coordinate with each other, seeking a set of representations and building a shared context of logics [7][8].
Working in design using intermediate objects is fundamental. They make it possible to achieve this joint logic and a leveling of ideas. The term Rapid Prototyping generally refers to prototype building methods using additive systems [9]. The time between the appearance of a possible solution and its transformation into something tangible and material is minimal: it is just the time for the 3D printer to receive the digital model and transform raw material into a product. This considerably broadens the field of possibilities.

Prototyping in the medical field has made it possible to anticipate both usability and functionality tests [10][11][12].
Thus, users of assistive technology products can interact with ideas even before they are completed or manufactured at scale. The benefits are (1) a decrease in design errors or inadequacies, (2) greater acceptance by end users, and (3) the possibility of quickly remaking or modifying a product, or incorporating something proposed by anyone involved in the project.
Facilitating the involvement of the final users (musicians and disabled children) in the design process using rapid prototyping technology has been shown by [13]. The proposition was to use several rapid prototyping technologies depending on the objective to demonstrate and the process phase. Classical machining, a Z-printer and two types of Fused Deposition Modelling (FDM) were used during the development process.
To summarize, designing for and with final users imposes specific tools and methodologies. The user is rarely unique. One essential step at the beginning is identifying the variety of users as well as the final environmental situation in which the product will be used. From this point, the professors' objective is to teach an optimal design process to students. It implies knowledge of the users, the participation of experts from various disciplines, and control of the technology used for rapid prototyping.
In this paper, the authors present an experimentation at the Technological Center of the Federal University of Paraiba (CT/UFPB), Brazil. This inter-department course was organized by three professors and allowed groups of students to develop AT products in optimal conditions.

2 Course Construction and objective

The main objective of the course was the opportunity to meet professors and students from other disciplines in a design process context. The people responsible for this course, entitled "Metodologia de Projeto Multidisciplinar focado em Tecnologia Assistiva" ("Methodology of Multidisciplinary Project focused on Assistive Technology" in English), wanted to break discipline barriers between departments.
During the three-month duration of the course, all the participants shared experiences with people from other departments during their product development process. This context is a necessity in AT development. Moreover, the rapid prototyping technology (FDM 3D printer) and one dedicated expert student were available at every step of the project. Each group had one product to design and develop in relation with one or more users. Each group was constituted of several students and professors from different departments.

3 Application: “Methodology of Multidisciplinary Project focused on Assistive Technology”

The initial proposal of building multidisciplinary teams to solve problems related to Assistive Technology was made by two professors from Ergonomics and Industrial Engineering (CT/UFPB, Brazil) and one from Mechanical Engineering (Grenoble University, France).
This proposal of an exceptional course open to all volunteers from the UFPB overcame administrative barriers. The professors obtained agreements from many engineering departments, the medical department and the doctoral school. This allowed the concrete formation of groups heterogeneous in both training level and discipline (Table 1). Five groups of 8 to 9 students were structured.

Table 1. Participants list of the course.

Discipline                     Professors  Master thesis  Graduate  Graduated  Total
                                           students       students
Industrial Engineering (IE)    02          09             06                   17
Mechanical Engineering (ME)    02                                              02
Design (Des)                   02          03             03                   08
Occupational Therapy (OT)      02          01                                  03
Physiotherapy (Physio)         03          02             10                   15
Architecture (Archi)                       01             01                   02
Informatics (Info)                         01             01                   02
TOTAL                          11          17             21        01         50

The course was divided into 11 weeks. Each week contained brief presentations on topics relevant to the projects (main workshops), as well as group interaction time. Each meeting lasted about 4 hours. At these meetings, the project groups proceeded with their project, according to the expertise of each participant in the group.
All course participants were classified according to their graduation course, and a balanced distribution between the groups was promoted. Groups were constituted based on several criteria: project motivation, expertise, training level and discipline. One of the main objectives of the course, multidisciplinarity, was thus achieved. Unfortunately, not every discipline could be represented within each group.
Before the first meeting, the students and professionals were invited to propose one product that could be developed. Projects were chosen based on several criteria: presence and participation of user(s), possibility to prototype quickly, and necessity of a multidisciplinary project team. Moreover, the unique focus was to design and develop an AT product. Many people brought demands that were already in development; others brought unknown or not-yet-studied needs. As might be expected, the demands emerged largely from health professionals. At least one user participated in each design project.
Some experts were identified; they constituted the staff of the course. They were selected not only because of their mature expertise but also because of the lack of competencies compared to the number of projects. Their role was to help all groups, depending on their needs. The areas of expertise were: ergonomics, quality, ethics, digital modeling and rapid prototyping.

The weekly workshops were mainly given by the defined experts. They served to level knowledge on some important issues in the AT area. The first two workshops were given by professors:
- Introduction of the course: objectives, methodology, organisation, explanation of the workshops and the roles of the experts; choice of the projects and the groups.
- Presentation from professors of each department: design methodologies used, with user involvement or not, with examples in AT or not.
Other workshops were presented to share knowledge in complementary areas. All along the design process, and depending on the projects' main phases, the experts proposed interventions about UCD iterative methodologies, activity analysis, quality, management, rapid prototyping and CAD software, patents, ethics and scientific valorisation. Activity analysis was taught and directly applied in practice to the case situation under the supervision of the ergonomics professor. Time management was the responsibility of each group, and the first meeting with the health professional and the user allowed the design goals to be clarified.

4 Results

After the course, each group gave a final presentation describing the project objectives, the methods and techniques used in the product development, and the difficulties experienced. Time was a factor that hampered the good development of some projects, as did the dependency on the users' participation: some groups were dependent on the free time of the observed users. However, all groups reached a result. Concepts of the products were designed and sometimes prototyped. All of them met the objectives established initially. Table 2 shows the main results that were observed and discussed between the professors responsible for the course.

Table 2. Projects’ Main results.

Product and main goal: Building a device that steadily measures the external pressure of the cuff in patients who underwent tracheostomy.
Methods and techniques used: Activity Analysis. Direct observations of patients who underwent tracheostomy. Prototyping with alternative materials.
Problems encountered: The understanding was hard for the technological experts; a previous study about tracheostomy was required for engineers and designers.

Product and main goal: Creating a tablet support for individuals without coordination of the upper limbs who use a wheelchair.
Methods and techniques used: Activity Analysis. Direct observations of disabled people using tablets. Digital modeling and rapid prototyping.
Problems encountered: Absence of a mechanical engineer; a feeling for mechanical design could have helped in making other concepts.

Product and main goal: Designing a device for independent neatness (individual cleaning) after physiological needs.
Methods and techniques used: Activity Analysis. Digital modeling and prototyping (alternative materials and ABS plastic).
Problems encountered: Observing end users performing the task was very hard; a short time for prototyping and testing.

Product and main goal: Designing a packaging opener that helps people with upper limb disabilities.
Methods and techniques used: Activity Analysis. Digital modeling and sketch making.
Problems encountered: It was impossible to make functional prototypes due to the absence of constructive mechanical elements.

Product and main goal: Developing a toy that enables children with cerebral palsy to play.
Methods and techniques used: Activity Analysis. Survey with disabled people and their carers. Usability testing with a functional prototype.
Problems encountered: High diversity between the characteristics of people with cerebral palsy; limitations of the Sphero control software.

Given the broadly multidisciplinary composition of the groups, different methodologies were followed. Techniques, tools and models were used according to the experts' areas or to the situations that emerged; no common pattern was imposed.
Almost all groups used prototypes, although in different ways, and not every group used the 3D printer. The projects required an initial understanding of the relationship users have with the prototype. In the tracheostomy case, the prototype had to be tested with various users in different situations, and a prototype made of alternative materials was chosen because it had to be transportable. The Sphero Soccer already provided part of the solution; what remained to be designed was the site for the Sphero's use related to the needs of the disabled user. In that case, the choice of alternative materials was due to the paucity of ideas.
The groups with more professionals from technical areas (Engineering and Design) followed more specific and detailed steps with more restrictive decision gates. Project progress in these groups was noticeably more linear than in the groups using more iterative methodologies.
One lesson became apparent in all groups: dealing with people from different areas in the same project is not simple, and specific phases are necessary to share the tools and techniques of all the experts.

5 Observations and discussions

Two main profiles were identified as key to good project progress: project management and health professional. Each group was independent and used the tools and techniques it needed in its context. The negative feedback concerned (1) the lack of time (10 weeks) to carry out the project, (2) the poor anticipation of the ethics committee procedures for working with patients, (3) in some cases the number of members per group, and (4) the difficulty of meeting users from other departments and of organizing additional meetings during the week. The point highlighted was the presence of health professionals to interact with the user in the context of use. Despite the difficulty of evaluating the benefits for the students, all of them effectively used the tools they had learnt and experimented with user interaction through prototypes and activity analysis.

940 G. Thomann et al.

Conclusion

Mixing health professionals, disabled users, students and professors from different engineering departments is a challenge. With the objective of teaching interdisciplinarity, applying design methods and using tools from various disciplines is essential; moreover, students' motivation is a key factor. Teaching on a real case that promotes interaction with users can be an adequate context for meeting these criteria.

References

1. Phillips B. and Zhao H., Predictors of Assistive Technology Abandonment, Assistive Technology, Vol. 5, pp. 36-45, 1993
2. Magnier C., Thomann G. and Villeneuve F., Seventeen Projects carried out by Students Designing For And With Disabled Children: Identifying Designers' Difficulties During The Whole Design Process, Assistive Technology, Vol. 24, Issue 4, pp. 273-285, 2012
3. ISO 9241-210: International Organization for Standardization, Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems, 2010
4. Ma M.-Y., Wu F.-G. and Chang R.-H., A new design approach of user-centered design on a personal assistive bathing device for hemiplegia, Disabil Rehabil, Vol. 29, pp. 1077-1089, 2009
5. Simon H.A., The Sciences of the Artificial, 3rd ed., MIT Press, Cambridge, 1996, 215 p.
6. Bucciarelli L.L., Designing and learning: a disjunction in contexts, Design Studies, Vol. 24, No. 3, 2003
7. Wilkinson C.R., Applying user centred and participatory design approaches to commercial product development, Design Studies, Vol. 35, pp. 614-631, 2014
8. Toh C.A. and Miller S.R., How engineering teams select design concepts: A view through the lens of creativity, Design Studies, Vol. 38, pp. 111-138, 2015
9. Buswell R., Soar R., Gibb A. and Thorpe A., Freeform Construction: Mega-scale Rapid Manufacturing for construction, Automation in Construction, Vol. 16, pp. 224-231, 2007
10. Zhang S. et al., Application of Rapid Prototyping for Temporomandibular Joint Reconstruction, American Association of Oral and Maxillofacial Surgeons, Elsevier, 2011
11. Sherekar R.M. and Pawar A.N., Application of biomodels for surgical planning by using rapid prototyping: a review & case studies, International Journal of Innovative Research in Advanced Engineering (IJIRAE), ISSN: 2349-2163, Vol. 1, Issue 6, July 2014
12. Negi S., Dhiman S. and Sharma R.K., Basics and applications of rapid prototyping medical models, Rapid Prototyping Journal, Vol. 20, Iss. 3, pp. 256-267, 2014
13. Coton J., de Gois Pinto M., Veytizou J. and Thomann G., Design for disability: Integration of human factor for the design of an electro-mechanical drum stick system, 24th CIRP Design Conference, April 14-16, Milano, Italy, 2014
Learning engineering drawing and design
through the study of machinery and tools from
Malaga’s industrial heritage

Ladrón de Guevara Muñoz, M. Carmen1*; Montes Tubio, Francisco1;
Blázquez Parra, E. Beatriz2 and Castillo Rueda, Francisca2

1 University of Córdoba, Spain
2 University of Málaga, Spain

* Corresponding author. Tel.: +34-666-307151. E-mail address: clguevaramu@gmail.com

Abstract The aim of this paper is to present one of the lines of work developed by the research group of Graphic Engineering and Design: a work oriented to rescuing and recovering our industrial past through heritage retrieval, achieved virtually or, when possible, physically (buildings, machines, files, etc.) by students of Design and Mechanical Engineering. This retrieval is performed not only from the real estate perspective but, as the National Plan of Industrial Heritage indicates, focusing as well on movable property such as devices, tools and files, and most especially on machines, without disregarding intangible assets such as testimonies and institutions that could help retrieve our magnificent industrial past. This work is understood as vital from a double perspective: cultural heritage tourism and engineering students' practice. A study of movable properties such as machines, tools and files related to the ancient industries located in Malaga is presented here: the movable properties that once made Malaga one of the most industrialized cities in Spain and that are now included in the REVIMAQU project, which drives the revival of Malaga's industrial heritage.

Keywords: industrial, heritage, design, CAD, old machinery.

1 Introduction

In 1998 the Sorbonne Declaration and later, in 1999, the Bologna Declaration signed by 29 European states laid the foundations for the construction of the European Higher Education Area (EHEA), which, among other goals, aims to boost employment within the European Union.
Traditional teaching methods have focused mainly on the teacher and very little on the student. However, the relatively new European framework involves the learning and acquisition of skills by the students, with the teacher helping them on their journey to learning [1].

© Springer International Publishing AG 2017 941


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_94
942 L. de Guevara Muñoz et al.

In order to achieve the goals set, a change in the educational model is needed. Students' skills should be trained, and the adoption of European credits implies the selection of new methodologies focused on the learning results and competencies to be attained by students.
The 2013 Framework Document of the Ministry of Education and Science (MEC) reads: “the design of the study planning and the educational programming will be carried out taking as reference the students’ own learning”. This document therefore reflects a vision in which learning should be considered an autonomous and individual process.
The European system demands a model of learning in which academic and professional knowledge are linked to each other. At best, this is developed in the form of a short internship in a real scenario, namely a company, factory or industrial facility [2]. Schön [2] already pointed out the differences between the training that society requires and the training that professionals actually receive. Hence, the European credit-based system stresses the significance of, and the need for, preparing students through processes directly related to the specific demands of the professions to be performed.
Consequently, competency-based education seeks to minimize the gap between the training being delivered and what is actually demanded in the professional market, so as to end the traditional separation between theory and practice.
From another perspective, as stated in the introduction of the Spanish National Plan of Industrial Heritage (PNPI) [3], “The industrialization testimonies constitute an essential legacy for understanding the last two centuries of Spanish history”. However, this statement has been criticized for its temporal limitation to the 19th and 20th centuries [4]. The National Plan also notes: “the industrial heritage is related to cultural appropriation processes that society sets with traces of the past, in this particular case of the industrial age, by preserving material or immaterial evidence linked to the place or the work memory”.
Based on the aforementioned, the group of Graphic Engineering and Design at the University of Malaga (UMA), jointly with professors from the University of Cordoba (UCO), aims to investigate and make use of the large and rich industrial heritage of Malaga and its province. Through real learning, Mechanical and Product Development Engineering students should be able to acquire some of these skills and competencies which, in the long term, will also yield an economic profit, since Malaga is known as a touristic destination with several museums where such skills prove very valuable. Despite current trends and the interest shown by some visitors in getting to know Malaga's industrial past, nothing remarkable has yet been done to promote this asset.
In short, the main goal of the work presented here is for students, individually but preferably in groups, to learn through a process of discovery employing the project-based learning (PBL) strategy. In this sense, the student is regarded as the main character, capable of autonomously and creatively collecting information on Malaga's industrial heritage and of subsequently filing, sorting and classifying it. In addition, students attending the course on Computer Aided Mechanical Design may interpret and solve the task by modelling the elements. Afterwards, they will have gained some knowledge of Malaga's industrial past by interpreting and rediscovering ancient documents and drawings. This process also serves an educational purpose, since it raises awareness among the students of heritage conservation and of its actual value to humanity. In addition, students have the opportunity to visit old factories in Malaga and to learn first-hand how they used to work and what made them stop functioning, which gives them a broader understanding of the engineering environment of the past.
As a result of the students' work, new files and drawings of old machinery are generated, creating a catalogue that promotes industrial heritage conservation and reinforces the importance of preserving evidence of such a magnificent industrial past in Malaga, naturally making the students part of this endeavour.

2 The industrial heritage in Malaga

The province of Malaga, located in eastern Andalucía (Spain), was an important industrial site, and abundant mineral extraction, especially of iron, lead and graphite, turned Malaga into an industrial center during the second half of the 19th century.
Malaga pioneered industrial development in Andalucía. King Felipe V ordered the construction of a tin smelter in Ronda, as Fig. 1 displays. The Royal Tin Plate Factory of San Miguel in Ronda, located next to the Genal river 4 km from Júzcar, is currently listed as Immovable Heritage of Andalucía. The project began in 1725 only to be abandoned in the 1780s. This factory is considered the first iron and steel industry in Andalucía, marking the beginning of this industry in Malaga. As Alcalá Zamora y Queipo de Llano [5] holds, it housed the first blast furnace of the twelve raised in Spain during the 18th century, developing into the fifth casting factory.

Fig. 1: San Miguel Royal Tin. Source: Institute of culture and military history. Madrid

It was thanks to Manuel Agustín Heredia (1786-1846) that Malaga witnessed the flowering of the largest iron and steel industry in Spain. Its starting point was the creation of the foundry called ‘La Concepción’ in 1826 on the banks of the ‘Green River’ in Marbella, aimed at exploiting a rich iron deposit discovered in Ojen. Some time later, in 1834, a new foundry, ‘La Constancia’, was opened in the city of Malaga; it is said to have been baptized with that name in memory of the perseverance of its founder. The method employed for mineral melting was blast furnaces with Walloon refining; afterwards, for economic reasons, English refining was adopted, for which English machinery from Hick had to be acquired.
Given Heredia's success, a new foundry, ‘El Ángel’, founded by the entrepreneur Juan Giró, emerged in 1841. This made Malaga the leading producer of iron in Spain until the 1850s.
Owing to the installation of large ironworks in Malaga, new foundry workshops arose to transform iron ingots into processed material (strips, plough pillars, presses, etc.); then, in 1853, the ‘Trigueros’ foundry began its journey.
Industrialization also rested on the textile sector, where Martín Larios stood out. He founded the ‘Industria Malagueña’ (Malaga's Industry), which started working in 1846 and, provided with new technology, stood among the best Spanish factories. In 1856, Carlos Larios opened a new textile factory, ‘La Aurora’.
The chemical industry wasted no time and, jointly with ‘Industria Malagueña’, developed a new factory, ‘La Química’ (The Chemistry), where products required for different processes, such as sulphuric acid and sodium hydroxide, were obtained, pioneering the use of the Leblanc method for soda production.
In the agro-food sector, besides the commercialization of products such as wine and raisins, the sugar cane industry gained a strong boost thanks to the Galician businessman and scientist D. Ramón de la Sagra. He was the main character driving the modernization of crop and sugar cane production, turning it into a powerful competitive industry. Although he constituted the Peninsular Sugar Cane Society (La Sociedad Azucarera Peninsular), various differences with his former associates made him leave the society and choose the sugar cane mill in Torre del Mar to carry out his project. After several vicissitudes, the factory was taken over by the Larios family who, from 1850, promoted the expansion of sugar cane cultivation, acquiring new mills such as those known as ‘San Rafael’ in Torrox and ‘San José’ in Nerja. Later, these two mills became modern sugar cane factories. Along with the factories outlined above, others coexisted with them:
- the sugar factory ‘La Malagueta’, founded in 1857 and located where the projected plant named ‘Gas Peninsular’ was to be;
- ‘La Riojana’, a chocolate factory founded in 1857;
- ‘La Aurora’, a textile factory founded in 1858;
- the sugar cane factory known as ‘La Concepción’, founded in 1862;
- the foundry plant ‘La Victoria’, founded in 1873;
- the foundry plant ‘La Esperanza’, founded in 1878 by Ruperto Heaton and Bradbury;
- the foundry plant ‘La Unión’, founded in 1899.
There are many others not gathered here; however, together they give an idea of the industrial significance of Malaga during a great part of the 19th century and the beginning of the 20th.

2.1 Background

To achieve the goals outlined in the introduction, different Final Year Projects (FYP) were started some time ago in which the students work individually with the teacher's help and support. Through these projects, students got in touch with works produced by outstanding inventors unfortunately unknown to them, such as Al-Jazari, Taqui-Al-Din, Agostino Ramelli, George Agricola, Betancourt or the engineer from Malaga, López Peñalver, among others.
These projects have been consolidated over time, since students have shown a strong interest in them. Their work and effort have been fruitful and have materialized in productive results, allowing mechanical interpretations of blurry drawings made by some of these inventors and, in some cases, subsequent 3D modelling of the workpieces.
For instance, one of the works developed by various students is the analysis of the performance of suction pumps (such as bellows and oscillating pumps using shovels) and dual-piston pumps like the one shown in Fig. 2, originally by Agostino Ramelli [6], whose works are materialized as Fig. 3 displays below.

Fig. 2: Design 5 by Agostino Ramelli
Fig. 3: Virtual representation of design 5 by Agostino Ramelli

As a result, the desire to discover and investigate the inventions and developments of old machines has been instilled in future mechanical and design engineers. In fact, they are willing to study inventors and their inventions in depth, which brings about a great opening of minds beyond theoretical books on machines and mechanisms.

3 Current status – current development

The great interest aroused among the students by this kind of FYP has led to the completion of master's theses (MT) and doctoral theses (PhD) built on this subject as well. In particular, this paper is part of a PhD thesis in which the link between graphic documents and the machines employed in Malaga's industry in past centuries is studied. Similar work has been carried out on technology transfer by professor Chris Leslie and his students from the NYU Polytechnic School of Engineering, who have studied the Keller mechanical engineering collection [7]. Likewise, our students aim to recover and digitize drawings of buildings and ancient machines, as well as creating a bond between this knowledge and the lectures imparted to future mechanical and design engineers.

3.1 Methodology and current state



A methodology was established according to Fig. 4 below. The first stage consisted of searching for information about the various factories and industrial installations that have already disappeared. Afterwards, a compilation of the existing buildings is made through written references, attendance at seminars or conferences, or databases such as the IRPH database, among others. Then fieldwork is encouraged in order to visit the places where the heritage industrial buildings were located, take pictures of their current state, and make sketches and measurements. Simultaneously, it was necessary to explore and dive into several provincial, national and even international historical archives.

Fig. 4: Methodology

The buildings the students researched during the development of their projects were selected by the teachers according to their importance in the context of industrial history, their current state of conservation and the machines that were employed inside them. In addition, some of these buildings are not yet considered ‘BIC’ (Assets of Cultural Interest) and have raised great social controversy. Such buildings were therefore also powerful targets for the projects, as a way of raising their voice for further protection against future constructions in the area.
The novelty of this methodology lies in the subjects who apply it and in how it affects their learning process. However, this teaching practice still remains at a first stage of development, which mainly consists in encouraging students by offering this kind of project. Later, once interest in this matter has been raised, the impact on the learning process will be measured.

3.2 Results

Thanks to this project, it has been possible to locate various old drawings such as the one displayed in Fig. 5, which presents the front and side views of the installation of two multitube boilers by Babcock-Wilcox.

Fig. 5: Babcock-Wilcox multitube boilers

In addition, drawings of riveted structures, wrought or not, whose details can be observed in Fig. 7, and workshop blueprints corresponding to a registration door (Fig. 6) were found and identified.

Fig. 6: Registration door. Workshop blueprint
Fig. 7: Riveted structure. Wrought

From the drawings and photographs taken, it has been possible to identify local, national and international companies that contributed to the installation of machinery during the period of Malaga's greatest industrial splendour.
These findings are particularly interesting for future research, since a catalogue is being created in which old blurry drawings are complemented with 2D and 3D representations carried out by the students. Meanwhile, the students develop a deeper understanding of the process of breaking complex machines down into their individual pieces and representing them virtually.
Thus, among other international companies and enterprises, the French Fives-Lille and Cail or the Scottish Mirrlees Watson stand out, a fact noted by professors Rojas-Sola and Ureña-Marín [8], alongside companies from Malaga such as the foundries of Ruperto Heaton (Fig. 8) and Martos & Cia (Fig. 9).

Fig. 8: Ruperto Heaton casting press
Fig. 9: Martos & Cia builders

Finally, it seems important to remark on the great welcome that works related to industrial heritage have received from both undergraduate and postgraduate students, which materializes in the form of 3D modelling of immovable properties and equipment and in the completion of a large number of FYPs, MTs and several PhDs.
Since this PBL has been implemented, within a population of 150-160 students, an increase of approximately 30% in the demand for Final Year Projects and Master's Theses related to industrial heritage has been identified in the department of Graphic Engineering and Design.

4 Conclusions

By means of this teaching model, our students get to know Malaga's industrial heritage, which promotes their interest in it and encourages research work that analyses not only immovable properties but also movable properties such as the machinery employed and its origin. They learn how to decompose machines from old drawings and represent them virtually, generating 2D or 3D models to be included in a catalogue that will serve future researchers on this matter.
Three major milestones are achieved with this work: to make our industrial past known and to analyse, through an exercise in reverse engineering, the operation of old machines that still endure; to encourage research in this field; and, finally, to highlight the tourism value of all of our past that still exists before it disappears.

References

[1] P.M.S.L. Bayón, J.M. Grau, J.A. Otero and M.M. Ruiz, “EEES: Nuevas actividades de Enseñanza/Aprendizaje en asignaturas de Matemáticas,” in 2º Congreso Virtual sobre Tecnología, Educación y Sociedad, 2013, pp. 1-16.
[2] D.A. Schön, The Reflective Practitioner: How Professionals Think in Action. Basic Books, 1983.
[3] Instituto Nacional de Patrimonio Industrial, “Plan Nacional del Patrimonio Industrial,” 2011.
[4] J. Claver and M.A. Sebastián, “Basis for the Classification and Study of Immovable Properties of the Spanish Industrial Heritage,” Procedia Eng., vol. 63, pp. 506-513, 2013.
[5] J. Alcalá-Zamora, “Producción de hierro y altos hornos en la España anterior a 1850,” Moneda y Crédito, vol. 128, p. 117, 1974.
[6] T.I. Williams, “The various and ingenious machines of Agostino Ramelli,” Endeavour, vol. 12, no. 4, p. 196, Jan. 1988.
[7] Tandon School of Engineering, “Engineers in the archive,” 2013. [Online]. Available at: http://engineering.nyu.edu/news/2013/05/06/engineers-archive.
[8] J.I. Rojas-Sola and J.R. Ureña-Marín, “The steam engines in sugarcane production in Spain: comparative analysis,” Dyna, vol. 79, no. 171, pp. 183-190.
Developing students’ skills through real projects
and service learning methodology.

Anna BIEDERMANN1*, Natalia MUÑOZ LÓPEZ1, Ana SERRANO TIERZ1

1 Department of Design and Manufacturing Engineering, María de Luna 3, Zaragoza, 50018, Spain.

* Tel.: +34 976 76 00 00; fax: +34 976 76 22 35; E-mail anna@unizar.es

Abstract Nowadays, companies demand professional skills from candidates: desirable attitudes and values that go beyond knowledge and skills. In order to provide students with an experience that encourages the development of this kind of profile, a service learning project has been proposed in collaboration with publicly funded business incubators, where students can meet real customers whose businesses are in the process of creation. This paper presents the experience of students of the Industrial Design and Product Development Engineering Degree at the University of Zaragoza who were challenged to fill a need (by social request) while studying Graphic Design and Communication. The students' evaluation reveals that it has been a positive and enriching experience. Such an initiative is transferable to other degrees in order to increase student motivation and involvement and to enhance the image of a socially committed university. Entrepreneurs have assessed the experience as helpful in showing students the companies' needs, in facilitating the University's approach to business environments, and in providing an image of a Degree open to society.

Keywords: Real project, service learning, labor market, skills, motivation, educa-
tion, client.

1 Introduction
The labor market shows companies' growing requirements in the selection processes for their future employees. They demand candidates with a new professional profile: the ability to learn continuously and to adapt to a changing market, together with desirable talents and values that go beyond knowledge and skills [1]. That is why it is important to promote education through innovative methods and approaches that allow students to work on real client projects (PRC) and respond to their needs [2]. This methodology gives students the possibility to connect learning in today's classrooms with real work experiences.

© Springer International Publishing AG 2017 951

B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_95

It also gives them the opportunity to address real-world problems in which to develop academic skills and knowledge [3]. Aizpun, Sandino and Merida (2015) [4] point out that it gives them the opportunity to apply the contents learned during the lessons and to practice a set of skills the job market demands, such as creativity, teamwork, problem solving, leadership and the ability to generate innovative ideas. Such practices are useful because they expose students to possible professional situations within the safe environment of university education.
Different active methodologies [5], such as Problem-Based Learning [6-11], Module Work or Project-Based Learning, share activities and approaches with PRC. A substantial difference is that in the PRC methodology the project problem is presented by a real customer rather than by a teacher, and students not only have to be able to solve the problem but also to maintain a proper relationship and communication with the customer.
Forbes magazine [12], in the article "Education to Employment: Boost Skills, Bridge the Gap" (2016), highlights the following skills: critical thinking and problem solving, collaboration, agility and adaptability, initiative and entrepreneurship, communication, and curiosity and analysis, which Dr. Tony Wagner (Innovation Education Fellow at the Technology & Entrepreneurship Center at Harvard) refers to as the 'Seven Survival Skills'. In the same line, the ability to communicate effectively is one of the skills highlighted in the Criteria for Accrediting Engineering Programs (2016-2017) [13]. Higher education focuses on incorporating different methodologies with the aim of endowing students with the skills necessary to achieve employment in the professional sector. Today's competitive global market and changing work environment demand that engineers possess "soft skills" in addition to technical skills [14]. Projects aimed at developing interpersonal skills prepare students for professional relations with future customers; one example is the program conducted at the Federal University of Sao Paulo, in which different strategies are adopted to develop the interpersonal skills of engineering students, such as social skills and leadership [15]. At the University of Turku (Finland) and at Stanford University (USA), students develop working-life skills such as communication, teamwork, design thinking, problem solving and an entrepreneurial mindset during master's courses that use design thinking processes and problem-based learning as the main educational approach [16]. The College of Technological Science (CCT) at Santa Catarina
State University implements Junior Companies as projects that give students real-world experience, with the goal of describing these experiences from the students' own point of view; the analysis of the effectiveness of these activities shows student satisfaction regarding acquired knowledge, networking, internship recommendations and participation [17]. Another example of a real project developed at the University of Zaragoza (Spain) is the Project Management subject taught during the 4th year of the Industrial Design and Product Development Engineering Degree (IDPDED), which has involved 41 clients and 240 students with very satisfactory results [18]. Within the Design Workshop VI subject of the same degree, several projects were developed with various companies related to product design.
Martínez, Garza, Báez and Treviño (2013) [19] indicate that competency-based training offers a new approach to higher education because it provides students with basic and key tools for their future profession and allows them to learn beyond what is taught. Developing competencies achieves more comprehensive training, taking into account not only technical skills but also attitudes and values, such as honesty and responsibility, that are highly valued nowadays. Competencies can be seen as 'roadmaps' toward achieving the profile defined for each degree, as well as the individual's ability to respond satisfactorily to the demands of a real context, triggering behaviors that include the cognitive dimension, procedures, attitudes, values and emotions [20]. The development of social competencies (SC) is also highlighted as favoring the expression of social or ethical commitments.
Another learning method currently used for skills development is Service Learning. It is very useful in the context of the European Higher Education Area, as students develop their professional skills and curriculum through community service [21-25]. This kind of initiative can be included in the collaborative economy movement [26], such as time banks [27], exchanges of services, etc., within a win-win strategy: all parties involved in the process benefit [28]. The university Service Learning methodology allows learning based on a real need to be integrated with service to the community. In this way universities discharge their social responsibility and have the opportunity to contribute to the development of a sustainable economy by creating a university committed to society.
The experience presented in this paper connects the methodology of learning based on real cases with service learning, raising projects conducted by students based on entrepreneurs' real needs and providing them with free-of-charge solutions within the university context. This work presents projects developed in the IDPDED at the University of Zaragoza in the subject of Graphic Design and Communication (GD&C) in the 2nd year, in collaboration with the "Semillero de Ideas" business incubator, in which students work for real clients. This experience has been accepted as Educational Innovation Project PIIDUZ_15_163 of the University of Zaragoza. The initiative has a double purpose: on the one hand, to apply an active learning methodology based on projects with a real customer (PRC) and, on the other, to respond to a social need through service learning (SL), in accordance with the values of the University of Zaragoza, such as:
• The participation of all stakeholders (students, employees, society, companies and public administration, among others).
• An open and universal character and commitment to the local community and its human, cultural, technological and economic development.
This paper is structured as follows. The methodology section describes the objectives and the context in which the experience was developed, as well as the phases of its implementation and the assessment tools applied. It is followed by the results of the experience's
954 A. Biedermann et al.

evaluation, using surveys completed by students and clients. Finally, the most important conclusions drawn from the experience are presented.

2 Methodology

The implementation of active methodologies such as practical learning through PRC involves collaboration between companies, teachers and students. It allows students to become the protagonists of their own learning process, while teachers become their advisors and companies define the problem to be solved.

The objectives of the presented methodology are as follows:
(a) To remove barriers between academia and the professional world, as key to developing graduate profiles that meet companies' demands.
(b) To develop transversal skills, such as the interpersonal and social skills required in the workplace.
(c) To achieve greater student motivation and involvement.
(d) To provide first steps into the labor market by enabling networks between students and future clients.
(e) To equip students with knowledge and experience in dealing with customers.
(f) To create a portfolio of professional experiences for new graduates.
(g) To implement university-enterprise cooperation projects.

The structure of the work within this methodology is presented in the flowchart in Figure 1. Following this strategy, the experience was developed as follows. Firstly, a real client interested in the collaboration was sought; this exploration was limited to the entrepreneurship projects participating in the "Semillero de Ideas" business incubator. Once the students had researched the client's sector, a meeting with the client was organized to gain an accurate idea of the client's needs and knowledge of the company he/she represents.
The concepts were developed based on the students' analysis of and reflection on all the information collected, leading them to design conclusions and their formal and functional evolution, tutored to fulfill the client's expectations as well as the subject objectives. Finally, the presentation to the customer and the viability evaluation of the project were carried out, and the teachers assessed the projects.
This paper describes the experience undertaken in the subject of GD&C, which consisted of the development of a graphic image (imagotype, symbol or appropriate logo, basic brand manual and business card) for business projects. An overall number of 59 out of the 62 students enrolled in Graphic Design and Communication (a subject taught in the second year of the IDPDED) were involved in the experience. In order to gather students' and companies' feedback regarding their perception of PRC and SL, a questionnaire using a Likert scale was
designed. Participation in the survey was voluntary and anonymous. It was conducted using an online data-gathering tool, Google Forms, which remained active for two weeks after the project finished and was distributed to students and businesses by email.

Fig.1. The project strategy flowchart.

3 Results

The students' questionnaire consists of 23 statements divided into the following sections: competencies, collaboration, activity and motivation. Questions were in the form of Likert-scale statements, with responses ranging from strongly disagree (1) to strongly agree (5). 58 students participated in the survey. The results obtained in each of the sections are presented in Fig. 2. In the case of the competencies developed by performing a PRC, students perceive an increased ability for creative thinking, the generation of innovative ideas and the acquisition of basic skills for their profession, rated with averages close to 4, while the development of social skills, leadership and communication in dealing with the client is the worst perceived competence, with an average of 3.42. In the Collaboration section it is observed that students perceive the PRC as a good way to approach companies and collaborate with them, rated 4.38. On the other hand, the company's capacity to provide the key aspects to be applied to the design of the brand was rated by students with the lowest average value in this section (3.08).
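The per-statement aggregates reported in Figs. 2 and 3 (M = average, D = standard deviation of 1-5 Likert responses) can be computed with a few lines of code. A minimal sketch, using illustrative response values rather than the actual survey data, and assuming the population standard deviation (the paper does not specify which estimator was used):

```python
from statistics import mean, pstdev

# Illustrative Likert responses (1 = strongly disagree ... 5 = strongly agree).
# The actual survey gathered 58 student answers per statement via Google Forms.
responses = {
    "competencies": [4, 5, 3, 4, 4],
    "collaboration": [5, 4, 4, 5, 4],
}

for section, values in responses.items():
    m = mean(values)    # M = average
    d = pstdev(values)  # D = population standard deviation
    print(f"{section}: M = {m:.2f}, D = {d:.2f}")
```

A real analysis would read the responses from the exported Google Forms spreadsheet instead of hard-coded lists, and would use `stdev` instead of `pstdev` if a sample estimate were preferred.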

Fig.2. The students' perception of different aspects developed by PRC and SL. M = average. D = standard deviation.
Fig.3. The companies' perception of different aspects developed by PRC and SL. M = average. D = standard deviation.

Students value the Activity section positively. They perceive it to be an enriching activity, responding to possible social demands; it was rated 4.05 on average. As far as Motivation is concerned, the students highlighted the possibility that their own design could become the image of a company as the most important aspect, rating it with an average of 4.47. Other important issues were experiencing situations similar to professional life and being able to include a project with a real company in their CV; these aspects were rated with averages of 4.35 and 4.27 respectively. However, the possibility of establishing their own company in the future is what motivates them least, rated with an average of 3.46.
The companies' questionnaire consisted of 16 statements divided into the following categories: competencies, collaboration and activity. The companies indicated their degree of agreement (1-5 scale, where 5 is the highest level of agreement). The questionnaire was sent to the 10 companies participating in the experience; 8 of them replied. The results obtained in each of the sections are presented in Fig. 3. In the Competencies section, companies state that the activity helps to develop decision-making skills, rated 4.13. The aspect rated lowest (3.5) was the capacity for analysis and synthesis. In the Collaboration section, companies revealed that they would like to have more meetings with students, rated with an average of 4. In addition, companies considered that students should have contacted them more often to resolve doubts, a statement that received a score of 2.88. In the Activity section, companies rated with an average of 4.25 that this activity provides an open image of the degree and that it responds to potential social needs. The fact that the activity helps to reveal the needs of companies is evaluated positively, with 4.13. This section also shows that this type of activity is seen as a good way of bringing companies and the university closer together and making them cooperate (4.38), and that companies would like the approach of this activity to be repeated in other subjects of the degree so as to continue working with students (4.13).

4 Conclusions

This paper presents the experience conducted in the subject of GD&C at the Industrial Design and Product Development Engineering Degree at the University of Zaragoza, which consisted of the development of a graphic image (imagotype, symbol or appropriate logo, basic brand manual and business card) for business projects. The experience is part of the implementation of an active methodology that combines projects with a real client (PRC) and service learning (SL), resulting in the development of the technical and soft skills, as well as the social competences, currently required by companies in employees' profiles.
The practice carried out in the second and fourth years shows positive results: students develop competences defined by the degree and demanded in the workplace, and their motivation and involvement increase. Through this collaborative approach, companies can obtain fresh and innovative solutions; the activity may also serve as a selection process for choosing apprentices and as a first contact with the university that may result in future collaborations and knowledge transfer.
The results encourage us to continue with this activity, since it was rated with an average score close to 4 out of 5 in all sections. The initiative is transferable to other degrees, as it not only promotes the development of skills demanded by businesses but also provides the university with an image of social commitment.

Acknowledgments The research work reported here was made possible by an Educational Innovation Project of the University of Zaragoza. We thank Zaragoza Activa for the possibility of collaborating in the "Semillero de Ideas" program.

References
1. Pita, C. & Pizarro, E. (Coords). Cómo ser competente. Competencias profesionales demanda-
das en el mercado laboral, 2013, Cátedra de Inserción Profesional Caja Rural Salamanca –
Universidad de Salamanca, Salamanca.
2. Aldoory, L., & Wrigley, B. Exploring the use of real clients in the PR campaigns course.
Journalism & Mass Communication Educator, 2000, 54(4), 47.
3. Bruce Davis, M. N., & Chancey, J. M. Connecting students to the real world: Developing
gifted behaviors through service learning. Psychology in the Schools, 2012, 49(7), 716-723.
4. Aizpun, M., Sandino, D. & Merideno, I. Developing students' aptitudes through University-Industry collaboration. Ingeniería e Investigación, 2015, 35 (3), 121-128.
5. Serrano, A., & Biedermann, A. M. Roles and Groups Dynamic as a Systematic Approach to
Improve Collaborative Learning in Classroom. Creative Education, 2015, 6(19), 2105-2116.
6. Atienza, J. Aprendizaje basado en problemas. Metodologías activas, 2008, Valencia: Univer-
sidad Politécnica de Valencia.
7. Hansen, J. Using Problem-Based Learning in Accounting. Journal of Education for Business,
2006, 81, 221-224.
8. Johnstone, K. M., & Biggs, S. F. Problem-Based Learning: Introduction, Analysis, and Ac-
counting Curricula Implications. Journal of Accounting Education, 1998, 16, 407-427.
9. Kanet, J. J., & Barut, M. Problem-Based Learning for Production and Operations Manage-
ment. Decision Sciences Journal of Innovative Education, 2003, 1, 99-118.
10. Milne, M. J., & McConnell, P. J. Problem-Based Learning: A Pedagogy for Using Case Mate-
rial in Accounting Education. Accounting Education, 2001, 10, 61-82.
11. Saiz, C., & Fernández, S. Pensamiento crítico y aprendizaje basado en problemas cotidianos.
REDU. Revista de Docencia Universitaria, 2012, 10, 325-346.
12. http://www.forbes.com/sites/sebastienturbot/2016/01/28/education-employment-skills-
gap/#6ce6acbd73cf
13. ABET. Criteria for Accrediting Engineering Programs. Engineering Accreditation Commis-
sion of the Accreditation Board of Engineering and Technology, Baltimore. Available on-line
at http://www.abet.org/wp-content/uploads/2015/10/E001-16-17-EAC-Criteria-10-20-15.pdf.
14. Kumar, S. and Hsiao, J. "Engineers Learn “Soft Skills the Hard Way”: Planting a Seed of
Leadership in Engineering Classes." Leadership Manage. Eng., 10.1061/(ASCE) 1532-6748,
2007, 7:1(18), 18-23.
15. Lopes, D.C., Gerolamo, M.C, Del Prette, Z.A.P., Musetti, M.A., Del Prette, A. Social Skills:
A Key for Engineering Students to Develop Interpersonal Skills. International journal of en-
gineering education, 2015, 31 (1), 405-413.
16. Taajamaa, V., Sjöman, H., Kirjavainen, S., Utriainen, T., Repokari, L., Salakoski, T. Dancing with Ambiguity: Design thinking in interdisciplinary engineering education, 2013, IEEE Tsinghua International Design Management Symposium (TIDMS).
17. Bogo, A. M, Henning, E., Schmitt, A.C., De Marco, R.G. The Effectiveness of Junior Com-
panies from the Viewpoint of Engineering Students at a Brazilian University, 2014, IEEE
Global Engineering Education Conference (EDUCON), 745-75.
18. Cano, J.L., Lidón, I., & Rebollar, R. Students groups solving real-life projects. A case study of experiential learning. International Journal of Engineering Education, 2006, 22 (6), 1252-1260.
19. Martínez, G. F., Garza, J. Á., Báez, E., y Treviño, A. Implementación y evaluación del Currí-
culo Basado en Competencias para la formación de ingenieros. REDU. Revista de Docencia
Universitaria, 2013, 11(extra), 141-174.
20. Serrano, R. M. La controvertida aplicación de las competencias en la formación docente uni-
versitaria. REDU: Revista de Docencia Universitaria, 2013, 11(1), 185-212.
21. Amat, A. F., & Miravet, L. M. El Aprendizaje Servicio en la Universidad: una estrategia en la
formación de ciudadanía crítica. Revista electrónica interuniversitaria de formación del
profesorado, 2010, 13(4), 69-78.
22. Domínguez, B. M., Domínguez, I. M., Sáez, I. A., & Amundarain, M. G. El aprendizaje-
servicio, una oportunidad para avanzar en la innovación educativa dentro de la Universidad
del País Vasco. Tendencias pedagógicas, 2015, (21), 99-118.
23. Marta, L., & González, P. El aprendizaje-servicio, una herramienta para el desarrollo profe-
sional de la responsabilidad social del periodista. Estudios sobre el mensaje periodístico,
2012, (18), 577-588.
24. Rodríguez, J. P., & Rovira, J. M. P. Rasgos pedagógicos del aprendizaje-servicio. Cuadernos
de pedagogía, 2006, (357), 60-63.
25. Tapia, M. N. Calidad académica y responsabilidad social: el aprendizaje servicio como puente
entre dos culturas universitarias. Aprendizaje servicio y responsabilidad social de las
universidades, 2008, 27-56.
26. Ballesteros Martínez, N. Estudio del consumo colaborativo en España. Aplicación práctica a
través de la puesta en marcha de un plan de empresa en el sector de la enseñanza, 2015.
27. Recio, C., Méndez, E., & Altés, J. Los bancos de tiempo. Experiencias de intercambio no mo-
netario, 2009, Ed. Graó.
28. Xia, J., Caulfield, C., & Ferns, S. Work-integrated learning: linking research and teaching for
a win-win situation. Studies in Higher Education, 2015, 40(9), 1560-1572.
Integration of marketing activities in the mechanical design process

Cristina Martin-Doñate a*, Fermín Lucena-Muñoz b, Fco. Javier Gallego-Alvarez a

a Department of Engineering Graphics, Design and Projects. University of Jaen. Campus Las Lagunillas, s/n. 23071 Jaen (Spain)
b Department of Management, Marketing and Sociology. University of Jaen. Campus Las Lagunillas, s/n. 23071 Jaen (Spain)
* Corresponding author. Tel.: +34 953212821; fax: +34 953212334. E-mail address: cdonate@ujaen.es

Abstract

Industrial design is today a key factor for business success, as well as an essential formula for competing in the market. This is one of the reasons why there is currently great demand for professionals with knowledge of both mechanical engineering and industrial design. By contrast, in engineering studies much time is spent on training in CAD, in order to gain advanced knowledge of design tools, leaving aside the analysis of whether the designs will be good or not from the customer's point of view. Many of these engineering students will be professional designers in the future, designing their own products and possibly creating companies whose designs would have to be customer focused.
In this context, several activities have been carried out in the course on graphic engineering techniques in the third year of the mechanical engineering degree. The subject has been given an applied approach to the professional market, producing creative products and including the functional aspect and customer focus, issues of vital importance but neglected by educational programs. In this research, a marketing professor with extensive professional experience in patent applications taught several classes at the beginning of the subject on how to conduct interviews with clients to identify their needs, with the aim of obtaining the initial design requirements from the customer. The experience has resulted in a set of designs of higher quality than those made in previous years.

Keywords: Design, CAD, Creativity, Marketing, Collaborative work

© Springer International Publishing AG 2017 961


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_96
962 C. Martin-Doñate et al.

1 Introduction

Globalization means that many companies face competition from around the world. Industries take advantage of different formulas to compete in this scenario. Using a comparative advantage, a firm produces goods at a lower cost than its competitors; by contrast, using a differential advantage in innovation, the firm's products differ from those of its competitors (1, 2).
The customer today is extremely important; there is nothing more inefficient than launching products onto a market where there is no demand. Designers highly trained in CAD invest a lot of time in generating ideas while forgetting functionality and usability. They do not think about what customers are demanding, getting carried away by what they know and falling in love with the solution while leaving aside its function. In order to properly orient design towards the customer, designers have to think of the customer as a person, using empathy and creativity in identifying the problem, finding solutions and working together. The business market demands professionals trained in these skills. The integration of design in the company is seen as a competitive necessity. Designers suggest that marketing is necessary for success because it provides a continuous interface with the customer and ensures that design innovation will provide a value that customers will find attractive (3); however, there is little research on the interface between marketing and design in business (4).
In the same way, it is also necessary to integrate marketing activities with design activities at an educational level. Learning CAD tools or industrial design methodology detached from customer orientation leads to non-functional designs, making it impossible to practice the skills that students will need in their professional performance.
Marketing activities are not incorporated into the syllabus of Engineering Graphics subjects and, where present, they are provided collaterally and not by marketing experts. For these reasons, a series of educational activities has been developed, focused on training students in their interaction with potential customers, in order to properly understand the design problem to be solved and to establish good foundations for product development. The integration of marketing activities with mechanical design has been taught by an expert marketing teacher at the University of Jaen. These activities have helped students to understand and empathize with the customer, using the information the customer gave them to improve their initial designs and to detect problems that had not been taken into account. The activities have been conducted in teams, which has allowed them to be performed collaboratively.

2 Methodology

In Spain, for Mechanical Engineering degrees, the learning of CAD tools is uncoupled from activities that involve customer contact. Students learn to use these design
Integration of marketing activities in the … 963

tools in a repetitive manner, reproducing a couple of exercises with a closed statement. To avoid these problems, a methodology has been proposed based on the integration of several areas of knowledge and some services of the University of Jaen, more specifically the areas of Industrial Marketing and Engineering Graphics and the Office for Transfer of Research Results (OTRI, which is in charge of patent applications). An expert teacher has contributed his knowledge, giving theoretical classes and workshops for students.
These additional activities were not included in the initial syllabus. They have served as a basis for implementing a new teaching methodology centered on the students, who have worked actively throughout the learning process, applying criteria of creativity, innovation and customer contact.
For a period of seven weeks, students designed creative products working in teams. Throughout this period, theoretical and practical design lessons were provided using the CAD software CATIA V5. In previous years the course was taught by a single teacher with experience in CAD design. During the 2015/16 course, an expert in industrial marketing and patents collaborated by giving several classes on how to obtain customer information about a design, the optimal criteria for an interview, and how to achieve empathy with the customer. The activities were performed using a methodology of active work by students. Working in teams, students must:
• Identify the client for a new design. Students have to locate clients in their environment to learn their needs.
• Define the challenge, determining exactly what problem is solved by the design.
• Design an interview, posing a series of questions to end customers, trying not only to ask the questions but also to observe their reactions.
• Learn to empathize with the customer, who often evokes stories around the problem.
The results of the surveys, the students' comments and the design requirements were delivered to the marketing expert for evaluation. Afterwards, he discussed with each team the guidelines for obtaining customer information more efficiently. He provided a feedback session where students were able to ask him about marketing.
3 Results
In order to validate the results, as well as how interesting the activity was for students, a survey was conducted among the students who performed all the activities during the 2015/16 course. The number of students enrolled in the subject amounts to 68; 60 of them participated in the survey. The group was composed of 45 men and 15 women. Students were asked about the usefulness of the activities performed to obtain design requirements. 95% of the students answered that conducting these activities was very useful for improving the developed design (Fig 1).
Fig 1. Survey conducted on the platform ILIAS


In order to evaluate the involvement of students in this activity, each of them was asked about the number of customers interviewed to obtain information about the design. 25% of the students interviewed more than 10 potential customers (Fig 3). Some groups could not conduct a large number of interviews due to specific design features (designs for the collection of olive fruit, etc.), which made it difficult to find end customers within the reduced period of design development. Most groups visited end customers in their environment: hospitals, nursing homes, orthopedic shops, workshops, etc. (Fig 2a, b).

Fig 2.a. Students visiting a hospital and a nursing home

Fig 2.b. Students asking customers


Fig 3. Results about the number of clients interviewed by teams


When the students were asked to evaluate the positive aspects of the experience of obtaining customer information guided by specialists, 100% of the students answered that the experience was positive for both customers and designers. Customers helped all teams to improve their designs, providing perspectives that team members had not initially considered.
The advantages that students found in contacting customers can be summarized as follows:
• Better understanding of the real problems and customer needs
• Discarding some ideas
• Removing requirements that are not interesting for the customer
• Obtaining additional ideas and getting help on how to make the final product
• Introduction to the commercial world in order to understand the customer and translate this into design
• Looking at the design to be developed from another perspective
• Increasing confidence in the relationship with the customer
• Motivation regarding the design's usefulness
• Posing new problems that had not been thought of
• Paying more attention to some aspects that seemed obvious
• Interviewing as the best way to discover the problem
• Conducting customer-focused projects

As examples, some student comments on their satisfaction with the teaching activity are attached:
"Learning from the personal experiences of customers in my opinion has been one
of the most interesting parts of the work and we never thought that these activities
would help us to get the design for our project. We have also been able to learn
the most common customer problems"
"It was a great experience; we could identify the features that people looked for in the product, seeing different views. Searching for requirements has been much easier; the customers gave us new ideas that we had not taken into account beforehand."

Concerning areas of improvement, students mostly agreed (90%) on the short time available for developing the activities. The entire design had to be made in seven weeks (from each team's search for an idea to the detailed design). Despite this, all the results obtained by the groups have been very good, given the short period of time for completion.
Students were also asked about possible improvements to the activities performed. 70% of the students commented that the activities had been performed correctly, although they could also be complemented with customer surveys by e-mail, because many customers got nervous when answering in person.

Fig 4 shows some examples of the designs carried out in the subject during the 2014/2015 course, prior to the educational activities integrating marketing and CAD. Fig 5 shows some examples from the 2015/16 course, in which the activities were implemented.

Fig 4. Designs made in the subject during the 2014/2015 course


Fig 5. Some designs made during the 2015/2016 course


Students in this last year were motivated and engaged in producing creative designs in teams. This was reflected in more elaborate and customer-tailored designs. Finally, the teams presented their final designs to a tribunal formed by several professors of design and marketing (Fig 6).

Fig 6. Presentation of students' work


The participation of students in these activities has resulted in more efficient learning, which has enabled:
- Making new designs by applying creativity
- Contacting customers, obtaining information and producing more functional and useful designs
- Presenting designs to the University of Jaen call for prototypes, competing with research projects
- Obtaining patents for designs
The methodology described in this paper has allowed coordination between areas of the University of Jaen (Industrial Marketing and Engineering Graphics) as well as the Office for Transfer of Research Results. At this moment, four designs made according to the methodology of this paper have been submitted to the University of Jaen call for prototypes for implementation and further development, as well as for the processing of their patents by the OTRI office.

4 Conclusions

The integration of marketing activities in the mechanical design process has enabled students of the subject of graphic engineering techniques in mechanical engineering to carry out creative design projects, applying their knowledge of CAD tools and designing functional, fully customer-oriented products. The training and teaching methodology used in the subject will mark their professional performance on issues related to the design of new products. They have learned to design by establishing direct contact with the customer and adapting the design to the customer's needs.

The design teams conducted several interviews with customers in order to validate their hypotheses about the design problem. They contacted the end customers in the right circumstances. The interviews were conducted in pairs, with each team interviewing several customers. Finally, they delivered a report stating the information about each interview (interviewer profile, place, time and circumstances), a photograph of each interview, the questions raised, the main findings, and an empathy map including the main conclusions. Once the customer needs were identified, the students had to classify them in terms of the quality required by the customer.

Students talked with customers on-site; this has allowed more appropriate design requirements to be obtained, since they were based on information from the interviews. The experience has resulted in four designs that have been submitted to the University of Jaen call for prototypes for implementation and further development, as well as for the processing of their patents by the OTRI office. In addition, students developed their capacity for teamwork, which was beneficial for the entire team involved, teaching students to respect the ideas of colleagues as well as those of interviewees, and to help teammates when they needed it. The students were engaged with the designs and with creativity, increasing their motivation for the subject and motivating them to design creative products in the future. This project has set the students' imagination in motion, which has helped them acquire useful experience for the future.

Acknowledgments This work has been supported by the University of Jaén through the project titled "Fomento de la creatividad e innovación en asignaturas del ámbito de la expresión gráfica en la ingeniería, aplicadas al diseño y desarrollo de nuevos productos" (Project Code PID20_201416).

5 References

1. Homburg, C., Schwemmle, M., Kuehnl, C. New product design: Concept, measurement, and consequences. Journal of Marketing, 2015, 79(3), 41-56.
2. Ullman, D. The mechanical design process. McGraw-Hill Higher Education, 2015.
3. Blaszczyk, R.L. Imagining Consumers: Design and Innovation from Wedgwood to Corning. Baltimore: Johns Hopkins University Press, 2000.
4. Beverland, M.B. Managing the Design Innovation-Brand Marketing Interface: Resolving the Tension between Artistic Creation and Commercial Imperatives. Journal of Product Innovation Management, 2005, 22, 193-207.
Section 6.3
Representation Techniques
Geometric locus associated with trihedra in axonometric projections.
Intrinsic curve associated with the generated ellipse

Pedro GONZAGA1, Faustino GIMENA1*, Lázaro GIMENA1 and Mikel GOÑI1


1
Department of Projects Engineering. Public University of Navarre
* Corresponding author. Tel.: +34 948 169 225. E-mail: faustino@unavarra.es

Abstract. In previous work on the axonometric perspective, the authors presented some graphic constructions that allowed a single, joint, invariant description of the relations between an orthogonal axonometric system, its related orthogonal views, and the oblique axonometric systems associated with it. Continuing this work, and using only the items drawn on the frame plane, in this communication we start from the three segments, representing a trirectangular unit trihedron, joined at the origin and defining an axonometric perspective. Each is projected onto an arbitrary direction, and the square root of the sum of the squares of these projections is determined. We call this magnitude the orthohedron diagonal; the sides of the orthohedron would be formed by the three projections of the axonometric unit segments. If the diagonal magnitude is built from the origin of coordinates along the direction used, it describes a locus here called the intrinsic curve associated with the ellipse. When the three starting segments represent an orthogonal axonometric perspective, the intrinsic curve associated with the ellipse is a circle.

Keywords: Axonometric system, Descriptive geometry, Intrinsic curve associated with the ellipse.

1 Introduction

This work is part of a research associated with graphical representation issues.


In the article Main axonometric system related views as tilt of the coordinate planes [1], the orthogonal views related to the orthogonal axonometry were determined. In New constructions in axonometric system fundamentals [2] this study was extended, presenting new construction operations for the perspective based on the peculiar arrangement of the orthogonal views. The communication Intrinsic relations between the orthogonal axonometric system and its associated obliques. Analytical proposal and graphic operations [3] aimed to extend this approach, which sets in parallel the analytical and constructive aspects of the axonometric system, from the orthogonal to
© Springer International Publishing AG 2017 973


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_97
974 P. Gonzaga et al.

the oblique, and to synthesize as much as possible their algebraic expression and trace. The intrinsic axonometric triangle was defined and its geometric properties were established. Projective and metric relations on the studied figures were used and new axonometric constructions were developed.
In this communication we present a geometric locus, here called the intrinsic curve associated with the ellipse, defined by the projection, onto any direction contained in the picture plane, of the three segments that define an axonometric perspective.

2 Orthogonal and oblique axonometric systems

An orthogonal reference trihedron Oxyz is chosen. Taking from the vertex O, on the straight lines x, y and z respectively, a unit magnitude u, the points I, J and K are determined. The coordinates of these points are:

O = [0, 0, 0]; I = [u, 0, 0]; J = [0, u, 0]; K = [0, 0, u]   (1)

The normalized equation of the chosen projection plane π_p, which contains the vertex O, is:

π_p ≡ a_x x + a_y y + a_z z = 0   (2)

where the orthogonal projection direction is d_π ≡ [a_x, a_y, a_z], with a_x^2 + a_y^2 + a_z^2 = u^2.
The orthogonal projection of the points I, J and K onto the frame plane determines the points I', J' and K', whose coordinates are:

I' = [u^2 - a_x^2, -a_x a_y, -a_x a_z] / u
J' = [-a_x a_y, u^2 - a_y^2, -a_y a_z] / u   (3)
K' = [-a_x a_z, -a_y a_z, u^2 - a_z^2] / u

If the axes of the reference trihedron Oxyz are projected onto the frame plane π_p along an arbitrary direction d_o, an oblique axonometric perspective is obtained. This oblique projection direction is d_o ≡ [a_xo, a_yo, a_zo], verifying a_x a_xo + a_y a_yo + a_z a_zo = u^2.

Fig. 1. Axonometric system.

The oblique projections of the points I, J and K are I_o, J_o and K_o. Their coordinates are noted as:

I_o = [u^2 - a_x a_xo, -a_x a_yo, -a_x a_zo] / u
J_o = [-a_y a_xo, u^2 - a_y a_yo, -a_y a_zo] / u   (4)
K_o = [-a_z a_xo, -a_z a_yo, u^2 - a_z a_zo] / u

The oblique axonometric scales can be written as:

OI_o = u_xo = sqrt(u^2 + a_x^2 e_o^2 - 2 a_x a_xo)
OJ_o = u_yo = sqrt(u^2 + a_y^2 e_o^2 - 2 a_y a_yo)   (5)
OK_o = u_zo = sqrt(u^2 + a_z^2 e_o^2 - 2 a_z a_zo)

The following relation between the axonometric scales defines the fundamental length:

l = sqrt(u_xo^2 + u_yo^2 + u_zo^2) = u sqrt(e_o^2 + 1)   (6)

Here some of the geometric elements that form the axonometric view have been noted (Figure 1). To broaden this information, see the paper Gimena et al. 2015 [3].

3 Diagonal magnitude

In this section we start from the three segments that define the axonometric perspective, obtained by joining the origin with the points I_o, J_o and K_o. Each segment is projected onto an arbitrary direction b contained in the frame plane, and the square root of the sum of the squares of these projections is determined. The direction is b ≡ [b_x, b_y, b_z] / u, satisfying b_x^2 + b_y^2 + b_z^2 = u^2.
The projections of the axonometric unit segments can be noted as:

OB_xo = I_o · b = [b_x u^2 - a_x (a_xo b_x + a_yo b_y + a_zo b_z)] / u^2 = b_xo
OB_yo = J_o · b = [b_y u^2 - a_y (a_xo b_x + a_yo b_y + a_zo b_z)] / u^2 = b_yo   (7)
OB_zo = K_o · b = [b_z u^2 - a_z (a_xo b_x + a_yo b_y + a_zo b_z)] / u^2 = b_zo

We call this magnitude the orthohedron diagonal; the sides of the orthohedron are formed by the projections of the three axonometric unit segments. Constructed from the origin of coordinates along the direction used, the orthohedron diagonal is annotated as:

OB_o = sqrt(b_xo^2 + b_yo^2 + b_zo^2) = b_o   (8)



Figure 2 plots the projection direction, the projections of the axonometric unit segments and the associated orthohedron diagonal.
Any projection direction can be expressed as a function of two principal directions, which in this paper are: first, b(π/2), perpendicular to d_π and d_o; secondly, b(0), perpendicular to the former direction.

Fig. 2. Direction of projection and diagonal magnitude.

The projection direction, as a function of the two chosen principal directions, can be expressed analytically as:

b ≡ b(α) = b(0) cos α + b(π/2) sin α   (10)

Also, the diagonal magnitude can be annotated as:

OB_o = OB_o(α) = l_1 = u sqrt(e_o^2 cos^2 α + sin^2 α)   (11)

When the three starting segments represent an orthogonal axonometric perspective, the diagonal magnitude is constant and equal to u.
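The closed form of the diagonal magnitude and the circular locus of the orthogonal case can also be checked numerically. In this Python sketch the values of u, d_π and d_o are illustrative, and e_o is taken as |d_o|/u, our reading of the definitions in Section 2; the diagonal magnitude is computed by direct projection and compared with Eq. (11).

```python
# Locus of the diagonal magnitude OB_o(alpha) (Eqs. (7)-(11)). Illustrative
# u, d_pi and d_o values; e_o is read as |d_o| / u.
import numpy as np

u = 1.0
a = np.array([1.0, 2.0, 2.0])
a *= u / np.linalg.norm(a)                       # d_pi
t = np.array([0.3, -0.2, 0.1])
t -= a * (a @ t) / u**2
ao = a + t                                       # d_o, with a . ao = u^2
P_obl = np.eye(3) * u - np.outer(a, ao) / u      # rows: I_o, J_o, K_o

# Principal directions in the frame plane: b(0) along the in-plane part of
# d_o, b(pi/2) perpendicular to both d_pi and d_o.
b0 = t / np.linalg.norm(t)
b90 = np.cross(a, ao)
b90 /= np.linalg.norm(b90)
eo = np.linalg.norm(ao) / u

for alpha in np.linspace(0.0, 2.0 * np.pi, 13):
    b = b0 * np.cos(alpha) + b90 * np.sin(alpha)          # Eq. (10)
    OBo = np.linalg.norm(P_obl @ b)                       # Eqs. (7)-(8)
    # Eq. (11): closed form of the locus (intrinsic curve).
    assert np.isclose(OBo, u * np.sqrt(eo**2 * np.cos(alpha)**2
                                       + np.sin(alpha)**2))

# Orthogonal case (d_o = d_pi): the diagonal magnitude is constant, so the
# intrinsic curve associated with the ellipse is a circle of radius u.
P_orth = np.eye(3) * u - np.outer(a, a) / u
for alpha in np.linspace(0.0, 2.0 * np.pi, 13):
    b = b0 * np.cos(alpha) + b90 * np.sin(alpha)
    assert np.isclose(np.linalg.norm(P_orth @ b), u)
print("intrinsic-curve checks passed")
```

The extreme values OB_o(0) = e_o·u and OB_o(π/2) = u fall out of the same loop at α = 0 and α = π/2.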

4 Intrinsic curve associated with the ellipse

Once the diagonal magnitude is constructed from the origin of coordinates, this measure describes a geometric locus here called the intrinsic curve associated with the ellipse. In Fig. 3 this geometric locus is represented.

Fig. 3. Intrinsic curve associated with the ellipse.

When the three starting segments represent an orthogonal axonometric perspective, the intrinsic curve associated with the ellipse is a circle.

5 Conclusions

The diagonal magnitude has been defined as the length of the diagonal of an orthohedron whose sides are formed by the projections of the three unit segments of an axonometric perspective onto a direction coplanar with them. The intrinsic curve associated with the ellipse has been defined as the geometric locus described by the diagonal magnitude when it is built from the origin along each direction used. When the three starting segments represent an orthogonal axonometric perspective, the intrinsic curve associated with the ellipse is a circle.

References

1. L. Gimena, P. Gonzaga and F.N. Gimena. Main axonometric system related views as tilt of the
coordinate planes. Proceedings of IMProVe 2011, Venice, June 2011, pp 748-752.
2. L. Gimena, F.N. Gimena and P. Gonzaga. New Constructions in Axonometric System Funda-
mentals. Journal of Civil Engineering and Architecture 2012, 6(5), 620-626.
3. P. Gonzaga, L. Gimena, F.N. Gimena, M. Goñi. Intrinsic relations between the orthogonal axonometric system and its associated obliques. Analytical proposal and graphic operations. Proceedings of XXV International Conference on Graphics Engineering, San Sebastián, June 2015, pp 297-306.
Pohlke Theorem: Demonstration and Graphical
Solution

Faustino GIMENA1*, Lázaro GIMENA1, Mikel GOÑI1 and Pedro GONZAGA1


1
Department of Projects Engineering. Public University of Navarre
* Corresponding author. Tel.: +34 948 169 225. E-mail: faustino@unavarra.es

Abstract. The axonometric system defined by Pohlke is known geometrically as a means of representing spatial figures using a cylindrical projection and proportions. His theorem states that the three unit vectors on the orthogonal axes of a basis in space can be transformed into three arbitrary vectors with a common origin located in the frame plane. Another way of expressing this theorem: given three non-coincident segments meeting at one point in a plane, there is a trirectangular unit trihedron in space that can be transformed into these three segments. This paper presents a graphical procedure to demonstrate and solve Pohlke's theorem. To do this, we start from previous work by the authors on the axonometric perspective: graphic constructions that allow a single, joint, invariant description of the relationships between an orthogonal axonometric system and the oblique axonometric systems associated with it, together with the geometric locus generated by the diagonal magnitude positioned along any direction in the picture plane. This magnitude is the square root of the sum of the squares of the projections of the three segments representing the axonometry onto an arbitrary direction.

Keywords: Axonometric system, Descriptive geometry, Pohlke theorem.

1 Introduction

This research deals with the oblique cylindrical projection in order to express graphically both Pohlke's theorem and the more general axonometric establishment [1]. Main axonometric system related views as tilt of the coordinate planes [2] was employed as a starting point, establishing an arbitrary orthogonal projection. In New constructions in axonometric system fundamentals [3] the former study was extended, presenting new construction operations for the perspective starting from the singular position of the orthogonal views that are used in this work. Intrinsic relations between the orthogonal axonometric system and its associated obliques. Analytical proposal and graphic operations [4] aimed to extend the approach of the constructive and analytical aspects of the axonometric system, from the orthogonal to the oblique. The intrinsic triangle of the axonometry was defined and its geometrical properties were established. Projective and metric relationships
© Springer International Publishing AG 2017 981


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_98
982 F. Gimena et al.

were used on the studied figures, and this focus permitted the development of new axonometric constructions. In this paper, the authors demonstrate and solve Pohlke's theorem graphically and give an operative graphical procedure for the general axonometric approach. This procedure enables the comprehension and application of graphical engineering techniques.

2 Orthogonal and oblique axonometric system

An orthogonal trihedron Oxyz is chosen as reference system. Taking from the vertex O = [0, 0, 0], on the lines x, y and z respectively, a unit magnitude u, the points I = [u, 0, 0], J = [0, u, 0] and K = [0, 0, u] are determined. The frame plane is π_p ≡ a_x x + a_y y + a_z z = 0, whose direction is d_π ≡ [a_x, a_y, a_z], verifying a_x^2 + a_y^2 + a_z^2 = u^2. The orthogonal projection of the points I, J and K onto the frame plane determines the points:

I' = [u^2 - a_x^2, -a_x a_y, -a_x a_z] / u
J' = [-a_x a_y, u^2 - a_y^2, -a_y a_z] / u
K' = [-a_x a_z, -a_y a_z, u^2 - a_z^2] / u

Fig. 1. Axonometric system.



If the axes of the trihedron Oxyz are projected onto the plane π_p along an arbitrary direction d_o, the oblique axonometric perspective is obtained. This oblique direction is d_o ≡ [a_xo, a_yo, a_zo], verifying a_x a_xo + a_y a_yo + a_z a_zo = u^2. The end point associated with the projection direction is O_o, and its orthogonal projection onto the frame plane is O_o' = [a_xo - a_x, a_yo - a_y, a_zo - a_z]. Figure 1 shows that the measure associated with this point is always unity. The distance from this point to the origin defines the first fundamental length l_a = OO_o' = u sqrt(e_o^2 - 1). The oblique projection of the points I, J and K onto the frame plane determines the points:

I_o = [u^2 - a_x a_xo, -a_x a_yo, -a_x a_zo] / u
J_o = [-a_y a_xo, u^2 - a_y a_yo, -a_y a_zo] / u
K_o = [-a_z a_xo, -a_z a_yo, u^2 - a_z a_zo] / u

The oblique axonometric scales are OI_o = u_xo, OJ_o = u_yo and OK_o = u_zo. The following expression between the axonometric scales defines the second fundamental length: l_b = sqrt(u_xo^2 + u_yo^2 + u_zo^2) = u sqrt(e_o^2 + 1). Here some geometric elements configuring the axonometric perspective have been noted. To broaden this study, see the reference Gimena et al. 2015 [4].

3 Construction of the oblique axonometric perspective taking the orthogonal axonometry as a starting point

Here we show how to build an oblique axonometric perspective from the orthogonal one. The oblique straight line d_o is presented as a projection direction, together with its point O_o associated with the unit measure. A pencil of straight lines is built relating this limit point O_o with the intrinsic points I_x, J_y and K_z.

Fig. 2. Construction of the oblique axonometry.



This pencil of straight lines is cut by the parallels to the direction d_o drawn through the unit points I', J' and K' of the orthogonal axonometry, at the unit points I_o, J_o and K_o of the oblique axonometry. In this manner, the axes x_o, y_o and z_o of the oblique axonometric perspective are determined. Figure 2 also presents the geometric properties that are necessary for the graphical deduction of Pohlke's theorem.

4 Diagonal magnitude

In this section we start from the three segments joining the origin with the points I_o, J_o and K_o. Each segment is projected onto an arbitrary direction b ≡ [b_x, b_y, b_z] / u (verifying b_x^2 + b_y^2 + b_z^2 = u^2) in the frame plane, and the square root of the sum of the squares of these projections, the diagonal magnitude, is determined. The projections of the unit axonometric segments are OB_xo = I_o · b = b_xo, OB_yo = J_o · b = b_yo and OB_zo = K_o · b = b_zo.

Fig. 3. Projection direction and diagonal magnitude.



From the coordinate origin and along the direction used, the diagonal magnitude is built; its expression is OB_o = sqrt(b_xo^2 + b_yo^2 + b_zo^2) = b_o. Figure 3 expresses this graphically: the projection direction, the three projections of the axonometric unit segments and their associated diagonal magnitude.
Any projection direction can be expressed as a function of the principal directions, which in this work are: first, b(π/2), perpendicular to d_π and d_o; secondly, b(0), perpendicular to the former direction. The diagonal magnitudes associated with these two principal directions are OB_o(0) = e_o u and OB_o(π/2) = u.
The projection direction, as a function of these two principal directions, can be annotated as b ≡ b(α) = b(0) cos α + b(π/2) sin α. Its diagonal magnitude can also be expressed as OB_o = OB_o(α) = l_1 = u sqrt(e_o^2 cos^2 α + sin^2 α). Besides, the diagonal magnitude associated with the perpendicular direction b(α + π/2) can also be obtained: OB_o(α + π/2) = l_2. From these diagonal magnitudes l_1 and l_2, the second fundamental length can be obtained:

sqrt(l_1^2 + l_2^2) = u sqrt(e_o^2 + 1) = l_b   (1)

The diagonal magnitude associated with the projection direction b(α + π/4) is

OB_o(α + π/4) = l_3 = u sqrt((e_o^2 + 1)/2 - (e_o^2 - 1) sin α cos α)

With this distance and equation (1), the value of the magnitude associated with the projection direction b(α + 3π/4) can be determined, whose expression is

OB_o(α + 3π/4) = l_4 = sqrt(l_b^2 - l_3^2)

5 Pohlke’s solution theorem

In this section, an analytical and graphical procedure to solve Pohlke's theorem is presented (given three arbitrary segments coincident at an origin point, there is a trirectangular unit trihedron in space whose cylindrical projection produces these three segments).
Given three arbitrary segments OI_o, OJ_o and OK_o meeting at a point in the plane, the diagonal magnitudes l_1 and l_3, associated respectively with the projection directions b(α) and b(α + π/4), are determined (Figure 3).

With these two magnitudes and using equation (1), the diagonals l_2 and l_4, associated respectively with the projection directions b(α + π/2) and b(α + 3π/4), can be deduced.
From the following relation between the four diagonal magnitudes, the first fundamental length can be deduced:

l_a = ((l_1^2 - l_2^2)^2 + (l_3^2 - l_4^2)^2)^(1/4) = u sqrt(e_o^2 - 1) = OO_o

Figure 4 presents the graphical obtention of the fundamental lengths starting from the four diagonal lengths.
Besides, from the fundamental lengths l_a and l_b, the values that define the principal projection directions can be determined:

OB_o(0) = e_o u = sqrt((l_b^2 + l_a^2) / 2)
OB_o(π/2) = u = sqrt((l_b^2 - l_a^2) / 2)

Fig. 4. Graphical operations to determine the fundamental lengths.
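The chain from the two measured diagonals l_1 and l_3 to l_2, l_4, the fundamental lengths and the principal diagonal magnitudes can be sketched numerically. In this Python check, u, e_o and α are illustrative values, and the "measured" diagonals are generated from the closed form of the diagonal magnitude rather than from drawn segments; in the graphical procedure, l_b would come directly from the three given segments.

```python
# Numerical check of the recovery chain of Section 5. u, eo and alpha are
# illustrative assumed values, not taken from the paper.
import numpy as np

u, eo, alpha = 1.0, 1.5, 0.4

def OBo(ang):
    # Closed form of the diagonal magnitude at angle ang from b(0).
    return u * np.sqrt(eo**2 * np.cos(ang)**2 + np.sin(ang)**2)

l1, l3 = OBo(alpha), OBo(alpha + np.pi / 4)      # the two "measurements"

lb = u * np.sqrt(eo**2 + 1)                      # second fundamental length
l2 = np.sqrt(lb**2 - l1**2)                      # from Eq. (1)
l4 = np.sqrt(lb**2 - l3**2)
assert np.isclose(l2, OBo(alpha + np.pi / 2))
assert np.isclose(l4, OBo(alpha + 3 * np.pi / 4))

# First fundamental length from the four diagonal magnitudes.
la = ((l1**2 - l2**2)**2 + (l3**2 - l4**2)**2) ** 0.25
assert np.isclose(la, u * np.sqrt(eo**2 - 1))

# Principal diagonal magnitudes recovered from la and lb.
assert np.isclose(np.sqrt((lb**2 + la**2) / 2), eo * u)   # OB_o(0)
assert np.isclose(np.sqrt((lb**2 - la**2) / 2), u)        # OB_o(pi/2)
print(l1, l2, l3, l4, la, lb)
```

Note that in the orthogonal case e_o = 1, so l_a = 0 and all four diagonals coincide with u, consistent with the oblique direction collapsing onto the orthogonal one.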

Figure 5 shows a procedure for obtaining the diagonal magnitudes OB_o(0) and OB_o(π/2) associated with these two principal directions.

Fig. 5. Graphical procedure to determine the principal diagonal magnitudes.

Fig. 6. Graphical solution of Pohlke's theorem.

Figure 6 presents the graphical solution of Pohlke's theorem, consisting in determining the point O_o associated with the oblique projection direction.

The relations presented in the section on the construction of an oblique axonometry starting from an orthogonal one can be applied to obtain all the elements that define the reference trihedron Oxyz. With the presented procedure, Pohlke's theorem has been demonstrated and solved both analytically and graphically.

6 Conclusions

Pohlke's theorem can be stated as follows: given three arbitrary segments coincident at an origin point, there is a trirectangular unit trihedron in space whose projection in a cylindrical projection produces these three segments. This paper presents the graphical demonstration and solves the theorem. There is no graphical complexity in determining the diagonal magnitude associated with any direction in the frame plane, nor in determining the fundamental lengths and obtaining the values that define the principal projection directions. From these graphic measures and constructions, through the invariant relations between the orthogonal axonometric system and its associated oblique axonometric systems, a simple and practical way to solve Pohlke's theorem has been found.

References

1. M. Pémová. Theory and practice of the representation of space objects in the school mathe-
matics. Acta Didactica Universitatis Comenianae: Mathematics, 2008, 8, 79-101.
2. L. Gimena, P. Gonzaga and F.N. Gimena. Main axonometric system related views as tilt of the
coordinate planes. Proceedings of IMProVe 2011, Venice, June 2011, pp 748-752.
3. L. Gimena, F.N. Gimena and P. Gonzaga. New Constructions in Axonometric System Funda-
mentals. Journal of Civil Engineering and Architecture 2012, 6(5), 620-626.
4. P. Gonzaga, L. Gimena, F.N. Gimena, M. Goñi. Intrinsic relations between the orthogonal axonometric system and its associated obliques. Analytical proposal and graphic operations. Proceedings of XXV International Conference on Graphics Engineering, San Sebastián, June 2015, pp 297-306.
Part VII
Geometric Product Characteristics

GPS and GD&T are acronyms more and more common, and in many cases mandatory, in the present-day design and manufacturing of goods whose higher complexity, in terms of assembly and shape, requires specific approaches to better characterize their geometry as well as their functionality. A proper geometric specification and tolerancing definition, together with the geometric and functional characterization of products, leads to a better understanding of both the product and the manufacturing processes, addressing the correct way to obtain higher-quality products and, at the same time, production cost savings. The presented researches range from the proposition of a new method that offers a complete, consistent and efficient description of the geometrical and topological parameters of a part, including information such as dimensions and tolerances, to the introduction of a sound mathematical description of tolerance variability, able to significantly reduce the computational effort for the cumulative stack-ups of geometrical deviations in tolerance analysis.
Control and inspection tasks, which usually need a big investment in equipment and in training activities of the technical staff, can be carried out through virtual measuring labs.
Other papers deal with research on the advantages coming from the use of CAT systems in the early design stages, and on the possibility of integrating the classic ISO specification approach for rigid parts with the elastic compliant behavior of hyperstatic mechanisms, aimed at guaranteeing the conformity of the functional requirements and the parts' assemblability.
As the characterization of a part's shape defects coming from the manufacturing processes is important for many assembly processes, the comparison of several shape decomposition methods seems to be the starting point to get accurate information on the outcomes of the manufacturing system.

The geometrical characterization of products is today often carried out by using reverse engineering techniques able to virtualize the acquired physical part with automatic or user-guided approaches, but only the latter offer an efficient way to generate a parametric CAD model of the part. This also passes through advanced segmentation techniques for the correct recognition of each feature and its reconstruction.
Research in different industries is also shown, from the characterization of vulcanized rubber products to porcelain dishware specification.

Jean-Yves Dantan – ENSAM

María L. Martinez-Muneta - Univ. Politecnica de Madrid

Salvatore Gerbino - Univ. Molise


Section 7.1
Geometric Product Specification and
Tolerancing
ISO Tolerancing of hyperstatic mechanical
systems with deformation control

Oussama ROUETBI1,2*, Laurent PIERRE1, Bernard ANSELMETTI1 and


Henri DENOIX2
1
LURPA, ENS Cachan, Univ. Paris-Sud, Université Paris-Saclay, 61 avenue du président
Wilson, F-94235 Cachan France
2
Etudes et Productions Schlumberger, 1 rue Henri Becquerel, 92140 Clamart, France
* Corresponding author. Tel.: +33147402757; fax: +0-000-000-0000. E-mail address:
oussama.rouetbi@ens-cachan.fr

Abstract
The functional tolerancing of hyperstatic mechanisms provides contractual documents established following ISO tolerancing. Tolerancing methodologies consider the mechanism to be infinitely rigid. These mechanisms impose tight clearances to ensure the functional requirements and the parts' fittability. The proposed methodology consists in developing a mechanical model relating the tolerances obtained by traditional geometrical tolerancing methods and the parts' deformability, in order to define the tolerance values of the geometrical specifications. The first step is to define the geometrical specifications with ISO tolerancing. The fittability between two parts in contact requires maximum material conditions; the functional requirements employ least material conditions. The second step consists in defining the capacity of the parts to deform, taking the tolerance values into account. A mechanical model relating the parts' deformability to the tolerances is described, to guarantee the conformity of the functional requirements and the fittability of the assembly parts. As a validation example, the proposed methodology is applied to a hyperstatic mechanism composed of two subassemblies: an outer tube and a shaft made of several assembled sections.
Keywords: ISO tolerancing; hyperstatic mechanism; deformation; maximum and least material requirements; mechanical model.

1 Introduction

In a classic tolerancing approach, the parts' defects are compensated by the clearances to guarantee the assembly requirements. Hyperstatic assemblies impose tight clearances; the defects are then compensated by certain parts' deformations. In the academic works involving geometrical tolerancing and deformation, the deviations are calculated at the nominal dimensions of the CAD model. They are then added to the accumulated tolerances to verify the functional requirements (1).

© Springer International Publishing AG 2017 993


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_99
994 O. Rouetbi et al.

In the case of hyperstatic mechanisms, in order to fit the assembly parts, the deformations depend on the parts' geometrical defects. The determination of the accumulated tolerances is therefore necessary to calculate the forces to be applied and the deformation of the parts in the worst case. It is on the assembly thus deformed that the different functional requirements can be verified.
This methodology is developed in this paper. Firstly, a given mechanism is toleranced as an assembly of rigid parts, to understand the assembly process difficulties and to guarantee the functional requirements. The next step relies on the parts' capacity to deform, integrating it into the tolerancing approach by building a mechanical model (finite element analysis, beam model, ...) as done in (2), (3) and (4).
To validate the proposed methodology, it is applied to an assembly inspired by a mechanism used in the petroleum industry.

2 Proposed methodology

The flow chart in Figure 1 shows the proposed methodology. This methodology exploits the deformability of the parts composing a hyperstatic assembly to ensure the assembly and functional requirements. In order to optimise the tolerance values, the main idea is to determine a mechanical model. The latter gives a mathematical relation between the forces necessary to fit all the assembly parts and the tolerance values, considering the interferences between parts. Several studies have considered contact interactions in tolerancing analysis (5), (6) and (7).

[Flow chart: define the CAD model and functional requirements → tolerancing of parts and choice of tolerances → determination of interferences between parts → choice of tolerance values → determination of the forces to apply in order to fit all the assembly parts → functional requirements guaranteed? If not, optimise the set of tolerance values and loop back.]

Fig. 1. Flow chart of the proposed methodology.



2.1 Tolerancing approach of rigid parts

Generally, a designer initially creates a CAD model composed of parts with perfect shape at nominal dimensions. This model does not take into account the manufacturing or assembly processes, which introduce geometrical defects into the mechanism. Geometrical tolerancing ensures the fittability of the parts and the functional requirements. For that purpose, the designer should define geometrical specifications related to the assembly process and the functional requirements. The tolerancing approach is divided into three steps: tolerancing the parts in contact, tolerancing the assembly of parts, and the transfer of functional requirements.
To ensure the assembly requirements of parts in contact, the designer should define the assembly process, giving the order of the parts' positioning and the adjustment operations. The assembly requirements can be handled using the CLIC method (8) (French acronym for "Location Tolerancing with Contact Influence") developed by B. Anselmetti according to ISO tolerancing (9) with the maximum material requirement (10). The geometric specifications are defined with respect to the assembly requirements, guaranteeing the fittability of the parts in contact.
In the case of positioning subassemblies in contact, a transfer of the assembly requirements must be applied to each part. Various mathematical models allow calculating the dimensioning chains, such as the analysis lines method (8), behaviour laws (11) and assembly conditions with polytopes (12).
Finally, to define the functional tolerancing, the designer needs to specify the operating conditions of the mechanism. The transfer of functional requirements is carried out using the same tools as for the transfer of assembly requirements. The tolerancing analysis provides mathematical relations resolved by analysis lines in Quick GPS (13), polytopes (14), deviation domains (15), T-maps (16)... These conditions generate geometrical specifications with reference systems at least material requirements (10).
During these different steps, the designer can adjust the tolerance values respecting various constraints, including manufacturability, costs, etc.

2.2 Tolerancing approach considering deformable parts

If the various requirements (assembly, functional) cannot be simultaneously respected, the proposed method can be applied, taking the parts' deformations into account. This method consists in determining a mechanical model giving a mathematical relation between the forces necessary to fit all the assembly parts and the tolerance values.
The designer studies the capacity of the different parts to be deformed. For each part, the deformability depends on the size, the shape and the material used. The deformability behaviour can differ from one direction to another.
The next step is to build a mechanical model at nominal dimensions, respecting the shapes and the functional or assembly requirements. For example, a beam model can be used rather than a finite element model for slim parts. In boundary

conditions, the effect of the geometrical variations of the manufacturing and assembly processes is introduced into the beam model as displacements. For a finite element model, some displacements of nodes are imposed. Finally, the designer proposes a geometrical model taking the effect of the geometrical defects and the parts' deformability into account.
The last step consists in optimising a set of tolerance values, in particular minimizing the associated costs by increasing their values while respecting the functional requirements, the maximum mechanical stress in and between parts, etc.
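As a minimal numerical illustration of the beam-model idea (imposing the geometrical defect as a displacement and reading off the required force), one can use the classical cantilever formula F = 3 E I δ / L^3 for a tip load producing a tip deflection δ. All numbers below are illustrative stand-ins, not data from the studied mechanism.

```python
# Cantilever-beam stand-in for the mechanical model: force required to
# impose a tip displacement equal to a geometrical defect delta.
# Classical tip-load formula: F = 3 * E * I * delta / L**3.
# All values are illustrative, not taken from the paper.
import math

E = 210e9            # Young's modulus of steel, Pa
L = 0.5              # beam (slim part) length, m
d_out = 0.04         # outer diameter of a tubular slim part, m
d_in = 0.03          # inner diameter, m
I = math.pi * (d_out**4 - d_in**4) / 64   # second moment of area of a tube, m^4
delta = 0.2e-3       # 0.2 mm straightness defect to absorb, m

F = 3 * E * I * delta / L**3
print(f"force to impose the defect: {F:.1f} N")
```

The point of the sketch is the direction of the calculation: the tolerance value fixes the worst-case displacement to impose, and the model returns the fitting force to compare against the allowable loads and stresses.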

3 Application

3.1 Description of mechanism

The studied mechanism is a hyperstatic assembly composed of two subassemblies:
an outer tube and a shaft made of several assembled sections (see Figure 2). The
shaft is composed mainly of 4 slim parts separated by seal supports.

[Figure: outer tube containing the shaft sections S0, P1, S1, P2, S2, P3, S3, P4, S4, with elastic seals, seal supports and slim parts labelled]
Fig. 2. Hyperstatic assembly of a multi-components shaft in an outer tube

For the proper functioning of the mechanism, contact must not occur between the
metallic parts of the shaft and the tube. To guarantee that no metal-to-metal
contact occurs, the shaft guidance is ensured by elastic seals, which should be
mounted with some squeeze.
Since the mechanism is symmetric, the assembly process is described as follows:
the shaft assembly can start from either of the two end seal supports (S0 or S4
shown in Figure 2). Each seal support or slim part is positioned relative to the
previous part by a planar contact and a short cylinder, and maintained in position
by four screws. Once the shaft is assembled, it is inserted into the outer tube by
pulling it.
The difficulty at this stage is to compensate for the geometrical defects of the
shaft so that it can be inserted into the outer tube while maintaining permanent
contact at the elastic seals (see Figure 3). Overcoming these assembly difficulties
requires deformations of the shaft parts and elastic seals. The squeeze of the
elastic seals induces contact forces and significant friction with the outer tube
during insertion.
ISO Tolerancing of hyperstatic mechanical ... 997

Fig. 3. Example of a straightness defect caused by geometrical defects

3.2 Functional tolerancing of rigid parts

The functional tolerancing is performed with the CLIC method (8). The assembly
requirement between shaft parts is described by the geometrical specifications
marked with asterisk (1) in Figure 4. The transfer of shaft straightness is
guaranteed by the specifications marked with asterisk (2) in Figure 4. To guarantee
the non-detachment of the seals, their supporting surface is taken as a reference on
the seal supports to locate the geometrical specifications (reference F in
Figure 4.b).
Fig. 4. (a) Geometrical specifications of slim part (b) Geometrical specifications of seal support

For the outer tube, three geometrical specifications (see Figure 5) are proposed to
ensure fittability (straightness at maximum material), contact maintenance
(straightness at least material) and evenly distributed contacts at the elastic seals
(cylindricity and envelope requirement).

Fig. 5. Geometrical specifications of slim parts, seal support and the outer tube

This paper mainly examines the straightness of the shaft at the final step of
insertion into the outer tube, in order to find a mathematical relation between the
forces necessary to fit all the assembly parts and the tolerance values.
Firstly, a tolerancing analysis was performed by the analysis lines method (8) to
detect the impact of geometrical defects and to understand the worst-case
configurations for the straightness of the shaft maximising the insertion forces.
The result of this study is that the straightness is mainly affected by the clearances
and the angular deviations γi at the junctions between the seal supports and the
slim parts, as shown in Figure 6.

Fig. 6. Worst case configuration according to straightness of the shaft

3.3 Assembly requirements of deformable parts

Guaranteeing the insertion according to the analysis lines method generates tight
tolerance values. To relax them, and given the shape of the slim parts, a beam
model can be built. The insertion force is defined as a sum of the influences of the
different tolerance values, in the form:

F_{\text{insertion}} = \sum_i c_i \, t_i \qquad (1)

The shaft is modelled as a continuous, deformable beam with a different section
for each slim part. The shaft is inserted into the outer tube, assumed infinitely
rigid, with pointwise elastic contacts modelled by springs. The geometrical defects
of the outer tube are transposed onto the shaft. As a result, the studied mechanism
can be simplified into a beam resting on elastic unilateral contacts modelled by
springs (see Figure 7). The deviation of each junction between a slim part and the
adjacent seal supports, created by the clearances and the angular deviations, is
introduced into the beam model as boundary-condition displacements noted δi (see
Figure 7). The maximum value of δi is evaluated by the analysis lines method,
taking as reference the axis passing through the centres of S0 and S1:

\delta_i^{\max} = \sum_j k_j \, t_j \qquad (2)

The mechanism is studied as a beam resting on N elastic unilateral contacts (see
Figure 7).
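The worst-case evaluation of eq. (2) is a direct accumulation of tolerance contributions. A minimal sketch, where the influence coefficients k_j (as produced by the analysis lines method) and the tolerance values t_j are illustrative, not the paper's case-study data:

```python
# Worst-case boundary displacement of eq. (2): delta_i_max = sum_j k_j * t_j.
# The coefficients k_j and tolerances t_j are hypothetical example values.
k = [1.0, 0.5, 2.0]      # influence of each tolerance on the junction deviation
t = [0.05, 0.10, 0.02]   # tolerance values (mm)

delta_max = sum(kj * tj for kj, tj in zip(k, t))   # ~0.14 mm here
```

The resulting delta_max is what gets imposed on the beam model as a boundary-condition displacement.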

[Figure: beam on N elastic supports S0, S1, S2, …, Si, Si+1, …, SN; insertion force applied at one end; span lengths L1, L2, …, Li+1; total length L]

Fig. 7. 2D Beam model

An energetic method applied to beam theory determines a matrix T respecting the
static equilibrium of the beam and springs. The springs release their energy into
the beam, creating its deformation. Equation (3) gives the relation between the
vector of contact forces FS and the boundary conditions δ.

F_S = T \cdot \delta \qquad (3)

This methodology was applied with the shaft placed in the worst case described by
the analysis lines method in Figure 6. Straightening the shaft inside the outer tube
gives an insertion force of the form of equation (4), where μ is the friction
coefficient between each seal and the outer tube and f is the row matrix of friction.

F_{\text{insertion}} = \sum_i \mu \, F_{S_i} = f \cdot F_S = f \cdot T \cdot \delta \qquad (4)
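Equations (3) and (4) can be illustrated numerically. A minimal sketch, in which the transfer matrix T, the junction deviations δ and the friction coefficient μ are hypothetical values rather than the paper's case-study data:

```python
import numpy as np

# Numerical sketch of eqs (3)-(4); all values are illustrative assumptions.
T = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  4.0]])     # N/mm, transfer matrix from the beam model
delta = np.array([0.05, 0.10, 0.05])   # mm, boundary-condition displacements

F_S = T @ delta                        # eq (3): contact forces at the seals
mu = 0.3                               # seal / outer-tube friction coefficient
f = mu * np.ones(3)                    # row matrix of friction
F_insertion = f @ F_S                  # eq (4): total friction force to overcome
```

The unilateral-contact assumption means the entries of F_S should be non-negative for a valid configuration, as they are for the displacements chosen here.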

The worst-case configuration for the straightness of the shaft with planar
deviations gives a reduced insertion force, and may not be the worst case for the
insertion force itself. For that reason, the search for worst-case configurations
was integrated into the beam model. By generating 3D geometrical deviations, the beam

model, as described above, is used to calculate the insertion force for all possible
configurations (see Figure 8).

[Figure: shaft axis O–S0–P1–S1–P2–S2–P3–S3–P4–S4 along x, with local y–z frames at each junction showing the 3D geometrical deviations]

Fig. 8. 3D Geometrical deviations

As shown in Figure 8, zi is the direction of the angular deviation γi, and θi is the
angle between zi and z. With this approach, the worst-case configuration occurs
when all the γi values are maximised and the θi values alternate between zero and
π from one junction to the next. The shaft then takes a "W" shape in the Oxy plane,
corresponding to the maximum moment of inertia. In this case, the insertion force
is estimated at 2.5 times the insertion force obtained with the worst case of shaft
straightness (see Figure 6).
The last step of the proposed methodology is to optimise the different tolerance
values. It aims to find a compromise: minimising the cost by increasing the
tolerance values while respecting a maximum insertion force.

f \cdot T \cdot \delta \le F_{\text{insertion}}^{\max} \qquad (5)
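The trade-off behind eq. (5), enlarging tolerances to cut cost while capping the insertion force, can be sketched as a tiny linear program. With one weighted-sum constraint and box bounds on each tolerance, a fractional-knapsack greedy solves it exactly; the sensitivity coefficients a_i (standing in for the combined effect of f, T and the k_j of eq. (2)) and the bounds are hypothetical:

```python
def maximise_tolerances(a, t_min, t_max, F_max):
    """Maximise sum(t) subject to sum(a[i]*t[i]) <= F_max and t_min <= t <= t_max.

    a[i] > 0 is the (assumed) sensitivity of the insertion force to tolerance
    t[i]. Greedy: start every tolerance at its lower bound, then spend the
    remaining force budget on the cheapest tolerances (smallest a[i]) first,
    which is optimal for a single linear constraint with box bounds.
    """
    t = list(t_min)
    budget = F_max - sum(ai * ti for ai, ti in zip(a, t))
    if budget < 0:
        raise ValueError("lower bounds already violate the force limit")
    for i in sorted(range(len(a)), key=lambda j: a[j]):
        grow = min(t_max[i] - t[i], budget / a[i])
        t[i] += grow
        budget -= a[i] * grow
    return t

# Hypothetical data: 3 tolerances, maximum insertion force 0.3 N
a     = [2.0, 0.5, 1.0]
t_lo  = [0.01, 0.01, 0.01]
t_hi  = [0.20, 0.20, 0.20]
t_opt = maximise_tolerances(a, t_lo, t_hi, F_max=0.3)
```

The tolerance with the smallest force sensitivity is opened fully first, mirroring the paper's intent of buying cost reduction where it is cheapest in insertion force.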

4 Conclusions

This paper presents a methodology that provides a mathematical model to optimise
the tolerance values and the assembly processes, relating them to the forces
necessary to fit all the assembly parts. The geometrical defects encountered on a
mechanism include the variability of the manufacturing and assembly processes,
and the mechanical model (finite element analysis, beam model, etc.) takes the
parts' deformability into account. This methodology is illustrated by estimating
the forces necessary to assemble mechanisms used in the petroleum industry.
This methodology optimises only tolerance values rather than proposing
geometrical specifications. One future perspective for this work is to take the
parts' capacity to deform into account at the level of the transfer of functional
requirements. The tolerancing analysis methodology used in this article can be
applied to other types of mechanical systems.

References

1. J. Stuppy and H. Meerkamm, Tolerance analysis of a crank mechanism by taking into account different kinds of deviation, in 11th CIRP CAT Conference, Annecy, France, March 2009.
2. G. Cid, F. Thiébaut and P. Bourdet, Taking the deformation into account for components' tolerancing, in 5th Conference on IDMME, Bath, UK, April 2004.
3. G. Cid, Etablissement des relations de comportement de mécanismes avec prise en compte des écarts géométriques et des souplesses des composants, PhD thesis, ENS Cachan, December 2005.
4. I. A. Manarvi and N. P. Juster, Framework of an integrated tolerance synthesis model and using FE simulation as a virtual tool for tolerance allocation in assembly design, Journal of Materials Processing Technology, vol. 150, no. 1-2, pp. 182-193, 2004.
5. K. Xie, L. Wells, J. A. Camelio and B. D. Youn, Variation Propagation Analysis on Compliant Assemblies Considering Contact Interaction, Journal of Manufacturing Science and Engineering, ASME, vol. 129, no. 5, pp. 934-942, October 2007.
6. L. Pierre, D. Teissandier and J.-P. Nadeau, Integration of thermomechanical strains into tolerancing analysis, International Journal on Interactive Design and Manufacturing, vol. 3, no. 4, pp. 247-263, 2009.
7. S. J. Hu and J. Camelio, Modelling and control of compliant assembly, CIRP Annals - Manufacturing Technology, vol. 55, no. 1, pp. 19-22, 2006.
8. B. Anselmetti, Generation of functional tolerancing based on positioning features, Computer-Aided Design, vol. 38, no. 8, pp. 902-919, August 2006.
9. ISO 1101:2013, Geometrical Product Specifications (GPS) – Geometrical tolerancing – Tolerances of form, orientation, location and run-out, 2013.
10. ISO 2692:2014, Geometrical Product Specifications (GPS) – Geometrical tolerancing – Maximum material requirement (MMR), least material requirement (LMR) and reciprocity requirement (RPR), 2014.
11. E. Ballot, Lois de comportement géométrique des mécanismes pour le tolérancement, PhD thesis, Université de Nancy, 1995.
12. D. Teissandier, V. Delos and Y. Couetard, Operations on polytopes: application to tolerance analysis, Kluwer Academic Publishers, 1999, pp. 425-433.
13. B. Anselmetti, R. Chavanne, N. Anwer and J.-X. Yang, Quick GPS: A new CAT system for single-part tolerancing, Computer-Aided Design, vol. 42, no. 9, pp. 768-780, 2010.
14. L. Pierre, D. Teissandier and J.-P. Nadeau, Variational tolerancing analysis taking thermomechanical strains into account: Application to a high pressure turbine, Elsevier, vol. 74, pp. 82-101, 2014.
15. M. Giordano, S. Samper and J.-P. Petit, Tolerance analysis and synthesis by means of deviation domains, axisymmetric cases, in Models for Computer Aided Tolerancing in Design and Manufacturing, Springer, 2007, pp. 85-94.
16. J. K. Davidson, A. Mujezinović and J. J. Shah, A New Mathematical Model for Geometric Tolerances as Applied to Round Faces, Journal of Mechanical Design, vol. 124, no. 4, pp. 609-622, 2002.
How to trace the significant information in tolerance
analysis with polytopes

Vincent Delos1∗ , Denis Teissandier2 and Santiago Arroyave-Tobón2


1 CNRS, I2M, UMR 5295, F-33400 Talence, France
2 Univ. Bordeaux, I2M, UMR 5295, F-33400 Talence, France
∗ Corresponding author. Tel.: +33-5-56-84-53-33. Email address: vincent.delos@u-bordeaux.fr

Abstract Our approach to tolerance analysis of a mechanical system is based on
the use of sets of constraints. These operand sets model the geometrical variations
between two surfaces of the same part or between two surfaces of distinct parts
potentially in contact. They are 6-dimensional polyhedra: 3 variables model the
translation and 3 others the rotation. In order to compute the cumulative stack-up of
deviations between any couple of surfaces, we need to bound these polyhedra,
making polytopes out of them, to run 6D Minkowski sums. Checking the validity of the
geometrical tolerances is the result of a process involving Minkowski sums (modelling
the propagation of the geometrical defects in series through the mechanical
system) and intersections (simulating multiple contacts between different parts).
However, adding artificial bounding facets, or cap half-spaces, can turn out to be far
too expensive because their number skyrockets as a consequence of the accumulation
of degrees of freedom. So we trace all the facets of the polytopes through the
computation process to be able to distinguish the information relevant to the
tolerance analysis from the capped one. Keeping such complexity under control
results in a very significant saving of computation time and memory space. In
the second part of the paper, we apply our approach to an industrial case to
demonstrate its efficiency on a real-life mechanical system. As no restrictive
assumption on the contacts between parts has to be made, our method is perfectly
suitable for over-constrained mechanical systems.

Keywords: Tolerance Analysis, Polyhedra, Polytopes, Cap Half-spaces, Minkowski Sums.

1 Introduction

Approaches to geometric tolerancing by sets of constraints make it possible to
simulate manufacturing defects in 3-dimensional tolerance chains by manipulating
6-dimensional variational models. All possible displacements of a toleranced
feature inside its tolerance zone, or the relative movements of two features of two
parts mated by a contact restriction, are considered. Models such as Domains [1]
and T-Maps [2] consider

© Springer International Publishing AG 2017 1003
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_100

non-linear constraints, while others are based on linear constraints (such as
Polytopes [3]) by considering manufacturing defects as small displacements [4].
When parts are in contact in a serial configuration, the cumulative defect limits
can be calculated by summing the sets of geometric and contact constraints [5]. In
the case of parts with parallel contacts, the intersection of the respective sets of
constraints is required. All possible extreme configurations between any two
surfaces of an assembly can then be computed by means of these two operations.
The set of constraints coming from a toleranced feature mathematically
represents an unbounded polyhedron, due to the degrees of freedom that generate
unbounded displacements. Given the complexity of operating on polyhedra in R6,
Homri et al. [6] suggested turning the polyhedra into polytopes by introducing
virtual boundaries called cap half-spaces. Algorithms for computing Minkowski
sums of polytopes are presented in [7, 8, 9, 10]. Due to the accumulation of
degrees of freedom in the toleranced mechanical systems, this strategy has to face
the multiplication of cap half-spaces during the computation of Minkowski sums.
As a consequence, the time for computing cap facets (facets associated with cap
half-spaces) is in general far greater than that for computing facets representing
real limits of bounded displacements.
In response to this issue, this paper presents an improved method for computing
cumulative stack-ups of geometrical deviations. The method is based on tracing
the cap half-spaces through the computation process so as to distinguish them
from the relevant information, i.e. the half-spaces representing real limits in the
associated tolerancing problem. Therefore, after each operation, the calculated
polytope can be cleaned, reducing the number of cap half-spaces. Keeping such
complexity under control saves a significant amount of computation time and
memory space and, last but not least when facing numerical accuracy problems, it
reduces the probability of triggering a bug.
The next section of the paper describes in detail the way of calculating sums of
polyhedra based on the introduction of cap half-spaces. In section 3 the new
strategy is presented by means of a case study and compared with the traditional
one to demonstrate its efficiency. Finally, some conclusions and perspectives are
discussed in section 4.

2 Tolerance analysis with R6-polytopes

2.1 Polytopes and polyhedra

A polytope is defined as the convex hull of a finite set of points, called the V-
description, or as the bounded intersection of a finite number of half-spaces, called
the H-description. The Minkowski-Weyl theorem states that both definitions are
equivalent.

Fig. 1: Both descriptions are equivalent: on the left the V-description, on the right
the H-description

P = \left\{ x \in \mathbb{R}^n : x = \sum_{i=1}^{k} \lambda_i a_i,\; \lambda_i \in [0,1],\; \sum_{i=1}^{k} \lambda_i = 1,\; a_i \in \mathbb{R}^n \right\} = \bigcap_{j=1}^{l} \bar{H}_j^- \qquad (1)

where \bar{H}_j^- = \{x \in \mathbb{R}^n : \alpha_j^T x + \alpha_{j0} \ge 0,\ \alpha_j \in \mathbb{R}^n,\ \alpha_{j0} \in \mathbb{R}\}. For the polytope
P we use the notations: HP = {H̄j−, j = 1, ..., l} is the list of half-spaces of P,
TP is the list of faces of P, LP = {Fj, j = 1, ..., l} is the list of facets of P, and
VP = {ai, i = 1, ..., k} is the list of vertices of P.
Polyhedra are unbounded polytopes and can also have equivalent H-descriptions
and V-descriptions. The lattice TP of a polytope or polyhedron is the set of all its
faces: the vertices (0-dimensional faces), the edges (1-dimensional faces), ..., the
facets ((n−1)-dimensional faces), and the polytope or polyhedron itself
(n-dimensional face).

2.2 Polyhedra applied to tolerance analysis

A tolerance zone, coming from a geometric, contact or functional specification,
represents limits on the position of a toleranced feature. These limits can be
defined over a finite set of points of the feature, and then a finite set of constraints
is obtained. If each constraint is modelled mathematically as a half-space, the
complete set represents, in turn, a convex H-polyhedron in R6 [3, 11]:

\Gamma = \bigcap_{i=1}^{m} \bar{H}_i^- = \bigcap_{i=1}^{m} \left\{ x \in \mathbb{R}^6 : b_i + a_i^T x = b_i + a_{i1} x_1 + \dots + a_{i6} x_6 \ge 0 \right\} \qquad (2)

where the rotation variables x1, x2, x3 and the translation variables x4, x5, x6
characterize the positioning of the toleranced feature with respect to its nominal
definition, bi is related to the size of the tolerance zone and the aij (1 ≤ j ≤ 6) are
scalar parameters depending on the geometry of the toleranced feature and the
location of the point where the constraints are defined.
If the constraints come from a geometric specification, the operand set is called
a geometric polyhedron. The same applies for contact and functional polyhedra.
Once the variations of each feature are represented by a polyhedron, the cumulative
1006 V. Delos et al.

deviations in a toleranced chain can be calculated by intersecting and summing
them. The definition of the Minkowski sum of two sets is:

A ⊕ B = {a + b, a ∈ A, b ∈ B} (3)
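For convex operands, eq. (3) specialises to the convex hull of pairwise vertex sums. A minimal 2D sketch in pure Python (the operands below are illustrative; the paper's operands live in R6):

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # pop while the last turn is clockwise or collinear
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def minkowski_sum(A, B):
    """A ⊕ B = conv{a + b : a vertex of A, b vertex of B} for convex A, B."""
    return convex_hull([(ax + bx, ay + by) for ax, ay in A for bx, by in B])

square  = [(0, 0), (1, 0), (1, 1), (0, 1)]
diamond = [(0, -1), (1, 0), (0, 1), (-1, 0)]
S = minkowski_sum(square, diamond)   # an octagon with 8 vertices
```

Vertex enumeration like this is exponential in high dimension, which is why the paper relies on dedicated R6 algorithms rather than this brute-force construction.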

According to eq. (2), the polyhedra coming from a tolerance analysis problem
are originally defined by their H-description (set of half-spaces). In addition, this
way of describing polyhedra is natural for performing intersections [14]. However,
a Minkowski sum of polytopes is defined as the convex hull of the sum of the
extreme points of the operands, so handling the V-description (set of vertices) can
be very useful, see [10]. For solving a tolerance analysis problem we designed a
method where the operands must be available in double description (to sum
HV-description polytopes see [9]) to switch easily between sums and intersections.
A polyhedron coming from a toleranced feature is an unbounded polytope in
R6, except for those coming from the most complex surfaces. This is due to the
theoretically unbounded displacements linked to the degrees of invariance of the
toleranced surfaces or the degrees of freedom of the toleranced joints. Because of
the symmetry conditions of the constraints imposed by the tolerance zones, these
polyhedra are generally not pointed but prismatic. As a consequence, the
intersection of the half-spaces does not generate any vertex, and therefore the
computation of the Minkowski sums becomes even more complex.

2.3 Adjunction of cap half-spaces

In order to tackle the aforementioned difficulties in treating polyhedra, Homri
et al. [6] suggest turning them into polytopes by adding artificial bounding facets,
called cap half-spaces (see figure 2):
P = \bigcap_{i=1}^{m} \bar{H}_i^- \;\cap\; \left( \bigcap_{j=m+1}^{m+m_c} \bar{H}_j^- \right) = \Gamma \cap \left( \bigcap_{j=m+1}^{m+m_c} \bar{H}_j^- \right) \qquad (4)

where mc = 2d and d is the number of degrees of invariance (or freedom) of the
toleranced feature. However, if we want to handle polytopes to perform the sum
as if they were polyhedra, we have to make sure that the polyhedron lattice is not
intrinsically affected by the introduction of the cap half-spaces. So we propose the
following definition to make sure the topological structure of the polyhedron can
be retrieved from that of the polytope:
Definition. Let Γ be an unbounded irredundant intersection of half-spaces.
HΓcap = {H̄j−} is a list of cap half-spaces for Γ if and only if
• P = Γ ∩ (∩j H̄j−) is a full-dimensional polytope
• No face of Γ is excluded or made redundant by a half-space of {H̄j−}

Fig. 2: Illustration of the introduction of cap half-spaces to virtually limit an
unbounded displacement in a polyhedron
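A 2D toy version of this construction, with hypothetical numbers: the strip |y| ≤ 1 is a prismatic polyhedron with no vertices (unbounded in x, analogous to a remaining degree of freedom), and two cap half-spaces |x| ≤ M bound it into a polytope whose vertices can then be enumerated.

```python
import numpy as np

# H-description: each triple (a, b, c) encodes the half-space a*x + b*y <= c.
strip = [(0.0,  1.0, 1.0),    #  y <= 1
         (0.0, -1.0, 1.0)]    # -y <= 1, i.e. y >= -1 : unbounded in x
M = 10.0                       # arbitrary cap size (illustrative)
caps  = [(1.0, 0.0, M), (-1.0, 0.0, M)]   # cap half-spaces |x| <= M
H = strip + caps

def vertices(H, tol=1e-9):
    """Enumerate feasible intersections of pairs of bounding lines."""
    V = []
    for i in range(len(H)):
        for j in range(i + 1, len(H)):
            A = np.array([H[i][:2], H[j][:2]])
            c = np.array([H[i][2], H[j][2]])
            if abs(np.linalg.det(A)) < tol:
                continue                      # parallel boundaries
            p = np.linalg.solve(A, c)
            if all(a * p[0] + b * p[1] <= rhs + tol for a, b, rhs in H):
                V.append((round(p[0], 6), round(p[1], 6)))
    return sorted(set(V))

V = vertices(H)   # the strip only gains vertices from the caps: (±M, ±1)
```

Every vertex of this polytope lies on a cap boundary, which is the 2D analogue of the facet explosion the paper describes in R6.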

Even though adding cap half-spaces is necessary to run the sums, the resulting
price to pay can become too high in terms of computation time. This is due to the
fact that most of the facets of the sum rapidly become cap facets. So we decided
to use a geometric property to classify all the computed half-spaces.
In a sum of polytopes, every face of the result can be decomposed into a sum of
faces from the different summands, and such a decomposition is unique (see [8]):
F \in T_{P' \oplus P''} \Rightarrow \exists!\, F' \in T_{P'},\ \exists!\, F'' \in T_{P''} \ /\ F = F' \oplus F'' \qquad (5)

Let us assume F is a facet of the sum. Both faces F′ and F″ have their own
V-descriptions, called VF′ and VF″. We can describe each of these vertices as an
intersection of their supporting hyperplanes, coming from the operand polytopes,
to check whether one of them is actually the frontier of a cap half-space. In that
case it would mean that VF′ or VF″ is related to a cap half-space, because it
contains a vertex created to turn a polyhedron into a polytope, which means that
F has to be considered as a cap facet. This property allows removing an important
number of facets that do not have, strictly speaking, a physical meaning. We can
then rebuild a lighter polytope in terms of faces, as depicted further in section 3.3.
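The decomposition property (5) has a simple 2D analogue that shows the tracing idea: for convex polygons in general position, every edge of A ⊕ B is a translate of exactly one operand edge, so a "cap" flag attached to an operand edge propagates unambiguously to the sum. A sketch with hypothetical flags (edges with equal directions would merge in the true sum; they are kept separate here for clarity):

```python
from math import atan2

def edges(poly, cap_flags):
    """Yield (edge_vector, is_cap) for each edge of a CCW polygon;
    edge i runs from vertex i to vertex i + 1."""
    n = len(poly)
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        yield (x1 - x0, y1 - y0), cap_flags[i]

def minkowski_edge_trace(A, capA, B, capB):
    """Edges of A ⊕ B for convex CCW polygons: the union of the operand
    edges ordered by direction angle, each keeping its cap flag."""
    es = list(edges(A, capA)) + list(edges(B, capB))
    return sorted(es, key=lambda e: atan2(e[0][1], e[0][0]))

# A square bounded by two real facets (bottom, right) and two caps (top, left),
# summed with a triangle whose hypotenuse is flagged as a cap:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
capS   = [False, False, True, True]
tri    = [(0, 0), (1, 0), (0, 1)]
capT   = [False, True, False]
traced = minkowski_edge_trace(square, capS, tri, capT)
n_caps = sum(1 for _, is_cap in traced if is_cap)   # 3 cap-derived edges
```

Once the cap-derived boundary elements are identified, they can be discarded and a minimal set of caps re-introduced, which is the cleaning step of section 3.3.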

3 Two strategies to compute the cumulative stack-up of geometrical deviations

3.1 Case study

In order to illustrate the advantages of controlling the number of cap half-spaces
of the calculated polytopes, the tolerance analysis of a sub-assembly of a piston
engine is presented. The functional condition of the studied mechanism implies
controlling the relative position of the inferior nominally cylindrical surface of the
connecting rod and the upper nominally planar surface of the piston, surfaces (1,2)
and (3,2) in figure 3. The idea is then to simulate with polytopes the geometrical
defects of the involved parts to determine the maximal stack-up of deviations in
the contact architecture.

Fig. 3: From the CAD model to the corresponding contact graph

In the contact graph of the mechanism depicted in figure 3, each edge represents
some deviations. By modeling these deviations with geometric and contact poly-
topes the following relation can be deduced:

P1,2/3,2 = P1,2/1,0 ⊕ P1,0/1,1 ⊕ P1,1/2,1 ⊕ P2,1/3,1 ⊕ P3,1/3,0 ⊕ P3,0/3,2 (6)

where Pi/j is the polytope describing the relative position of surface i with respect
to surface j.
In eq. (7), homothetic polytopes can be reduced to one operand with a tolerance
zone size equal to the sum of those of the operands. The case of homothetic
polytopes occurs when two or more operands are created from the same geometric

feature, as happens with the operands P1,0/1,1 and P1,1/2,1, and with the polytopes
P2,1/3,1 and P3,1/3,0. Then:

P1,2/3,2 = P1,2/1,0 ⊕ P1,0/2,1 ⊕ P2,1/3,0 ⊕ P3,0/3,2 (7)
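The homothetic reduction can be checked with support functions: Minkowski sums satisfy h_{A⊕B}(u) = h_A(u) + h_B(u), so summing λP and μP yields (λ+μ)P, i.e. a single operand whose zone size is the sum of the operands'. A small sketch with an illustrative diamond P and direction u (not the paper's operands):

```python
import numpy as np

def support(P, u):
    """Support function h_P(u) = max over the vertices v of P of <u, v>."""
    return max(float(np.dot(u, v)) for v in P)

P = np.array([(0., -1.), (1., 0.), (0., 1.), (-1., 0.)])  # illustrative diamond
u = np.array([0.6, 0.8])                                   # any unit direction

lam, mu = 0.3, 0.7
# Support functions are additive under ⊕, so lam*P ⊕ mu*P = (lam + mu)*P:
lhs = support(lam * P, u) + support(mu * P, u)
rhs = support((lam + mu) * P, u)
```

Since the two sides agree in every direction u, the two homothetic operands can indeed be replaced by one scaled copy before any expensive 6D sum is run.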

Two pairs of cap half-spaces were introduced into each of the operands P3,0/3,2,
P2,1/3,0 and P1,0/2,1 (coming from cylindrical surfaces oriented along the x-axis)
to artificially limit the rotation and translation along the x-axis (hereafter rx and tx
respectively). Similarly, the displacements tx, tz and ry of the operand P1,2/1,0
(defined from a planar surface with normal along the y-axis) were limited with
three pairs of cap half-spaces. Once all the bounded operands belonging to R6 are
available, the Minkowski sums of eq. (7) can be computed.

3.2 The current way: Direct computation of Minkowski sums

Fig. 4: 3D projection of the first sum in R6, displayed in the software PolitoCAT [13]

The current strategy for tolerance analysis with 6-polytopes implies summing
the operands directly. Figure 4 shows a 3D representation of the first sum. The
operands were calculated at the centroid of surface (2,1) and then projected onto
the subspace spanned by rx, rz and ty for visualization purposes. Due to the
degree of freedom in rx of the operand P1,0/2,1, the calculated polytope is also
unbounded along the same direction. It means that all the facets bounding rx in
P1,2/2,1 are associated with cap half-spaces (facets in red). Then, although a very
low number of cap half-spaces is introduced in each operand polyhedron, this
property is lost after the first Minkowski sum, and it becomes worse as more
operations are computed. The same applies to the other unbounded directions
when the operation is considered in R6. This can be noticed in the complete
simulation of this example, whose results are presented in table 1. The topology of
the operands becomes more complex as more sums are computed, and actually
this is just due to a multiplication of the cap facets. As a consequence, 99.91% of
the facets of the final calculated polytope P1,2/3,2 are cap facets.

3.3 The new approach: Minkowski sums cutting half-spaces

Fig. 5: Cutting of the cap half-spaces in a 3D projection

As presented in section 2.3, the proposed method controls the cap half-spaces
after each operation. Let us take the example of the first sum P1,2/2,1 =
P1,2/1,0 ⊕ P1,0/2,1 to illustrate this. Technically, the sum is computed in the same
way as in the previously presented case. The main difference is the tracing
strategy, which is carried out simultaneously and allows identifying, among the
whole set of facets, those that are derived from cap half-spaces and those that are
representative for the tolerance analysis problem. For that sum, among the 1144
calculated facets, just 8 are not cap facets: the ones bounding rz and ty. The next
step is therefore to clean the calculated polytope, resetting a minimal set of cap
half-spaces (see figure 5). This significantly reduces the complexity of the
polytope, and thus saves time for the next operation. The evidence of this
reduction is the time for completing the simulation by both methods: while the
method based on direct sums took 1386 s, the method cutting cap half-spaces took
19 s. Beyond the reduction in computation time, the new method decreases the
complexity of the operands, and therefore the probability of numerical problems
during the calculations.
In order to make sure that both methods provide the same result from the
tolerance analysis point of view, the final polytopes calculated by both methods
were projected onto the subspace spanned by rz and ty and their equality was
verified (see figure 6).

Table 1: Summary of the simulation

Polytope Facets Cap Facets Vertices Time [s]


P1,2/1,0 50 6 192 -
P1,0/2,1 20 4 256 -
Operands
P2,1/3,0 20 4 256 -
P3,0/3,2 20 4 256 -
P1,2/2,1 = P1,2/1,0 ⊕ P1,0/2,1 1144 1136 4032 9
Direct sum P1,2/3,0 = P1,2/2,1 ⊕ P2,1/3,0 5208 5196 16320 127
P1,2/3,2 = P1,2/3,0 ⊕ P3,0/3,2 19810 19794 55808 1250
P1,2/2,1 = P1,2/1,0 ⊕ P1,0/2,1 16 8 128 12
Cutting half-spaces P1,2/3,0 = P1,2/2,1 ⊕ P2,1/3,0 20 8 192 2
P1,2/3,2 = P1,2/3,0 ⊕ P3,0/3,2 24 8 256 5
Computations performed with the library politopix [13] with an Intel Core i7-3740QM.

Fig. 6: Comparison of the projection of the polytopes calculated by both methods

4 Conclusion

In this paper, an improved method for computing cumulative stack-ups of
geometrical deviations was proposed. The method is based on the traceability of
the half-spaces of the operands through the computation process. In a sum of
polytopes, every face of the result can be decomposed into a sum of faces from the
different summands, and such a decomposition is unique. This property allows us
to distinguish the caps (which have no meaning in geometric tolerancing and were
initially introduced just to get bounded sets) from the relevant information, i.e. the
half-spaces representing real limits in the associated tolerancing problem.
Once a sum of polytopes is calculated and its cap facets are identified, it is
possible to clean it by cutting the set of cap facets along each unbounded direction
to restore the strict minimal number of cap facets. This keeps the complexity of
the operands under control through the simulations and reduces the computation
time of the operations.

The advantages of this improved method for summing polytopes were
demonstrated in the case study. A sub-assembly of a piston engine was analyzed.
The tolerance analysis was performed following the method of direct sums in Rn
and also with the new improved method cutting the cap facets. The computation
time with the new method decreased by 98% with respect to the direct-sum
method.
In the case of features with special geometry (e.g. complex surfaces) or
mechanisms with special topology (e.g. mechanisms with all movements
restricted), the resulting set of constraints does not require the adjunction of cap
half-spaces. When summing several operands of this kind, the traceability
proposed in this paper is not necessary. In those cases, the direct sum of the
polytopes is enough to calculate the stack-up of deviations.
The traceability proposed in this paper opens interesting directions for future
work related to tolerance synthesis. Future work is required to develop a strategy
to optimize the fitting of a calculated polytope with regard to a functional polytope
(the one representing the functional conditions of the mechanism).

References

1. Giordano M. and Samper S. and Petit J.P. Tolerance Analysis and Synthesis by Means of De-
viation Domains, Axi-Symmetric Cases, Models for Computer Aided Tolerancing in Design
and Manufacturing, Springer Netherlands, 85-94, 2007
2. Davidson J.K. and Mujezinovic A. and Shah J.J. A new mathematical model for geometric
tolerances as applied to round faces, Journal of Mechanical Design, ASME, 2002, Volume
124, Number 4, 69-622
3. Teissandier D. and Delos V. and Couétard Y. Operations on polytopes: application to tolerance
analysis. Global Consistency of Tolerances, Kluwer academic (Netherlands), 1999, 425-433
4. Bourdet P. and Mathieu L. and Lartigue C. and Ballu A. The concept of the small displacement
torsor in metrology, Series on advances in Mathematics for Applied Sciences, 1996, Volume
40, 110-122
5. Fleming A. Geometric relationships between toleranced features, Artificial Intelligence, 1988,
Volume 37, Number 13, 403-412,
6. Homri L. and Teissandier D. and Ballu A. Tolerance analysis by polytopes: Taking into
account degrees of freedom with cap half-spaces. Computer-Aided Design, 2015, Volume 62,
112-130
7. Weibel C. Minkowski Sums of Polytopes. PhD Thesis, EPFL, 2007
8. Fukuda K. From the zonotope construction to the Minkowski addition of convex polytopes.
Journal of Symbolic Computation. 2004, 38(4), 1261-1272
9. Delos V. and Teissandier D. Minkowski sum of HV-polytopes in Rn . In 4th Annual Int. Conf.
on Computational Mathematics, Computational Geometry and Statistics, Singapore, 2015
10. Delos V. and Teissandier D. Minkowski Sum of Polytopes Defined by Their Vertices. Journal
of Applied Mathematics and Physics, Scientific Research Publishing, 2015, 3(1), 62-67
11. Teissandier D. and Delos V. Algorithm to calculate the Minkowski sums of 3-polytopes based
on normal fans. Computer-Aided Design, 2011, Volume 43, 1567-1576
12. Ziegler G. M. Lectures on polytopes. Springer Science & Business Media, 1995, Volume 152
13. Delos V. and Teissandier D. PolitoCAT and politopix, open source software under the LGPL
licence, 2015, http://i2m.u-bordeaux.fr/politopix.html
14. Arroyave-Tobón S. and Teissandier D. and Delos V. Tolerance analysis with polytopes in HV-
description, Proceedings of ASME IDETC-CIE, Charlotte, NC, 2016
Integrated design method for optimal tolerance
stack evaluation for top class automotive chassis

Davide Panari1*, Cristina Renzi2, Alberto Vergnano1, Enrico Bonazzi1 and Francesco Leali1
1 Department of Engineering "Enzo Ferrari", University of Modena and Reggio Emilia, Via P.
Vivarelli, 10, 41125 Modena, Italy
2 InterMech-MO.RE., Interdepartmental Research Centre for Applied Research and Services
in the Advanced Mechanics and Motor Sector, University of Modena and Reggio Emilia, Via
P. Vivarelli, 10, 41125 Modena, Italy
* Corresponding author. Tel.: +39-059-205-6278; fax: +39-059-205-6180. E-mail address:
davide.panari@unimore.it

Abstract The tolerances of welded chassis are usually defined and adjusted in
very expensive trial-and-error iterations on the shop floor. Computer Aided
Tolerancing (CAT) tools are capable of optimizing the tolerances of a given
product and process. However, the optimization is limited, since the
manufacturing process is already mostly defined by the early choices of product
design. Therefore, we propose an integrated design method that considers the
assembly operations before the detail design of the chassis and the concept
design of the fixture system. The method consists of four phases, namely
functional analysis in the CAD environment, assembly sequence modelling in the
CAT tool, Design Of Simulation Experiment on the stack of the tolerance ranges,
and finally optimization of the tolerances. A case study on a car chassis
demonstrates the effectiveness of the method. The method enables tight
tolerances to be selectively assigned only to the main contributors in the
stack, while generally requiring cheaper assembly operations. Moreover, the
virtual fixture system is passed to the assembly equipment design as an
optimized set of specifications, thus potentially reducing the number of
trial-and-error iterations on the shop floor.

Keywords: Computer Aided Tolerancing, car chassis, 3D tolerancing, design optimization, tolerance allocation.

1 Introduction

A car chassis is manufactured by welding several parts together. The MIG
welding process first requires the parts to be positioned and clamped against
fixtures. Then, filler metal is supplied by a gun, in order to fill the gap
between the parts to be joined.

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_101
In order to guarantee the gap precision, practitioners follow a time-consuming
trial and error approach on the shop floor, taking the tolerances on the chassis
drawings as specifications, since they are already defined. In this sequential
product and process design, the tolerances are usually defined only in the late
detail design of automotive components, when redesign activities are very
onerous, [1-3]. In particular, designers mainly rely on their experience,
without resorting to any systematic method for tolerance analysis.
Nowadays, software tools such as Computer Aided Tolerancing (CAT) enable
advanced evaluations with statistical simulations, even on 3D tolerance
stack-ups. However, very few authors have used a 3D approach for the evaluation
of the tolerances in cases as complex as the welding of car chassis. Even if
remarkable, these works use CAT just for the verification of the detail design,
or even in the factory acceptance test using historical data on process
capability, [4]. However, greater product and process improvements could be
introduced with proper design choices for the chassis, capable of reducing the
gap variability. Therefore, the evaluation of the gap variability should be
introduced in earlier design phases, [5]. In more detail, CAT models evaluate
the variability of just a few inputs and outputs, [1]. Also, literature works
lack experimental validation of the simulations, [1,5].
In the present work, a design method based on the integration of CAD and CAT
is proposed, with the goal of enhancing chassis design for top class cars.
CAT becomes a design tool and not just a verification one. The optimized CAT
model serves as the integration of the product and process design phases. The
paper also presents and discusses the application of the method in a case study
on a sub-assembly of a car chassis. Final remarks close the paper.

2 Integrated CAD and CAT design method

The proposed method aims at introducing the analysis of measure variations as
the integration between the embodiment design phase of the car chassis and the
conceptual design phase of the fixture system. Specific tasks are expected in a
systematic design approach [6], as shown in Fig. 1. The input of the process is
the CAD model of the chassis. The outputs are the tolerances to be assigned in
the production documents of the chassis and the locator requirements, taken as
input for the conceptual design of the fixture system.

Fig. 1 Flow chart of the integrated chassis and fixture system design method

The method consists of four phases:


1. Functional analysis. The functional analysis is performed on the 3D CAD
assembly, to identify the measures to be controlled. In this phase, the assembly
requirements are listed, in terms of gaps between edges, or distances between
contact faces, points and axes.
2. Assembly sequence modelling. The assembly is built by means of the CAT
software features, by an ordered input of the parts on the fixture system. The
fixture system is defined to constrain the degrees of freedom of each part in
the assembly and during the welding process. The three translations and the
three rotations about the X, Y, Z axes are blocked based on the 3-2-1
principle, [7]. The conceptual solutions are described by a set of locators,
which will be sufficient to define and model the assembly/welding jigs. The set
of interface points is defined as the virtual fixture system. Finally, the
tolerances are defined according to a 6-sigma criterion, [8].
3. Design Of Simulation Experiment (DOSE). Once the virtual fixture system
has been defined, a first-attempt allocation of geometrical and dimensional
tolerances is carried out on the chassis parts, according to international
standards [9,10,11] and to knowledge retrieved from the company database, [12].
In order to evaluate the tolerance ranges, a DOSE is performed with the
geometrical and dimensional tolerances as control factors. The variation of the
exact CAD dimensions is initially defined as a normal distribution. The DOSE
factors are the tolerance ranges, given as input to the CAT model. The
simulations deliver the expected standard deviations and mean values of the
controlled measures, as identified in the functional analysis, [13].

4. Tolerances Optimization. The optimization phase is aimed at finding the
set of tolerance ranges that minimizes the assembly cost. The tolerance ranges
resulting from the CAT analysis are weighted and combined into a cost function,
which is minimized by an internal algorithm. The tolerance ranges are
iteratively relaxed or tightened so as to reduce the cost function while still
delivering the assembly functions. The resulting optimal set of tolerance
ranges is passed to the product detail design phase.
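The paper does not disclose the internal optimization algorithm; as an illustration only, the relax-or-tighten loop of phase 4 can be sketched with an assumed reciprocal cost model (cost ~ weight/tolerance) and a root-sum-square stack limit standing in for the functional requirement. The `optimize` routine and all numeric values below are hypothetical:

```python
import math

def rss_stack(tolerances):
    # Statistical (root-sum-square) stack of the tolerance ranges.
    return math.sqrt(sum(t * t for t in tolerances))

def cost(tolerances, weights):
    # Assumed cost model: tighter tolerances are more expensive (~ w/t).
    return sum(w / t for w, t in zip(weights, tolerances))

def optimize(tolerances, weights, gap_limit, step=0.01, iters=2000):
    """Iteratively relax tolerances (which lowers cost) while the stack
    still meets the functional gap requirement; tighten the cheapest
    contributor if the requirement is violated (e.g. infeasible start)."""
    t = list(tolerances)
    for _ in range(iters):
        for i in range(len(t)):
            relaxed = t[:]
            relaxed[i] += step
            if rss_stack(relaxed) <= gap_limit:
                t = relaxed                      # relaxation reduces cost
        if rss_stack(t) > gap_limit:
            cheapest = min(range(len(t)), key=lambda i: weights[i])
            if t[cheapest] > step:
                t[cheapest] -= step              # tighten where it is cheap
    return t

t_opt = optimize([0.35, 0.10, 0.05], weights=[3.0, 1.0, 1.0], gap_limit=0.5)
print(t_opt, cost(t_opt, [3.0, 1.0, 1.0]))
```

The sketch converges to a tolerance set whose stack sits near the functional limit, mirroring the idea that cost is reduced by relaxing every contributor as far as the assembly function allows.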

3 Case study on a top-class automotive chassis

The integrated design method defined in Section 2 is applied to the design
evaluation of a sub-assembly of a top class car chassis, as shown in Fig. 2.
The sub-assembly is manufactured by welding together four aluminum alloy parts.
Two parts are supplied as shell molded, while the others are supplied as
extruded profiles. Welding and shell molding technologies provide different
tolerance ranges. Moreover, almost 100 geometrical/dimensional tolerance values
need to be controlled in the entire assembly.

Fig. 2 CAD model of the embodiment design of the sub-assembly of the chassis

3.1 Functional analysis

The functional analysis defines the measures to be controlled, that is, the
gaps between the surfaces to be welded, as well as the virtual fixturing
system. Non-penetrating contact between faces is required for properly
assembling the parts. Similarly, for what concerns the welding process, a gap
of less than 1 mm is required at the contact between the parts to be welded.
Besides, this gap has to be kept constant in order to obtain a robust design
solution. The functional analysis of the final assembly is used to identify the
set of areas defining the Reference Point System (RPS). The RPS is defined by
means of the 3-2-1 locating principle. As for the shell molded parts (Fig. 3a,
i.e. parts 1 and 3 in Fig. 2), the locators have been applied as follows. The
combination of Pad1(Z) and Pin/Hole(X, Y) removes 3 degrees of freedom (DoF),
the combination of Pad2(Z) and Pin/Slot(Y) removes 2 DoF, and the last DoF is
eliminated by Pad3(Z). Hence, the first RPS has been called XYZ, the second ZY,
and the last one Z.
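The DoF bookkeeping behind this 3-2-1 scheme can be sketched as a simple check that the locators remove all six rigid-body DoFs exactly once; the assignment of each rotation to a specific locator below is an illustrative simplification, since in reality it depends on the pad geometry and positions:

```python
# Each locator group constrains a subset of the six rigid-body DoFs
# (Tx, Ty, Tz translations; Rx, Ry, Rz rotations). A valid 3-2-1
# scheme removes all six, each exactly once (no over-constraint).
locators = {
    "Pad1(Z) + Pin/Hole(X,Y)": {"Tz", "Tx", "Ty"},   # RPS "XYZ": 3 DoF
    "Pad2(Z) + Pin/Slot(Y)":   {"Rx", "Rz"},         # RPS "ZY": 2 DoF
    "Pad3(Z)":                 {"Ry"},               # RPS "Z": 1 DoF
}

constrained = set().union(*locators.values())
counts = sum(len(s) for s in locators.values())
# Fully constrained, with no DoF counted twice:
print(constrained == {"Tx", "Ty", "Tz", "Rx", "Ry", "Rz"}, counts == 6)
```

A scheme that constrains fewer than six DoFs leaves the part free to move on the jig; one that counts more than six is over-constrained and would fight the part geometry.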


Fig. 3 Locators distribution on a shell molded part of the chassis (a) and on an extruded part of
the chassis (b)

Correspondingly, the RPS is defined for the extruded parts (Fig. 3b, i.e.
parts 2 and 4 in Fig. 2). The locators defined above completely describe the
set of interface points of the virtual fixture system, as shown in Fig. 4.

Fig. 4 Virtual fixture system defined for the chassis. The crossed points indicate the positions
of the locators.

3.2 Assembly sequence modelling

The assembly sequence is defined in the CAT environment, according to the
functional analysis carried out in the first step. The RPS of each part is
referred to the virtual fixturing system. According to the 3-2-1 locating
principle, part 1 (Fig. 2) is constrained first. Part 4 is located on the z
axis, to constrain 3 DoFs. Then it is moved along the x axis, to constrain 2
DoFs, and finally along the y axis, to constrain the remaining DoF, in order to
be assembled to part 1. Similarly, part 2 is referred to part 1. Part 3 is
located on the z axis, to constrain 3 DoFs. Then, it is moved along the y axis,
to constrain 2 DoFs and, finally, along the x axis, to constrain the remaining
DoF, in order to be assembled to parts 2 and 4.

3.3 Design Of Simulation Experiment

The first-attempt tolerance allocation in the functional analysis gives as
output the set of dimensions subjected to tolerances, as well as their related
tolerance ranges. The tolerance ranges of the welded joints in the assembly are
used as control factors in the DOSE. The dimensioning can be performed on the
single parts in the CAT environment. The tolerances related to the shell molded
parts are selected according to [11], while the extruded parts follow [10]. The
3DCS CAT software provided by the US company Dimensional Control Systems (DCS)
has been used to simulate the dimensional and geometrical variations. Once the
tolerances have been defined on the single parts, the gap between the surfaces
to be joined is simulated by means of the tolerance stack-up analysis. The set
of tolerance ranges found as output from the DOSE are the input variables for
the stack-up simulation in the CAT environment. A 2000-run, Monte Carlo-based
CAT simulation is performed by the 3DCS software, in order to identify the
input values that most influence the output measures. For each of these
measures, the CAT tool delivers values such as the standard deviation and the
maximum and minimum values of the specific gap.
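As a rough illustration of such a stack-up run (not the 3DCS model), a 2000-run Monte Carlo sketch can sample each tolerance range as a normal deviate, with 6σ spanning the range (a common convention), and accumulate a placeholder linear gap. The nominal gap and the tolerance ranges below are hypothetical:

```python
import random, statistics

def simulate_gap(tolerance_ranges, runs=2000, seed=1):
    """Monte Carlo stack-up sketch: each tolerance range defines a
    normal distribution with 6*sigma = range, and the controlled gap
    is a placeholder linear stack of the sampled deviations."""
    rng = random.Random(seed)
    nominal_gap = 0.5  # hypothetical nominal gap (mm)
    gaps = []
    for _ in range(runs):
        deviations = [rng.gauss(0.0, r / 6.0) for r in tolerance_ranges]
        gaps.append(nominal_gap + sum(deviations))
    return (statistics.mean(gaps), statistics.stdev(gaps),
            min(gaps), max(gaps))

mean, std, lo, hi = simulate_gap([0.70, 0.10, 0.10])
print(round(mean, 3), round(std, 3))
```

In a real CAT model the gap would be a 3D function of the part and locator deviations rather than a plain sum, but the statistical outputs (mean, standard deviation, extremes) play the same role.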

3.4 Tolerances Optimization

Among the output values deriving from the CAT analysis, the cheapest
configuration of tolerances is chosen by means of a cost minimization analysis,
as outlined in Section 2. In particular, this cost function allows finding the
optimal tolerance configuration within a 6-sigma design criterion. As for joint
3 (located as in Fig. 5), the requirements of the design for 6-sigma have been
fulfilled, as reported in Fig. 6.
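The 6-sigma check behind such a result can be illustrated with the standard process capability index, Cpk = min(USL − μ, μ − LSL) / (3σ), where Cpk ≥ 2 corresponds to a centered six-sigma process. The gap specification limits and statistics below are hypothetical, not the values of the case study:

```python
def cpk(mean, std, lsl, usl):
    # Process capability index: the distance from the mean to the
    # nearest specification limit, in units of 3*sigma.
    return min(usl - mean, mean - lsl) / (3.0 * std)

# Hypothetical joint gap: specification 0..1 mm, simulated mean/std.
print(round(cpk(0.5, 0.08, 0.0, 1.0), 2))
```

Tightening the main contributor lowers σ of the gap, which raises Cpk; the optimization then relaxes the other contributors as long as the 6-sigma target is still met.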

Fig. 5 Map of the joints related to part 1 in the assembled chassis

Fig. 6 Results related to the gap of joint 3

As a first result, the CAT analysis delivers the input variables (tolerance
values) that most influence the gap values of each joint. Joint 3 provides an
example of the results gained after the application of the optimization tool.
Despite an increase in cost due to the tightening of the tolerance values of
the main contributor, a significant overall cost reduction (20%, as in Table 1)
occurs, thanks to a tolerance relaxation of the other contributors. Once the
resulting optimized set of tolerances for the gap has been found, the 2D
drawing can be produced in the product detail design phase, with the
dimensional and geometrical tolerances resulting from the optimal set of
tolerance ranges. Moreover, the virtual fixture system is given as an input for
the process in the conceptual design phase. The CAD model of the real fixturing
system (Fig. 7) was then designed, derived from the conceptual virtual
fixturing system whose interface points are depicted in Fig. 4.

Table 1. Comparison between the tolerance values of the dimension for the gap in joint 3,
before and after optimization.

Standard tolerance value of Joint 3 (mm, without optimization) | Optimized tolerance value of Joint 3 (mm) | Percentage variation of the tolerance value after optimization | Percentage cost variation between standard and optimized tolerance values
±0.35  | +0.35/-0.2 | +21% | +10%
0/+0.1 | ±0.1       | -50% | -10%
±0.05  | ±0.1       | -50% | -20%

Fig. 7 Assembly of the chassis in the fixturing system derived from the designed virtual fixturing
system

4 Experimental validation of the proposed method

Experimental data, deriving from a real automotive industrial case study, are
used to validate the proposed method. The parts of the chassis were assembled
on the assembly/welding jigs, following the assembly sequence defined in
Section 3.2. The gap of joint 3, which has been analyzed by means of the CAT
analysis, has been measured on 50 real chassis assemblies, randomly chosen.
Measurements have been performed to find the actual distribution of tolerances
on the joint gap. These data have been compared with the simulated data of the
same joint, with the aim of validating the method. The results of this
comparison show a good approximation of the simulated to the experimental data,
in terms of mean, minimum and maximum gap values (Fig. 8, Table 2). In
particular, a 5% error results from the comparison between the experimental and
simulated mean values.

Gap of joint 3:

Gap value (mm) | Occurrences of gap values (-)
0   | 0
0.1 | 1
0.2 | 3
0.3 | 6
0.4 | 10
0.5 | 11
0.6 | 9
0.7 | 6
0.8 | 3
0.9 | 1
1   | 0

Fig. 8 Comparison between real and simulated gap values on joint 3

Table 2. Comparison between real and simulated values for the gap on joint 3.

GAP  | Experimental values (mm) | Simulated values (mm)
Min  | 0.100 | 0.050
Max  | 0.900 | 0.894
Mean | 0.498 | 0.472
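The reported 5% figure follows directly from the mean values in Table 2:

```python
def pct_error(experimental, simulated):
    # Relative deviation of the simulated value from the measurement.
    return abs(experimental - simulated) / experimental * 100.0

# Mean gap values from Table 2.
print(round(pct_error(0.498, 0.472), 1))  # ~5%, as reported in the text
```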

5 Discussion and conclusions

An integrated design method that considers the assembly operations before the
detail design of the chassis and the concept design of the fixture system has
been proposed. The method integrates the production process into the product
design process. The method has been validated by means of the tolerance
stack-up analysis of a top class car chassis assembly, allowing a relaxation of
the tolerance values wherever not required, already in the embodiment design
phase.
This proposed method provides several significant advantages. On the one hand,
a virtual fixture system is defined already in the early design phases, in
order to assemble and weld the chassis. On the other hand, for this specific
virtual fixture system and for the chosen assembly sequence, an optimal set of
tolerances is found. This set of tolerances will be used for dimensioning all
the parts of the assembly in the detail design phase. Nevertheless, this method
can be used only when the embodiment design of the chassis is defined. This
limitation could be overcome by an integration of the product CAT analysis,
together with the virtual fixturing system, in earlier design phases. Another
limitation concerns the Monte Carlo simulation, in which neither 2000 runs nor
a single seed may be enough to reach a stable result in terms of mean values
and standard deviations. Nevertheless, a good correlation between experimental
and simulated results has been found. Current related research is also focused
on the application of Finite Element Analysis (FEA) in the CAT analysis of
flexible assemblies. Future work will concern the simulation of this case study
with the FEA compliant package included in the 3DCS software.
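One practical way to detect the stability issue mentioned above is to repeat the simulation with independent seeds and compare the resulting estimates; a sketch with a placeholder gap model (all distributions and thresholds hypothetical):

```python
import random, statistics

def gap_sample(rng):
    # Placeholder gap model: nominal plus three normal contributors.
    return 0.5 + rng.gauss(0, 0.1) + rng.gauss(0, 0.02) + rng.gauss(0, 0.02)

def stable(runs, seeds=(1, 2, 3), tol=0.01):
    """The estimate is considered stable when the mean gap agrees
    across independent seeds to within `tol`."""
    means = []
    for seed in seeds:
        rng = random.Random(seed)
        means.append(statistics.mean(gap_sample(rng) for _ in range(runs)))
    return max(means) - min(means) < tol

print(stable(2000), stable(20000))
```

Increasing the run count until the seed-to-seed spread drops below the tolerance gives a defensible stopping criterion instead of a fixed 2000-run budget.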

Acknowledgments The authors would like to acknowledge the support and contribution of
Maserati S.p.A. and OMR.

References

1. Barbero B.R., Azcona J.P. and Pérez J. G. A tolerance analysis and optimization methodology.
The combined use of 3D CAT, a dimensional hierarchization matrix and an optimization al-
gorithm. The International Journal of Advanced Manufacturing Technology, 2015, 81(1-4),
371-385.
2. Di Angelo L., Di Stefano P. and Morabito A.E. Automatic evaluation of form errors in high-
density acquired surfaces. International Journal of Production Research, 2011, 49(7), 2061-
2082.
3. Di Angelo L., Di Stefano P. and Morabito A. E. The RGM data structure: a nominal interpre-
tation of an acquired high point density model for automatic tolerance inspection. Interna-
tional Journal of Production Research, 2012, 50(12), 3416-3433.
4. Wei C., Sun J. and Xin-min L. Tolerance Optimization Considerations Applied to the Sheet
Metal Compliant Assembly. Computer-Aided Design and Applications, 2014, 11(sup1), S68-S76.
5. Zhao Z., Bezdecny M., Lee B., Robinson D., Bauer L., Slagle M. and Walls S. Identify/Utilize
Process Capability Information to Predict Variation in Aircraft Early Design. SAE Technical
Paper, 2007, 2007-01-3907.
6. Pahl G. and Beitz W. Engineering design: a systematic approach, 2013 (Springer Science &
Business Media).
7. Ansaloni M., Bonazzi E., Leali F., Pellicciari M. and Berselli G. Design of Fixture Systems in
Automotive Manufacturing and Assembly. Advanced Materials Research, 2013, 712, 2913-
2916.
8. Pyzdek T. and Keller P. A. The six sigma handbook, 2014 (McGraw-Hill Education).
9. ASME Y14.5M-2009. Dimensioning and Tolerancing (American Society of Mechanical En-
gineers).
10. DIN EN 12020-2 Aluminium and aluminium alloys - Extruded precision profiles in alloys
EN AW-6060 and EN AW-6063 - Part 2: Tolerances on dimensions and form; German and
English version FprEN 12020-2:2015
11. UNI EN ISO 8062-3:2009. Geometrical product specifications (GPS) - Dimensional and
geometrical tolerances for moulded parts - Part 3: General dimensional and geometrical
tolerances and machining allowances for castings
12. Peroni M., Vergnano A., Leali F. and Forte M. Design Archetype of Transmission Clutches
for Knowledge Based Engineering. In International Conference on Innovative Design and
Manufacturing, ICIDM, Auckland, New Zealand, January 2016.
13. Bonazzi E., Vergnano A. and Leali F. ANOVA of 3D Variational Models for Computer Aid-
ed Tolerancing with respect to the Modeling Factors. In International Conference on Innova-
tive Design and Manufacturing, ICIDM, Auckland, New Zealand, January 2016.
Development of virtual metrology laboratory
based on skin model shape simulation

Xingyu YAN1, Alex BALLU1*, Antoine BLANCHARD2, Serge MOUTON3 and Halidou NIANDOU1
1 Univ. Bordeaux, I2M, UMR 5295, Talence, France
2 Univ. Bordeaux, Mission Investissements d’avenir, Bordeaux, France
3 Univ. Bordeaux, Collège Sciences et Technologies, Talence, France
* Corresponding author. Tel.: +33 5 56 84 53 87, E-mail address: alex.ballu@u-bordeaux.fr

Abstract Understanding geometrical specifications is becoming more and more
difficult due to the latest developments in the ISO GPS (Geometrical Product
Specification) standards, and at the same time, students’ learning habits are
evolving and theoretical courses on standardized specifications are not
attractive. Metrology laboratory work is much more appealing and highlights the
difficulties encountered in interpreting specifications and the inherent method
uncertainties. Nevertheless, metrology activities require an expensive
metrology laboratory equipped with CMMs. In order to carry out real hands-on
experiments, Bordeaux University is designing a virtual laboratory framework.
It is integrated into Moodle (an LMS, Learning Management System) as a new
activity, to establish a link with other Moodle learning activities (courses,
tests, etc.) and to ensure student tracking. A first prototype of the virtual
laboratory is dedicated to dimensional and geometrical metrology with simulated
traditional measuring devices (gauge, micrometer, dial indicator, etc.) and
Coordinate Measuring Machines. Measurement simulation takes place in a
three-dimensional environment and is based on models of parts with dimension,
orientation, position and form errors (skin model shapes) and on models of
measuring devices with measurement uncertainties.

Keywords: Computer Aided Tolerancing, ISO GPS, e-learning, virtual laboratory, skin
model shape.

1 Introduction

1.1 Concerns

Classroom lessons at the University are not always fully appreciated by the
students. In traditional forms of learning, through physical attendance, both
student learning and teacher motivation may be affected by external stimuli,
or simply by a lack of attentiveness and interest.

© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_102
Furthermore, scientific and technological studies are based on the practical
application of theoretical learning, through the manipulation of objects and
instruments. Practical classes planned for this purpose require a dedicated
time slot and an appropriate room. The large numbers of students registered in
the first years of higher education, material limitations (number of rooms,
facilities) and a lack of human resources make it difficult to learn only by
practical exercises in the classroom.
One solution is to develop an educational strategy based on a Learning
Management System (LMS) integrating a Virtual Laboratory (VL), where practical
exercises can be prepared remotely to consolidate classroom learning and
acquire additional knowledge.

1.2 Existing online laboratories

Online laboratories have been developed for a number of years now. From a
technological point of view, one can distinguish remote laboratories and
virtual laboratories.
Remote laboratories:
The aim of remote laboratories is to conduct real experiments through internet
communication from home or anywhere else. Remote laboratories are very well
adapted to experiments with reduced handling [1], [2], essentially electronics
experiments [3]. Experiments must fulfil two requirements for remote and wide
access: they should be automated and short. This is not the case for
dimensional metrology.
Virtual laboratories (VL):
Virtual laboratories are another way to develop online laboratories. Simulation
software has long been used for teaching. The real integration of virtual
laboratories, with a 3D environment, is more recent [4]–[7]. A review on
virtual laboratories in science has been published [8]. Nevertheless, the use
of information technology for education leads mostly to formatted activities.
The interactions between the student and the learning medium are reduced to
predefined scenarios, which are few in number.

1.3 Project and objectives

This paper presents a virtual laboratory platform under development at the
University of Bordeaux. The target audience is students on bachelor’s and
master’s science and technology degrees, with most of the students concerned
being on bachelor’s degree programs. The virtual laboratory should enhance
the offer of online educational materials and help to motivate the younger
students in their discovery of science and technology at the university. The
founding principles of the platform are presented below.
3D environment: Considering the extensive use of CAD-CAM-CAE systems by the
students and the 3D problems encountered in dimensional and geometrical
metrology, the project is firmly committed to 3D.
Open experimental procedure: Virtual experiments will leave a degree of
initiative to the student, who must have some freedom of action. The
manipulations should not be imposed or restricted to a predefined and formatted
linear storyline.
Disciplinary openness: The virtual lab aims to host all kinds of experimental
work, in different areas of physics (mechanics, electronics, optics, etc.) but
also in other sectors such as chemistry, and perhaps biology.
Consideration of uncertainties: In most of the existing VLs [4]–[7], the
variability of parameters and measurement uncertainties are not usually
considered. One of the main pedagogical innovations of the project is based on
considering the variability of the input parameters and the uncertainties of
the measuring instruments.
Integration into an LMS: This new "virtual laboratory" is to be integrated
into an LMS (Learning Management System), Moodle initially.
WEB application: As it will be used by every student, everywhere, independently
of the operating system, the choice of application structure fell on a web
application.
The reason for developing yet another VL is that these founding principles are
not found altogether in any existing VL, and the objective of the project is to
meet all these requirements.
A first prototype, concerning dimensional metrology, is under development.
Several applications exist for dimensional metrology [9] and calibration [10],
but they are not really virtual laboratories. In the following sections, the
features and functions of our VL prototype are introduced first. This leads to
the study of the generation of the skin model shape and, consequently, of the
simulation of metrology.

2 The virtual laboratory

2.1 Student interface

To give a better idea of the concepts of a virtual laboratory, one of the first
student graphical interfaces is presented in Fig. 1(a); this is a preliminary
interface and will evolve as the project progresses.
Work area: The main area, or work area, is the "virtual laboratory" itself, a
3D space into which students drop various virtual objects and make them
interact.

Library: The Library is a set of tabs from which the student can select and
drag and drop the virtual objects to the work area.
Experimental procedure: The platform should be able to record the
completed actions, i.e. the experimental procedure in place. The list of actions
is displayed in the experimental procedure area to the right of the interface.
Data area: The data area is the dashboard of the laboratory. In this inter-
face, input and output variables are displayed.

Fig. 1. (a) Student graphical interface (b) nominal model (c) skin model shape

2.2 Simulation

A key feature of the laboratory is the definition and integration of the
behavior of the virtual objects and their mutual interactions. The teaching
aim is to point out the problem of the metrology of a size on a part with form
defects. According to the position of a caliper or micrometer on the surfaces,
the measurement result will vary. This requires generating 3D models of parts
with form defects, called skin model shapes. Fig. 1(b) shows the nominal model
while Fig. 1(c) shows the skin model shape with defects.
With skin model shapes, two aspects of the simulation are distinguished:
• 3D simulation for viewing objects in the work area, which requires a
response in real time without the need for high precision. In this case, a
physics engine provided by the development platform is used for the
calculation.
• Simulation to compute the responses of the virtual instruments displayed in
the data area. This requires a high level of precision, and specific metrology
algorithms will be used.
In the virtual environment, physical properties (like gravity and contact) of
the measured part and instruments are also considered. These properties can be
used to position the 3D objects, simulate the assembly of parts, detect
collisions between part and measurement instrument, etc.
Random errors, which may follow a Gaussian distribution or others, will be
used to simulate the uncertainties of the measurement tools. The combination of
the non-nominal part, the random errors of the measurement instrument, and the
uncertainty of the users will make the measurement process and result more
realistic.
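The combination of a form defect and instrument uncertainty described above can be sketched as follows; the sinusoidal defect, the 20 mm nominal size and the 0.01 mm instrument sigma are all hypothetical illustration values, not the laboratory's models:

```python
import math, random

def local_size(x, nominal=20.0, form_amp=0.05):
    # Part with a sinusoidal form defect: the "true" local distance
    # between the two measured surfaces depends on the probe position x.
    return nominal + form_amp * math.sin(2 * math.pi * x / 40.0)

def caliper_reading(x, rng, instrument_sigma=0.01):
    # Simulated reading = local size + Gaussian instrument error.
    return local_size(x) + rng.gauss(0.0, instrument_sigma)

rng = random.Random(0)
readings = [caliper_reading(x, rng) for x in (0.0, 10.0, 20.0, 30.0)]
print([round(r, 3) for r in readings])
```

Repeating the measurement at different positions is exactly the pedagogical point: the student sees that the result depends both on where the instrument is placed on a non-ideal surface and on the instrument's own uncertainty.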

3 Generation of skin model shape


The virtual parts with shape defects, the skin model shapes, introduce the
uncertainty information during measurement and help to visualize the shape
defects. The Skin Model [11] was proposed by Ballu and Mathieu to express the
manufacturing defects of a part, and the skin model shape was proposed to
denote a specific Skin Model with a reduced, finite set of parameters [12]. In
our study, we treat each feature independently; simulation of the skin model
shape is divided into three steps, as described below:
• Segmentation. To be able to treat each feature independently and simulate
deviations with different precision requirements, segmentation of the features
is conducted in advance;
• Manufacturing deviation simulation. Geometric manufacturing defects on the
skin model shape are simulated [13]–[15]. They are saved as deviation data for
each vertex.
• Deviation combination. When the geometric deviations have been simulated,
they are combined and added to the original nominal model.
With the widespread use of Computer Aided Engineering (CAE) software,
mesh-based discrete models can be generated easily. In the following,
manufacturing deviation simulation and combination are highlighted.
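As an illustration of the simplest deviation category (random noise), a skin model shape can be obtained by perturbing each mesh vertex along its normal and storing the scalar deviation per vertex; a minimal NumPy sketch on a planar patch, with an arbitrary 0.01 deviation sigma:

```python
import numpy as np

rng = np.random.default_rng(42)

# Nominal feature: a planar 10 x 10 vertex grid in the z = 0 plane.
xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
vertices = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(100)])
normals = np.tile([0.0, 0.0, 1.0], (100, 1))  # plane normal everywhere

# Random-noise deviation: one Gaussian scalar per vertex, applied
# along the local normal and stored as per-vertex deviation data.
deviation = rng.normal(0.0, 0.01, size=100)
skin_model_shape = vertices + deviation[:, None] * normals

print(skin_model_shape.shape)
```

Mesh-morphing and modal-based methods differ only in how the per-vertex scalar field is produced (smooth morphing functions or vibration-mode shapes instead of independent noise); the application along the normals is the same.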
Manufacturing deviation simulation
One obvious difference between skin model shapes and the nominal model
is that the former contain more detailed shape defect information than the
latter. To analyze the impact of manufacturing defects on assembly precision,
much work has been done on simulating shape deviations. Based on their
underlying principles, we classify these methods into three categories:
random noise [15], mesh morphing [14], [15],
1028 X. Yan et al.

and modal-based methods [13], [15]–[17]. In Fig. 2, several methods are
applied to the same model to compare their results.



Fig. 2. Simulation by different methods: (a) random noise, (b) discrete cosine transform,
(c) random field, (d) natural vibration mode, (e) random morphing, (f) second-order shape morphing.
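Two of the cited method families, random noise and modal (DCT-style) simulation, can be sketched per vertex as follows. This is a minimal illustration, not the authors' implementation; the amplitudes, mode indices and the unit-square parameterization of the face are assumptions.

```python
import numpy as np

def simulate_deviation(uv, sigma_noise=0.005, mode_amps=((1, 0, 0.02), (0, 2, 0.01))):
    """Per-vertex deviation along the face normal for a planar feature.
    uv: (n, 2) vertex parameters in [0, 1]^2. Combines a random-noise term
    with a few low-order cosine modes, in the spirit of DCT-based modal
    deviation simulation (amplitudes are illustrative)."""
    rng = np.random.default_rng(42)
    d = sigma_noise * rng.standard_normal(len(uv))   # random-noise term
    for ku, kv, amp in mode_amps:                    # low-order cosine modes
        d += amp * np.cos(np.pi * ku * uv[:, 0]) * np.cos(np.pi * kv * uv[:, 1])
    return d

# 10x10 grid of vertex parameters over a planar face
uv = np.stack(np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10)), -1).reshape(-1, 2)
dev = simulate_deviation(uv)   # one normal deviation per vertex
```

The resulting per-vertex deviations would then be stored, as the text describes, as deviation data for the combination step.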

Deviation combination
Because the deviation data for each feature are simulated independently,
simply putting the features with defects together causes intersections or
gaps, as shown in Fig. 3(a). To solve this problem, the Finite Element Method
(FEM) and a penalty function approach are used.
In our method, the simulated manufacturing deviations at the vertices are
treated as displacement boundary conditions on the original mesh model, and
the triangle edges of the mesh are used as 3D bar elements. If a vertex lies
inside a face, as in Fig. 3(b), it is constrained only along its normal
direction (n1) and may be adjusted along the tangential directions (n2 and n3).
Thanks to this in-plane adjustment, no intersection or gap occurs. Since the
small adjustment is shared by all the vertices inside a face, the adjustment
of each individual vertex is negligible.
If a vertex lies on an edge, i.e. it is shared by two features, it receives
two deviations along the normal directions of both features, as shown in
Fig. 3(c). This leads to two constraints (along directions n1 and n2), and the
final position lies on the intersection of the two planes perpendicular to
these normal directions. By solving the resulting finite element problem, a
skin model shape containing the independent deviations of each feature is
generated.
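The edge-vertex constraint can be reduced, for a single vertex, to a minimum-norm solution of the two offset-plane equations. This is an illustrative simplification of the FEM formulation, not the paper's solver; function and variable names are hypothetical.

```python
import numpy as np

def edge_vertex_position(p, n1, d1, n2, d2):
    """Final position of a vertex shared by two features: it must lie on
    both offset planes n_i . (q - p) = d_i, and among those points the
    minimum-norm displacement is taken (the FEM solution adjusts the vertex
    only within the span of the two feature normals)."""
    A = np.asarray([n1, n2], dtype=float)   # 2x3 constraint matrix
    b = np.array([d1, d2], dtype=float)
    # lstsq on an underdetermined system returns the minimum-norm solution
    delta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.asarray(p, float) + delta

# vertex at the origin, two orthogonal feature normals with their deviations
q = edge_vertex_position(np.zeros(3), [0, 0, 1], 0.05, [1, 0, 0], -0.02)
```

With orthogonal normals the result is simply the sum of the two deviation vectors; for non-orthogonal feature normals the minimum-norm solve still places the vertex on the intersection line of the two planes.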
Fig. 4 shows the simulation result in different steps.



Fig. 3. (a) Gap and intersection at an edge, (b) constraint on a vertex inside a face,
(c) constraint on a vertex on an edge.
Development of virtual metrology laboratory … 1029



Fig. 4. Results of simulation in different steps.

4 Virtual metrology applications


Virtual measurement with calipers
Fig. 5(left) presents a prototype view of the virtual laboratory produced with
Unity 3D. A nominal part is measured with a caliper. As described above,
measurement is not computed from the 3D scene but from the skin model
shape.
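A virtual caliper reading computed from the skin model shape, rather than from the rendered scene, amounts to projecting the vertices of the two contacted faces onto the measuring axis and closing the jaws on the outermost material points. The sketch below is a minimal illustration under that assumption; function and array names are not from the authors' implementation.

```python
import numpy as np

def caliper_reading(face_a, face_b, axis):
    """Simulated caliper distance between two opposite faces of a skin
    model shape. face_a, face_b: (n, 3) vertex arrays carrying simulated
    deviations; axis: measuring direction. The jaws contact the outermost
    material points, so the reading is the distance between the extreme
    projections on the axis."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    pa = face_a @ axis          # scalar projection of each vertex
    pb = face_b @ axis
    return pa.max() - pb.min()  # jaws close on the two extreme points

# nominal 10 mm gap with small simulated form defects on both faces
rng = np.random.default_rng(0)
top = np.c_[rng.random((50, 2)), 10.0 + 0.02 * rng.standard_normal(50)]
bot = np.c_[rng.random((50, 2)), 0.02 * rng.standard_normal(50)]
d = caliper_reading(top, bot, [0, 0, 1])   # slightly above 10.0 mm
```

Because the jaws see the extreme points, the reading on a defective part is systematically larger than the nominal distance, which is exactly the effect the magnified skin model shape is meant to make visible.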

Fig. 5. Measurement with a caliper: (left) nominal model, (right) skin model shape.
Instead of the nominal part, a magnified skin model shape may be introduced
into the scene (Fig. 5, right). The aim is to point out the effect of form
defects on the distance measured with the caliper. Other stand-alone measuring
devices, such as depth and internal micrometers or a height gauge, may also
be considered.
Virtual measurement with indicator and gauge blocks
In Fig. 6, a different part is measured with a digital indicator and gauge
blocks. In this case, the deviations are magnified. As before, the relative
location of the instrument with respect to the part is recorded from the 3D
scene. Auxiliary equipment such as V-blocks, squares, sine plates, bench
centers and metrology fixtures can also be considered.

Fig. 6. Measurement of magnified skin model shape with digital indicator.


Coordinate measuring device
In the future, the idea is to go beyond one-dimensional metrology and move
towards 3D coordinate metrology and the verification of complex geometrical
specifications. First, coordinate measuring devices must be placed in the 3D
scene. Two main physical principles are used for coordinate measurement:
mechanical and optical. Both technologies are interesting for educational
purposes.
For mechanical technology, the concepts to highlight are: measuring point,
machine and part coordinate systems, probe calibration, choice of stylus, orienta-
tion of the part, choice of measuring points, etc. For optical technology, the con-
cepts are: target position, 3D mesh alignment, mesh and CAD model alignment,
feature visibility, etc.
Once measuring points are recorded, the point coordinates are used to inspect
standardized specifications (Fig. 7(left)). Numerous points have to be addressed
with the students: GD&T, association criteria, alignment and datum, constructed
features, etc.

Fig. 7. (left) Geometrical specifications (Catia FTA), (right) virtual metrology on the skin model shape.

Fig. 7(right) shows the features used to verify the location of the hole 30 in
Fig. 7(left): datum plane and datum axis, median lines, and the axis of a
cylindrical tolerance zone.

5 Conclusion and future work

An ambitious project has been launched by Bordeaux University on virtual
laboratories for physics, and sciences in general, and their technological
applications. The project is based on 3D simulation in a web application so
that the laboratory can be widely used. The platform integrates a
communication module with an LMS (Moodle) to manage accounts and personal
data, and to provide support for the experimental exercises.
The proposed virtual laboratory may be complementary to European projects
on GPS (Geometrical Product Specification) teaching [18], [19]. In contrast
to the predefined scenarios created in Virtools by Dassault Systemes [20] or
in Adobe Flash by Humienny [21], the student constructs and defines the
standardized specification by himself and experiments with verification using
different measuring instruments. The project under development will ensure an
active approach to GPS learning.
The first prototype application is under development for dimensional me-
trology classes. The next step will be to extend the prototype to geometrical
metrology. By implementing the guiding principles, it is expected that the ap-
plication can be extended to other disciplines with the same platform.

Acknowledgments This work was carried out with financial support from the French
State, managed by the French National Research Agency (ANR) in the framework of the
“Investments for the future” Program IdEx Bordeaux (ANR-10-IDEX-03-02). The authors
would like to thank the China Scholarship Council (CSC) for research funding.

References
1. W. Osten et al., “Remote laboratories for optical metrology: from the lab to the cloud,”
Optical Engineering, vol. 52, no. 10, pp. 101914–101914, 2013.
2. M. T. Restivo et al., “A remote laboratory in engineering measurement,” IEEE
Transactions on Industrial Electronics, vol. 56, no. 12, pp. 4836–4843, 2009.
3. J. A. del Alamo et al., “An online microelectronics device characterization laboratory
with a circuit-like user interface,” ICEE, Valencia (Spain), 2003.
4. M. Abdulwahed and Z. K. Nagy, “Developing the TriLab, a triple access mode (hands-
on, virtual, remote) laboratory, of a process control rig using LabVIEW and Joomla,”
Computer Applications in Engineering Education, vol. 21, no. 4, pp. 614–626, 2013.
5. M. T. Bonde et al., “Improving biotech education through gamified laboratory
simulations,” Nature biotechnology, vol. 32, no. 7, pp. 694–697, 2014.
6. L. Dobrzański and R. Honysz, “Development of the virtual light microscope for a
material science virtual laboratory,” Journal of Achievements in Materials and
Manufacturing Engineering, vol. 20, no. 1–2, pp. 571–574, 2007.

7. J. Liu and H. Jiang, “Development of a virtual winder for computer-aided education
using Virtools,” Computer Applications in Engineering Education, vol. 22, no. 1, pp.
120–130, 2014.
8. V. Potkonjak et al., “Virtual laboratories for education in science, technology, and
engineering: A review,” Computers & Education, vol. 95, pp. 309–327, 2016.
9. F. Al-Zahrani, “Web-Based Learning and Training for Virtual Metrology Lab,” arXiv
preprint arXiv:1003.5635, 2010.
10. E. Gomez et al., “Interactive dimensional calibration via Internet,” Computer
Applications in Engineering Education, vol. 21, no. 3, pp. 387–399, 2013.
11. N. Anwer et al., “The skin model, a comprehensive geometric model for engineering
design,” CIRP Annals-Manufacturing Technology, vol. 62, no. 1, pp. 143–146, 2013.
12. B. Schleich et al., “Skin model shapes: A new paradigm shift for geometric variations
modelling in mechanical engineering,” Computer-Aided Design, vol. 50, pp. 1–15,
2014.
13. F. Formosa and S. Samper, “Modal expression of form defects,” in Models for
Computer Aided Tolerancing in Design and Manufacturing, Springer, 2007, pp. 13–22.
14. P. Franciosa et al., “Simulation of variational compliant assemblies with shape errors
based on morphing mesh approach,” The International Journal of Advanced
Manufacturing Technology, vol. 53, no. 1–4, pp. 47–61, 2011.
15. M. Zhang et al., “Discrete shape modeling for skin model representation,” Proceedings
of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture,
vol. 227, no. 5, pp. 672–680, 2013.
16. W. Huang and D. Ceglarek, “Mode-based decomposition of part form error by
discrete-cosine-transform with implementation to assembly and stamping system
with compliant parts,” CIRP Annals, vol. 51, no. 1, pp. 21–26, 2002.
17. B. Schleich et al., “Skin Model Shapes: Offering New Potentials for Modelling Product
Shape Variability,” in ASME 2015 International Design Engineering Technical
Conferences and Computers and Information in Engineering Conference, 2015, pp.
V01AT02A015–V01AT02A015.
18. “Geometrical Product Specification and Verification Toolbox,” European Commission
project Erasmus+ Programme.
19. “Geometrical Product Specification Course for Technical Universities,” European
Commission project Leonardo da Vinci Programme.
20. “3D tolerancing based on ASME Y14.41-2003,” Dassault Systemes.
21. Z. Humienny and M. Berta, “Using animations to support the understanding of
geometrical tolerancing concepts,” tm-Technisches Messen, vol. 82, no. 9, pp. 422–431,
2015.
Product model for Dimensioning, Tolerancing
and Inspection

DI ANGELO L.1, DI STEFANO P.1 and MORABITO A.E.2*


1 Department of Industrial Engineering, University of L'Aquila, L'Aquila, Italy
2 Department of Engineering for Innovation, University of Salento, Lecce, Italy
* Corresponding author. Anna Eva Morabito. Tel.: +390832297772; E-mail address:
annaeva.morabito@unisalento.it

Abstract This paper presents a new methodology whose goals are, on the one
hand, the formulation of a tolerance specification that is consistent with
functional, technological and control needs and, on the other, the automatic
control of tolerances. The key aspect of the methodology is the digital model
of the product, referred to as GMT (Geometric Model for Tolerancing), which
gives a complete, consistent and efficient description of its geometrical and
dimensional properties with the aim of being able to specify, simulate,
manufacture and inspect them. By means of a real test case, the potential of a
first implementation of the proposed methodology is critically discussed.

Keywords: GD&T (Geometric Dimensioning and Tolerancing), GPS (Geometric
Product Specification), geometric inspection, CAT (Computer-Aided
Tolerancing)

1 Introduction and related works

Geometric inspection has become critical in the last few years due, on the
one hand, to ever-higher demands for product quality and cost reduction and,
on the other hand, to the increase in geometric complexity [1]. Automatic
inspection can be the answer to these critical aspects. At present, the ISO
and ANSI/ASME standards do not support an automatic inspection process. This
is because these standards are mainly based on the inspection capabilities of
CMMs (Coordinate Measuring Machines), gauges, dial gauges, etc., and on a 2D
representation obtained by projecting the objects orthogonally onto a plane.
With the advent of high-resolution optical digitizers and modern geometric
modelers, new prospects are offered for automatic geometric inspection [2] and
for tolerance specification [3]. These high-resolution digitizers allow a 3D
acquisition of the real object with a resolution down to 10 μm [4]. This is a new kind

© Springer International Publishing AG 2017 1033


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_103
1034 L. Di Angelo et al.

of form inspection that, in accordance with the duality principle, allows
defining both new tolerance categories and new procedures to verify the
traditional ones [5, 6, 7]. Furthermore, digitization with a high density of
points allows the physical reference to be replaced, for control purposes,
with a virtual reference [8, 9, 10]. For these reasons, over the last few
years a considerable research effort has been directed toward the development
of methodologies for effective and reliable automated geometric tolerance
verification [2, 5, 6, 7, 9, 11, 12, 13, 14]. A great number of these
methodologies require knowledge of the computer-aided design (CAD) model of
the workpiece under inspection [2, 7, 9, 12]. The CAD model provides the
nominal references in the form of parameterized equations describing the
surfaces of the geometric model, and it often contains an explicit coding of
the orientation and location relationships between the surfaces.
Since tolerance specifications usually refer to specific features of the
workpiece, a mapping between each surface (or feature) of the CAD model and
the corresponding scanned point sub-cloud needs to be performed. The
approaches presented in the literature carry out form inspection by verifying
that each point of the sub-cloud under inspection lies within the tolerance
zone of amplitude t. No high-level information is extracted from the acquired
geometric data, so these approaches are constrained to specification
“languages” poorer than those traditionally defined by ISO or ANSI/ASME
[15, 16, 17]. As far as specification is concerned, the tolerance is a mere
label associated with the elementary entities of the 3D CAD model (faces and
edges of the B-rep representation), as is currently done in 2D drawings.
These labels are passive elements, not suitable for the development of
simulation applications [19] or for the automated control of tolerances when
analyzing the effects of variability on product functionality.
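The baseline check used by the approaches cited above, every point of the sub-cloud inside a tolerance zone of amplitude t, can be sketched for a flatness-style zone as follows. The least-squares plane association and all names are assumptions of this sketch, not the cited methods' exact criteria.

```python
import numpy as np

def within_form_tolerance(points, t):
    """Point-in-zone form check: passes if every point of the sub-cloud
    lies inside a zone of amplitude t bounded by two parallel planes.
    The reference plane is a least-squares fit (one possible association
    criterion, assumed here for illustration)."""
    pts = np.asarray(points, float)
    c = pts - pts.mean(axis=0)
    # plane normal = right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    d = c @ vt[-1]                 # signed distances to the fitted plane
    return d.max() - d.min() <= t  # zone amplitude check

rng = np.random.default_rng(3)
cloud = np.c_[rng.random((100, 2)), 0.01 * rng.random(100)]  # nearly flat patch
ok = within_form_tolerance(cloud, t=0.05)
```

Note that this check yields only a pass/fail verdict per point cloud, which is exactly the "low-level" behavior the paper contrasts with its GMT-based approach.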
This paper aims at developing a new methodology for the automatic control of
dimensional and geometric tolerances. The methodology is based on a digital
model of the product, hereinafter referred to as GMT (Geometric Model for
Tolerancing), which gives a complete, consistent and efficient description of
its geometrical and dimensional properties with the aim of being able to
specify, simulate, manufacture and inspect them. The GMT is a representation
of the industrial product built from specific conceptual forms that are useful
for interpreting the deviations between the real object and its nominal
counterpart. It is a representation of general validity, which can be built
from the data of any CAD model or from an acquired point cloud that reproduces
the real object. In this paper a new phase, referred to in the following as
Computer Aided Gauge (CAG), is introduced in addition to the methods proposed
in previous works [5, 6, 18]. The CAG performs the functionality of a virtual
gauge operating on a high-density point model of the object to be inspected.
Product model for Dimensioning, Tolerancing and Inspection 1035

2. The proposed methodology and its implementation

Figure 1 shows the flowchart summarizing the proposed methodology for the
automatic control of dimensional and geometric tolerance specifications on a
real workpiece.

Fig. 1. The flowchart of the proposed methodology.

The key component of the proposed methodology is the GMT (Geometric Model
for Tolerancing). The GMT model gives a complete, consistent and efficient
description of the geometrical and topological parameters, and it includes
information usually resident in the 2D drawing, such as dimensions, tolerances
and other non-geometric attributes (surface finish, revision number, etc.). In
previous works [5, 6, 18] the authors presented methods to construct the GMT.
The GMT is generated automatically through a feature recognition process,
using specific rules applied to both the 3D CAD and the 3D scanned models.
More details concerning the method used to recognize geometric features are
reported in [20]. A GMT contains:
- the nominal form properties of the features, with their intrinsic
dimensional characteristics (e.g. the diameter of a cylinder) and intrinsic
localization references (e.g. axes) or references of other types (planes or
axes of symmetry);
- the orientation relationships between features (perpendicularity,
parallelism and special angular orientations);
- the localization relationships (e.g. coincidence between planes or axes of
distinct features);
- the relationships to which a dimension can be associated (typically
geometric entities parallel to each other).
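As a rough illustration, the GMT contents listed above could be organized in a structure like the following. All names and field choices are hypothetical, the paper does not specify its internal data structure; this sketch only mirrors the four categories of information just enumerated.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    """Nominal form properties of one recognized feature (illustrative)."""
    kind: str        # 'plane', 'cylinder', 'cone', ...
    intrinsic: dict  # intrinsic characteristics, e.g. {'diameter': 30.0}
    reference: dict  # intrinsic localization reference, e.g. an axis or plane

@dataclass
class GMT:
    """Geometric Model for Tolerancing: features plus the relationship
    lists named in the text (orientation, localization, dimensions)."""
    features: dict = field(default_factory=dict)
    orientation: list = field(default_factory=list)   # (f1, f2, relation)
    localization: list = field(default_factory=list)  # (f1, f2, relation)
    dimensions: list = field(default_factory=list)    # (f1, f2, value, tol)

gmt = GMT()
gmt.features['hole'] = Feature('cylinder', {'diameter': 30.0},
                               {'axis': ((0, 0, 0), (0, 0, 1))})
gmt.features['base'] = Feature('plane', {}, {'normal': (0, 0, 1)})
gmt.orientation.append(('hole', 'base', 'perpendicular'))
```

A structure of this kind makes the tolerance information active and queryable, in contrast with the passive labels on B-rep entities criticized in the introduction.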
The generation of the GMT involves the formulation of tolerance
specifications that are consistent with the ISO standards and with the
functional, technological and control needs, so that the risk of errors and
misunderstandings is excluded.
The control of the specification on the acquired model is carried out by
means of a virtual gauge, called CAG (Computer Aided Gauge), which implements
the rules for the control of all the tolerances specified in agreement with
the GMT structure. Product inspection and control are performed through a
query to the measured model, previously subjected to a "non-ideal feature"
recognition process. The CAG implements the various rules for tolerance
control that may be prescribed during specification. These rules include the
definitions of the possible datum types and the ways of assessing the
deviation from the nominal reference.

3. Implementation and case study

The proposed methodology has been implemented in original software coded in
MATLAB. In order to verify its performance, the test case depicted in figure 2
is analyzed. The real object (figure 2.a) was scanned with a calibrated FARO®
Edge 9 ft (2.7 m) laser scanner (single-point repeatability between 0.024 mm
and 0.064 mm; the average point spacing of the point cloud was set to 0.1 mm).
By means of a typical reverse engineering process, a manifold tessellated
surface is obtained (figure 2.b). To avoid errors due to the registration of
multiple point clouds, the object was acquired in a single scan. To introduce
the specifications, a CAD model is needed (figure 2.c).
[Figure panels: models for inspection (a, b); model for specification (c).]

Fig. 2. The test case.

Figure 3 shows the main interface of the implemented software. Three key
elements can be highlighted in this interface:
- the control panel, with the tools to apply the proposed methodology;
- the plot area, in which the reference and 3D scanned models are depicted;
in particular, the reference surfaces subject to specification and the
corresponding surfaces of the scanned model to be controlled are colored
congruently;
- the semaphore, which visually displays the control result.

Fig. 3. The main interface of the implemented software.

In the control panel the following tools are available:
- rules;
- construct reference;
- save reference: to save a GMT model also containing the dimensional and
geometric tolerance specifications;
- load reference: to load a GMT model also containing the dimensional and
geometric tolerance specifications;
- load model;
- C.A.G.;
- view report;
- exit: to terminate the program.
The rules tool (figure 4) allows the user to specify all the rules used both
in the two GMT constructions (GMT from B-Rep and GMT from the discrete model)
and in the C.A.G. application.
The construct reference tool (figure 5) allows, first of all, the GMT to be
built from the CAD model by selecting the segmented surfaces to be recognized
(figure 5a). During GMT construction the model is oriented so that the z-axis
is the principal axis of inertia associated with the lowest inertia moment,
the y-axis is associated with the highest inertia moment, and the x-axis is
perpendicular to the previous two. The specifications are inserted on the
recognized surfaces (figure 5b) by means of dialog boxes (figure 5c) which
guide the operator to respect both the ISO standards and the functional and
control needs.
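The orientation rule just described (z along the lowest-inertia-moment axis, y along the highest) can be sketched with a principal component analysis of the vertex coordinates. Using the covariance matrix as a stand-in for the inertia tensor is an assumption of this sketch; the paper's exact tensor and tie-breaking rules are not specified here.

```python
import numpy as np

def orient_by_inertia(points):
    """Orient a point cloud: z along the axis of lowest inertia moment
    (largest spread of the points), y along the highest moment (smallest
    spread), x completing a right-handed frame."""
    pts = np.asarray(points, float)
    c = pts - pts.mean(axis=0)
    w, v = np.linalg.eigh(np.cov(c.T))  # eigenvalues in ascending order
    z = v[:, -1]   # largest spread  -> lowest inertia moment
    y = v[:, 0]    # smallest spread -> highest inertia moment
    x = np.cross(y, z)                  # right-handed completion
    return c @ np.vstack([x, y, z]).T   # coordinates in the new frame

rng = np.random.default_rng(1)
elongated = rng.standard_normal((200, 3)) * [5.0, 2.0, 0.5]
oriented = orient_by_inertia(elongated)  # long direction mapped onto z
```

After this orientation the model's elongation lies along z, which makes the subsequent GMT/scan alignment by principal axes repeatable, up to the bilateral-symmetry ambiguities discussed next in the text.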
The load model tool imports the 3D scanned model and orients it according to
both principal component analysis and the GMT isomorphism. For objects
characterized by some kind of bilateral symmetry, this approach can produce
more than one alignment between the reference and the model; in this case,
more than one alignment is analyzed. By means of the rules chosen by the
operator, the software segments the model into patches and recognizes only
the surfaces that are specified on the reference.

Fig. 4. The rules tools.


Fig. 5. The construct reference tool.

The C.A.G. tool implements an innovative way to inspect the specification.
The control of the specification requires an operation equivalent to
registration, which is carried out by the C.A.G. between the GMT coming from
the B-Rep (GMT CAD) and the GMT obtained from the measured model (GMT
recognized). As a result of this registration, the features of the GMT
recognized that have to be controlled are identified; in other words, the
features of the GMT recognized corresponding to those features of the GMT CAD
subject to specification are identified. Product inspection and control are
carried out through a query to the GMT recognized. The C.A.G. implements the
various rules for tolerance control prescribed during the specification phase.
The view report tool allows the results to be visualized, for each controlled
surface, in terms of specification and error (figure 6).

Fig. 6. The view report tool.

4. Conclusions

In this paper a new methodology for the fully automatic control of
dimensional and geometrical tolerances has been proposed. The key component of
the methodology is the GMT, which gives a complete, consistent and efficient
description of the geometrical and topological parameters and includes
information such as dimensions and tolerances. When using the proposed
methodology, the formulation of the tolerance specification is consistent with
the ISO standards and with the functional, technological and control needs.
Future work will address the connection between functional needs and process
requirements: a module for the simulation of product variability will be
implemented. The analysis of the tolerance specification through a model of
the geometric and dimensional variability can be useful to optimize the
prescription so as to obtain a robust product at minimum cost, as in the
application described in [21]. This cost can also depend on the selection of
the most appropriate type of tolerance to be assigned.

References

1. Di Stefano P. Tolerance analysis and cost evaluation for product life cycle. International Journal of Production Research, 44(10), 2006, 1943-1961.
2. Gao J., Gindy N. and Chen X. An automated GD&T inspection system based on non-contact 3D digitization. International Journal of Production Research, 2006, 44(1), 117–134.
3. Srinivasan V. An integrated view of geometrical product specification and verification. In Proceedings of the 7th CIRP seminar on computer-aided tolerancing, specification and verification, 24–25 April 2001, Cachan, France, CIRP.
4. Barone S., Paoli A. and Razionale A.V. Multiple alignments of range maps by active stereo imaging and global marker framing. Optics and Lasers in Engineering, 2013, 51(2), 116-127.
5. Di Angelo L., Di Stefano P. and Morabito A.E. Automatic evaluation of form errors in high-density acquired surfaces. International Journal of Production Research, 2011, 49(7), 2061-2082.
6. Di Angelo L., Di Stefano P. and Morabito A.E. The RGM data structure: a nominal interpretation of an acquired high point density model for automatic tolerance inspection. International Journal of Production Research, 2012, 50(12), 3416-3433.
7. Germani M., Mandorli F., Mengoni M. and Raffaeli R. CAD-based environment to bridge the gap between product design and tolerance control. Precision Engineering, 2010, 34(1), 7-15.
8. Bonisoli E., Tornincasa S., Delprete C. and Rosso C. Integrated CAD/CAE functional design for engine components and assembly. In: SAE World Congress, Detroit, MI, USA, April 12-14, 2011.
9. Raffaeli R., Mengoni M., Germani M. and Mandorli F. Off-line view planning for the inspection of mechanical parts. International Journal on Interactive Design and Manufacturing, 2013, 7(1), 1-12.
10. Cerardi A., Meneghello R., Concheri G. and Savio G. Form errors estimation in free-form 2D and 3D geometries. Advanced characterization of free-form surfaces in high precision machining. In Proceedings of the 11th International Conference of the European Society for Precision Engineering and Nanotechnology, EUSPEN 2011, 2011, pp. 309-312.
11. Prieto F., Redarce T., Lepage R. and Boulanger P. An automated inspection system. International Journal of Advanced Manufacturing Technology, 2002, 19(12), 917–925.
12. Li Y. and Gu P. Inspection of free-form shaped parts. Robotics and Computer-Integrated Manufacturing, 2005, 21(4–5), 421–430.
13. Wong F.S.Y., Chuah K.B. and Venuvinod P.K. Automated inspection process planning: algorithmic inspection feature recognition, and case representation for CBR. Robotics and Computer-Integrated Manufacturing, 2006, 2, 56–68.
14. Li Y. and Gu P. Free-form surface inspection techniques—state of the art review. Computer-Aided Design, 2004, 36(13), 1395–1417.
15. ISO/TS 17450-1: 2005. Geometrical product specification (GPS) – general concept – part 1: model for geometric specification and verification. Geneva, Switzerland: International Organization for Standardization.
16. ISO 1101. Geometrical product specifications (GPS). Geometrical tolerancing. Tolerances of form, orientation, location and run-out.
17. ASME Y14.5. Dimensioning and Tolerancing.
18. Di Angelo L., Di Stefano P. and Morabito A.E. Recognition of intrinsic quality properties for automatic geometric inspection. International Journal on Interactive Design and Manufacturing, 2013, 7(4), 203-215.
19. Bonazzi E., Vergnano A. and Leali F. ANOVA of 3D variational models for computer aided tolerancing with respect to the modeling factors. In International Conference on Innovative Design and Manufacturing, ICIDM, Auckland, New Zealand, January 2016.
20. Di Angelo L. and Di Stefano P. Geometric segmentation of 3D scanned surfaces. Computer-Aided Design, vol. 62, 2015, pp. 44-56.
21. Governi L., Furferi R. and Volpe Y. A genetic algorithms-based procedure for automatic tolerance allocation integrated in a commercial variation analysis software. Journal of Artificial Intelligence, 2012, 5(3), pp. 99-112.
Section 7.2
Geometric and Functional Characterization
of Products
Segmentation of secondary features from high-density acquired surfaces

DI ANGELO L.1, DI STEFANO P.1 and MORABITO A.E.2*


1 Department of Industrial Engineering, University of L'Aquila, L'Aquila, Italy
2 Department of Engineering for Innovation, University of Salento, Lecce, Italy
* Corresponding author. Anna Eva Morabito. Tel.: +390832297772; E-mail address:
annaeva.morabito@unisalento.it

Abstract A new method for the segmentation of secondary features in
high-density acquired geometric models is proposed. Four types of secondary
features are considered: fillets, rounds, grooves and sharp edges. The method
is based on an algorithm that analyzes the principal curvatures. The nodes
potentially attributable to a fillet of given geometry are those with a
certain value of the maximum principal curvature. Since the deterministic
application of this simple working principle shows several problems, due to
the uncertainties in the curvature estimation, a fuzzy approach is proposed.
In order to segment the nodes of a tessellated model that pertain to the same
secondary feature, proper membership functions are evaluated as functions of
some parameters which affect the quality of the curvature estimation. A region
growing algorithm then connects the nodes pertaining to the same secondary
feature. The method is applied and verified on some test cases.

Keywords: Region growing algorithm, computational geometry, feature
extraction, mechanical engineering computing, fuzzy logic.

1 Introduction

Mechanical components require some features, usually called secondary
features (fillets, rounds, chamfers and grooves), to satisfy technological
and functional requirements. Secondary features generally occur in the
transition between two intersecting surfaces (primary geometric features) of
the object. Although secondary features are necessary from a functional point
of view, they are not significant in the semantic evaluation of a mechanical
component, since their presence usually does not affect the engineering
significance of the object. From a geometrical point of view, secondary
features are cylinders, planar surfaces or tori, but they have to be
distinguished from primary features of
© Springer International Publishing AG 2017 1043


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_104

the same object, which could have the same geometric properties. During CAD
modelling, these secondary features are added at a later stage to dress up the
object, by using dedicated modelling features and relying on the designer's
experience or on Knowledge Based Engineering applications [1]. Correctly
distinguishing secondary geometric features from primary ones is particularly
important in many applications. This is the case of feature recognition aimed
at tolerance inspection, which usually does not investigate secondary
features [13].
Although a wide literature is available on feature recognition, the
investigation of secondary features is a relatively new topic. All the
pertinent literature [2, 3, 4, 5, 6] concerns only secondary feature
recognition from B-Rep models, which is a relatively simple recognition
problem. In discrete models the recognition process is more complex, mainly
because of the limited extension of the secondary features with respect to the
sampling of the mesh.
In this paper a new methodology is proposed to segment fillets, rounds and
grooves from mechanical components described in the form of tessellated
models. Furthermore, sharp edges are recognized. The recognition method is
presented and tested on some critical cases.

2 Definitions

Some definitions, specifically suited to describing the methods presented in
the rest of the paper, are given below.
A Constant Radius Secondary Feature (CRSF) is a feature of the surface model
obtained by sweeping a profile along a path (the sweep line of the feature),
whose transverse section (obtained by a principal normal section orthogonal to
the sweep line) is circular with radius RA; RA is associated with the greater
principal curvature. The characteristic radius RA of a CRSF is an apparent
value of the radius: it is not a measurement performed on the CRSF, since its
evaluation is affected by the method used to detect it in the tessellated
model. The method used in this work to evaluate curvatures is based on the
paraboloid approximation, which generally underestimates the real value of the
feature radius [9]. RA is therefore not an absolute value of the CRSF radius,
but a relative estimation useful for grouping the nodes of the tessellated
model on the basis of their curvature values.
Secondary features, which frequently occur in mechanical co mponents that are
characterized by a CRSF are fillets, rounds and grooves. CRSF is typically located
in the transition between two primary features and it may occur either in a smooth
or in a sharp transition (figure 1.a). In fillets or rounds, CRSF has a smooth trans i-
tion with both the adjacent primary features (figure 1.b). Grooves can be identified
by one or both transitions of the CRSF which are sharp (figure 1.b). Fillets may
occur in convex transition whereas rounds occur in concave transition (figure 1.b).
The sweep line of a CRSF in real-world mechanical parts may be mostly straight
Se gmentation of secondary features … 1045

or circular line, so that, in most of cases, the geometry of CRSF are cylinders or
tori.
[Figure 1: (a) a CRSF lies between two primary features (PF), with smooth or sharp transitions; (b) examples for the primary feature pairs plane-plane, plane-cylinder, plane-cone and cylinder-cone, classified as fillet (smooth convex transition), round (smooth concave transition) and groove (sharp transition).]

Fig. 1. Examples of Constant Radius Secondary Features (CRSF)

3 Recognition of Constant Radius Secondary Features

The method here proposed for the segmentation of secondary features consists
of the following principal phases:
- curvature estimation at all the nodes of the tessellated model;
- identification of the nodes that are potentially attributable to CRSF;
- identification of each single CRSF;
- sharp edge identification;
- secondary feature type attribution.
The first phase of the method consists in the estimation of the geometric differential
properties (such as the normal unit vector and the principal curvatures) at all nodes of
the triangular mesh. According to the results reported in [9], in this
paper the normal vector and the principal curvatures are evaluated at each node
by using, respectively, the medial quadric method [10] and the 5-coefficient
paraboloid fitting method [11].
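As an illustration of this first phase, the principal curvatures at a node can be estimated by least-squares fitting of a 5-coefficient paraboloid to the node's neighbourhood, expressed in a local frame aligned with the node normal. The sketch below is a minimal, hypothetical implementation of this general technique, not the authors' code; it assumes the normal is already available (e.g. from the medial quadric method) and reads the curvatures from the eigenvalues of the fitted quadratic form.

```python
import numpy as np

def principal_curvatures(node, neighbors, normal):
    """Estimate the principal curvatures at a mesh node by fitting the
    5-coefficient paraboloid z = a*x^2 + b*x*y + c*y^2 + d*x + e*y to
    neighbouring nodes expressed in a local frame whose z-axis is the
    node normal.  Returns (k_max, k_min)."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    # Build an orthonormal tangent basis (u, v, n).
    u = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:        # normal almost parallel to x-axis
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    # Express the neighbours in the local frame centred at the node.
    q = np.asarray(neighbors, dtype=float) - np.asarray(node, dtype=float)
    x, y, z = q @ u, q @ v, q @ n
    # Least-squares fit of the five paraboloid coefficients.
    A = np.column_stack([x * x, x * y, y * y, x, y])
    (a, b, c, d, e), *_ = np.linalg.lstsq(A, z, rcond=None)
    # For a paraboloid tangent to the xy-plane (d, e ~ 0) the principal
    # curvatures are the eigenvalues of the quadratic-form matrix.
    k_min, k_max = np.linalg.eigvalsh(np.array([[2 * a, b], [b, 2 * c]]))
    return k_max, k_min
```

On a sphere of radius R, both curvatures come out close to 1/R in absolute value, with the estimated radius slightly smaller than R, which is consistent with the remark above that the paraboloid approximation underestimates the feature radius and that RA is an apparent value.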

3.1 Identification of nodes potentially attributable to CRSF

This phase is based on the analysis of the recurrence frequencies of the principal
curvature values. The nodes potentially attributable to CRSF are those with
maximum principal curvature (in absolute value) falling within a certain range of
values. Since the values of the CRSF radii are unknown, a frequency analysis is
carried out so that the most recurrent values of the maximum curvature are identified.
In figure 2 the histogram of the frequency of the RA values, detected at all the
nodes of the model, is reported. The peaks of the histogram identify sets of nodes
pertaining to CRSF having specific values RAi. For each peak, a most frequent
radius value RAi is identified, so that n values of the radius are classified. One or
more CRSF can have the same RAi (figure 2) and it is possible that nodes with a
recognized value of RAi really do not pertain to any CRSF.


Fig. 2. The synthetic mesh of a shaft (a) and the corresponding RA histogram (b).
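The frequency analysis just described can be sketched as follows. This is a simplified, hypothetical version of the peak detection on the RA histogram: the bin width and the minimum peak count are illustrative tuning parameters, not values taken from the paper.

```python
import numpy as np

def candidate_radii(ra_values, bin_width=0.05, min_count=10):
    """Histogram the apparent radii R_A of all mesh nodes and return the
    most recurrent values (histogram peaks), which identify the candidate
    CRSF radii R_Ai.  bin_width and min_count are tuning parameters."""
    ra = np.asarray(ra_values, dtype=float)
    bins = np.arange(ra.min(), ra.max() + bin_width, bin_width)
    counts, edges = np.histogram(ra, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    peaks = []
    for i in range(len(counts)):
        left = counts[i - 1] if i > 0 else 0
        right = counts[i + 1] if i < len(counts) - 1 else 0
        # A peak is a sufficiently populated local maximum; the asymmetric
        # comparison (>= left, > right) keeps one peak per flat run.
        if counts[i] >= min_count and counts[i] >= left and counts[i] > right:
            peaks.append(centres[i])
    return peaks
```

Each returned value plays the role of one RAi; nodes whose apparent radius falls near a peak are the candidates passed on to the fuzzy grouping of section 3.2.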

Due to the reduction of a continuous regular surface to a tessellated model, the
curvature values estimated at nodes belonging to the same CRSF are dispersed.
This dispersion is greater the coarser the mesh is. Due to the small size of secondary
features, they are typically coarsely acquired. This is true for synthetic meshes
but it is much more evident in experimentally acquired meshes, where the effects
of noise and of the manufacturing errors add to the difficulty of acquiring the
secondary features.
The nodes that pertain to the same CRSF must satisfy two conditions: they are
adjacent to each other and they have the same radius RA. The first condition is
verified by a region-growing algorithm. The second condition is analyzed by properly
assuming a membership level of a node to the set of nodes having the same radius
RA, by a fuzzy logic approach.

3.2 Identification of each CRSF

The nodes of the tessellated model can be grouped into those pertaining to
sharp edges and regular nodes. Some of the regular nodes can belong to CRSF
rather than to primary features. In the CRSF category n subsets are identified, each of
them containing nodes with a specific value of RAi (1 ≤ i ≤ n); n is the number
of peaks in the histogram of the frequency of the RA values. The grouping of the
nodes into the previously mentioned sets is performed by a fuzzy-sets approach, so
that, for each node, a membership value is automatically assigned. The membership
functions are denoted with μe (membership to sharp edges), μ̃r(RAi) (membership
to a CRSF having radius RAi) and μ̃¬CRSF(RAi) (generic membership to primary
features). Since a generic node can belong unequivocally only to these three
categories, these functions must satisfy, for each node of the tessellated model, the
following condition:

    μe + Σi=1..n μ̃r(RAi) + Σi=1..n μ̃¬CRSF(RAi) = 1    (1)

The symbol ~ denotes that the functions satisfy the normality condition.


In order to determine the value of the membership functions a suitable set of
parameters has been identified, which affects the uncertainties in assigning the type
of category. This set of parameters must meet some requirements: to be calculable
regardless of the knowledge of the radius of curvature, to be independent of
each other and to be dimensionless. The parameters considered in this work are
suited to evaluate, at each node, the smoothness of the surface (sharpness indicator
SHI(P) [7]) and the quality of the tessellation (factor of curvature approximation
γ [8]).
The membership function μe ∈ [0,1] is assumed to be a linear function of the
estimated value of the sharpness indicator SHI(P) (figure 3). μe is defined by two
characteristic parameters:
- ae is the value of SHI(P) below which the node can surely be recognized to
be smooth (not a sharp edge);
- be is the lowest value of SHI(P) above which the node is non-regular (i.e. it
belongs to a sharp edge).
The values of ae and be are determined in accordance with the results obtained
in [8].
In order to take into account the quality of the surface tessellation, the factor of
curvature approximation γ [8] is used. This factor is defined as the maximum value
of the tangent of the dihedral angle between two adjacent triangular faces
belonging to the 1-neighbourhood of the node. The membership function μ̃r(RAi) is a
trapezoidal function (figure 4) that is equal to 1 between RAi − t/2 and RAi + t/2 and
equal to 0 for r < RAi − t/2 − 4σ and for r > RAi + t/2 + 4σ. The values of t and σ are
determined by a specific experimentation, in which several test cases have been
analyzed and the uncertainties in the radius estimation tabulated as a function of
the γ values and of the level of neighbourhood. Figure 5 summarizes this phase.
Fig. 3 The membership function μe.  Fig. 4 The membership function μ̃r(RAi).
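The trapezoidal membership function of figure 4 can be written down directly. The sketch below is an illustrative implementation, where t and sigma are the calibration parameters discussed above:

```python
def mu_radius(r, R, t, sigma):
    """Trapezoidal membership of an estimated radius r to the fuzzy set of
    nodes with characteristic radius R: equal to 1 on the plateau
    [R - t/2, R + t/2] and decreasing linearly to 0 over a band of width
    4*sigma on each side (t and sigma come from calibration)."""
    lo, hi = R - t / 2.0, R + t / 2.0
    if lo <= r <= hi:
        return 1.0                                   # plateau
    if r <= lo - 4.0 * sigma or r >= hi + 4.0 * sigma:
        return 0.0                                   # outside the support
    if r < lo:
        return (r - (lo - 4.0 * sigma)) / (4.0 * sigma)   # left ramp
    return ((hi + 4.0 * sigma) - r) / (4.0 * sigma)       # right ramp
```

For example, with R = 2, t = 0.2 and sigma = 0.05, the membership is 1 on [1.9, 2.1], falls linearly to 0 at 1.7 and 2.3, and is 0 beyond.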

Once the membership functions have been determined, each node is associated
to one of the previously mentioned categories. This phase gives rise to a fuzzy
aggregation of the nodes based on RA, without distinguishing the different secondary
features (one or more secondary features can have the same RA).


Fig. 5 The construction of the membership function μ̃r(RAi).

In order to aggregate the adjacent nodes which are recognized to be similar into
a single secondary feature, a growing algorithm is applied. This algorithm works
based on the fuzzy concepts of dissimilarity or similarity of two linguistic variables
of identical type [12]. The region-growing algorithm starts at the node (seed
node ps) of the mesh where the maximum membership degree is reached for an
assigned category. Nodes recognised to be similar are aggregated in the same
secondary feature. Once all the nodes in the 1-ring neighbourhood of ps have
been examined, the procedure continues considering the 1-ring neighbourhoods of
those nodes that have been recognized as “similar” to the node ps. The region-growing
algorithm stops when dissimilar nodes are met or all the nodes have been
analysed.
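The region-growing step can be sketched as a breadth-first traversal. The version below is hypothetical: it replaces the fuzzy linguistic-variable (dis)similarity of [12] with a simple threshold on the membership gap between adjacent nodes.

```python
from collections import deque

def grow_region(membership, adjacency, similarity=0.3):
    """Aggregate adjacent nodes recognised as similar into one feature.
    membership: dict node -> membership degree for one fuzzy category;
    adjacency:  dict node -> iterable of neighbouring nodes.
    Starts from the seed node with maximum membership and grows through
    the 1-ring neighbourhoods; a threshold on the membership gap stands
    in for the fuzzy (dis)similarity measure of the paper."""
    seed = max(membership, key=membership.get)
    region, frontier = {seed}, deque([seed])
    while frontier:               # stops when only dissimilar nodes
        p = frontier.popleft()    # remain on the border of the region
        for q in adjacency[p]:
            if q not in region and abs(membership[p] - membership[q]) <= similarity:
                region.add(q)
                frontier.append(q)
    return region
```

Run once per fuzzy category (each candidate RAi, sharp edges), the traversal splits the nodes sharing the same RAi into the distinct secondary features they actually belong to.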

3.3 CRSF type attribution

The recognition of the type of secondary feature is carried out based on the nature
of the transition with the adjacent primary features. In particular, if the CRSF
is bounded by at least one sharp edge, it is recognized as a groove (figure 6),
otherwise it is a fillet or a round. Fillets are distinguished from rounds based on
whether the CRSF is convex or concave.

[Figure 6: fillet recognition, where the CRSF has smooth transitions with both adjacent primary features (PF), versus groove recognition, where the CRSF is delimited by sharp edges.]

Fig. 6 Analysis of the edges delimiting the CRSF.

4 Test case and discussion

In order to check the method proposed here, some case studies have been
examined. Firstly a shaft, tessellated with a random distribution of nodes on its
surface, has been analyzed. Figure 7 shows the results of the segmentation
process of the secondary features; sharp edges are also identified.
In order to test the method in cases where different characteristics of the mesh are
met, a second case is presented (figure 8). In this test case there are portions
of the secondary features that are roughly tessellated. In these portions, the value of the
radius at the nodes is estimated with a higher level of uncertainty, but the proposed
fuzzy approach performs a correct aggregation of the nodes.

5 Conclusions

A new method to recognize secondary features in tessellated models is presented.
The proposed method executes the secondary feature segmentation by performing
an adaptive process suited to resolve the uncertainties that typically affect the
geometric recognition process in tessellated models. Further work is required to
implement the recognition of chamfers, which are other important secondary
features typically met in mechanical components.

CRSF #   RA
1        4.017
2        1.591
3        2.528
4        2.526
5        1.130
6        1.152
7        1.127
8        1.139
Legend: sharp edges, regular points, concave CRSF, convex CRSF, grooves.

Fig. 7. Secondary Features recognition: test case 1.

γ map and recognition results (two panels):
Panel 1 — CRSF #: RA:  1: 2.025,  2: 2.519,  3: 2.097
Panel 2 — CRSF #: RA:  1: 2.027,  2: 2.52,   3: 2.097
Legend: sharp edges, regular points, convex CRSF, grooves.
Fig. 8. Secondary Features recognition: test case 2.


Se gmentation of secondary features … 1051

References

1. Peroni M., Vergnano A., Leali F. and Forte M. Design Archetype of Transmission Clutches
for Knowledge Based Engineering. In International Conference on Innovative Design and
Manufacturing, ICIDM, Auckland, New Zealand, January 2016.
2. Bianconi F. and Di Stefano P. An intermediate level representation scheme for secondary
features recognition and B-rep model simplification. In Shape Modeling International, 2003
(pp. 99-108). IEEE.
3. Di Stefano P., Bianconi F., & Di Angelo L. An approach for feature semantics recognition
in geometric models. Computer-Aided Design, 36(10), 2004, 993-1009.
4. Sheen D. P., Son T. G., Myung D. K., Ryu C., Lee S. H., Lee K., & Yeo T. J. Transformation
of a thin-walled solid model into a surface model via solid deflation. Computer-Aided
Design, 42(8), 2010, 720-730.
5. Zhao L., Tong R., Dong T., & Dong J. (2005, May). Brep model simplification for feature
suppressing using local error evaluation. In CSCWD (2) (pp. 772-776).
6. Hariya M., Nonaka N., Shimizu Y., Konishi K., & Iwasaka T. (2010, July). Technique for
checking design rules for three-dimensional CAD data. In 2010 3rd International Conference
on Computer Science and Information Technology.
7. Di Angelo L. and Di Stefano P. C1 continuities detection in triangular meshes. Computer-
Aided Design, vol. 42 (9), 2010, pp. 828-839.
8. Di Angelo L., Di Stefano P. "Geometric segmentation of 3D scanned surfaces". Computer-
Aided Design, vol. 62, 2015, pp. 44-56, ISSN: 0010-4485.
9. Di Angelo L., Di Stefano P. Experimental comparison of methods for differential geometric
properties evaluation in triangular meshes. Computer-Aided Design and Applications, vol.
8 (2), 2011, pp. 193-210.
10. Jiao X., Alexander P.J. Parallel feature-preserving mesh smoothing. In International Confer-
ence on Computational Science and Its Applications (4), 2005, pp. 1180-1189.
11. Petitjean S. A Survey of Methods for Recovering Quadrics in Triangle Meshes. ACM Com-
puting Surveys, vol. 2 (34), 2002, pp. 1-61.
12. Cox E., 1994. The Fuzzy Systems Handbook, Cambridge, MA: AP Professional.
13. Di Angelo L., Di Stefano P., Morabito A. E. "Automatic evaluation of form errors in high-
density acquired surfaces". International Journal of Production Research, vol. 49 (7), 2011,
pp. 2061-2082, ISSN: 0020-7543.
Comparison of mode decomposition methods
tested on simulated surfaces

Alex BALLU1*, Rui GOMES2, Pedro MIMOSO2, Claudia CRISTOVAO2 and
Nuno CORREIA2

1 Univ. Bordeaux, I2M, UMR 5295, Talence, France
2 INEGI, Porto, Portugal
* Corresponding author. Tel.: +33 5 56 84 53 87; E-mail address: alex.ballu@u-bordeaux.fr

Abstract Modal decomposition methods for surfaces are increasingly used to
analyse the typical geometric defects of manufactured surfaces. According
to the context, this decomposition can either be done on a basis which is known a
priori (e.g. Discrete Cosine Transform, natural vibration modes, etc.) or on a basis
that is identified from a set of measured surfaces (i.e. manufacturing-dependent
“technological modes”, using Principal Component Analysis or Independent
Component Analysis). In this paper, a set of simulated surfaces is generated by
linear combination of a given typical defect set in order to compare the efficiency
of two different kinds of techniques: 1) methods founded on an a priori basis and
2) multivariate analysis methods. The key modes are identified for each method
and compared to the technological modes used to generate the trial surfaces.
From this study it may be concluded that while the first kind of method does not
allow the identification of the technological modes, the second does provide
possible insight into the production technologies.

Keywords: Geometric defect, Mode decomposition, Multivariate analysis, Principal
Component Analysis, Independent Component Analysis

1 Introduction

Tolerancing is a wide area of research, where many different subjects are devel-
oped, particularly tolerance analysis and synthesis, as well as metrology of parts
and surfaces. Nevertheless, much more effort must be devoted to several
forgotten topics. One of these topics concerns process capability. For tolerance
analysis and synthesis, one needs data to feed numerical simulation. Currently, the
datasets used essentially describe 1D process capability, even for 3D simulations. To
reduce the uncertainty of 3D simulations, 3D process capability data must be

© Springer International Publishing AG 2017 1053


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_105

completed. Some works already present 3D analyses of the geometry of manufac-
tured surfaces. One way to analyse the geometrical defects consists in identifying
the surfaces by decomposition into linear combinations of basic surfaces, or
modal decomposition. Three main approaches can be distinguished according to
the method used to generate the surfaces of the basis:
(1) the surfaces are predefined, without any process or surface knowledge,
(2) the surfaces are identified by experience, from process knowledge,
(3) the surfaces are identified from a sample of manufactured surfaces.
The first approach (1) uses, for example, the Discrete Cosine Transform (DCT) or
the natural vibration modes of the surface. This technique is presented in section 3.
The principal disadvantage of this kind of approach is the disconnection between
the results and the technological modes, i.e. the surface defects due to each of the
physical phenomena induced by the process.
The two other approaches take these physical phenomena into account either
by experience or by measurement. An example of the second family of methods (2) is
the use of quadratic surfaces, which are typically considered to be a good model
for machined surfaces.
The third type of method, based on measurements of the resulting compo-
nents, uses multivariate analyses to retrieve technological modes. Two approaches
are presented in section 4: Principal Component Analysis (PCA) and Independent
Component Analysis (ICA).
The paper intends to point out the pros and cons of these different methods. For
this purpose, a sample of surfaces is generated by computation from a known basis
(section 2). The different methods are applied to this sample. As the basis is
known, it is possible to compare the results to this basis and to conclude on the
behavior of each method.

2 Simulated surfaces

2.1 Technological basis

To illustrate and compare the different methods of surface analysis, a sample of
simulated surfaces is generated from a technological basis. The technological ba-
sis is a set of surfaces which represent different types of defects that are imagined
to simulate the manufactured surfaces. The considered surfaces are a twisted sur-
face, a surface with a step and a curved surface (Figure 1), respectively denoted T1,
T2 and T3. The surfaces are squares with a size of 150 mm x 150 mm and are de-
fined by a regular square mesh of 31x31 points. The equations of the surfaces are:

    z_T1 = xy / 150²;   z_T2 = 2H(x) − 1;   z_T3 = 1 − 2(1000 − √(1000² − y²)) / (1000 − √(1000² − 150²))    (1)

where H(x) is the Heaviside step function.
In the paper, all the surfaces are normalized so that the maximum of the abso-
lute values of z is equal to 1, and the figures represent the surfaces of the different
bases with a scale of 50 according to the z-coordinates.

Fig. 1. Technological basis: (T1) twisted surface, (T2) surface with a step, (T3) curved surface

2.2 Surface generation by linear combination

From the technological basis, surfaces are generated by linear combination:

    z = a1·z_T1 + a2·z_T2 + a3·z_T3    (2)

where the ai are random variables with a Gaussian distribution centred on 0. The stan-
dard deviations of the variables are respectively 0.02, 0.01 and 0.015. A Gaussian
noise with a standard deviation of 0.01 is added to every point to simulate uncer-
tainties. A sample of 60 surfaces is generated; two surfaces are presented in Figure
2 (with a scale of 1000 according to the z-coordinates).

Fig. 2. Generated surfaces.
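The generation procedure of equations (1) and (2) can be reproduced in a few lines of numpy. The sketch below is illustrative: the placement of the axes on the 150 mm square (here centred, with x, y in [−75, 75]) is an assumption, as the paper does not state it.

```python
import numpy as np

def normalise(z):
    """Scale a surface so that max|z| = 1, as done in the paper."""
    return z / np.abs(z).max()

# 31x31 regular mesh on a 150 mm x 150 mm square (centred axes assumed).
x = np.linspace(-75.0, 75.0, 31)
X, Y = np.meshgrid(x, x)

z_t1 = normalise(X * Y / 150.0**2)            # T1: twisted surface
z_t2 = np.where(X >= 0.0, 1.0, -1.0)          # T2: step, 2*H(x) - 1
z_t3 = normalise(1.0 - 2.0 * (1000.0 - np.sqrt(1000.0**2 - Y**2))
                 / (1000.0 - np.sqrt(1000.0**2 - 150.0**2)))  # T3: curved

# Sample of 60 surfaces: Gaussian coordinates plus Gaussian noise (eq. 2).
rng = np.random.default_rng(0)
sigmas = (0.02, 0.01, 0.015)                  # std devs of a1, a2, a3
surfaces = [sum(rng.normal(0.0, s) * z
                for s, z in zip(sigmas, (z_t1, z_t2, z_t3)))
            + rng.normal(0.0, 0.01, X.shape)
            for _ in range(60)]
```

Each element of `surfaces` is one simulated 31x31 measurement; this is the kind of sample decomposed on the various bases in the following sections.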



Before studying the decomposition of the simulated surfaces by different
methods on different bases, the sample is decomposed on the technological basis
for verification. Each simulated surface is decomposed on T1, T2 and T3; thus, the
coordinates a1, a2 and a3 are retrieved and the standard deviations of these coordi-
nates can be computed. The values obtained are respectively 0.0195, 0.01 and
0.014. They differ from the theoretical values (0.02, 0.01 and 0.015) because the
standard deviations are computed on a sample of surfaces, not on the population.
By recomposition from the coordinates and the basis, it is possible to recon-
struct the surfaces. However, these surfaces are not identical to the original ones,
because the noise is not taken into account. After the computation of the standard
residual values between the original and the reconstructed surfaces, we can con-
clude that the technological basis “explains” 65% of the observed surface defects.

3 Modal decomposition on a predefined basis

The most often used method to analyse the form defects of manufactured surfaces is
modal decomposition. The common principle is to define an a priori basis of sur-
faces and to decompose the actual surfaces on this basis.
Many different bases may be considered. The most often used ones are based
on Fourier series [1, 2], the Discrete Cosine Transform [3, 4] or the natural vibration
modes of the surface [5, 6]. Modal decompositions by DCT and natural
vibration modes are presented in the following sections.

3.1 Discrete Cosine Transform

The Discrete Cosine Transform converts a signal from the spatial domain to the
frequency domain. It is widely used, for example, for image compression.
Applied to a plane, the z-coordinates of the DCT surface basis are cosine func-
tions with different frequencies along the x and y axes. Each surface of the ba-
sis is identified by two indexes, u and v, related to the frequencies in x and y. The
surfaces are denoted DCTu,v and a number of them are represented in Figure 3.
Figure 4 presents the standard deviations of the coordinates of the 60 simulated
surfaces in the DCT basis according to the indexes u and v. The figure clearly
shows several preponderant modes: (1,2), (2,2), (3,1), (1,1), and the series of modes
(1,v) and (u,1). The mode (1,1) corresponds to a variation in translation. The
mode (2,2) closely approximates T1. Mode (3,1) approximates the opposite of T3,
while the other modes (u,1) help to approach T3. T2 is approximated by the mode
(1,2) and the following modes (1,v). T1 corresponds directly to DCT2,2; T3 and
particularly T2 need whole series of DCT modes to be well approximated.
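Decomposing a surface on the DCT basis amounts to projecting it on separable cosine modes. A minimal numpy sketch follows, using an orthonormal DCT-II basis; the indexing here is 0-based, so the paper's mode (1,1), the constant translation mode, is C[0,0].

```python
import numpy as np

def dct_basis_1d(n):
    """Orthonormal 1-D DCT-II basis, one mode per row of an n x n matrix."""
    k = np.arange(n)
    B = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    B[0] /= np.sqrt(n)           # constant mode
    B[1:] *= np.sqrt(2.0 / n)    # oscillating modes
    return B

def dct2_coefficients(z):
    """Coordinates of a square surface z on the separable 2-D DCT basis:
    C = B z B^T, so C[u, v] weighs the mode DCT_{u+1, v+1} of the paper."""
    B = dct_basis_1d(z.shape[0])
    return B @ z @ B.T
```

Applying `dct2_coefficients` to each surface of the sample and taking the standard deviation of each coefficient over the 60 surfaces reproduces the kind of map shown in Figure 4.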



Fig. 3. DCT basis.

Fig. 4. DCT: Standard deviations.

3.2 Natural vibration modes

Decomposition according to natural vibration modes has been deeply investigated
by Samper et al. [5, 6]. This decomposition corresponds to the natural vibration
modes of the considered surfaces; they are computed using Finite Element Analy-
sis. For the plane, the first three modes correspond to a translation and two rota-
tions of the surface (Figure 5). The next modes present form and texture defects
with higher and higher frequencies.


Fig. 5. Natural vibration basis.

Figure 6 presents the standard deviations of the coordinates for the 60 surfaces
of the sample. The principal modes are the modes 1, 2, 3, 4, 5, 6, 9, 10, 23, 24 and
44. The mode 4 corresponds to T1. The sum of the modes 5 and 6 corresponds to
T3. The interpretation of the other principal modes is much more difficult; they
must correspond to the step surface T2. Because the technological basis does not
correspond to the natural vibration modes, this decomposition is unable to retrieve
the technological modes directly. The computation of the standard residual values
leads to the conclusion that the first three natural vibration modes explain 18% of the
observed surface defects, and the first ten 60%.


Fig. 6. Natural vibration modes: Standard deviations.



By nature, a decomposition on a predefined basis cannot retrieve a technological
basis and needs numerous modes to reproduce the measured surfaces. To improve
these methods, some technological modes may be introduced into the basis when
they are known [7]. In any case, these methods give information about
the frequency content of the defects. Natural vibration modes are of particular
interest because they can be applied to every kind of surface (plane, cylinder, etc.).

4 Multivariate analyses

The second approach is more in accordance with our objective; it is based on
statistical methods which analyse a series of shapes and extract a basis from them.
These methods belong to the domain of multivariate analysis. The best-
known and most used method is Principal Component Analysis (PCA). Numerous stud-
ies are based on PCA and, among other applications, PCA is used for shape [8]
and manufactured part [9, 10, 11, 12] analyses. Nevertheless, while PCA is very
well adapted to building a compact model of a large set of data, it does not allow one
to find the principal deformations of a shape [13].
Several other techniques are grouped under the term Factor Analysis (FA)
(while some authors include PCA in FA, we distinguish the two groups). Among
these methods, we retained Independent Component Analysis (ICA) [14]. ICA has
been used for shape analysis and compared to PCA [15], and has been applied in
mechanical engineering to inspect tire canvas [16]. In section 4.2 we apply the
Fast-ICA algorithm of Hyvärinen and Oja [14] to manufactured surface analysis.

4.1 Principal Component Analysis

The Principal Component Analysis (PCA) method leads to as many components in
the basis as there are surfaces in the sample. They are classified according to their
influence. The first modes are presented in Figure 7. Among the standard devia-
tions (Figure 8), the first three modes are preponderant; the following modes are
negligible. The first three modes explain 66% of the defects, to be compared to the
65% explained by the technological modes in section 2.2. The result is in accor-
dance with the fundamental principle of PCA, which is to search for the principal
components. In comparison with DCT or natural vibration modes, the preponder-
ant modes are far fewer.
The mode 3 corresponds to T1 (twist). The modes 1 and 2 are a mix of T2 (step)
and the opposite of T3 (curved). The following modes correspond to the noise
added to the surfaces. PCA makes it possible to identify the technological basis
in a reduced number of modes; nevertheless, the modes are generally not sepa-
rated, but mixed, as in the case studied.
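The PCA of the sample can be computed directly from the SVD of the centred data matrix, with one flattened surface per row; the standard deviations of the coordinates plotted in Figure 8 are then the singular values scaled by the square root of the sample size minus one. This is a generic sketch, not the authors' code:

```python
import numpy as np

def pca_modes(surfaces):
    """PCA of a sample of surfaces: each surface is flattened into a row
    of the data matrix, which is centred and factored by SVD.  Rows of
    `modes` are the principal modes; `stds` are the standard deviations
    of the sample coordinates on each mode; `coords` are the coordinates
    of each surface on the modes."""
    X = np.asarray([s.ravel() for s in surfaces])
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    stds = S / np.sqrt(len(surfaces) - 1)
    coords = U * S          # Xc = coords @ Vt, exactly
    return Vt, stds, coords
```

Keeping only the modes with non-negligible `stds` gives the compact model discussed above; for the simulated sample, three modes dominate.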

Fig. 7. PCA basis.

Fig. 8. PCA Standard deviations.

4.2 Independent Component Analysis

The aim of Independent Component Analysis is not to look for the principal
components but for components that are independent of each other. For our prob-
lem, it corresponds to the search for independent geometrical defects due to tech-
nological causes.
The application of the Fast-ICA algorithm [14] to the studied sample leads to noisy
modes, except for one mode similar to T2. The result is not satisfying and is due to the
very noisy surfaces. One has to know that ICA is sensitive to noisy signals. To cir-
cumvent this problem, a solution consists in filtering the surfaces before the appli-
cation of ICA. One way to filter the surfaces is to consider the preponderant
modes of PCA. Thus, a new sample is reconstructed by combination of these three
modes and the corresponding coordinates. Figure 9 presents the two surfaces of
figure 2 after the application of the PCA filter.
When applied to this filtered sample, ICA produces only three modes (figure
10) because the data are linked together. One can recognize the original
technological basis without difficulty. Mode 3 is just the opposite of T3. The standard
deviations of the coordinates are respectively 0.0192, 0.0123 and 0.0152. They
are to be compared to the values obtained in section 2.2 (0.0195, 0.01 and 0.014),
which demonstrates a good consistency between the results. The three modes
explain 66% of the defects, as do the first three PCA modes, because the three ICA
modes correspond to a combination of the first three PCA modes.
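The PCA filtering step used before ICA, i.e. reconstructing each surface from its first few principal components only, can be sketched as follows; the subsequent ICA itself (e.g. the Fast-ICA algorithm of [14], or an existing implementation of it) would then be run on the filtered, flattened surfaces. This is an illustrative implementation, not the authors' code:

```python
import numpy as np

def pca_filter(surfaces, n_modes=3):
    """Filter a sample of surfaces by keeping only the first n_modes
    principal components, as done here before applying ICA: each surface
    is reconstructed from its coordinates on the retained PCA modes."""
    X = np.asarray([s.ravel() for s in surfaces])
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    coords = (U * S)[:, :n_modes]          # coordinates on kept modes
    Xf = coords @ Vt[:n_modes] + mean      # low-rank reconstruction
    return [row.reshape(surfaces[0].shape) for row in Xf]
```

Because the filtered data have rank equal to `n_modes`, ICA applied afterwards can return at most that many meaningful modes, which is why only three modes appear in figure 10.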

Fig. 9. Filtered surfaces.

Fig. 10. ICA basis.

5 Conclusion

This study highlights the impact of the adoption of a specific decomposition
method on the result of the analysis of defects. If the goal is to determine the tech-
nological modes of manufactured surfaces, it is clear that decomposition on a
predefined basis complicates the analysis if the technological modes are not in-
cluded in the predefined basis. These methods can provide answers about the fre-
quencies of the measured shape defects, but not beyond. If the goal is to specify
and verify the allowable surface defects, without a priori knowledge of the signa-
ture of the process, then these methods are adapted. In that case, decomposition into
natural vibration modes is the most suitable method, since it is generalizable to
different types of surfaces.
In contrast, multivariate analysis methods open up new ways of analysing
the modes of the defects generated by the production process. Specifically, the In-
dependent Component Analysis (ICA) method could allow the discovery of the
technological modes and provide accurate information on the outcomes of the manufactur-
ing system.
This work remains preliminary and is limited to a study of simulated
surfaces. We must analyse how these methods apply to real surfaces with
"signals" that are less well identified and noisier, with random local defects and
measurement uncertainties.

References

1. R. P. Henke, K. D. Summerhays, J. M. Baldwin, R. M. Cassou, C. W. Brown, "Methods for
Evaluation of Systematic Geometric Deviations in Machined Parts and Their Relationships to
Process Variables", Precision Engineering, 23(1999): 273-292.
2. M. T. Desta, H. Y. Feng, D. O. Yang, "Characterization of General Systematic Form Errors for
Circular Features", Int. Journal of Machine Tools & Manufacture, 43: 1069-1078, 2003.
3. W. Huang, D. Ceglarek, "Mode-based Decomposition of Part Form Error by Discrete-Cosine-
Transform with Implementation to Assembly and Stamping System with Compliant
Parts", Annals of the CIRP, 2002, 21-26.
4. J. Lecompte, O. Legoff, J.-Y. Hascoet, "Technological form defects identification using dis-
crete cosine transform method", Int J Adv Manuf Technol, 51:1033-1044, 2010.
5. S. Samper, F. Formosa, "Form Defects Tolerancing by Natural Modes Analysis", Journal of
Computing and Information Science in Engineering, 7(2007):44-51.
6. G. Le Goic, H. Favrelière, S. Samper, F. Formosa, "Multi scale modal decomposition of pri-
mary form, waviness and roughness of surfaces", Scanning, vol. 33, pp. 1-10, 2011.
7. P.-A. Adragna, S. Samper, M. Pillet, H. Favreliere, "Analysis Of Shape Deviations Of Meas-
ured Geometries With A Modal Basis", Journal of Machine Engineering: Manufacturing Ac-
curacy Increasing Problems - Optimization, Vol. 6, No. 1, pp. 134-143, 2006.
8. R. Harshman, P. Ladefoged, and L. Goldstein, "Factor analysis of tongue shapes", J. Acoust.
Soc. Am., Volume 62, Issue 3, pp. 693-707, 1977.
9. B. M. Colosimo, M. Pacella, "On the use of principal component analysis to identify systemat-
ic patterns in roundness profiles", Quality and Reliability Engineering Int., 2007, 23:707-725.
10. B. Schleich, N. Anwer, L. Mathieu, M. Walter, S. Wartzack, "A Comprehensive Framework
for Skin Model Simulation", Proceedings of the ASME 11th Biennial Conference On Engi-
neering Systems Design And Analysis, 2012.
11. M. Zhang, "Discrete shape modeling for geometrical product specifications: Contributions
and applications to skin model simulation", PhD thesis, ENS Cachan, 2011.
12. N. Anwer, A. Ballu, L. Mathieu, "The skin model, a comprehensive geometric model for en-
gineering design", Annals of the CIRP, vol. 62, pp. 143-146, 2013.
13. M. R. Aguirre, M. G. Linguraru, K. Marias, N. Ayache, L.-P. Nolte, M. Á. González Balles-
ter, "Statistical shape analysis via principal factor analysis", 4th IEEE International Sympo-
sium on Biomedical Imaging: From Nano to Macro, Arlington, VA, pp. 1216-1219, 2007.
14. A. Hyvärinen, E. Oja, "Independent Component Analysis: Algorithms and Applications",
Neural Networks, 13(4-5), pp. 411-430, 2000.
15. A. Ruto, M. Lee, B. Buxton, "Comparing principal and independent modes of variation in 3-
D human torso shape using PCA and ICA", Proceedings of ICA Research Network Interna-
tional Workshop, University of Liverpool, pp. 101-104, 2006.
16. A. T. Puga, J. C. Gavilan, "Unsupervised Calibration for Tire Canvas Inspection by means of
Independent Component Analysis", 9th IEEE International Conference on Emerging Tech-
nologies and Factory Automation, ETFA 2003, Lisbon, 2003.
Analysis of deformations induced by
manufacturing processes of fine porcelain
whiteware
Luca PUGGELLI1*, Yary VOLPE1 and Stefano GIURGOLA2

1 Department of Industrial Engineering, via di Santa Marta, 3, 50139 Firenze (Italy)
2 GRG s.r.l. – Richard Ginori, Viale Giulio Cesare 50, 50019, Sesto Fiorentino, Firenze (Italy)
* Corresponding author. Tel.: +39-055-2758687; fax: +0552758755. E-mail address: luca.puggelli@unifi.it
Abstract During sintering, porcelain changes its phase composition as well as its physical and mechanical properties. The most evident effect of these transformations is a significant change of shape, which is a combination of shrinkage and pyroplastic deformations caused by softening. Both phenomena are induced by temperature, which is in turn influenced by several variable factors that are difficult to predict. Especially for products manufactured on a large scale, the resulting shape of artefacts may vary significantly even within the same batch. Consequently, for companies demanding high quality standards, this variability entails a high number of rejected products. For this reason, the present work aims at investigating the amount of variation introduced by the firing process for an actual industrial product, independently of other (more or less) known variation sources such as those related to materials and forming processes. This could help process engineers focus their attention when trying to improve the quality of final products.
Keywords: Geometric characterization; porcelain manufacturing; Reverse Engineering; scattering analysis.
1 Introduction
In the last century, porcelain products have found wide application in a variety of fields, ranging from electrical insulators to dinnerware, due to their unique properties such as low permeability, high strength, hardness, whiteness and translucence. For ceramic products used as whiteware, the raw material is a mixture typically composed of fine-grained clay (usually kaolin, 50 wt%), flux (usually feldspar, 25 wt%) and filler (usually quartz, 25 wt%).

© Springer International Publishing AG 2017 1063
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_106
Commonly, such a raw material is processed to create the so-called “green body” [1-3] using different forming methods, depending on the geometric complexity of the object to be manufactured. The two most common processes are slip casting (usually adopted for shapes not easily made on a wheel) and isostatic pressing (mostly used for tableware, i.e. for simple and planar geometries). The green body is fired a first time (biscuit firing), which entails heating it at a relatively low temperature (< 1000°C) to vaporize volatile contaminants and start the sintering process. During this firing, all the chemical and physical reactions occur in the solid state and a small shrinkage (around 1-2%) takes place. The artefact is then glazed and subjected to a second heating process (firing), performed in a different kiln, reaching a maximum temperature between 1390°C and 1420°C. The fine porcelain obtained after firing has much higher mechanical strength and is refractory. These changes are the result of sintering reactions that take place at high temperature. Moreover, the artefact undergoes an average shrinkage of 11-12% [4].
During the entire manufacturing process described above, the mechanical characteristics of the green body change significantly. The evolution of the porcelain microstructure towards its final state is well documented in the literature [5-10]. The final composition of fired porcelain consists of 20%-40% mullite, 5%-25% undissolved α-quartz and 50%-70% amorphous phase, mainly composed of potassium alumino-silicate glass. During mullite and glass phase formation, the material gradually becomes so soft and viscous that the artefact deforms under its own weight. In summary, during the whole process from green body to final product, both an average shrinkage of 12-14% and a number of structural deformations (due to the artefact's own weight) occur. These well-known changes are currently compensated by counter-deformations applied to the green body, evaluated by specialized staff. The modifications required are estimated solely on the basis of staff experience and are applied using a trial-and-error approach [13, 14].
Unfortunately, especially for products manufactured on a large scale, the resulting shape of artefacts may vary significantly even within the same batch. Consequently, for companies demanding high quality standards, this variability may entail a significant number of rejected products with consequent production costs. To solve this issue, the best option would be a deep investigation of how process parameters influence the final product shape. However, the high number of factors (mechanical, chemical, thermal, etc.) affecting the process makes such a comprehensive analysis of the entire process practically unfeasible.
In fact, because of the complex interplay among (1) raw materials, (2) processing routes and approaches, and (3) kinetics of the firing process, porcelain is undoubtedly among the most complicated ceramic systems [10]. A number of attempts can be found in the literature; some authors [15] claim that the principal causes of shape variation are due to the arrangement of powder particles and porosity, which can lead to density variations of the green body and anisotropy in the
mechanical behaviour. A model to study the influence of the main process variables (e.g. powder moisture, maximum compaction pressure and maximum firing temperature) on intermediate variables (mass, dry bulk density, size and thickness) and on the final dimensions of porcelain tiles is proposed in [16]. This work provides an estimation of the behaviour of a number of variables at lab scale within a high confidence level; however, the results have not been verified against industrial data. An attempt at simulating the firing process of household porcelain using simulation software packages is presented in [17]. Although interesting results are reached in the mentioned work, strong assumptions are made by the authors (e.g. kiln thermal and radiation losses are considered negligible and high-temperature flue gas temperature fluctuations are ignored), thus limiting the practical validity of the work in industrial applications.
On the basis of the above considerations, the present work aims at investigating the amount of variation introduced by the firing process for an actual industrial product, independently of other (more or less) known variation sources such as those related to materials and forming processes. This could help process engineers focus their attention when trying to improve the quality of final products. If the variability observed in a given porcelain production process is comparable to the one observed in this work, it is plausible that the kiln is actually the main cause; if, on the contrary, the observed variability is considerably higher, other processes may be involved in decreasing the final quality of the product. More in detail, the work investigates the manufacturing process of Richard Ginori, a well-known Italian company producing porcelain whiteware since 1735, with particular reference to the dinner plate of the collection “Antico Doccia” (see Figure 1).

Fig. 1. Dinner plate "Antico Doccia" by Richard Ginori
2 Sample preparation
With the aim of investigating the amount of variation introduced by the firing process, a preliminary step is required, consisting in preparing a set of green bodies obtained using a given composition and the same isostatic pressing. Accordingly, a set of 6 samples has been manufactured in the form of green bodies. To ensure that the isostatic pressing process does not induce noticeable variability in the final shape of the green body, each sample has been 3D scanned. Though scanning techniques particularly well suited to free-form geometries requiring multiple acquisitions with a substantial lack of overlapping regions, such as the ones targeted in this work, could have been used [18, 19], a conventional 3D scanner has been preferred due to equipment availability. More in detail, a laser stripe triangulation scanner “RS1” mounted on the anthropomorphic "Romer Absolute" arm has been selected. Such a scanner provides a volumetric accuracy of ±0.079 mm within a measurement range of 1.2 m and a point repeatability lower than 0.044 mm, according to the ASME B89.4.22 certification. To ensure measurement repeatability, samples are locked on a leveling table calibrated to ±1 mm/10 m and are manually acquired using the scanner. The resulting scanned models have been compared according to a 4-step procedure:
Region segmentation: most Reverse Engineering software packages provide region segmentation tools, which are able to group sets of poly-faces that belong to one feature. Regions can be generated automatically by analysing the curvature of the mesh [20-22].
Revolution axis alignment: after segmentation, it is possible to retrieve the revolution axis z_REV by analysing the differential geometric properties of the main revolution regions [23]: the upper and lower surfaces of the rim, and the upper and lower surfaces of the well. The model is then aligned with respect to the global coordinates (within the CAD environment), so that z_REV coincides with the z axis (see Figure 2a).
Fig. 2. (a) Original position (green) and z-axis aligned position (cyan); (b) Alignment of the base: original position (orange) and final position (cyan).
Base plane alignment: the base plane is defined as the bottom plane of the bounding box that encloses the 3D model, with normal parallel to z_REV. Once such a plane is detected, the 3D model can be translated in order to make the base plane coincident with the xy plane (from now on called the horizontal plane, see Figure 3).
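Steps 2 and 3 amount to a rigid transformation of the scanned mesh: a rotation bringing the estimated revolution axis onto z, followed by a translation dropping the base plane onto xy. As a rough illustration (not the Reverse Engineering software actually used here; `rotation_to_z` and `normalize_pose` are hypothetical helper names), this pose normalization could be sketched in NumPy as:

```python
import numpy as np

def rotation_to_z(axis):
    """Rotation matrix mapping the unit vector `axis` onto the global z axis
    (Rodrigues' formula; the degenerate case axis = -z is not handled)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(axis, z)              # rotation axis (unnormalized)
    c = float(np.dot(axis, z))         # cosine of the rotation angle
    if np.isclose(c, 1.0):             # already aligned with z
        return np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def normalize_pose(vertices, rev_axis):
    """Rotate an Nx3 vertex array so the estimated revolution axis coincides
    with z, then translate the mesh so its base plane lies on z = 0."""
    v = vertices @ rotation_to_z(rev_axis).T
    v[:, 2] -= v[:, 2].min()           # base plane -> horizontal plane
    return v
```

After this step, all samples differ only by the yaw rotation handled next.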
Yaw orientation: after steps 1-3 are accomplished for the entire set of samples, all samples share the same position and orientation except for a rotation around the z axis (i.e. the yaw angle). Even though this rotation could be resolved by aligning, for each sample, the plane passing through the z axis and a single selected relevant point (e.g. on the garnish), in the present work three such planes are used for each sample and a best-fitting alignment is carried out. This minimizes possible errors in selecting the same point on different dishes (see Figure 3). Finally, it is possible to proceed with the shape comparison. This has been done by measuring the mesh deviation among the reconstructed models, i.e. the minimum distance between the two surfaces, point by point.
Fig. 3. Yaw orientation: the three reference planes and their respective points.
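For a rotation constrained to the z axis, the best-fitting yaw over a small set of corresponding reference points has a closed-form least-squares solution. A minimal sketch (hypothetical helper names; it assumes the reference points of the two samples are given in corresponding order, after axis and base-plane alignment):

```python
import numpy as np

def best_fit_yaw(p, q):
    """Least-squares angle of the rotation about z that best maps the
    reference points p onto their counterparts q (Nx3 arrays); only the
    x and y coordinates matter for a rotation about z."""
    num = np.sum(p[:, 0] * q[:, 1] - p[:, 1] * q[:, 0])
    den = np.sum(p[:, 0] * q[:, 0] + p[:, 1] * q[:, 1])
    return np.arctan2(num, den)

def rotate_z(vertices, theta):
    """Apply a rotation of `theta` radians about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return vertices @ R.T
```

Rotating one sample by `best_fit_yaw(p, q)` then brings all scans into a common reference frame for the comparison.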
In Figure 4 two examples of comparison between 3 aligned scans are depicted.
Fig. 4. (a) Geometric deviation of green bodies - sample #1 vs sample #2; (b) Geometric deviation of green bodies - sample #1 vs sample #3.
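The point-by-point deviation metric described above can be sketched as follows. This is a brute-force vertex-to-vertex approximation in plain NumPy (`mesh_deviation` is a hypothetical name), whereas a Reverse Engineering package computes true point-to-surface distances with a spatial index:

```python
import numpy as np

def mesh_deviation(verts_a, verts_b):
    """Mean and standard deviation of the per-point distance from each
    vertex of scan A to the closest vertex of scan B. Brute-force O(N*M)
    nearest-neighbour search, adequate only for small meshes."""
    diff = verts_a[:, None, :] - verts_b[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)
    return d.mean(), d.std()
```

The AVG | STD pairs reported in the tables of this paper are exactly such mean/standard-deviation summaries of the per-point deviation between two aligned scans.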
Table 1. Green bodies comparison: AVG and STD of geometric deviation [mm].
#2 #3 #4 #5 #6
#1 0.150 | 0.127 0.156 | 0.125 0.152 | 0.124 0.165 | 0.120 0.148 | 0.120
#2 0.165 | 0.123 0.160 | 0.122 0.155 | 0.123 0.145 | 0.112
#3 0.135 | 0.119 0.147 | 0.124 0.144 | 0.114
#4 0.150 | 0.128 0.134 | 0.121
#5 0.149 | 0.117
As visually deduced, and further demonstrated in Table 1, the 6 samples have almost identical shapes, i.e. no relevant variations can be ascribed to the forming process.
3 Analysis of the biscuiting process
To exclude a possible influence of the biscuiting process on the final product shape, the same analysis performed in Section 2 (i.e. scanning plus comparison) has been carried out on the set of 6 biscuits. An example is shown in Figure 5a.

Fig. 5. (a) Geometric deviation of biscuits - sample #2 vs sample #3; (b) Geometric deviation - green body vs biscuit - sample #2.
Also in this case, as confirmed by Table 2, the maximum AVG and STD values are quite small (even if higher than for the green bodies); this confirms that the biscuiting process too has practically no influence on the shape variation.
Since the CAD models of the 6 green bodies and their respective 6 biscuits are available, it is also possible to demonstrate that no pyroplastic deformation occurs during the biscuiting process (i.e. only shrinkage happens) and that the shrinkage value obtained using standard dilatometer tests (equal to 1.1%) is practically the same as the one retrievable from the scan comparison. To demonstrate these statements, the green body of each sample has been scaled by a factor equal to the shrinkage value and the resulting model has been compared with its corresponding biscuit.
In Figure 5b the geometric deviation of the two corresponding polygonal models for sample #2 is depicted. The median values of AVG and STD are respectively 0.156 mm and 0.124 mm, thus demonstrating that the biscuit is subjected only to shrinkage.
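The scaling check can be reproduced by shrinking each green-body scan about its centroid before comparing it to the corresponding biscuit. A minimal sketch, assuming a uniform linear shrinkage (the 1.1% value comes from the dilatometer test; `apply_shrinkage` is a hypothetical name):

```python
import numpy as np

def apply_shrinkage(vertices, shrinkage=0.011):
    """Uniformly scale a green-body mesh (Nx3 array) about its centroid
    by the measured linear shrinkage: the shape the biscuit should have
    if only shrinkage, and no pyroplastic deformation, occurred."""
    c = vertices.mean(axis=0)
    return c + (1.0 - shrinkage) * (vertices - c)
```

Comparing the scaled model against the biscuit scan with the deviation metric described above then isolates any residual (pyroplastic) contribution.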
This can also be visually deduced by analysing the overlap between the two models in the areas where the most relevant deformations occur during the manufacturing of porcelain dishes, i.e. the drop of the well and the bend of the rim (Figure 6). Also in these areas, the scaled green body and the biscuit are almost identical.
Table 2. Biscuits comparison: AVG and STD of geometric deviation [mm].

#2 #3 #4 #5 #6
#1 0.169 | 0.128 0.196 | 0.175 0.183 | 0.155 0.188 | 0.146 0.135 | 0.082
#2 0.168 | 0.143 0.175 | 0.132 0.190 | 0.171 0.142 | 0.103
#3 0.168 | 0.129 0.156 | 0.134 0.152 | 0.110
#4 0.159 | 0.135 0.154 | 0.116
#5 0.158 | 0.114
Fig. 6. (a) Drop of the well: green body (green) and biscuit (yellow); (b) Bend of the rim: green body (green) and biscuit (yellow).
4 Variability induced by firing process
On the basis of the above considerations (Sections 2 and 3), no significant variations have been detected among either the green bodies or the biscuits. Consequently, since each green body has been obtained using the same composition and the same isostatic pressing, it is possible to assume that all the dishes had almost the same characteristics before the firing process.
Conversely, the comparison between the final products shows significant geometric differences – noticeable by eye – that cannot be caused solely by reconstruction errors, as depicted in Figure 7.
Fig. 7. Final shape comparison: sample #1 vs sample #2.
In particular, the most relevant deformations are visible on the drop of the well
(Figure 8a) and on the bend of the rim (Figure 8b).
Fig. 8. (a) Drop of the well: sample #1 vs sample #2; (b) Bend of the rim: sample #1 vs sample #2.
These visually deduced differences are further demonstrated in Table 3, where a significant scattering among production pieces after the firing process has been measured. For these data, the median values of AVG and STD are respectively 0.551 mm and 0.468 mm.
Table 3. Final products comparison: AVG and STD of geometric deviation [mm].
#2 #3 #4 #5 #6
#1 0.456 | 0.341 0.564 | 0.397 0.633 | 0.456 0.498 | 0.407 0.455 | 0.400
#2 0.344 | 0.299 0.615 | 0.501 0.576 | 0.468 0.717 | 0.596
#3 0.713 | 0.577 0.599 | 0.482 0.767 | 0.599
#4 0.672 | 0.554 0.647 | 0.519
#5 0.473 | 0.395
Figure 9 illustrates the radar chart of the AVG values evaluated during this
study.
Fig. 9. Scattering among green bodies (orange), biscuits (blue) and final products (green).
Looking at the chart, it is particularly evident that the biscuit and green-body data are confined between 0.1 and 0.2 mm, while the final-product data are spread over a significantly larger range (0.3-0.8 mm). In light of these considerations, it is possible to affirm that the process with the greatest influence on the generation of scattering is firing.
5 Conclusions
The present paper investigated the amount of shape variation introduced by the firing process in an actual industrial production of porcelain whiteware, with the final aim of demonstrating that the kiln is actually the main cause of quality loss in production. Therefore, given a porcelain production similar to the one examined here, if the observed variability is comparable to the one measured in this paper, it is plausible that the kiln is indeed the main cause. On the other hand, if the variability of the final shape is considerably higher, other parameters (e.g. composition, granulometry, humidity) or processes (e.g. isostatic pressing and biscuiting) may also be responsible for decreasing the final quality of the product.
Future work will address the causes of shape variability during firing, such as the position of the product inside the kiln, the number of products processed at the same time, flame temperature variation, etc., so as to provide better process control and thus limit the number of rejected products. The complete comprehension of the variation sources will eventually pave the way for the implementation of an automatic design tool (similar to the ones proposed in [24] for different applications) capable of identifying the most appropriate geometric and process parameters in order to minimize and/or compensate the final deformation.
References
1. Takao Y. and Hotta T., 2002, “Microstructure of alumina compact body made by slip casting”, Journal of the European Ceramic Society, pp. 397-401.
2. Bitterlich B., Lutz C. and Roosen A., 2002, “Rheological characterization of water-based slurries for the tape casting process”, Ceramics International, pp. 675-683.
3. Young A.C., Omatete O.O., Janney M.A. and Menchhofer P.A., 1991, “Gelcasting of Alumina”, Journal of the American Ceramic Society, pp. 612-618.
4. Carfagni M., Governi L., Meiattini D. and Volpe Y., 2008, “A new methodology for computer aided design of fine porcelain whiteware”, Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference 2008, pp. 151-158.
5. Klein, A. A., 1916, “Constitution and Microstructure of Porcelain”, National Bureau of Standards Tech. Paper No. 3–38.
6. Rado, P., 1971, “The Strange Case of Hard Porcelain”, Trans. J. Br. Ceram. Soc (70), pp.
131-139.
7. S. B. Vazquez, J. C. M. Velazquez, J. R. Gasga., 1998, “Alumina Additions Affect Elastic
Properties of Electrical Porcelains”, Bull. Am. Ceram. Soc., 77 [4] , pp. 81–85.
8. Y. Kobayashi, E. Kato., 1998, “Lightening of Alumina-Strengthened Porcelain by Controlling
Porosity”, J. Jpn. Ceram. Soc., 106 [9] , pp. 938-941.
9. Iqbal Y., Lee W.E., 2000, “Microstructural Evolution in Triaxial Porcelain”, Journal of the
American Ceramic Society, 83, pp. 3121–3127.
10. Carty W. M., Senapati U., 1998, “Porcelain Raw Materials, Processing, Phase Evolution,
and Mechanical Behaviour”, J. Am. Ceram. Soc. 81 [1], pp. 3-20.
11. Lundin, S.T., 1954, “Electron Microscopy on Whiteware Bodies”, Florence : s.n., Transac-
tions of the IVth International Ceramics Congress.
12. Schüller K. H., 1964, “Reactions between Mullite and Glassy Phase in Porcelains”, Trans. Br. Ceramic Society.
13. Martín-Márquez J., Rincón J. M., Romero M., 2010, “Mullite development on firing in porcelain stoneware bodies”, Journal of the European Ceramic Society 30, pp. 1599-1607.
14. Emiliani T., 1971, “La tecnologia della ceramica”, F.lli Lega.
15. Henderson R.J., Chandler H.W., Akisanya A.R., Barber H., Moriarty B., 2000, “Finite element modelling of cold isostatic pressing”, Journal of the European Ceramic Society 20, pp. 1121-1128.
16. Santos-Barbosa D., Hotza D., Boix J. and Mallol G., 2013, “Modelling the Influence of Manufacturing Process Variables on Dimensional Changes of Porcelain Tiles”, Advances in Materials Science and Engineering, Vol. 2013.
17. Zhang Z.Z., Feng J.H. and Liu W.G., 2015, “Firing Simulation Studies of Household Porcelain in Shuttle Kilns”, Advances in Computer Science Research.
18. Barone, S., Paoli, A., Razionale, A.V., 2012, “3D Reconstruction and Restoration Monitoring of Sculptural Artworks by a Multi-Sensor Framework”, Sensors, 12, no. 12, pp. 16785-16801.
19. Barone, S., Paoli, A., Razionale, A. V., 2013, “Multiple alignments of range maps by active stereo imaging and global marker framing”, Optics and Lasers in Engineering, Volume 51, Issue 2, pp. 116-127.
20. Di Angelo L., Di Stefano P., 2015, “Geometric segmentation of 3D scanned surfaces”, Computer-Aided Design, vol. 62, pp. 44-56, ISSN: 0010-4485.
21. Governi, L., Furferi, R., Puggelli, L., Volpe, Y., 2013, “Improving surface reconstruction in shape from shading using easy-to-set boundary conditions”, International Journal of Computational Vision and Robotics, 3 (3), pp. 225-247.
22. Governi, L., Furferi, R., Palai, M., Volpe, Y., 2013, “3D geometry reconstruction from orthographic views: A method based on 3D image processing and data fitting”, Computers in Industry, 64 (9), pp. 1290-1300.
23. Di Angelo L., Di Stefano P., Morabito A. E., 2015, “A robust method for axis identification”, Precision Engineering, vol. 39, pp. 194-203.
24. Volpe, Y., Governi, L. and Furferi, R., 2015, “A computational model for early assessment of padded furniture comfort performance”, Human Factors and Ergonomics in Manufacturing, 25, pp. 90-105.
Characterization of a Composite Material
Reinforced with Vulcanized Rubber
Tobalina, D.1; Sanz-Adan, F.1*; Lostado-Lorza, R.1; Martínez-Calvo, M.1; Santamaría-Peña, J.1*; Sanz-Peña, I.1; Somovilla-Gómez, F.1

1 University of La Rioja, Mechanical Engineering Department, Logroño, 26004, La Rioja, Spain.
* Corresponding author. Tel.: +0034 941299533; fax: +0034 941299727. E-mail address: felix.sanz@unirioja.es
Abstract The paper proposes a method to characterize the adhesion of a thermoplastic matrix composite material that is reinforced with continuous fibers and over-injected vulcanized rubber. The behaviour of the material based on the thermoplastic matrix and the adhesive is studied. In addition, the combination of factors that provides the greatest possible adhesion of the rubber to the composite is analyzed. Test methods are also analysed and suggested to characterize the adhesion force of the vulcanized rubber to the thermoplastic composite.

Keywords: Continuous Fiber Thermoplastic, vulcanized rubber, adhesion,


adhesiveness.

1 Introduction
Combatting emissions has become a priority of the European Commission [1]. In recent years, fiber-reinforced thermoplastic composites (CFT, CFTR or TPFC) have been developed, becoming more attractive due to their advantages over their more conventional thermoset counterparts. The advantages include superior chemical resistance, improved damage tolerance, more flexible storage conditions, and recyclability. Currently, thermoplastics are used primarily with discontinuous fiber reinforcements, such as chopped glass or carbon/graphite [2, 3 and 4]. They have the advantage of being manufactured in automated industrial processes while maintaining the same mechanical properties and lightness. However, their use is still in its early stages. Little data is available, and most of it comes from manufacturers' very specific in-house studies. No generic scientific studies have been found, either involving the evaluation of different materials or from the investigations of any manufacturer.

© Springer International Publishing AG 2017 1073
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_107

In a first approach, it was thought that the most suitable materials for the matrix of the composite would be polyamide and polypropylene. However, several tests with both of these materials are necessary before choosing the final material.
Future cars will probably have a large number of composite components [5]. To achieve that, it is necessary to classify the technological properties of these materials in a reliable way, as their estimated values currently lack consistency and reliability. Figure 1 shows the manufacturer's properties of Tepex products [7]:

Fig. 1. TEPEX products range
The properties of different thermoplastic composites that the manufacturers provide depend on many factors, including the percentage of fibers, the applied load and the direction of the fibers. Therefore, these data are used to obtain an initial, overall idea of the material's possible behaviour. However, not all of its technological properties are characterized, nor have the performance changes after the forming process been determined. Therefore, there is a need to conduct specific characterization tests for each particular use situation, according to the requirements of the final product.
The current paper describes a study for characterizing the behaviour of two types of composite materials (PA6 and PA66) that have been reinforced with vulcanized rubber. The characterization of the rubber/composite adhesion is described in Section 2, which follows. The results of different rubber/composite adhesion characterization tests are compared in Section 3, where the best test to use when calculating the rubber/composite adhesion is also suggested.
2 Characterization of adherence between rubber/composite
One of the most significant properties of composite materials with a thermoplastic matrix is their vibration isolation by energy absorption. This characteristic makes them ideally suited to different applications, shock absorbers and anti-vibration systems such as engine mounts or silentblocks (Fig. 2) [8].
Fig. 2. Vibration insulators in the automotive industry – CMP Automotive Group© [8]
For this reason, it is necessary to know the static and dynamic properties, fatigue performance, allowable tightening torque, minimum permanent strain, minimum breaking/adherence load and types of failure [9].
This paper addresses only the specific adhesion of continuous glass fiber and carbon fiber thermoplastics that have been reinforced with vulcanized rubber. The results of appropriate tests were analyzed to determine the adhesion and the procedure to follow to ensure that the results will be reliable and consistent [10].
2.1 Adherence test: General aspects.
For anti-vibration and shock-absorbing products, the rubber is vulcanized over a stiff component (steel or aluminum). A perfectly rigid substrate/rubber bond is necessary so that the fatigue performance and dynamic behaviour are suitable. This implies that the tested part does not fail before the predicted time due to the release of the rubber from the metal.
Adhesion failure terminology [1, 12, 13, 14, 15]:
R indicates that the failure is in the rubber.
RC indicates a failure at the rubber-cover cement interface.
CP indicates a failure at the cover cement-prime cement interface.
M indicates a failure at the metal-prime cement interface.
The percentages of the various types of failure may be estimated as in the following example: R-50 and RC-50 means that roughly 50% of the area showed failure in the rubber and the other 50% showed failure at the rubber-cover cement interface.
To achieve proper adhesion, three factors are necessary: proper adhesive selection, correct surface finish of the base material, and the process used to apply the adhesive.
2.2 Adhesion test: Contributions
Given the importance of the adhesion test for this type of material and application, special test conditions have been established to achieve optimal results that meet the requirements. The tests have been executed with a new composite product family that is based on a thermoplastic matrix reinforced with bidirectional woven continuous fibers. This type of configuration is the most appropriate one to replace metal parts due to its mechanical properties and suitability for mass production. Fiberglass with PA6 and carbon fiber with PA6.6 (TEPEX) are the materials that were used in the tests described in this paper (Fig. 1) [7]. Different bonding agents were sprayed over the composite and steel test samples prior to vulcanization.
Two rubbers with different compositions and ultimate strengths were tested. However, in order to avoid uncertainty, the tests were conducted with two industrially processed rubbers that are currently used in automotive components. Based on the rheometry of the rubber to be used and the injected volume for each type of test sample, the vulcanization time and temperature were set to optimal values to avoid filling failures and unwanted rubber behaviour. A single-cavity mould was used to vulcanize the test specimens. The test machines that were used came from the manufacturer Zwick. Rubber formulation, adhesive, vulcanization of the parts, as well as the tests, were done by the technical centre department of CMP Automotive Group [8].
The composite manufacturers provide no information about the minimum interlaminar stress of the sheets, composite/rubber adhesion properties, pre-processing methods or recommended adhesives. The tests that were conducted show the behaviour of the composite introduced into the mould, with the rubber injected, vulcanized and adhering to the composite by means of the industrial adhesives previously applied. These tests indicate whether an adequate bond between the vulcanized rubber and the composite can be achieved. They also provide information about the composite's adhesion and interlayer values. All of these tests give us data that are not currently available and are essential when designing parts for this product range.
3 Adhesion test: Results
There are different test methods to determine the adhesive properties of rubber with rigid substrates. Some of these are indicated in the ASTM D429 [11] and EN ISO 14130 [13] standards. These methods cover different procedures for testing the static adhesion strength of rubber to rigid materials (in most cases metals). Since there is no specific standard that defines test methods to determine the tensile adhesion properties of continuous fiber reinforced thermoplastic composites, ASTM D429 has been used as a reference. However, none of the methods described in the standard can be applied directly to composite sheets, because threaded elements cannot be made from these composite materials.
Characterization of a Composite Material ... 1077

Therefore, a new method has been developed under the same test conditions that are
specified in method A, but modifying the test sample to adapt it to the composite
sheet's limitations.

3.1 First test

To mount the vulcanized test specimen in the test machines, a threaded steel insert
[16] was initially mounted in the composite sheets. The insert cylinder base was
Ø18 mm in diameter (Fig. 3). Fiberglass and carbon fiber test samples with the
dimensions in Fig. 4 were produced. The samples of thermoplastic composite
reinforced with continuous bidirectional fibers were initially treated in a
tetrachloroethylene bath to clean and prepare the surface before applying the
adhesive. In the first test, the applied adhesive was a double layer of Cilbond 24.
This type of adhesive is the one used for polyamide inserts where no primer is
used. Instead, a double black layer (cover) is applied.
In order to eliminate the influence of the steel insert on the bonding load, no
adhesive and no previous treatment were applied to it. Thus, the
registered value was due only to the adhesion of the composite to the rubber. The
test specimens were vulcanized at 165 °C for 6 min with NR600014 rubber (CMP
Automotive Group nomenclature). As the objective of this test was to determine
the bonding limits of the composite/rubber, NR600014 rubber was chosen because
of its high ultimate strength (26 MPa). Thus, during the tests, the composite or the
adhesive layer would break before the rubber would. Initially, a preload of 50 N
was applied to tighten the sample before starting the test. Once it was preloaded,
an axial displacement at constant speed was applied.
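The procedure above (preload, then constant-speed axial displacement until rupture) can be sketched as a small data-reduction step: the bonding load reported for a specimen is the maximum force recorded after the preload is reached. The following is an illustrative sketch only, not code from the paper; the record format and the synthetic data are assumptions.

```python
def max_bonding_load(records, preload_n=50.0):
    """Return (displacement_mm, force_n) at the maximum recorded force,
    considering only samples taken after the preload level is reached.
    `records` is a list of (displacement_mm, force_n) pairs (hypothetical format)."""
    loaded = [(d, f) for d, f in records if f >= preload_n]
    if not loaded:
        raise ValueError("no samples above the preload level")
    # The bonding load is the peak force before rupture.
    return max(loaded, key=lambda pair: pair[1])

# Synthetic force-displacement trace: force rises, then drops at rupture.
data = [(0.0, 10.0), (0.5, 55.0), (1.0, 4000.0), (1.5, 9800.0), (2.0, 3000.0)]
disp, force = max_bonding_load(data)
print(disp, force)  # -> 1.5 9800.0
```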

Fig. 3. Specimen A (First test)

In Fig. 4 (and Fig. 10, 1st row), the behavior of a composite test specimen is
compared to that of a shot-blasted steel specimen. The ultimate strength of the
shot-blasted steel sample is not as high as it should be. This is because a “primer”
and a “cover” are necessary to achieve an appropriate bonding load between steel
and rubber. In this case, the adhesion process that was used was the same as that
1078 D. Tobalina et al.

for the composite samples (double layer of C24). The three samples have the same
dimensions, rubber and vulcanization parameters.

Fig. 4. Results of the first test

The metal insert caused breakage of the composite's upper layer before debonding
or rubber breakage. Although the shot-blasted steel specimen was not properly
glued, it had a higher bonding load than the composite specimens. This proved that
this test is not valid for analyzing the bonding properties.

3.2 Second test

The concept of the second test is the same as that of the previous one. However, the
insert diameter was increased to Ø34 mm to avoid breakage of the
composite layer. The same materials were used as in the first test. No adhesive
was applied to the metallic insert to avoid affecting the bonding of the
rubber/composite.
This test allows us to identify the limitations of the adhesion of the
composite/rubber and also to obtain an approximate value of the composite's
interlayer strength, especially for the carbon fiber. As can be seen in Fig. 5 (and
Fig. 10, 2nd row), the sample breakage was 75% caused by the rupture of the
subsequent composite layers.

Fig. 5. Results of the second test.

This separation between layers was the primary cause of the breakage. However, the
adhesive continued to fulfill its purpose. The fiberglass performed better, and most
of the rupture occurred in the rubber part.

3.3 Third test

The test specimen was mounted in the machine by using special tooling (Fig. 6)
fixed to the outer area of the upper and lower composite sheets. All the rubber area
is in contact with or bonded to the composite, without a metal insert between them.

Fig. 6. Tooling for test samples in the third test

Parameters and test conditions were identical to those in the previous tests .

Fig. 7. Results of the third test (type 1 parameters)

After analyzing the test results (Fig. 7 and Fig. 10, row 3.1), it could be seen that
the third test was more reliable than the previous tests. Therefore, more samples
were tested using different adhesives (Fig. 8a, Fig. 9a and Fig. 10, row 3.2). Other
parameters and treatments were identical to those used in the previous tests.

Fig. 8a. Results of the third test (Type 2 parameters).

However, there was still an interlayer failure at approximately 10 kN. It was
decided to test specimens with the same bonding agent combination, composite
materials and sample treatment, but changing the rubber type to NR+BR (natural
rubber (NR) and polybutadiene rubber (BR)) with a lower hardness and a lower
tensile strength (Fig. 10, row 3.3).

Fig. 8b. Results of the third test. Type 3 parameters.

In this case, although the load value was similar to the test results with type 1
and type 2 parameters, the breakage differed completely. The adhesion failures
were “R” and “RC” (Fig. 8b; Fig. 9b; Fig. 10, row 3.3). This result was expected
since the tensile strength of rubber 650500 is lower than that of rubber 600014.

Fig. 9. Results of the third test. a) Type 2 breakage. b) Type 3 breakage.

Fig. 10. Test results



4 Conclusions

The thermoplastic PA 6.6 matrix reinforced with carbon fiber showed an
interlaminar failure in most of the cases. This occurred in all of the
previous situations in a higher percentage than with the fiberglass PA6 material.
By keeping the same combination of composite/adhesive, but changing to a
type of rubber with a lower tensile strength, the maximum breakage load
does not change. However, the displacements increase and the failure mode is
completely different. Based on the results, it is concluded that the third test
method is the appropriate one to use to determine the adhesion values of the
composite material/rubber. Nevertheless, it seems that the maximum load will
never exceed 11 kN for either composite material, because this is the load that causes
interlaminar failure. The tests also showed that it is desirable to degrease the
bonding surface before the bonding process and to apply a double layer of C24
adhesive. A higher surface roughness improves the maximum load, but in this
case, it will never exceed 11 kN.

References

1. Reducing CO2 emissions from passenger cars. EU. (http://ec.europa.eu)
2. Materials group. University of Cambridge. (http://www-materials.eng.cam.ac.uk)
3. Martin Alberto. Introduction of Fibre-Reinforced Polymers – Polymers & Composites:
Concepts, Properties and Processes. INTECH Science (2013) Chap. 1
4. R. Thije, R. Akkerman. A multi-layer triangular membrane finite element for the forming
simulation of laminated composites. Composites: Part A (2009) 739–753.
5. Opportunities in Global Thermoplastic Composites Market 2012-17: Trends, Forecast and
Opportunity Analysis – Lucintel group. (http://www.lucintel.com)
6. Composites avanzados y aplicación a elastómeros [Advanced composites and their
application to elastomers] (2014). (http://igestek.com)
7. TEPEX®: automotive applications – Bond-Laminates. (http://bond-laminates.com)
8. CMP Automotive Group. (http://www.cauchometal.com/)
9. William VM, Endurica LLC, David Ostberg, US Army TARDEC. Fatigue Damage Analysis
of an Elastomeric Tank Track Component. SIMULIA Community Conference (2012) 1-14.
10. Sivaraman R., Roseenid T., Siddanth S. Reinforcement of Elastomeric Rubber Using Carbon
Fiber Laminates. International Journal of Innovative Research in Science, Engineering and
Technology 2.7 (2013) 3123-3130.
11. ASTM D429-03 (2006): Standard Test Methods for Rubber Property – Adhesion to Rigid
Substrates. (http://www.astm.org/)
12. ASTM D3039/D3039M-08 (2008): Standard Test Method for Tensile Properties of Polymer
Matrix Composite Materials. (http://www.astm.org/)
13. EN ISO 14130 (1997): Fibre-reinforced plastic composites. Determination of apparent
interlaminar shear strength by short-beam method. (http://www.iso.org/)
14. EN ISO 527-4 (1997): Plastics. Determination of tensile properties. Part 4: Test conditions
for isotropic and orthotropic fibre-reinforced plastic composites.
15. EN ISO 14126 (2001): Fibre-reinforced plastic composites. Determination of compressive
properties in the in-plane direction.
16. Plastic inserts. Spirol International Corp. © 2015. (http://www.spirol.com.mx)
Definition of geometry and graphics
applications on existing cosmetic packaging

Anna Maria BIEDERMANN¹*, Aranzazu FERNÁNDEZ-VÁZQUEZ¹, María ELIPE¹

¹ Department of Design and Manufacturing Engineering, María Luna 3, Zaragoza, 50018, Spain.
* Tel.: +34 976 76 00 00; fax: +34 976 76 22 35; E-mail: anna@unizar.es

Abstract The paper presents a study defining the geometry of product packaging
and its graphics applications. The methodology is based on the analysis and
segmentation of existing products present in the market. The case presented
focuses on the packaging of eye contour creams in the Spanish market, but the
study methodology can be transferred to any other product packaging, both in the
field of cosmetics and in any other sector. The segmentation has been made based
on product range, and has led to identifying the packaging types, colors, opacity,
graphic applications and typographies characteristic of each range. The
results show that multiple variables differentiate the packaging of products belonging
to different ranges and that it is possible to design a characteristic packaging type
for each price segment. The conclusions drawn from the application of this
methodology could be used by cosmetic companies to adjust the presentation
of their products according to their market positioning.

Keywords: packaging; geometric variables; graphics applications; consumer;


market research.

1 Introduction

In a market as competitive as cosmetics, product packaging and aesthetics
take a leading role in marketing and in the processes of interaction between brand
and consumer [1]. Packaging and graphic elements become effective ways for brands
to convey their product and the associated brand values to the consumer [2]. In
a context where packaging can become an element that decisively influences the
consumer to make a purchase, it must communicate the right message, fulfilling
the needs and expectations of the user to encourage the purchase of the product [3-
6]. There are multiple variables that influence consumer perception, such as
colors, with their impact on the construal level [7], the alignment of text information [8],
material surface properties [9] and shape [10]. Visual stimuli affect
consumers' buying behavior [11]. That is why packaging, apart from its utility

© Springer International Publishing AG 2017 1083


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_108
1084 A.M. Biedermann et al.

function of protecting the product in the logistics and commercialization stages,
which is already taken for granted [12], serves to attract the attention of potential
consumers, influencing their willingness to buy and even increasing product
acceptance once purchased [13]. To increase selling possibilities,
companies structure the market using different criteria such as gender [14] or age
[15]. The influence of packaging can be measured by taking into consideration its
impact on consumers' memory [16], attention [17] and processing fluency at
the time of judgment [18].
The definition of package, following Ampuero and Vila [19], is a container that
is in direct contact with the product itself, facilitating handling and
commercialization, and protecting, preserving and identifying the product. The
packaging that is in direct contact with the product is called primary packaging,
and the one that contains the primary packaging with the aim of facilitating storage
and distribution is called secondary packaging.
Focusing on cosmetics, it is important to point out that this market generates more
than fifteen thousand direct jobs [20] and is an important part of many countries'
economies [21]. Cosmetics for skin care, where eye contour creams (the object of
study of this paper) must be included, have had the largest market share throughout
the last four years, increasing from 31% up to 35.3%, while the market share has
decreased or remained unchanged in the last year in the remaining categories of
cosmetics [22]. Taking into account the great importance of this industry in the
economy, it can be supposed that any factor that might encourage purchase shall
be considered. Since packaging is an important factor in the purchase process, as
has been exposed, it was considered interesting to analyze and systematize
some of the packaging characteristics belonging to different ranges.
The research featured in this paper analyzes existing packaging depending on
variables that might influence consumers, on the basis of parameters already
known such as price, brand, type, and consumer gender or age, but also exploring
the existence of other, unknown aspects that could also influence the purchase
process. This paper is structured as follows: the objectives and the context in which
the experience has been developed, the analyzed variables and the applied tools are
described in the methodology section. It is followed by the most significant results
of the research. Finally, the most important conclusions drawn from the research
are presented.

2 Methods

The objective of this paper is to structure and systematize the characteristics of
cosmetic packaging present in the Spanish market, in order to create packaging
types that reflect the variables belonging to different product ranges.
Firstly, it was necessary to establish the parameters for the study of the
characteristics of the packaging and its distinguishing features. To this end a
number of variables were selected and grouped into four broad categories:
• Packaging: dimensions, materials, forms and aesthetics.

• Product: price, content, point of sale.
• Graphics: fonts, their color, claims and language.
• User: gender and age.
The results of the study of the variables have been structured depending on the
range to which the studied product belonged. Different ranges for the purpose of
this study were established taking into consideration the brand positioning and the
characteristics of the point of sale.
Secondly, a market survey was conducted, with the following objectives:
• Acquire more objective and in-depth information about the ranges of eye contour
creams in the Spanish market.
• Study the characteristics of the packaging and its distinguishing features.
• Get information on technical details of the packaging such as dimensions, prices,
content, volume, etc.
• Draw conclusions and relevant data to generate proposals for cream packaging
adjusted to the market segment in which the product is to be positioned.
The study was made based on a selected sample (100 items) of eye contour
creams present on the Spanish market. Data were collected through visits to
various points of sale, measurements in situ, web searches and reports from
manufacturers. After statistical analysis of the data obtained in the different
categories, recognizable container types for the different market segments were
defined, determining their volume, finishes and applications. With this information,
every “type” packaging has been modeled with Inventor and rendered with
KeyShot.

3 Results

The research results are presented in four categories of variables referring to
packaging, product, graphics and user. To be able to define the “type” designs for
each range, the following classes have been established:
• Low range: price lower or equal to 2 €/ml.
• Mid range: price higher than 2 €/ml and lower or equal to 6 €/ml.
• High range: price higher than 6 €/ml and lower or equal to 12 €/ml.
• Luxury: price higher than 12 €/ml.
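The range boundaries above form a simple classification rule on unit price. A minimal sketch of that rule (the function name is ours, not from the paper):

```python
def price_range(unit_price_eur_per_ml):
    """Classify an eye contour cream into the study's price ranges (€/ml)."""
    if unit_price_eur_per_ml <= 2:
        return "low"
    if unit_price_eur_per_ml <= 6:
        return "mid"
    if unit_price_eur_per_ml <= 12:
        return "high"
    return "luxury"

# Using the average unit prices reported later in the paper:
print(price_range(1.07))   # -> low
print(price_range(15.96))  # -> luxury
```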

Packaging variables
Regarding materials, soft cardboard was present in the secondary packaging in
all ranges (92%), except in luxury products, dominated by hard cardboard (8%), as
shown in Fig. 1A. In the primary packaging, both soft and hard plastic
predominate, with materials such as polypropylene or polyethylene terephthalate
(PET) (79%). However, soft cardboard was less present in the high and luxury
ranges, while materials such as glass or metal increased their presence, as shown in
Fig. 1B and C.
The most prevalent shapes of secondary packaging were rectangular (78%)
and cubic (22%) (Fig. 2A), while the cylindrical form predominated in the primary
packaging (46%) (Fig. 2B), where the tube and roll-on formats only appeared in the
low and mid ranges, with special predominance in the low range. The only shape
that appeared in all ranges, and with similar percentages, was the jar.
Regarding the relationship between price and volume, when product price
increases, so does the volume of both primary and secondary packaging,
regardless of its form.
The majority of the secondary packaging studied had over 70% empty
space. The volume of the empty space did not depend on the unit price of the
product.
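The empty-space figure is a simple volume ratio. As an illustration only (the dimensions below are hypothetical, not from the study's sample), the fraction of empty space in a rectangular secondary package holding a cylindrical primary container can be computed as:

```python
import math

def empty_fraction(box_l_mm, box_w_mm, box_h_mm, jar_d_mm, jar_h_mm):
    """Fraction of the secondary (box) volume not occupied by a
    cylindrical primary container of diameter jar_d_mm and height jar_h_mm."""
    box_volume = box_l_mm * box_w_mm * box_h_mm
    jar_volume = math.pi * (jar_d_mm / 2) ** 2 * jar_h_mm
    return 1 - jar_volume / box_volume

# e.g. a 40 x 40 x 120 mm box with a Ø30 x 50 mm jar inside
print(round(empty_fraction(40, 40, 120, 30, 50), 2))  # -> 0.82
```

With these example dimensions the package is about 82% empty, consistent with the over-70% figure reported for most of the sample.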

Fig. 1. Materials: A: Secondary packaging; B: Primary packaging; C: Primary packaging stopper

Fig. 2. Shape: A: Secondary packaging; B: Primary packaging

Nevertheless, when referring to the percentage of container height, a clear
decrease could be seen in the average percentage between the lowest range (71%)
and the other ranges (50%). Although the medium, high and luxury
ranges have similar container rates, the height also decreases, from 67% in the mid
range to 50% in the luxury range.
For prices lower than 10 €/ml, a group of products with dimensions around 40 mm
can be found, while from 10 €/ml sizes increase up to 100 mm.
Unlike the other dimensions, the height of the secondary container
decreases as the price of the product rises. Thus, product packaging priced at less
than 10 €/ml was in a range of 100 to 140 mm high, while products with
prices higher than 10 €/ml were around 80 to 100 mm.
As is the case with the length, the width of the secondary container increases
as the product price increases, from a range of widths between 20 and 40 mm up to
100 mm. As in other cases, there is a differentiation between products priced below
10 €/ml, where the width is practically clustered in the range of 20 to 40
mm, while above this price the data are scattered, most with widths over 60 mm.
With regard to the aesthetics of the package, the importance of white color
in both containers (around 40%) should be highlighted, as well as the matte finish
on the secondary packaging (46%) and the bright finish on the primary (63%), and
opacity in both the secondary (100%) and the primary (76%) container. The
prevalence of opacity may be related to conservation issues rather than to image
matters.

Product variables
There is a big gap between the average unit prices of high range (8.17 €/ml) and
luxury (15.96 €/ml) products and low (1.07 €/ml) and mid range (3.66 €/ml)
products. The average unit price is 3.84 €/ml, with an average product price
of 56.22 €. Commercial and specialty brands have similar unit prices (2.15 €/ml
and 2.18 €/ml respectively), while private brands have a much lower average price
than other types (0.33 €/ml) and high and luxury products triple the average unit
price (7.23 €/ml). The vast majority of products (81%) contain 15 ml of cream
regardless of the range of the product price.
Concerning the place of sale, in hypermarkets or supermarkets 100% of the
products studied are low range, similar to what occurs in pharmacies, where 79%
are low range, the rest being mid range. In beauty centers, eye contour creams of
all ranges except luxury can be found. Luxury creams are found primarily in
department stores and specialized centers.

Graphics variables
Regarding the product brand, there is greater use of sans serif typefaces in
the lower ranges (78% low range, 42% mid range), while in the high (50%) and
luxury (67%) ranges serif typefaces predominate. However, for the remaining
information there is a clear predominance of sans serif fonts (96% low range /
77% mid range / 79% high range / 67% luxury range), and the most used colors for
typography are black and white.
Graphics used in most eye contour creams are related to the product brand.
Therefore, it was considered that these would not have a decisive influence on the
study, since they are determined by the brand to which the product belongs.
The language most widely used in claims is English (49%), followed by
Spanish (28%) and French (21%). Their distribution is not uniform across ranges,
since Spanish predominates in the low and mid ranges, while in the high and luxury
ranges only the other two languages appear.

User variables
The first variable relative to the user that was analyzed was the relationship
between the average unit price and the age group to which the product is addressed.
The most expensive products are targeted at users over 45 (5.51 €/ml), followed by
those between 30 and 45 years (3.58 €/ml). The lowest average price products are
targeted at users 20 to 30 years old (1.74 €/ml).
Creams for the male audience represent 7% of the offer analyzed, the same
percentage as unisex creams, while creams dedicated to female users represent
86%. If the average unit price is related to the gender to which the product is
addressed, it can be seen that male creams have a much lower price (1.79 €/ml)
than unisex (3.93 €/ml) or female (4.0 €/ml) products, which have very similar
prices.

Type packaging according to product positioning


A summary of the parameters appearing most frequently in the market study is
presented in Table 1. The columns represent the different market segments: low,
mid and high range and luxury products, and the rows are divided into the studied
variable categories. The table enables us to define the packaging and its variables
characteristic of each segment.

Table 1. Summary of the most frequent results sorted by product range.

Variable | Low range | Mid range | High range | Luxury

PACKAGING
Shape and size of the primary container (mm) | Tube (53%); height > 80; diameter 15-20 | Tube (39%); height > 80; diameter 15-20 | Doser (50%); height > 100; diameter 20-40 | Jar (83%); height 40-60; diameter 20-30
Shape and size of the secondary container (mm) | Rectangular prismatic; length 30-50; height 100-140; width 20-40 | Rectangular prismatic; length 40; height 100-140; width 20-40 | Rectangular prismatic; length > 40; height 60-80; width > 30 | Cube prismatic; length > 80; height 80; width > 60
Output format | Fine tip | Fine tip / metal applier | No specific format | No specific format
Primary packaging material | Soft plastic | Soft plastic | Hard plastic / metal | Hard plastic / glass
Secondary packaging material | Soft cardboard | Soft cardboard | Soft cardboard | Hard cardboard
Primary packaging main colour | White | White / blue | Silver | Golden
Secondary packaging main colour | White | White / silver | Silver | Golden
Finishing and opacity of primary packaging | Bright, opaque | Bright, opaque | Bright, opaque | Bright, opaque
Finishing and opacity of secondary packaging | Matte, opaque | Matte, opaque | Bright, opaque | Metalized, opaque

PRODUCT
Percentage of sample tested | 49% | 31% | 14% | 6%
Average price (€/ml) | 1.07 | 3.66 | 8.17 | 15.96
Content (ml) | 15 | 15 | 15 | 15
Place of sale | Supermarket | Beauty centres | Superstores | Superstores

GRAPHICS
Brand font | Without serifs | Without serifs | With serifs | With serifs
Text font | Without serifs | Without serifs | Without serifs | Without serifs
Colour | Black | Black | Black / golden | Silver / black
Claim | No sales pitch (21%); anti-aging | No sales pitch (26%); dermatological research; anti-aging | No sales pitch (50%); moisturizing | No sales pitch (50%); associated with the brand
Language | Spanish | English | English / French | French

USER
Gender | F 41%; M 4%; U 4% | F 26%; M 3%; U 2% | F 14%; M 0%; U 0% | F 5%; M 0%; U 1%
Age | 20-29: 7%; 30-44: 6%; 45-59: 16%; >60: 1%; T: 19% | 20-29: 3%; 30-44: 5%; 45-59: 10%; >60: 0%; T: 13% | 20-29: 0%; 30-44: 4%; 45-59: 6%; >60: 0%; T: 4% | 20-29: 0%; 30-44: 0%; 45-59: 0%; >60: 5%; T: 5%

Noting that the price of products is related to the brand, type, finishes, shape
and size of the container, as well as the age and gender of the user (Table 1), type
containers have been defined for each of the four product ranges, presented in
Fig. 5, and some alternative packages have also been developed, as shown in Fig. 6.

Fig. 5 Proposed packaging type: A: Low end; B: Mid range; C: High range; D: Luxury

Fig. 6. Alternative proposals: A: Low end; B: Mid range; C: High range; D: Luxury

For designing the type packaging, the seven formats present in the Spanish market
for primary packaging were considered: tube, roll-on, pot, doser, pencil, spray and
jar. They were classified depending on the main variable, which was price, and all
the other variables were analyzed in relation to it. Thus, the most prevalent format
in each price range was selected as the type, the jar being the only one
that appeared in every price segment, although with very different characteristics
in each of them. Moreover, for every primary packaging, a secondary packaging
within the same range was designed.
These designs show the geometry and graphic applications that can be
considered characteristic and typical of each product range (Fig. 5); for a
more complete definition, the alternative design for each product range shows
complementary characteristics that can give additional information (Fig. 6).
The detailed definition of the packaging is the starting point of the next phase
of the investigation, in which the results will be confronted with users'
opinions, to check the fit between the defined characteristics and
users' expectations and purchase predisposition.

4 Conclusions

This study reveals that primary and secondary packaging changes in multiple
parameters depending on the product range, regardless of the amount of product
offered, and the strongest findings concern how to improve the
differentiation between the lowest product ranges and the top two. The most
significant parameters for this are shape, volume, material, color, finish and graphic
applications, and clear trends in their relation to product price have been
detected in the research.
The only shape of primary packaging that appears in all ranges is the jar, although
its characteristics are quite different depending on the product range. Thus, when
using this shape in product packaging, it is very important to pay attention to the
rest of the variables for adequate product positioning, as the versatility of this
shape might lead to an undesired perception of the product range.
There is also a direct relation between primary and secondary packaging size
and volume and the product's price, regardless of its shape. Thereby, more attention
must be paid to the first two parameters than to the shape of the packaging when
positioning in the higher ranges of the market is intended.
Regarding secondary packaging, there is a direct relation between the length
and width of the packaging and the unit price of the product, while its height
presents an inverse proportional relation with the unit price.
Nevertheless, there is no such clear relation in primary packaging: although
the same trend is evident in the low, mid and luxury ranges, high range
products present different characteristics regarding the height of the packaging.
Materials are not very significant, as plastics of all kinds predominate in all of
the ranges. This trend is only broken in the high and luxury ranges, where materials
such as metal or glass appear. Therefore, if positioning in these ranges is
intended, the use of these materials might be considered as a way of
differentiation from the lower ranges.
Little product differentiation can be made by color, as white predominates in
both primary and secondary packaging in all ranges. But product characterization
can be strengthened by finishing, due to the prevalence of the bright finish in the
secondary packaging of high range products and the metalized finish in luxury
products.
The latter conclusion is related to graphics applications, both in terms of
typography and language. Again, it is possible to achieve better product
differentiation by applying fonts either with (higher ranges) or without (lower
ranges) serifs, or by using either Spanish (lower ranges) or French and English
(higher ranges) for the claims applied on both primary and secondary packaging.
All the conclusions stated have been applied in designing the four type
packagings shown in the preceding section. In the same way, the
parameters obtained might be considered by brands when working on the design of
their cosmetic product packaging, since they provide the information needed to
adjust the characteristics of the packaging to the product range in which it is
intended to be positioned.

References

1. Silayoi, P., & Speece, M. (2004) Packaging and purchase decisions: An exploratory study
on the impact of involvement level and time pressure. British food journal, 106(8), 607-628.
2. Rigaux-Bricmont, B. (1982). Influences of brand name and packaging on perceived quality.
Advances in Consumer Research, 9(1).
3. Thomson, D. M., Crocker, C. Application of conceptual profiling in brand, packaging and
product development. Food Quality and Preference, 40, 2015, pp. 343-353.
4. Bloch, P. H. (1995). Seeking the ideal form: Product design and consumer response. Journal
of Marketing, 59(3), 16–29.
5. Fenko, A., Schifferstein, H. N. J., & Hekkert, P. (2010). Shifts in sensory dominance
between various stages of user-product interactions. Applied Ergonomics, 41, 34–40.
6. Tuorila, H., & Pangborn, R. M. (1988). Prediction of reported consumption of selected
fat-containing foods. Appetite, 11(2), 81–95.
7. Lee, H., Deng, X., Unnava, H. R., & Fujita, K. (2014). Monochrome forests and colorful
trees: the effect of black-and-white versus color imagery on construal level. Journal of
Consumer Research, 41(4), 1015-1032.
8. Otterbring, T., Shams, P., Wästlund, E., & Gustafsson, A. (2013). Left isn't always right:
placement of pictorial and textual package elements. British Food Journal, 115(8), 1211-
1225.
9. Chen, X., Barnes, C. J., Childs, T. H. C., Henson, B., & Shao, F. (2009). Materials’ tactile
testing and characterisation for consumer products’ affective packaging design. Materials &
Design, 30(10), 4299-4310.
10. Yang, S., & Raghubir, P. (2005). Can bottles speak volumes? The effect of package shape on
how much to buy. Journal of Retailing, 81(4), 269-281.
11. Reutskaja, E., Nagel, R., Camerer, C. F., & Rangel, A. (2011). Search dynamics in consumer
choice under time pressure: An eye-tracking study. The American Economic
Review, 101(2), 900-926.
12. Kano, N., Seraku, N., Takahashi, F., Tsuji, S. (1984). Attractive quality and must-be quality.
J Jpn Soc Qual Contr, 14, 39–44.
13. Rundh, B. (2005). The multi-faceted dimension of packaging. Marketing logistic or
marketing tool? British Food Journal, 107(9), 670–684.
14. Ritnamkam, S., Sahachaisaeree, N. (2012). Cosmetic packaging design: A case study on
gender distinction. Procedia - Social and Behavioral Sciences, 50, 1018-1032.
15. Amatulli, C., Guido, G., Nataraajan, R. (2015) Luxury purchasing among older consumers:
exploring inferences about cognitive Age, status, and style motivations. Journal of Business
Research, 68(9), 1945-1952.
16. Hanzaee, K. H., & Sheikhi, S. (2009). Package design and consumer memory. International
Journal of Services and Operations Management, 6(2), 165-183.
17. Underwood, R. L., Klein, N. M., & Burke, R. R. (2001). Packaging communication:
attentional effects of product imagery. Journal of Product & Brand Management, 10(7), 403-
422.
18. JANISZEWSKI, C., & MEYVIS, T. (2001). Effects of Brand Logo Complexity, Repetition,
and Spacing on Processing Fluency and Judgment. Journal of Consumer Research, 28(1),
18-32.
19. Ampuero, O., & Vila, N. (2006). Consumer perceptions of product packaging.Journal of
consumer marketing, 23(2), 100-112.
20. http://med.10-multa.com/istoriya/18647/index.html?page=7
21. Stanpa. http://www.stanpa.es/cms/13/Datosdelsector.aspx
22. Statistics obtained from the Web Portal Statista (www.statista.com)
Part VIII
Innovative Design

Engineering has to support industries in the global competition with effective
methodologies and advanced tasks in the design processes. The Innovative Design
track focuses on the methods of Knowledge Based Engineering, the optimization
of solutions for Industrial Design and Ergonomics, and the integration of new
techniques for Image Processing and Analysis in a design process.
The Knowledge Based Engineering topic deals with methodologies promising to
support decision making and routine tasks and to reduce the time for offer
generation and for deep evaluations of performances. In particular, the papers
present a framework to capture the process’ decisional knowledge, a Design
Archetype tool to reuse design knowledge, a cost estimation function weighting
the design requirements, metrics to characterize the confidence level of an
offer, and a configuration tool to predict the product energy efficiency in
eco-design.
The Industrial Design and Ergonomics topic deals with rules for automotive
styling, product design exploiting usage information, human factors evaluation
and ergonomic design. In particular, the papers discuss styling DNA roles
concerning the brand and identity of a car design, the early generation of the
user manual as a reference for the design activities, Virtual Reality
technologies to support the interactive design of ergonomic workstations and
comfortable automotive seats, and the biomechanical risk assessment through
manikin simulation.
The Image Processing and Analysis topic deals with the 3D reconstruction of
small up to very large systems and the identification of defects in a
mechanical component. In particular, the papers discuss methods for the 3D
reconstruction of rubber membranes, for carrying out an accurate mechanical
characterization with pixel precision, or of architectures from aerial digital
photogrammetry using UAVs, and for B-scan image analysis on aluminum plates for
position and shape defect definition.

Jean-François Boujut - Grenoble INP
Fernando Brusola - Univ. Politecnica de Valencia
Alberto Vergnano - Univ. Modena e Reggio Emilia

Section 8.1
Knowledge Based Engineering
A design methodology to predict the product
energy efficiency through a configuration tool

Paolo Cicconi1*, Michele Germani1, Daniele Landi1, Anna Costanza Russo1


1
Università Politecnica delle Marche
* Corresponding author. Tel.: +39-071-220-4797; E-mail address: p.cicconi@univpm.it

Abstract During recent years the European Ecodesign Directive has introduced
big changes in the design methodology of several energy-using products including
consumer goods such as ovens, washing machines and kitchen hoods. Additional-
ly, the introduction of the Energy Labelling Directive pushes manufacturers to im-
plement new energy-saving features in many energy-related products sold in Eu-
rope. As a consequence, several companies have been encouraged to improve
their energy-using products while paying attention to the related selling
cost. Eco-driven products require eco-design tools to support the eco-innovation
and the related sustainability improvement. The main scope of the proposed re-
search is the reduction of the time-to-market for the energy-using products such as
kitchen hoods. In this context, the paper aims to provide an approach to support a
pre-evaluation of the energy labeling related to kitchen hoods. A prototypical
software tool has been developed in order to simulate the energy performance of
new kitchen hood configurations in terms of energy efficiency. The approach
also considers the introduction of virtual experiments in order to calculate
the performance of virtual modules. This tool makes the product engineer more
aware in decision-making about energy saving. As a test case, different product
configurations have been compared by analyzing the energy labelling and the
overall energy performance.

Keywords: Ecodesign; energy efficiency labeling; KBE; kitchen hoods; virtual
prototyping.

1 Introduction

Nowadays, EU directives and standards lead several manufacturers of
energy-using products to follow the paradigm of the Ecodesign approach. In particular, the
EU Ecodesign Directive (Directive 2009/125/EC) establishes a framework to set
mandatory ecological requirements for energy-using and energy-related products
sold in all Member States [1]. The EU Commission (EC) has been regulating the

© Springer International Publishing AG 2017 1097


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_109

requirements regarding the energy efficiency classes for labelling different energy
consuming products, such as several household appliances. In particular, the
kitchen hood product is involved both in the Ecodesign directive and in the energy
labelling, as described in the following sections. The EU energy labelling aims to
describe the energy efficiency performance of several household appliances by the
calculation of the Energy Efficiency Index (EEI). The employment of more effi-
cient energy consuming products could lead to a reduction in the total amount of
the global energy consumed, with an overall gain in terms of social impact. On
the other hand, the energy efficiency, which is the energy service provided per
unit of energy input, comes at an additional cost for the OEM producers. The EU
energy labeling is an EC response to the lack of energy consumption
information, which leads consumers to underinvest in energy efficiency [2].
Regarding energy labeling, some researchers studied how consumers’ willingness
to pay a premium for high-efficiency products depends on the payback related to
the energy savings [3]. However, the prices of energy do not reflect the true
marginal social cost of the energy consumption.
In order to reduce the time and cost impacts of delivering more efficient
products, large OEM producers have been investing in Eco-innovation activities
following an Eco-design approach. This context requires the employment of
design tools and methods able to support the designer in the early estimation
of the product energy performance with virtual prototyping tools during the
Eco-innovation flow. Kitchen hoods are a category of products involved from the
beginning in the Ecodesign Directives. In particular, the energy performance of
the air blower has an important weight in the calculation of the EEI. The
energy efficiency of the blower is evaluated through the FDE (Fluid Dynamic
Efficiency) index, which is incorporated in the EEI definition for kitchen hoods.
Innovative, agile and rapid design methodologies are necessary to aid the prod-
uct-engineer in a more energy-aware design in accordance with the recent legisla-
tion. The main scope of the research is the reduction of the time-to-market for the
energy-using products such as the kitchen hoods, with the focus on the design
time. In fact, the proposed paper aims to provide an approach to support a pre-
evaluation of the energy labeling related to kitchen hoods using a software devel-
oped during the research.

1.1 Eco-Design and Eco-Innovation

During the last years, the manufacturing industries have been completely rethink-
ing their way of designing and manufacturing by implementing responsible strate-
gies which are focused on products that have an ecological, social and economic
value. This situation has enhanced the adoption of “Design for Environment” or
“Eco-design” principles during the innovation process, in order to integrate the
environmental dimension of the sustainable development in the design of products

[4]. According to the international ISO 14062 standard, Eco-design, which is
defined as the integration of environmental constraints in the product design
and development process, leads to two types of approaches: Life Cycle
Assessment (LCA) and Design for Environment (DfE). These approaches are very
suitable to support redesign processes; however, the concept of a redesign
approach shows the limits of Eco-design thinking. Eco-innovation is a sort of
union between eco-design and innovation design, but it seems very difficult to
give a precise definition to this term and to characterize its difference from
eco-design. In the European standard, eco-innovation is defined as a sum of
actions that leads to the reduction of the environmental impacts of products
(ISO 14006:2011), which often also includes social and ethical aspects. After
these considerations, it is also important to focus on the concept of
“eco-innovation” within the global topic of sustainable design. However, the
literature and the research approaches show a real difficulty in clarifying the
differences and similarities between eco-design and eco-innovation, and in
defining a boundary between the two concepts [5][6].

1.2 Simulation and design

Modern kitchens are located in open areas with ventilation systems which are
often inappropriate for the evacuation of fumes, smoke, heat and odors produced
by cooking. This situation may be unpleasant for all persons living in a house
with a kitchen room that has an inefficient system for air filtering and
evacuation. An adequate ventilation system is essential in modern houses in
order to remove smoke, volatile organic compounds, grease particles and vapor
from the cooking area. CFD (Computational Fluid Dynamics) simulations [7] can
provide information about the airflow distribution and air quality within the
room. This is an important issue that concerns architects, designers, and in
some cases healthcare professionals [8]. Generally, CFD analysis can simulate
and evaluate velocity fields, temperature maps and air concentration values
throughout the fluid dynamic domain to be investigated. This virtual approach
makes it possible to estimate the impact of the geometric parameters and the
boundary conditions on the studied system. The flexibility of this approach
enhances the investigation of new design alternatives for the improvement of
comfort in living rooms. CFD tools are widely applied in the study of rotating
machines such as kitchen hoods because they implement advanced numerical
turbulence schemes [9].

2 Energy Labeling for Kitchen Hoods

In recent years, the EU Ecodesign directive has established that energy-related
products (ErP) are required to reduce their energy consumption and
environmental impacts. The EU Ecodesign directive (2009/125/EC) establishes a
framework to set mandatory ecological requirements for energy-using and
energy-related products, and it is complemented by the Energy Labeling
Directive (EU Directive 2010/30/EU, 2014/65/EU). The combination of Ecodesign
and energy labeling is one of the most important improvements in the area of
energy efficiency. The management of energy consumption has also become more
and more important in the domestic field. For this reason, since 2015 EU
legislation has required producers of kitchen hoods to provide the final
customer with an energy label that shows the characteristics of the energy
consumption of the appliance (Fig. 1).

Fig. 1. Energy Label for kitchen hoods.

The EEI index is described on a scale from A to G and represents the ratio
between the annual energy consumption of the hood (AEC) and the standard
annual consumption (SAEC), both expressed in kWh/year. Additional efficiency
indexes for kitchen hoods are: fluid dynamic efficiency (FDE), lighting
efficiency (LE), grease filtering efficiency (GFE), and the level of acoustic
pollution expressed in dBA.
Starting from January 2015 and until 2020, the directive provides that a new
class of greater energy efficiency (A+, A++, etc.) will be introduced every
two years.
The FDE index is defined as the ratio between the useful effect of the
aspiration system and the electrical consumption (1). In particular, QBEP is
the volumetric air flow (m3/h) at the best efficiency point, PBEP is the
related static pressure value (Pa), and WBEP is the electric power consumption
(W). A table defines the correlation between the FDE value and the relative
energy class.

FDE = (QBEP · PBEP) / (3600 · WBEP) · 100    (1)
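As a minimal sketch, the FDE at the best efficiency point can be computed as in (1): dividing Q·P by 3600 converts the volumetric flow from m3/h to m3/s, so the numerator is the useful aeraulic power in watts (function and variable names are illustrative, not the tool's actual code):

```python
def fluid_dynamic_efficiency(q_bep_m3h: float, p_bep_pa: float, w_bep_w: float) -> float:
    """FDE (%) at the best efficiency point, per equation (1).

    Q*P/3600 is the useful aeraulic power in W (Q in m3/h, P in Pa);
    dividing by the electric power consumption W_BEP gives the efficiency.
    """
    return 100.0 * (q_bep_m3h * p_bep_pa) / (3600.0 * w_bep_w)

# e.g. a blower moving 650 m3/h at 450 Pa while drawing 250 W
fde = fluid_dynamic_efficiency(650, 450, 250)
print(round(fde, 2))  # 32.5
```

A value of this magnitude falls in the highest class of the correlation table mentioned above, consistent with the best configuration of the test case.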
The Lighting Efficiency Index (LE) is defined as the ratio between the average
illumination on the working surface and the power consumption. The grease
filtering efficiency (GFE) is calculated according to the standard EN 61591.
Finally, the generated noise level is measured when the hood is at maximum
power, excluding the boost speed, in accordance with the standard EN
60704-2-13.
The AEC (Annual Energy Consumption) represents the annual average
consumption in kWh/year and is calculated as in (2), where tH is the daily use
time of the hood (min) and tL is the daily use time of the lighting system
(min). The resultant EEI (Energy Efficiency Index) is the ratio between (2) and
the SAEC, which is the standard annual energy consumption (kWh/year).

(2)
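A minimal sketch of the AEC/EEI chain, assuming a simplified AEC of (W_hood·tH + W_light·tL)·365/60000 kWh/year; the regulation's full formula also applies a time-increase factor to the hood term, omitted here, and all names and figures below are illustrative:

```python
def annual_energy_consumption(w_hood_w, t_h_min, w_light_w, t_l_min):
    """Simplified AEC in kWh/year, cf. (2).

    (W * min/day) / 60 -> Wh/day; * 365 / 1000 -> kWh/year,
    hence the combined factor 365/60000.
    """
    return (w_hood_w * t_h_min + w_light_w * t_l_min) * 365.0 / 60000.0

def energy_efficiency_index(aec_kwh_year, saec_kwh_year):
    """EEI expressed here as the percentage ratio of AEC to SAEC."""
    return 100.0 * aec_kwh_year / saec_kwh_year

# e.g. 250 W hood used 60 min/day, 10 W lighting used 120 min/day
aec = annual_energy_consumption(w_hood_w=250, t_h_min=60, w_light_w=10, t_l_min=120)
eei = energy_efficiency_index(aec, saec_kwh_year=120.0)
print(f"AEC = {aec:.2f} kWh/year, EEI = {eei:.1f}")
```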

3 Approach

A prototypical tool has been implemented to support the configuration of the
kitchen hoods’ functional modules and to obtain early feedback concerning the
energy efficiency indexes. As described before, the energy efficiency indexes are
related to the product and its functional groups such as blower, lighting, and filter-
ing. The proposed tool implements the knowledge and rules regarding the effi-
ciency indexes calculation. The scope of the application is to support the eco-
innovation process based on the production of small batches with a rapid and sim-
ple configuration tool.

Fig. 2. The tool’s architecture and the applied methodology for the tool development.

A methodological approach has been implemented in order to guide the modeling
of a kitchen hood system and the development of the tool’s architecture. As
described in Fig. 2, the modeling of a virtual system, which reproduces the
behavior of a kitchen hood, requires phases such as laboratory testing, data
analysis, rules formalization and validation tests. The formalized knowledge
consists of rules and functions implemented in the configuration tool. The
proposed software can be seen as a “black box” where the input related to a
product configuration is converted into an output through the calculation of
the energy efficiencies. The input

consists of data related to the desired air flow rate, the required shape design
(which regards the inlet geometry), the filtering specifications, the lighting and the
required EEI. The output is the configuration of the energy modules and a
report about the energy efficiencies with the final EEI value. The proposed
approach provides a database which can be filled with testing data and also
with virtual experiments. In particular, virtual prototyping makes it possible
to reduce the time and cost related to physical tests, with a benefit on the
time to market.
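The input/output contract of the “black box” described above can be sketched with illustrative data structures; the field names and the lookup mechanism are assumptions, while the FDE values and classes in the toy database are taken from the tool predictions in Table 1:

```python
from dataclasses import dataclass

@dataclass
class HoodRequest:
    """Input: the specification entered by the product engineer (names assumed)."""
    air_flow_m3h: int        # desired air flow rate
    inlet_shape: str         # e.g. "T-Shape" or "V-Shape"
    filter_type: str         # e.g. "Aluminum" or "Baffle"
    lighting: str            # e.g. "LED" (fixed for all test-case configurations)
    required_eei_class: str  # target class, e.g. "B"

@dataclass
class HoodConfiguration:
    """Output: selected module set with the computed efficiency report."""
    blower_id: str
    fde: float
    eei_class: str

# Toy database keyed by (shape, filter, flow); FDE/class from Table 1 (tool column)
TOOL_DATA = {
    ("T-Shape", "Aluminum", 900): (19.36, "B"),
    ("T-Shape", "Baffle",   900): (18.16, "C"),
    ("V-Shape", "Aluminum", 900): (16.73, "D"),
    ("V-Shape", "Baffle",   900): (15.71, "D"),
    ("V-Shape", "Aluminum", 600): (20.66, "B"),
    ("V-Shape", "Baffle",   600): (19.33, "B"),
    ("V-Shape", "Aluminum", 650): (32.10, "A"),
}

def configure(request: HoodRequest) -> HoodConfiguration:
    """Stand-in for the rule base: look up the module combination and report it."""
    fde, eei_class = TOOL_DATA[(request.inlet_shape, request.filter_type,
                                request.air_flow_m3h)]
    return HoodConfiguration(blower_id=f"{request.air_flow_m3h} m3/h", fde=fde,
                             eei_class=eei_class)
```

In the real tool the lookup is replaced by the rules and superposition functions described in the following section.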
The developed software also makes it possible to evaluate new product
configurations by selecting among several functional modules. The implemented
rules and functions are able to evaluate the product performance and efficiency
using the superposition principle, as described in the following section.
Different product and module configurations can be compared using the proposed
tool shown in Fig. 3 and Fig. 4.

Fig. 3. The tool form to configure the air blower unit and calculate the performance

Fig. 4. The tool form to compute the energy efficiency indexes



3.1 Modeling

A system has been modelled in order to reproduce the energy flows of a
traditional kitchen hood. Fig. 5 describes the kitchen hood's system, with the
modules implemented in the proposed tool highlighted in red. The described
system can be used for every filtering hood (which provides air recirculation
inside the room) or exhaust hood (or extractor hood, where the cooking fumes
are evacuated from the inside to the outside). Each block shown in Fig. 5
consists of some variables such as the energy consumption, the air flow rate,
the efficiencies, the motor characteristics, etc. Functions have been
implemented to compute the energy efficiency indexes from the variable values.
Experimental data have been used to define the operating curves which are
implemented in the proposed system model.

Fig. 5. The modeling of a kitchen hood’s system

The modeling approach is based on Systems Theory. In particular, the
superposition principle has been applied to compute the air flow rate and the
pressure values at the best efficiency point. The calculation of the aeraulic
performance of a kitchen hood can be considered as a linear combination of the
effects due to the blower, the inlet geometry and the grease filter. In fact,
the inlet geometry and the filter introduce a pressure drop on the air handled
by the blower. The behavior of each functional block analyzed has been
considered as independent from the others.
The functional model of the blower has been analyzed considering the effects
of motor and impeller. The effect of the impeller has been described considering
data such as the operative curves with the relation between pressure and mass flow
rate. Each curve has been approximated with a fifth-degree polynomial (3) and
collected in the database. The motor behavior has been analyzed by introducing
the operating curves with information about the rpm, the torque and the power
consumption.

P(Q) = a5·Q^5 + a4·Q^4 + a3·Q^3 + a2·Q^2 + a1·Q + a0    (3)

The fluid dynamic characterization of the grease filter has been focused on the
pressure drop effect (4), where the k term is the concentrated pressure drop
coefficient related to a defined filter. Each k term is collected in the
database of filters. The pressure variation related to the kitchen hood’s shape
has been considered as a function of the inlet geometry. Considering (4), the
term ρ is the density, while g is the gravity acceleration and v the air
velocity.
(4)
Considering (1) and (3), the calculation of the FDE has been defined as a
function of the mass flow rate (Q), due to the relation between P and Q, while
the product EEI has been calculated by solving equation (2), where the blower
power consumption has been calculated considering the aeraulic performance and
the electric motor’s operating curves.
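The superposition described above can be sketched numerically: the blower's fifth-degree pressure curve is reduced by concentrated drops of the form k·ρ·v²/2 for the filter and the inlet, and the FDE is evaluated along the resulting operating curve to locate the best efficiency point. All coefficients, k values and geometry below are made-up illustrations, not measured data:

```python
def blower_pressure(q, coeffs=(600.0, -0.2, -1e-4, 0.0, 0.0, 0.0)):
    """Fifth-degree blower curve, cf. (3): P(Q) = sum(a_i * Q**i), Q in m3/h, P in Pa.

    Illustrative coefficients a0..a5 (higher-order terms set to zero here).
    """
    return sum(a * q**i for i, a in enumerate(coeffs))

def pressure_drop(q, k, area_m2, rho=1.2):
    """Concentrated drop k * rho * v^2 / 2 for a filter or inlet section."""
    v = q / 3600.0 / area_m2          # mean air velocity in m/s
    return k * rho * v * v / 2.0

def best_efficiency_point(power_w, k_filter=2.5, k_inlet=1.0, area_m2=0.08):
    """Scan Q, superpose blower curve and losses, return (Q, P, FDE) at peak FDE."""
    best = (0.0, 0.0, 0.0)
    q = 100.0
    while q <= 1200.0:
        p = (blower_pressure(q)
             - pressure_drop(q, k_filter, area_m2)
             - pressure_drop(q, k_inlet, area_m2))
        if p > 0:
            fde = 100.0 * q * p / (3600.0 * power_w)   # equation (1)
            if fde > best[2]:
                best = (q, p, fde)
        q += 1.0
    return best

q_bep, p_bep, fde = best_efficiency_point(power_w=250.0)
```

With these invented curves the peak sits around 840 m3/h with an FDE of roughly 32%, i.e. in the same order of magnitude as the best configuration of the test case.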

3.2 CFD

The CFD simulations have been developed with a commercial CFD tool which
solves the Navier-Stokes equations through a Reynolds-averaged approach and
uses the finite volume method (FVM) for the equation discretization. No-slip
conditions were applied to all the domain walls. In order to evaluate the
numerical model, several simulations have been carried out at different
operating conditions. The characteristic blower curves have been reproduced
with virtual experiments in order to evaluate the FDE at the best efficiency
point. Each computational analysis takes a specific set of rotational speed and
outlet pressure conditions as input data, while the main results are the air
flow rate and the resistance torque. The simulated operating conditions are
valid only if the actually employed electric motor can provide the same
conditions of torque and rpm.

Fig. 6. Comparison between real data (blue line) and simulated values (red line)

Fig. 6 shows how the CFD results are in accordance with the experimental test,
while a cross section of the 3D model of the related blower is reported in
Fig. 7 together with the pressure and velocity distributions. Physical tests
have confirmed the results obtained using the virtual CFD model, with a gap of
about 5%.

Fig. 7. CFD pressure map (left) and velocity vectors (right) related to a free delivery condition.

4 Test case

The developed configuration tool has been tested to estimate the energy efficiency
of different kitchen hood configurations. Table 1 shows the configurations selected
for the validation analysis. In particular, two types of shape geometries, filters and
blowers have been considered and highlighted, while the same LED lighting has
been considered for all configurations. The test case is mostly focused on the
fluid dynamic impact on the final EEI. The error of the tool prediction is less
than 5% for the estimation of the FDE values, while the evaluation of the
product’s EEI fails only in 1 of 7 cases. The first 6 configurations represent
already existing products, while the last configuration is a new variant which
implements a blower with a brushless motor. In this case, the impeller
performance was simulated by a CFD tool, whilst the motor curves were acquired
from the test bench. The configuration tool was able to predict the blower
operating point and efficiency by combining data from motor, filter and
impeller. The result of the test case was useful feedback during the early
design study of a new product prototype.

Table 1. Comparison between FDE and EEI values predicted by the configuration tool and the
values acquired through experimental tests.

Shape (geo)  Filter (type)  Blower (m3/h)  FDE (tool)  FDE (test)  EEI (tool)  EEI (test)
T-Shape      Aluminum       900            19.36       19.01       B           B
T-Shape      Baffle         900            18.16       17.94       C           D
V-Shape      Aluminum       900            16.73       16.10       D           D
V-Shape      Baffle         900            15.71       15.49       D           D
V-Shape      Aluminum       600            20.66       20.05       B           B
V-Shape      Baffle         600            19.33       18.86       B           B
V-Shape      Aluminum       650            32.10       31.50       A           A

5 Conclusions

An energy model for kitchen hoods has been proposed. The knowledge base for
the calculation of the FDE and EEI has been implemented in a software tool
using rules and a database. The prototypical software can simulate the energy
performance of different product configurations in terms of energy efficiency.
This tool makes the product engineer more aware in decision-making about energy
saving during the life cycle. The approach can be considered as an
eco-innovation tool in product design; in fact, it promotes the development of
more environmentally friendly products. This tool also enhances the diffusion
and dissemination of knowledge and data for the determination and
quantification of the energy labeling. The interaction of experimental tests,
numerical analysis and the knowledge base allows a continuous improvement of
the product during the entire life cycle. This also enhances the reduction of
the product energy consumption, with advantages in social terms such as the
reduction of environmental impacts. As a test case, different configurations
have been compared using the proposed tool, with small differences between real
and virtual data. As a future development, the analysis is expected to be
extended to the evaluation of the production phase and the end of life, in
order to determine the global impacts related to the entire life cycle and not
only to the use phase. The same approach could be reused for virtual design
focused on different products where energy labeling is required.

References

1. Gynther L., Mikkonen I. and Smits A. Evaluation of European energy behavioural change
programmes, Energy Efficiency, 2012, 5, pp. 67–82.
2. Gillingham K., Newell R.G. and Palmer K. Energy Efficiency Economics and Policy, Annu.
Rev. Resour. Econ., 2009, 1, pp. 597–619.
3. Galarraga I., González-Eguino M. and Markandya A. Willingness to pay and price elasticities
of demand for energy-efficient appliances: Combining the hedonic approach and demand sys-
tems, Energy Economics, 2011, 33, pp. S66–S74.
4. Tyl B., Legardeur J., Millet D., Vallet F. A comparative study of ideation mechanisms used in
eco-innovation tools, Journal of Engineering Design, 2014, 25 (10-12), pp 325-345.
5. Cluzel F., Vallet F, Tyl B, Leroy Y. Eco-design vs. eco-innovation: an industrial survey. Pro-
ceedings of the 13th International Design Conference - DESIGN 2014, 2014, pp.1501-1510.
6. Tyl B., Legardeur J, Millet D, Vallet F. A New Approach for the Development of a Creative
Method to Stimulate Responsible Innovation”, Proceedings of the 20th CIRP Design Confer-
ence, Ecole Centrale de Nantes, Nantes, France, 19th-21st April 2010, 2011, pp 93-104.
7. Kock J. et al. Experimental and numerical study of a radial compressor inlet, ASME 95-GT-
82, 1995.
8. Lee E., Feigly C. and Khan, J. An investigation of air inlet velocity in simulating the disper-
sion of indoor contaminants via computational fluid dynamics, Annals Occupational Hy-
giene, 2002, pp. 46–48.
9. Pitkanen H. et al. CFD analysis of a centrifugal compressor impeller and volute, ASME 99-
GT-436, 1999.
Design knowledge formalization to shorten the
time to generate offers for Engineer To Order
products

Roberto RAFFAELI1*, Andrea SAVORETTI2 and Michele GERMANI2


1
Faculty of Engineering, Università degli Studi eCampus, Via Isimbardi, 10, Novedrate,
22060, Italy
2
DIISM Department, Università Politecnica delle Marche, Via Brecce Bianche, 12, Ancona,
60131, Italy
* Corresponding author. Tel.: (+39) 031-7942500; fax: (+39) 031-792631. E-mail address:
roberto.raffaeli@uniecampus.it

Abstract Cost Estimation for offer generation in ETO companies is a critical and
time-consuming activity that involves technical expertise and a knowledge base.
This paper provides an approach to acquire and formalize the design and manufac-
turing knowledge of a company. The method has been described as a sequence of
steps, which moves from the data acquisition of the past projects to the definition
of a cost function based on dimensioning parameters. This approach has been
tested on a family of cranes for plants in collaboration with an industrial
partner.

Keywords: Knowledge formalization; Functional requirements; Engineer To
Order; DSM; Cost estimation

1 Introduction

The business model of many companies is based on the Engineer To Order (ETO)
model and the customization of the products in the portfolio. The definition of
the right price in an offer is a critical activity that involves expertise,
product knowledge and the correct estimation of design and production efforts.
Compiling technical proposals is a time-consuming activity, and the strong
competition on the market generally leads to poor success rates in order
acquisition. Therefore, it is mandatory to employ consistent approaches to
rapidly formulate reliable offers as new requirements come from a potential
customer.

© Springer International Publishing AG 2017 1107


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_110

In this context, the paper shows an approach to formalize the design and
manufacturing knowledge in order to configure valid solutions, define lists of
the most significant components and roughly define product layouts. The process
moves from a systematic acquisition of the customer requirements, which provide
the functional and technical features of the product. General product
architectures are defined as functional diagrams and hierarchies of
implementing modules. The requirements acquired from the customer are then
cascaded onto such structures in order to determine a preliminary product
structure. In order to estimate manufacturing costs, the reached configuration
must be embodied in preliminary solutions providing details concerning the most
significant sizes, parameters and attributes of the single parts. In the field
of products with a good level of standardization and/or modularization, this
leads to simplified but complete design processes, however limited to the most
significant choices and the main dimensioning activities.

2 State of the art

Knowledge representation in product design is a critical activity because it is
intended to manage and make sense of a large amount of data so that they become
information. Research in product families, modularity, configuration design and
design rationale systems has resulted in considerable developments in knowledge
capture during the last few decades [1].
Knowledge can be classified as formal vs. tacit. Formal knowledge is embedded
in product documents, drawings and engineering dimensioning algorithms, while
tacit knowledge, which is made of implicit rules, comes from the experience of
people with technical expertise. Owen and Horvát [2] classify knowledge
representation into five categories: pictorial, symbolic, linguistic, virtual
and algorithmic. The challenge of knowledge modeling and representation may
concern the formalization of product and process design knowledge at different
design stages and the capture, use and communication of knowledge [3]. The goal
is to reuse the knowledge originating from the later stages in order to provide
information for the early stages, in particular for product concept design.
Product concept design is an abstraction level that allows understanding product
behavior and the main function. Summers and Rosen in [4] discuss three function-
based representations with a focus on conceptual design and make a comparison
of the types of information supported by these representations. Requirements,
functions, behaviors, working principles, parameters, mathematical expressions
and structure or geometry are the main information needed for design process, es-
pecially in the conceptual design.
A way to reuse product knowledge consists in building a product platform.
Product platform design has been widely studied in the last decades because of
the importance for companies of offering a large variety of products. Otto et
al. [5] review the main activities for product platform design and examine a
set of product platform development processes used at several different
companies.
Several authors have observed that the capture of design rationale cannot be
completely automated but needs the designer's intervention. Nowadays, there are
only a few tools that support decisions at the conceptual design stage. Since
much has not yet been defined during the conceptual design phase, computer
support tools are difficult to apply at the early stages of product design.
Designers often prefer to use prior-art solutions, which have already been
tried in the past. Moreover, in order to make a decision that involves the
redesign of a part in a system, designers must be aware of all the
relationships between the part and the system. The effect of any choice should
be known from the early design stages, in order to avoid mistakes and time
wastage in the later stages.
Research shows that although there are several methodologies for knowledge
representation, real applications in industry are scarce. In particular, an
approach that makes it possible to manage design knowledge throughout the whole
design process, starting from the offering phase, is lacking. In the context of
platform-based ETO products, the proposed approach aims at the acquisition and
formalization of the design knowledge of a company. Moreover, this method
investigates the tools which are able to represent the required knowledge on
the basis of the data to be processed.

3 The method

The main steps of the method are listed below:


1. Acquire customer requirements, product architecture and costs of several prod-
uct variants from past projects
2. Build a product functional structure according to the customer requirements
3. Identify modules for product architectures from the product functional structure
4. Acquire input-output dimensioning parameters of each module
5. Represent design process as a network of dimensioning activities connected
through input-output dimensioning parameters
6. Build an activity-based DSM and sequence the dimensioning activities through a
partitioning tool
7. Build a parameter-based DSM and sequence the dimensioning parameters
through a partitioning tool
8. Define a cost function based on output dimensioning parameters

The first step, which is also the most onerous, concerns the data acquisition. Customer requirements of past projects are collected. Product datasheets, specifications or technical proposals are the main sources of information. Then, design data, CAD models, drawings, and BOMs are gathered to acquire possible product architecture data. This step could benefit from product data management systems
1110 R. Raffaeli et al.

(PDM). Finally, costs are analyzed according to product BOMs. These data are
collected for several product variants in order to make an exhaustive analysis.

Fig. 1. Flowchart of the proposed approach.

Customer requirements are converted into functional requirements, in order to build a product functional structure. A requirements-functions matrix is used to
check the correspondence between customer requirements and functional require-
ments, which the product has to accomplish. Functions are grouped in order to
identify modules [6] and linked to the physical components, so the product struc-
ture is connected to the generic product architecture.
Dimensioning activities are performed on the identified modules. The most important parameters are identified for each module. They are used to determine the main drivers in the module instantiation and, then, to estimate its costs. The minimum number of parameters is chosen and divided into input and output parameters. While input parameters are the data required for the module dimensioning, output parameters are the data resulting from the dimensioning activity. The input parameters come from the product technical specifications or they can be output parameters of other modules, which means that there is a dependence relationship between the modules. Product documents, drawings, spreadsheets and design standards, along with senior designers’ expertise, are used to identify the links between input and output parameters.
Once the dimensioning parameters for each module are known, the design process is represented as a network of dimensioning activities. Indeed, modules can be connected through input-output dimensioning parameters. An IDEF tool has been employed for an initial exploration of the design process as a sequence of elementary activities. Such activities follow a sequence according to the parameter dependencies. As some activities are mutually dependent, they must be solved together in an iterative manner. To solve the dependencies and sort the activities, an activity-based DSM is employed [7]. Dimensioning activities are listed in a square matrix, which is filled so that if the activity of the i-th row provides an input for the activity
of the j-th column, the ij value is 1, otherwise 0. In order to sequence the activities, it is possible to exchange corresponding rows and columns of the matrix employing a partitioning algorithm. If the matrix reaches an upper-triangular form, all the activities can be solved in sequence, without iterations. Blocks of elements remaining under the diagonal correspond to activities characterized by a mutual dependence.
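The partitioning step can be illustrated with a short sketch. The five-activity DSM below is invented for the example, and strongly connected components are used as one standard way to implement DSM partitioning; the paper’s partitioning tool may differ in its details.

```python
from itertools import count

def partition_dsm(dsm):
    """Sequence DSM activities. Returns blocks of activity indices in an
    executable order: singleton blocks can be solved in sequence, larger
    blocks are mutually dependent and must be iterated together."""
    n = len(dsm)
    # dsm[i][j] == 1: activity i provides an input to activity j
    succ = [[j for j in range(n) if dsm[i][j]] for i in range(n)]
    index, low, on_stack, stack = {}, {}, set(), []
    counter, sccs = count(), []

    def strongconnect(root):                      # iterative Tarjan SCC
        work = [(root, 0)]
        while work:
            v, pi = work.pop()
            if pi == 0:
                index[v] = low[v] = next(counter)
                stack.append(v)
                on_stack.add(v)
            restart = False
            for i in range(pi, len(succ[v])):
                w = succ[v][i]
                if w not in index:                # unvisited: descend into w
                    work.append((v, i + 1))
                    work.append((w, 0))
                    restart = True
                    break
                if w in on_stack:                 # back edge inside the SCC
                    low[v] = min(low[v], index[w])
            if restart:
                continue
            if low[v] == index[v]:                # v is the root of an SCC
                block = []
                while True:
                    w = stack.pop()
                    on_stack.discard(w)
                    block.append(w)
                    if w == v:
                        break
                sccs.append(block)
            if work:                              # propagate low-link to parent
                p = work[-1][0]
                low[p] = min(low[p], low[v])

    for v in range(n):
        if v not in index:
            strongconnect(v)
    # Tarjan emits SCCs in reverse topological order; reversing puts every
    # block after its input providers (the upper-triangular sequence).
    return [sorted(block) for block in reversed(sccs)]

# Invented example: activities 1 and 2 feed each other (a coupled block)
dsm = [[0, 1, 0, 0, 0],
       [0, 0, 1, 0, 0],
       [0, 1, 0, 1, 0],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0]]
blocks = partition_dsm(dsm)   # [[0], [1, 2], [3], [4]]
```

Singleton blocks correspond to activities solvable in sequence; the block [1, 2] is the under-diagonal coupled pair that must be solved iteratively.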
The next step consists in building a parameter-based DSM, which is at a lower level than the activity-based DSM, because it considers the modules’ input/output parameters instead of the modules’ dimensioning activities. The partitioned DSM shows all the product parameters sequenced according to the dependencies; thus it is possible to know when a dimensioning parameter must be defined in order to proceed with the design process. Moreover, this sequence makes it possible to minimize the iterations during the phase of parameter determination.
While the previous steps concerned the formalization of the design process, the last step regards the total product cost estimation. In [8] four different methods of cost estimation have been identified. In particular, the authors make a comparison between the parametric method and the case-based reasoning method, concluding that the two methods can be combined by using a case-based reasoning system to search for similar cases and then adapting the selected case with a Cost Estimation Formula (CEF) built on the basis of the similar extracted cases. Herein, this combined analytic and parametric approach is leveraged. Basically, the presented method uses a CEF moving from output design parameters, so that technical offers can be compiled on the basis of the preliminary dimensioning process.
If pm,i is the i-th parameter of the m-th module, the cost can be expressed as a function f defined for each module, connecting the parameters resulting from the dimensioning activity (such as weight, length, area, etc.): Cost = f (pm,1, pm,2, …, pm,n).
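For illustration, such a per-module CEF could look as follows; the girder parameters and the unit rates are purely invented, and the linear form is only one possible shape of f.

```python
def module_cost(params, unit_costs):
    """Cost = f(pm,1, ..., pm,n): here a simple linear CEF over the
    module's output dimensioning parameters (illustrative form only)."""
    return sum(unit_costs[name] * value for name, value in params.items())

# Hypothetical girder module: weight in kg, painted surface in m^2
girder = {"weight": 1200.0, "surface": 35.0}
rates = {"weight": 2, "surface": 12}       # cost per kg and per m^2 (invented)
cost = module_cost(girder, rates)          # 1200*2 + 35*12 = 2820.0
```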

4 Application to a family of cranes

This approach has been experimented on a family of cranes for industrial plants in collaboration with an industrial partner. In order to build new offers, the company usually refers to past projects whose costs are known. Thus, the technical specifications of the cranes were compared to find correspondences and similarities.

Table 1. Main technical characteristics of the cranes considered.

C1 C2 C3 C4 C5 C6 C7 C8 C9
Capacity (tons) 125 80 170 40 40 33 12.5 5 7
Span (m) 40.2 26.8 43.1 29.8 25.1 29.8 19.4 9.5 15
Hook lift (m) 26 27 31 14 30 14 35 14.5 18
Hoist speed (m/min) 2 1.6 1.5 8 8 8 60 30 40
Trolley speed (m/min) 40 10 20 40 40 40 60 40 30
Bridge speed (m/min) 60 16 20 80 80 80 80 60 60

A set of 9 overhead cranes for industrial plants with different technical specifications has been considered. Customer requirements, product design data and costs have been collected, as in Table 1. By combining all the customer requirements, a general functional structure has been built. The customer requirements and the product functional requirements have been grouped in a matrix in order to identify correspondences. By combining product design data, a general product architecture has been created. In the functional structure, the material, energy and signal flows have been reported. The software Modulor [9] has been used to represent the functional and modular structures.

Fig. 2. A screenshot of a cranes family functional structure

Fig. 3. A screenshot of the activity A212 regarding the drum and block dimensioning

Crane functions have been grouped and 16 modules have been identified. The correspondence between the functional, modular and component structures has been established. The most important design parameters for each module have been identified. Moreover, parameters for the cost estimation of each module have been selected. With regard to the metal carpentry, the weight and surface of the parts have been chosen in order to estimate the module costs. Conversely, the costs of commercial parts, like motors or gearboxes, are available from vendor catalogs. Input dimensioning parameters have mostly been obtained from spreadsheets and dimensioning standards.
Then, an IDEF diagram has been built in order to represent the design process as a sequence of dimensioning activities, which are linked together by the dimensioning parameters. The IDEF diagram has been expanded down to such a level that all the dimensioning tasks and parameters are shown and a network of dependencies between modules is evidenced. Fig. 3 shows an example of the elementary tasks for the dimensioning of the drum and the block.

Fig. 4. A screenshot of the activity DSM partitioned (a part of the 81x81 matrix)

Dimensioning tasks have been organized in a DSM to elaborate the dependencies and sequence the design process activities. Fig. 4 shows a screenshot of the partitioned activity-based DSM, in which the interdependencies between activities are highlighted.
As a last step, the product costs of past projects have been used in order to build simplified
CEF for the identified modules. For the crane cost model, it was decided to separate the purchased material from the metal structural parts, which are manufactured in the company.
In particular, for metal structural parts like girders and end trucks, weight and surface parameters have been used to estimate the module costs. The costs of the steelwork modules are basically estimated proportionally to the part weight. Once all modules have been processed, technical offers are compiled according to the preliminary design phase. The resulting global CEF (1) of the crane can be synthesized in:

Cost = Cpm + Σi (Wi · Cmat,i + (Wi / P) · Clab)   (1)

where Cpm refers to the purchased material, Wi is the i-th module parameter, Cmat,i the unit cost of material, P the productivity in kg/h and Clab the labor cost. The costs of purchased parts have been retrieved from vendor catalogs, while the productivity is a mean value derived from past projects.
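Reading the definitions above literally — purchased-material cost plus, for each steelwork module, a material term driven by weight and a labor term derived from weight, productivity and labor rate — the global CEF can be sketched as follows. This is one plausible reading, and all figures are invented.

```python
def crane_cost(c_pm, modules, productivity, c_lab):
    """Global CEF sketch: purchased material plus, per steelwork module i,
    a material cost W_i * C_mat_i and a labor cost (W_i / P) * C_lab."""
    total = c_pm
    for w_i, c_mat_i in modules:              # (weight kg, unit material cost)
        total += w_i * c_mat_i                # material term
        total += (w_i / productivity) * c_lab # labor hours * hourly rate
    return total

# Illustrative figures only
total_cost = crane_cost(c_pm=50_000,
                        modules=[(8000, 2.0), (3000, 2.5)],  # e.g. girders, end trucks
                        productivity=20.0,                   # kg/h
                        c_lab=40.0)                          # labor cost per hour
# 50000 + (16000 + 16000) + (7500 + 6000) = 95500.0
```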

5 Conclusions

In this work, an approach to formalize design and manufacturing knowledge has been shown. The method has been described as a sequence of steps, which moves from the data acquisition of past projects to the definition of a cost function based on several output dimensioning parameters. This approach has led to a significant saving of time in formulating new offers, but the critical assessment of an expert is still needed. Moreover, this method allows collecting and formalizing the design knowledge of a company, which is currently held by individual people, in order to make it available and transmittable within the company. Future works could concern improving the automation of the design process and of the cost estimation. Moreover, an automatic tool for acquiring and collecting the knowledge coming from past projects would be useful for ETO companies in order to enhance the internal knowledge base.

References

1. Sriram R. Intelligent systems for engineering: a knowledge-based approach. Springer Verlag; 1997.
2. Owen R., Horváth I. Towards product-related knowledge asset warehousing in enterprises. In
Proceedings of the 4th international symposium on tools and methods of competitive engi-
neering, TMCE 2002, pp. 155-70
3. Chandrasegaran S.K., Ramani K., Sriram R.D., Horváth I., Bernard A., Harik R.F., Gao W. The evolution, challenges, and future of knowledge representation in product design systems. Computer-Aided Design, 2013, 45 (2), 204–228.
4. Summers J. and Rosen D. Mechanical Engineering Modelling Language (MEML): Requirements for conceptual design. In 19th International Conference on Engineering Design, Seoul, Korea, August 2013.
5. Otto K., Hölttä-Otto K., Simpson T.W. Linking 10 years of modular design research: alternative methods and tool chain sequences to support product platform design, ASME Design Engineering Technical Conferences, Portland, OR, August 2013.
6. Stone R.B., Wood K.L. and Crawford R.H. A Heuristic Method to Identify Modules from a Functional Description of a Product, Design Studies, 2000, 21 (1), 5-31.
7. Browning T.R. Applying the Design Structure Matrix to System Decomposition and Integration Problems: A Review and New Directions, IEEE Transactions on Engineering Management, 2001, 48 (3), 292-306.
8. Duverlie P., Castelain J.M. Cost estimation during design step: Parametric method versus case based reasoning method. International Journal of Advanced Manufacturing Technology, 1999, 15 (12), 895–906.
9. Raffaeli R., Mengoni M., Germani M. An early-stage tool to evaluate the product redesign impact. Proceedings of the ASME 2011 International Design Engineering Technical Conferences, DETC2011/DTM-47625, Washington, DC, USA, August 2011.
Customer/Supplier Relationship: Reducing
Uncertainties in Commercial Offers thanks to
Readiness, Risk and Confidence Considerations

A. SYLLA1,2, E. VAREILLES1, M. ALDANONDO1*, T. COUDERT2, L. GENESTE2 and K. KIRYTOPOULOS3
1 Univ. de Toulouse / Mines Albi / CGI - France
2 Univ. de Toulouse / ENI Tarbes / LGP - France
3 National Technical Univ. of Athens - Greece
* Corresponding author. Tel.: +33 - 5 63 49 32 34; fax: + 33 - 5 63 49 31 83. E-mail address:
michel.aldanondo@mines-albi.fr

Abstract: Nowadays, in the customer/supplier relationship, suppliers have to define and evaluate offers based on customers’ requirements and the company’s skills. This offer definition increasingly implies design activities for both the technical solution and its delivery process. In the Engineering-To-Order context, design and engineering activities are more important, the uncertainties on the offer characteristics are rather high and, therefore, suppliers bid on calls for tender depending on their feelings. In order to provide suppliers with metrics that enable them to know the confidence level of an offer, we propose a knowledge-based model that includes four original metrics to characterize the confidence level of an offer. The offer overall confidence relies on four indicators: (i) two objective ones based on the Technology Readiness Level and the Activity Risk Level, and (ii) two subjective ones based on the supplier’s skills and risk aversion. The knowledge-based model for offer definition, offer assessment and offer confidence is based on a constraint satisfaction problem.

Keywords: Customer/Supplier Relationship; Knowledge-Based Systems; Readiness; Maturity; Confidence

1 Introduction

The proposed paper concerns the assistance of a supplier in a customer/supplier relationship. More precisely, it aims at aiding the definition of a commercial offer for both the system (product, system or service) and its delivery process. The presented contribution belongs to the stream of works that deals with the set-up of knowledge-based tools aiding the system-process definition (which can include
© Springer International Publishing AG 2017 1115


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_111
1116 A. Sylla et al.

some design activities) and supporting the quotation of performance, cost and cycle time [1]. In this offer definition context, the system-process definition can vary from a very routine activity up to a highly creative, much less routine one [2]. For example, for a computer system or a truck, the definition of an offer consists mainly in selecting options and components in a catalogue, checking their consistency and computing a cost and a standard delivery time. On the contrary, the definition of an offer for a crane or for a specific machine-tool can require significant engineering or creative design activities for both the system solution and the delivery process. Given these elements, the customer/supplier relationship can be characterized, according to [3], as either a very routine assembly-to-order (ATO) or make-to-order (MTO) offer definition, or a much less routine engineer-to-order (ETO) offer definition. For 20 years now, configuration software has been recognized as a very efficient tool for aiding suppliers in their offer definition activity in ATO-MTO situations [4]. When dealing with ETO, this is less the case because the design activity is more substantial and thus Computer-Aided Design software must be used. It is important to note that ATO-MTO or
ETO is not a binary issue. In an ATO-MTO situation, all design problems for both the system solution and the delivery process have already been studied and solved in advance, before launching the offer definition activity (in a very formal way if configuration software is used). Therefore, the level of uncertainty in the offer characteristics is rather low and the supplier feels very confident that the defined offer matches the customer’s expectations (including price and due date). When the situation begins to move from ATO-MTO towards ETO, design or engineering activities become more significant. Two kinds of approaches to the offer definition activity can be seen in companies. The first one relies on a detailed design of offers, for both system solutions and delivery processes. Thus uncertainties are low and the supplier’s confidence is high, but this approach is time and resource consuming. On the contrary, the second one tends to just clarify the main ideas or concepts of the offers, avoiding detailed design, but leaving a great deal of uncertainty and scant confidence.
Given all the previous elements, the goal of this paper is to propose a theoretical approach and a knowledge-based model aiding suppliers to define promising offers: for “rather” routine design situations, in order to be able to collect knowledge, and for situations “between” ATO-MTO and ETO, when more than 50% of the system sub-assemblies and process activities are entirely defined. This avoids the entire detailed design of offers, saving time and resource commitment, and strengthens the confidence in the main ideas or concepts of the offers.
Our main and original contribution is to add a new characteristic or indicator to system-process offers that can quantify a kind of “confidence level” (in a similar sense as the one proposed by [5]). This means that each sub-assembly, each delivery process activity and the resulting system-process is characterized with its own “confidence level”. This new indicator allows the supplier to compare competing solutions on performance, cost and lead time, but also, and we have never seen this in the scientific literature, on confidence. Suppliers now feel more self-confident to
Customer/Supplier Relationship … 1117

decide about the offer to propose to the customer, whatever the stage of its development. In today’s highly competitive markets, where customers don’t hesitate to compare various suppliers through competitive processes, the confidence indicator is a strong supplier support that avoids detailed designs while giving a clear quantification of the offer confidence. Knowing the confidence level of each offer element reduces the stress of the supplier in the decision making and helps him/her during offer negotiations.
The remainder of the paper is organized in three sections as follows. In the second section, the main ideas about the concurrent configuration of system and process for ATO-MTO and ETO situations are recalled and the support provided by the Constraint Satisfaction Problem framework is explained. The third section is dedicated to the proposition of the “confidence level” indicator, with various aggregation mechanisms for both system solutions and delivery processes. In the last section, some conclusions are drawn and further perspectives are developed.

2 Offer Configurations in ATO-MTO-ETO Situations

When dealing with the concurrent product and process configuration problem, [6,7] have shown that the product can be considered as a set of components and its production process as a set of production operations.
According to the customer’s expectations, the configuration of a product is achieved either by selecting components in product families (such as an engine in a catalogue) or by choosing values of descriptive attributes (power and weight of an engine). Of course, not all combinations of components and attribute values are allowed. Thus, as explained by many authors [8,9], the product configuration problem can be considered as a discrete constraint satisfaction problem (CSP), where a variable is a product family or a descriptive attribute and constraints specify acceptable or forbidden combinations of components and attribute values. Some product performance indicators can characterize the product, thanks to mixed constraints (symbolic and numerical domains) that link the most important product characteristics (for example, crane performance as a function of crane height and acceptable load).
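This discrete-CSP view can be sketched in a few lines; the crane-like variables and the allowed engine/power combinations below are invented for the illustration.

```python
from itertools import product

# Variables: one component family and two descriptive attributes (invented)
domains = {
    "engine": ["E100", "E200"],
    "power":  [100, 200],        # kW
    "height": [20, 30, 40],      # m
}
# Constraint: acceptable (engine, power) combinations only
allowed_engine_power = {("E100", 100), ("E200", 200)}

def solutions(domains, allowed):
    """Enumerate complete assignments satisfying the constraint."""
    names = list(domains)
    for combo in product(*domains.values()):
        assign = dict(zip(names, combo))
        if (assign["engine"], assign["power"]) in allowed:
            yield assign

sols = list(solutions(domains, allowed_engine_power))
# 2 consistent engine/power pairs x 3 heights = 6 configurations
```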
For process configuration, a similar approach is proposed by [10,11]. According to the configured product characteristics (selected components and attribute values), the resources for each production operation can be selected in families of resources, and in some cases a quantity of resource can be specified too. Of course, the selected components and values (for products) and the selected resources and quantities (for operations) impact the operation durations and therefore the delivery time or cycle time of the configured product. For simplicity, we assume a sequence of operations and therefore that the lead time equals the sum of the operation durations. As for the product, process configuration can be considered as a CSP, where each operation gathers variables corresponding to resource families,
resource quantities and operation duration [12]. Constraints restrict the possible associations.
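A process-side counterpart: selecting a resource for each operation fixes its duration, and the lead time is the sum over the sequence. The operations, resources and durations below are invented.

```python
# Invented operations: resource family -> duration in hours
operations = {
    "welding":  {"robot": 4.0, "manual": 10.0},
    "painting": {"line_A": 3.0, "line_B": 5.0},
}

def lead_time(choices):
    """Lead time = sum of operation durations for the selected resources
    (a simple sequence of operations is assumed, as in the text)."""
    return sum(operations[op][res] for op, res in choices.items())

lt = lead_time({"welding": "robot", "painting": "line_B"})   # 4.0 + 5.0 = 9.0
```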
For both product and process, all variables can be linked to cost indicators (one for the product and one for the process), again through mixed constraints, in order to get a total cost. With the previous problem descriptions, [10,11] have suggested (i) gathering these two problems into a single concurrent problem and (ii) considering this concurrent problem as a CSP. Considering this problem as a CSP allows the use of propagation or constraint filtering mechanisms as an aiding tool. Each time a customer’s expectation is inputted (mainly on the product side and less on the process side), constraints propagate this decision and prune the values of the variables for descriptive attributes, component families, resource families, resource quantities and operation durations, and then update the performance, cycle time and total cost. For a detailed presentation with an easy-to-understand example, we strongly suggest consulting [13]. This kind of problem modeling is the ground basis of configuration problems. The key point is that all possible solutions have been studied in advance, meaning that all product families and relevant components, all attributes with their possible values, and all process operations with their resource families and resources have been analyzed and qualified before operating the configuration system. Thus the configuration process is entirely routine and there is absolutely no design or creative activity. In that case, when the customer says ok, the detailed design of both product and process is almost automatically generated, without any doubt or uncertainty, and thus the supplier is fully confident in his/her ability to achieve his/her commitments, with no unnecessary stress.
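The filtering mechanism itself can be illustrated by brute-force pruning, which is adequate for the small domains of a configuration model; the crane-like data are invented.

```python
from itertools import product

def filter_domains(domains, constraints):
    """Prune every value that appears in no complete assignment satisfying
    all constraints (a brute-force form of constraint filtering)."""
    names = list(domains)
    survivors = {n: set() for n in names}
    for combo in product(*(domains[n] for n in names)):
        assign = dict(zip(names, combo))
        if all(c(assign) for c in constraints):
            for n in names:
                survivors[n].add(assign[n])
    return {n: sorted(survivors[n]) for n in names}

# Invented model: a 40 m crane cannot carry the 20 t load
domains = {"height": [20, 30, 40], "load": [10, 20]}
constraints = [lambda a: not (a["height"] == 40 and a["load"] == 20)]

domains["load"] = [20]                 # the customer inputs load = 20
pruned = filter_domains(domains, constraints)
# pruned["height"] == [20, 30]: the value 40 has been filtered out
```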
Moving from products to systems is trivial. We assume for systems that: (i) a system is a set of sub-systems; (ii) a sub-system is represented by a set of descriptive attributes and one family of technical solutions (equivalent to a component family). For processes, the model is absolutely the same. The same indicators, performance, lead time and cost, are kept. All interdependencies and restrictions between system and process variables are modeled with discrete constraints. All indicator computations are supported by mixed constraints. From now on, we will speak only of the configuration of systems (and not only products) and processes.
Moving from ATO-MTO to ETO means that some engineering activities, either to design new sub-systems or to finalize the design, are necessary in order to satisfy the customer’s requirements. On the system side, moving from ATO-MTO to ETO means that the system is new and has never been designed completely because: (1) at least one of its sub-systems has to be designed in order to answer the customer’s requirements, or (2) the system is composed of a set of existing sub-systems which have never been assembled together. On the delivery process side, moving from ATO-MTO to ETO means that some engineering activities have to be carried out in order to design or finalize the design of the system; therefore: (1) new engineering activities can be added to the delivery process and tuned, or (2) the process durations (design and production activities) can be updated to take the engineering activity into account.
3 Offer Overall Confidence Definition

This section is dedicated to the definition of the offer overall confidence indicator. We propose that this new and original indicator relies on two pairs of specific indicators, one pair characterizing the system solution and the other one the delivery process. Each pair is composed of one objective indicator with a pre-defined scale and one much more subjective and supplier-dependent indicator. First, the objective indicators are presented for the system and process sides, followed by the subjective ones. This section finishes with the first aggregation mechanisms used to compute the offer overall confidence, and with how this information can help suppliers in decision making.
Objective indicators give reliable, unbiased information on system solutions and delivery processes; they characterize the readiness of the technology used for the system solution and the risk level of the delivery process. We propose to attach these new objective indicators to each sub-system of the system solution and to each activity of the delivery process. Let’s start with the system side. The offer overall confidence relies at least partially on the readiness of the technology used in the system solution. Indeed, the Technology Readiness Level, or TRL, indicates how ready a system is to be deployed. TRL is a systematic metric developed by [14,15,16] at the US National Aeronautics and Space Administration (NASA) to measure the maturity of technologies. It has been adopted by US government organizations like the US Department of Defense (DoD) and the US Department of Energy (DoE), by industry and, increasingly, internationally [17,18]. TRL is based on a scale from 1 to 9, with 9 being the most mature [19]. In our proposal, for each sub-system, we associate a TRL to each technical solution (of its family of technical solutions). Therefore, selecting a technical solution for a sub-system leads to the identification of the corresponding TRL. Let’s now move to the process side. The offer overall confidence also relies on the risks taken by the supplier in case of success, meaning that he/she has won the tender. Indeed, every business is exposed to risks all the time, and such risks can directly affect day-to-day operations, decrease revenue or increase expenses. Their impact may be serious enough for the business to fail. As far as we know, there is no established way to characterize the risk level of each activity of a delivery process. Therefore, based on the CMMI and the TRL, we propose the first version of the ARL, for Activity Risk Level, based on a nine-level scale. This nine-level scale is dedicated to the main risk of an activity and relies on the main risk’s probability of occurrence (high or low), its impacts (serious or marginal) and its treatments (whether or not action plans exist to manage the risk). In our proposal, we associate an ARL to each activity. Depending on the model and knowledge, the ARL can be modified by the selection of adequate resources and the valuation of their quantity.
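The paper leaves the exact level assignment open; a purely hypothetical encoding of the three binary risk dimensions onto the nine-level scale could look like this (the weights are invented, and only 8 of the 9 levels are reachable in this toy mapping):

```python
def arl(probability_high, impact_serious, treatment_exists):
    """Hypothetical Activity Risk Level: three binary risk factors combined
    onto a 2..9 range (9 = least risky, mirroring the TRL convention)."""
    score = 0
    score += 0 if probability_high else 4    # low probability weighs most
    score += 0 if impact_serious else 2      # marginal impact is safer
    score += 1 if treatment_exists else 0    # an action plan helps
    return score + 2

best = arl(probability_high=False, impact_serious=False, treatment_exists=True)
worst = arl(probability_high=True, impact_serious=True, treatment_exists=False)
# best == 9, worst == 2
```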
Subjective indicators reflect the supplier’s feelings about the offer and rely on his/her skill, expertise and point of view on the whole situation, as well as on his/her risk aversion. Indeed, the fact that all the technologies selected for the system solution are ready to be deployed does not guarantee that the system solution matches the customer’s expectations. Moreover, certainly, not all sub-systems need a maximum readiness level as a prerequisite for an application [15,16] and, inversely, a given readiness level is not sufficient for selecting a technical solution. Following the same reasoning on the process side, the fact that all the activities of the delivery process have their main risk level at 9, with a low probability of occurrence, marginal impact and plenty of treatments, does not guarantee that the delivery process will run correctly, without any hazard, delay or additional cost. We therefore propose the first version of the SFL, for Supplier Feeling Level, based on a three-level scale. This three-level scale corresponds to the feeling (bad, neutral or good) of the supplier about the offer. In our proposal, we associate an SFL to each sub-system of the system solution and to each activity of the delivery process.
The offer overall confidence relies at the same time on the TRL and SFL of the system side and on the ARL and SFL of the process side. Aggregation mechanisms are needed at each level of the bill-of-materials for the system solution, over the complete set of activities for the delivery process, and for the overall offer.
Let’s start with the system side. When a system is composed of several sub-systems, its readiness level depends on the TRL of each of its sub-systems and on the readiness of their integration, or IRL [19]. The readiness of the whole system, or SRL, is then computed using the TRLs and IRLs. Several SRL calculation methods have been proposed in the literature: matrix algebra [19,20,21] or a tropical algebra approach [22]. The most used SRL calculation method is the one proposed in [19], and it is the method adopted in this paper. This method leads to a five-level scale for the SRL. We propose to use the same aggregation method for the subjective indicator SFL of the system, taking into account the SFL of each sub-system as well as the SFL of their integration. Let’s continue with the process side. After determining the ARL of each activity of the delivery process, the risk level of the whole delivery process, or PRL, has to be computed. It is important to recall here that the phenomenon of integration as described in a system does not exist in the delivery process. As a first stage, we propose to use an average method based on the ARLs to compute the PRL, and likewise for the subjective indicators SFL of the activities. Let’s finish with the offer overall confidence. The offer overall confidence relies on both the system solution and the delivery process and therefore should weight them equally. Thus, as a first stage, we propose a two-step approach to compute the offer overall confidence. First, the objective indicators SRL and PRL are modulated by the subjective ones, the SFLs: a good feeling increases the indicator, a bad feeling decreases it and a neutral one has no impact. The supplier has to specify how much the indicator goes up or down. Second, the offer overall confidence is computed as the average of the modulated indicators.
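The aggregation can be sketched as follows. The SRL computation is only an outline of the matrix method of [19] (normalised [IRL] x [TRL], row averages, then a system average), the two-step confidence assumes the PRL has already been normalised to the same (0, 1] scale as the SRL, and the modulation step of 0.1 is an invented supplier setting.

```python
def srl(trl, irl):
    """Outline of the matrix-based SRL: TRLs and IRLs normalised by 9,
    each sub-system averaged over its non-zero integrations, then
    averaged over all sub-systems -> a value in (0, 1]."""
    n = len(trl)
    t = [x / 9.0 for x in trl]
    rows = []
    for i in range(n):
        terms = [(irl[i][j] / 9.0) * t[j] for j in range(n) if irl[i][j]]
        rows.append(sum(terms) / len(terms))
    return sum(rows) / n

def overall_confidence(srl_val, prl_val, sfl_system, sfl_process, step=0.1):
    """Two-step aggregation: modulate each objective indicator by its SFL
    (good: +step, bad: -step, neutral: unchanged; the step size is chosen
    by the supplier), then average the two modulated indicators."""
    delta = {"good": step, "neutral": 0.0, "bad": -step}
    modulated_srl = srl_val + delta[sfl_system]
    modulated_prl = prl_val + delta[sfl_process]
    return (modulated_srl + modulated_prl) / 2

# Two fully mature, fully integrated sub-systems (all TRL/IRL at 9)
s = srl([9, 9], [[9, 9], [9, 9]])                    # 1.0
conf = overall_confidence(s, 0.8, "neutral", "good") # approx. (1.0 + 0.9) / 2
```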
4 Conclusion

In this paper, we have proposed an original way to assess confidence in offers while bidding, from the supplier’s or bidder’s point of view. Our proposals are based on the extension of the configuration process from ATO-MTO towards ETO situations. This extension is necessary as some configurations have never occurred before and some others require systems to be specifically designed and then produced. In order to cope with ETO situations, specific values with a specific meaning have been added to the configuration models.
Then, we have proposed three new indicators to measure the degree of confidence
in the overall offer. Two of them are objective and independent of the supplier
(TRL and ARL). They characterize the readiness level of each sub-system and the
risk level of each activity and are both based on a nine-level scale. The last one is
more subjective and relies on the supplier's feelings (SFL) about the offer, on
his/her skills, expertise and point of view on the whole situation, and on
his/her risk aversion. Aggregation mechanisms have been proposed in order to
compute the SRL of the system solution, the PRL of the whole delivery process
and the SFL for both system and process. In order to compute the offer overall
confidence, objective indicators SRL and PRL are modulated by their respective
SFL. Then, the offer overall confidence is computed as the average of modulated
SRL and PRL.
With these three original indicators TRL, ARL and SFL and the proposed aggre-
gation mechanisms, a supplier is now able, while designing system solutions and
delivery processes, to evaluate one or several offers with: (i) conventional indica-
tors (cost, lead time and performance) and (ii) objective and subjective confi-
dence. Thus, the supplier can select the best one with less stress and greater con-
fidence. These proposals have been confirmed by several companies in the system
and service sectors. We now have to test them on real cases and to improve them
with more sophisticated aggregation methods. Case-Based Reasoning and experi-
ence feedback will be used to support the supplier in the valuation of the subjec-
tive indicators and in the model updates.

References

1. W.J.C. Verhagen, P. Bermell-Garcia, R.E.C. van Dijk, R. Curran - A critical review of
Knowledge-Based Engineering: An identification of research challenges - Advanced Engineering
Informatics, Volume 26, Issue 1, pages 5–15, 2012.
2. B. Chandrasekaran - Design problem solving: a task analysis - Artificial Intelligence
Magazine, Volume 11, pages 59–71, 1990.
3. J. Olhager - Strategic positioning of the order penetration point - International Journal of
Production Economics, Volume 85, Issue 3, pages 319–329, 2003.
4. A. Felfernig, L. Hotz, C. Bagley, J. Tiihonen - Knowledge-based Configuration: From
Research to Business Cases - Morgan Kaufmann, 2014.
1122 A. Sylla et al.

5. M.R. Endsley, D.G. Jones - Confidence and Uncertainty in Situation Awareness and
Decision Making (Chapter 7) - Designing for Situation Awareness, Taylor & Francis, pages
113–121, 2004.
6. S. Mittal, F. Frayman - Towards a generic model of configuration tasks - Proceedings of
IJCAI, pages 1395–1401, 1989.
7. M. Aldanondo, E. Vareilles - Configuration for mass customization: how to extend product
configuration towards requirements and process configuration - Journal of Intelligent
Manufacturing, Volume 19, Issue 5, pages 521–535, 2008.
8. T. Soininen, J. Tiihonen, T. Mannisto, R. Sulonen - Towards a general ontology of
configuration - Artificial Intelligence for Engineering Design, Analysis and Manufacturing,
Volume 12, Issue 4, pages 357–372, 1998.
9. D. Sabin and R. Weigel - Product configuration frameworks - A survey - IEEE Intelligent
Systems and their Applications, Volume 13, pages 42–49, 1998.
10. P. Pitiot, M. Aldanondo, E. Vareilles - Concurrent product configuration and process
planning: Some optimization experimental results - Computers in Industry, Volume 65, Issue 4,
pages 610–621, 2014.
11. L.L. Zhang, Q. Xu, Y. Yu, R.J. Jiao - Domain-based production configuration with
constraint satisfaction - International Journal of Production Research, Volume 50, Issue 24,
pages 7149–7166, 2012.
12. R. Bartak - Constraint satisfaction for planning and scheduling problems - Constraints,
Volume 16, Issue 3, pages 223–227, 2011.
13. P. Pitiot, M. Aldanondo, E. Vareilles, P. Gaborit, M. Djefel, S. Carbonnel - Concurrent
product configuration and process planning, towards an approach combining interactivity and
optimality - International Journal of Production Research, Volume 51, Issue 2, pages 524–541,
2013.
14. S.R. Sadin, F.P. Povinelli - The NASA Technology Push Towards Future Space Mission
Systems - Acta Astronautica, Volume 20, pages 73–77, 1989.
15. J.C. Mankins - Technology Readiness Levels, A White Paper - Office of Space Access and
Technology, NASA, 1995.
16. J.C. Mankins - Technology Readiness Assessments: A Retrospective - Acta Astronautica,
Volume 65, Issue 9-10, pages 1216–1223, 2009.
17. B.J. Sauser, D. Verma, J. Ramirez-Marquez, R. Gove - From TRL to SRL: The Concept of
Systems Readiness Levels - Conference on Systems Engineering Research, Los Angeles, CA,
April 7-8, 2006.
18. R. Magnaye, B. Sauser, P. Patanakul, D. Nowicki, W.S. Randall - Earned readiness
management for scheduling, monitoring and evaluating the development of complex product
systems - International Journal of Project Management, Volume 32, Issue 7, 2014.
19. W. Tan, J.E. Ramirez-Marquez, B. Sauser - A Probabilistic Approach to System Maturity
Assessment - Systems Engineering, Volume 14, Issue 3, pages 279–293, 2011.
20. M.A. London, T.H. Holzer, T.J. Eveleigh, S. Sarkani - Incidence matrix approach for
calculating readiness levels - Journal of Systems Science and Systems Engineering, Volume 23,
Issue 4, pages 377–403, 2014.
21. J.E. Ramirez-Marquez, B.J. Sauser - System development planning via system maturity
optimization - IEEE Transactions on Engineering Management, Volume 56, pages 533–548,
2009.
22. E. McConkie, T.A. Mazzuchi, S. Sarkani, D. Marchette - Mathematical properties of
system readiness levels - Systems Engineering, Volume 16, Issue 4, pages 391–400, 2013.
Collaborative Design and Supervision Processes
Meta-Model for Rationale Capitalization

Widad Es-Soufi1, Esma Yahia1 and Lionel Roucoules1*


1 Arts et Métiers ParisTech, CNRS, LSIS, 2 cours des Arts et Métiers, 13617 Aix-en-Provence,
France
* {Widad.ES-SOUFI, Esma.YAHIA, Lionel.ROUCOULES}@ensam.eu

Abstract Companies act today in a collaborative way, and have to master their
product design and supervision processes to remain productive and reactive to the
perpetual changes in the industrial context. To achieve this, the authors propose a
three-layer framework. In the first layer, the design process is modelled. In the
second, the traces related to the decisional process are captured. In the third, both
the collected traces and the design context model are used to support decision-
making. In this paper, the authors address the first two layers by proposing a meta-
model that allows one to capture the process's decisional knowledge. The proposal
is presented and then illustrated in a case study.

Keywords: collaborative design and supervision processes, process modelling,


traceability, rationale capitalization, decision-making.

1 Introduction and research background

The research reported in this paper is interested in the product design and supervi-
sion processes; a brief definition of each is provided below.
The product design is a process in which an output (i.e. product) of a high
added value is produced. It consists of modelling activities that use different re-
sources in order to transform an input into an output that respects the imposed
constraints. The product design also consists of decisional activities that aim at
choosing one or several solutions, among all the design alternatives, based on
some performance criteria. The product design is a complex decision-making
process. Indeed, the decisions are made by several actors and have a major impact
on the final product. In [1], the authors have shown that 85% of the decisions
made in this phase impact more than 80% of the product's final cost.
The supervision is a decisional activity carried out by a supervisor to monitor
and control the progress of an industrial process; it generates an action depending
on both the supervision result and the set-point. The
© Springer International Publishing AG 2017 1123


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_112

supervision is also a complex decision-making process for two reasons. First, the
supervisor that monitors an industrial process should make the right decision in
the shortest time when an alarm is received. Second, the decision that is made has
an impact on the supervised industrial process.
In order to master these complex processes, the authors propose a three-layer
framework [2]. The first layer uses a process meta-model, which captures the
knowledge of the design and supervision processes, in order to model them and
thus help companies understand them. The second layer uses a trace meta-model
to capture the design and supervision rationale and thus facilitate the retrieval of
decisions, one of the main causes of time loss. The third layer analyses the cap-
tured knowledge and proposes the most suitable design or supervision process to
be followed according to the industrial context. In this paper, the authors address
the first two layers by proposing a meta-model that models and captures the
design and supervision knowledge through the 6Ws concepts traceability [3].
The remainder of this paper is organized as follows. In Section 2, related work
is presented. In Section 3, related work is discussed with respect to the aspects
bounding our research context. In Section 4, the proposed meta-model is presented
and its added value is discussed. In Section 5, the proposal is illustrated in a design
example. Section 6 presents future work and concludes the paper.

2 Related work

Companies are recognizing process modelling as a high priority, as there is
an increasing need to master, understand and improve their processes. In the con-
text of collaborative engineering, many research efforts have addressed process
meta-modelling. In [4], the authors introduce the PPO model (Process, Product,
Organization), which is partly based on the GRAI1 methodology. PPO describes
the relation between the triplet: Product data, Processes in which data transit and
Organizations where these processes run. In [5], the authors focus their research
on process modelling and knowledge traceability to manage conflicts. In [6], the
authors establish a conceptual data model to evaluate and track design decisions
in a mechatronic environment. In [7], the authors identify knowledge constructs
for design rationale in order to manage changes. In [8], the authors propose an
FBS (Function, Behaviour and Structure) based model that allows one to model
enterprise objects according to four views: process view, product view, resource
view and the external effect view. The meta-models of some modelling languages,
such as BPMN2, UML3 and IDEF04, also capture some of the process knowledge.

1 https://en.wikipedia.org/wiki/GRAI_method
2 http://www.bpmn.org/
3 https://en.wikipedia.org/wiki/Unified_Modeling_Language
4 https://en.wikipedia.org/wiki/IDEF0

3 Discussion of related work

In this section, the studied meta-models are compared according to the following
three points of view bounding our context. First, the modelling capability that is
the most important point of view; it concerns the ability of the meta-model to ex-
press the knowledge that we want to capture. The six considered criteria are the
6Ws concepts themselves, described in our context as follows:
- Who: the ability to model the actor that performs the activity, namely its
name, role, skills, etc. The actor is considered as a human resource.
- What: the ability to describe the product data (i.e. the input and output
data) needed to execute the activity. In the context of product design, this
criterion refers to both the input and the output solution spaces, whereas in
the supervision context it refers to the state of the supervised industrial
process before and after making the decision.
- When: the planned and real start times as well as the planned and real end
times of the execution of the activity.
- How: the set of resources (material, software, human, etc.) used to execute
the activity.
- Where: the activity in question, among the process activities.
- Why: the justification of all the choices made during the execution of the
activity.
Second, the representative point of view: it concerns the external view of the
meta-model and describes its ability to be both simple and expressive. The au-
thors define five criteria as follows:
- Simplicity: describes the meta-model's level of complexity. A simple
meta-model is more practical since it is easily understood and efficiently
alterable if any change occurs in the organization. Simplicity can be char-
acterized by the number of concepts describing the meta-model, as well as
the quality of their graphical signification [9].
- Richness: describes the ability of the meta-model to represent the knowl-
edge inside the organization. It refers to the number of concepts and their
power of expression [9]. A meta-model is rich if it is able to be expanded.
- Norm: introduces the syntax and the semantics characterizing, respectively,
the grammar and the mathematical meaning of the meta-model's concepts.
A normed meta-model is easily understood and verified.
- Notation: describes how the meta-model's concepts are represented
(graphically, textually, in the form of mathematical equations, etc.).
- Software support: describes whether a tool supporting the meta-model ex-
ists.
Third, the methodological point of view: this aspect concerns the systematic
approach of the meta-model. The authors identify three criteria as follows:

- Granularity: the process's level of abstraction, also called decomposition.
We need a meta-model that permits a full architectural description, i.e. the
total decomposition of a process into a set of sub-processes and activities.
- Consistence: both the meta-model and all its concepts should make sense;
redundant or irrational concepts have to be eliminated. This criterion is de-
fined in our context as the capacity of the meta-model to describe a spe-
cific problem by including the needed concepts without preventing it from
being expanded and thus rich.
- Instantiation: the implementation level of the model, used to assess
whether a software tool supports the instantiation of the meta-model.
All the meta-models presented in Section 2 have a norm and allow one to graphi-
cally express their concepts. The PPO meta-model ([4]) has a fairly good model-
ling capability since it completely models the Who, How, and Where concepts. In
addition, it is fairly simple and rich, consistent and allows us a total granularity.
The meta-model of Ouertani et al. ([5]) has a good modelling capability since it
completely models the Who, When, How, Where and Why concepts. It is simple,
rich, consistent, instantiable and allows us a total granularity. The meta-model of
Couturier et al. ([6]) has a limited modelling capability since it models just the
Who concept. It is fairly simple and rich, inconsistent, instantiable and does not al-
low us a total granularity. The meta-model of Moones et al. ([7]) has a very good
modelling capability since it models all the 6Ws concepts besides being simple,
rich, consistent, instantiable and allowing us a total granularity. The FBS-PPRE
meta-model ([8]) has a limited modelling capability since it allows us to model
just the What and Where concepts. It is not simple but fairly rich, consistent and
allows one a total granularity. The BPMN and UML meta-models have a fairly
good modelling capability since they model the Who, What, How and Where con-
cepts. They are rich and fairly consistent. However, they are not simple. The
IDEF0 meta-model has a good modelling capability since it completely models
the Who, What, How, Where concepts and partially models the Why concept.
However, it doesn’t allow a total granularity.
The studied meta-models do not meet the totality of our requirements since
they were proposed under different contexts. It is, therefore, necessary to extend
some of them to construct a meta-model that perfectly matches our requirements.
The authors choose to extend the IDEF0 and BPMN meta-models by specifying
their concepts (for example, the IDEF0 resources are extended to human, hard-
ware, software and documentary resources, and the BPMN input is extended to
input, constraints and resources). The authors also extend the meta-models identi-
fied in [4], [5] and [7], since they model most of the 6Ws concepts besides being
simple, rich and consistent, and allow us to express the total granularity of a process.

4 Proposal overview

The meta-model presented in Fig. 1 is the proposal of this research. It captures the
design and supervision knowledge, namely the decisions that were taken and the
choices that were rejected while supervising a process or designing a product.
The different use cases that may be encountered when creating a process
within the context of collaborative design and supervision are identified. First, the
user starts by creating a process (cf. Process class in Fig. 1) and providing the re-
lated information including the name and the objective of the process as well as
the name of the user that is creating it. Second, the user creates the different activi-
ties (cf. Activity class) that may be either modelling, decision or supervision ac-
tivities. The user describes the activity by providing its name, description, type
(i.e. modelling, decision or supervision), state (i.e. available to be executed, in
progress or validated), real start and end time, event (i.e. start if the activity is the
first to be executed, end if it concludes the process or Null otherwise) and the suc-
cessor gateway which refers to the nature of the link between the current activity
and the one that will follow [10, Sec. 8.3.9].
An activity can be either planned by the engineer (cf. PlannedActivity class) or
unplanned, i.e. not defined in the process model (cf. UnplannedActivity class). In-
deed, sometimes during the execution of the process, some unplanned activities
need to be performed when an opportunity or an obstacle comes along. For exam-
ple, it is impossible to execute the machining process if there is not enough raw
material; the unplanned task here is to execute the supply activity. If the activity
is already planned, the user should identify both the time in which the execution is
supposed to start and the time in which it is supposed to end. Otherwise, if the ac-
tivity is unplanned, the user should explain the reason behind its occurrence.
An activity may have an input and should produce an output, both of them are
called product data (cf. ActivityInputOutput class). The objective here is mainly
to retrieve the product data, no matter how they are structured. Indeed, we propose
to store the input and output data in a product database
in a way that they can, at any time, be accessible and exploited by the running
process. In the case where the activity is re-executed, the stored product data file
will be incremented automatically and saved in the product database.
During its execution, an activity is supported by human, software, documen-
tary and/or hardware resources (cf. Resource class). The user describes the context
of each used resource. For example, the machine that is used during the execution
of an activity must be well described in terms of its availability and trust factor.
The latter is important for understanding how well the machine functions.
machine. An activity is constrained by some controls (cf. ActivityControl class).
They could be internal (cf. InternalControl class), like the constraints imposed
by anterior activities that belong to the same process. Controls could also be ex-
ternal (cf. ExternalControl class) like the specification imposed by the customer or
the set-point related to the supervision activity. Another type of controls concerns

the decision activity (cf. DecisionActivityControl class); it is based on the per-


formance indicator characterized by its name, type and priority.
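As an illustration of the classes just described, the meta-model's core concepts can be sketched as plain data structures; attribute names and types below are assumptions read off the description above, not the actual definitions of Fig. 1:

```python
# Illustrative sketch of the main meta-model classes described in Section 4;
# attribute names and types are assumptions, not the actual Fig. 1 definitions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Resource:                      # human, software, documentary or hardware
    name: str
    kind: str                        # e.g. "human", "software", "hardware"
    availability: bool = True
    trust_factor: float = 1.0        # confidence in the resource's functioning

@dataclass
class ActivityControl:               # internal, external or decision control
    kind: str                        # "internal" | "external" | "decision"
    description: str = ""
    indicator_name: Optional[str] = None      # used when kind == "decision"
    indicator_type: Optional[str] = None
    indicator_priority: Optional[int] = None

@dataclass
class ActivityInputOutput:           # product data stored in a product database
    name: str
    role: str                        # "input" | "output"

@dataclass
class Activity:
    name: str
    description: str = ""
    type: str = "modelling"          # "modelling" | "decision" | "supervision"
    state: str = "available"         # "available" | "in progress" | "validated"
    event: Optional[str] = None      # "start", "end" or None
    successor_gateway: Optional[str] = None
    planned: bool = True
    unplanned_reason: Optional[str] = None    # filled when planned is False
    data: List[ActivityInputOutput] = field(default_factory=list)
    resources: List[Resource] = field(default_factory=list)
    controls: List[ActivityControl] = field(default_factory=list)

@dataclass
class Process:
    name: str
    objective: str
    created_by: str
    activities: List[Activity] = field(default_factory=list)
```

A process instance is then built by filling these records in the order of the use cases above: process first, then its activities with their data, resources and controls.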

Fig. 1. The proposed meta-model for modelling and tracing the design and supervision processes

The proposed meta-model is implemented in Eclipse5 and allows one to model and
trace the design and supervision knowledge. Indeed, the authors consider it im-
portant to trace all the knowledge constructs identified in Fig. 1. Therefore, the
proposed meta-model is instantiated in Eclipse to create real-world models and
generate an XMI (XML Metadata Interchange) trace that can be stored in a proc-
ess trace base. The authors assume that the proposal allows companies to under-
stand their design and supervision processes through process modelling. They
also assume that, through knowledge traceability, the proposal helps companies to
save the time that they usually lose when retrieving decisional information.
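As a rough illustration of what such a trace export involves (the paper's actual tooling is Eclipse/EMF generating XMI; the tag and attribute names below are invented), a minimal XML trace can be produced as follows:

```python
# Minimal illustration of serialising one traced activity to XML; the real
# work uses Eclipse/EMF and XMI, so all tag names here are assumptions.
import xml.etree.ElementTree as ET

trace = ET.Element("processTrace", name="DemoProcess")
act = ET.SubElement(trace, "activity", name="DescribeFunctions",
                    type="modelling", state="validated")
ET.SubElement(act, "who", role="design engineer").text = "Engineer A"
ET.SubElement(act, "why").text = "Functions drive the later technology choices"

xml_text = ET.tostring(trace, encoding="unicode")
print(xml_text)
```

Each run of an activity would append such an element to the trace, which can then be stored in the process trace base and queried later.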

5 Case study: collaborative design of an electric torch

The considered design process contains eleven interdependent activities and in-
volves many engineers working together to design an electric torch. Engineers are

5 https://eclipse.org/

asked to: (1) Describe how the electric torch may be used by highlighting its func-
tions. (2) Study in-depth the product functions which are realized through a physi-
cal principle by a specific technology. (3) Describe for each function its energetic
properties. (4) Provide an approach to find technology solutions related to the
functions. (5) Identify and describe the products used in the design. Finally, (6)
Give a first CAD model of the product and progressively refine it.
The proposed meta-model (Fig. 1) is instantiated to create the electric torch de-
sign trace (Fig. 2). The latter captures all the design knowledge, including the
process context, the process activities (Where), the engineers that were performing
these activities (Who), the date when they performed them (When), the rationale
behind their choices (Why), the resources that they used to execute these activities
(How), and the results of the execution of these activities (What).
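A single traced activity of this process, with its 6Ws fields, might look like the following sketch; every concrete value (names, dates, resources) is hypothetical:

```python
# Hypothetical 6Ws record for one activity of the electric torch design
# process; field names follow the Who/What/When/How/Where/Why concepts,
# and all concrete values are invented for illustration.
trace_entry = {
    "where": "Activity 1: describe how the torch may be used (functions)",
    "who":   {"name": "Engineer A", "role": "functional designer"},
    "when":  {"planned_start": "2016-03-01T09:00",
              "real_start":    "2016-03-01T09:20",
              "planned_end":   "2016-03-01T12:00",
              "real_end":      "2016-03-01T11:45"},
    "how":   ["functional-analysis software", "company design guidelines"],
    "what":  {"input": "customer requirements", "output": "function list"},
    "why":   "Lighting and portability ranked first by the customer",
}

# A later query can then retrieve, for example, the rationale of the activity:
print(trace_entry["why"])
```

Collecting one such record per executed activity is what makes the later retrieval of decisional information direct instead of requiring a search through project documents.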

Fig. 2. Part of the generated XMI trace



6 Conclusion

This paper proposes a collaborative design process meta-model whose objective is
to model and trace the design and supervision rationale. This helps companies
manage their processes so as to be more productive and reactive to changes. In-
deed, the proposed meta-model helps structure the enterprise's processes, which
makes them easier to understand. It also helps document the decisional process
and memorize the rejected choices. Future work consists in learning from the
process traces generated by the proposed meta-model to support engineers in their
decision-making processes.

References

[1] C. Berliner and J. A. Brimson, Cost Management for Today’s Advanced Manufacturing:
The CAM-I Conceptual Design. Harvard Business School Press, 1988.
[2] L. Roucoules, E. Yahia, W. Es-Soufi, and S. Tichkiewitch, “Engineering design memory
for design rationale and change management toward innovation,” CIRP Annals - Manu-
facturing Technology, 2016.
[3] J. A. Zachman, “A Framework for Information Systems Architecture,” IBM Syst. J., vol.
26, no. 3, pp. 276–292, Sep. 1987.
[4] P. Nowak, B. Rose, L. Saint-Marc, M. Callot, B. Eynard, L. Gzara-Yesilbas, and M.
Lombard, “Towards a design process model enabling the integration of product, process
and organization,” in 5th International Conference on Integrated Design and Manufac-
turing in Mechanical Engineering, IDMME, 2004, pp. 5–7.
[5] M. Ouertani, L. Gzara-Yesilbas, and G. Ris, “A Process Traceability Methodology to
Support Conflict Management,” in Proceedings of the 10th International Conference on
CSCW in Design, CSCWD 2006, May 3-5, 2006, pp. 471–476.
[6] P. Couturier, M. Lô, A. Imoussaten, V. Chapurlat, and J. Montmain, “Tracking the con-
sequences of design decisions in mechatronic Systems Engineering,” Mechatronics, vol.
24, no. 7, pp. 763 – 774, 2014.
[7] E. Moones, E. Yahia, and L. Roucoules, “Design process and trace modelling for design
rationale capture,” in Joint Conference on Mechanical, Design Engineering & Advanced
Manufacturing, 2014.
[8] M. Labrousse and A. Bernard, “FBS-PPRE, an enterprise knowledge lifecycle model,” in
Methods and tools for effective knowledge life-cycle-management, Springer, 2008, pp.
285–305.
[9] F. Daoudi and S. Nurcan, “A benchmarking framework for methods to design flexible
business processes,” Software Process: Improvement and Practice, vol. 12, no. 1, pp. 51–
63, 2007.
[10] OMG, “Business Process Model and Notation (BPMN), Version 2.0,” Jan. 2011.
Design Archetype of Gears for Knowledge
Based Engineering

Mariele Peroni1, Alberto Vergnano1*, Francesco Leali1, Andrea Brentegani1


1 Department of Engineering Enzo Ferrari, University of Modena and Reggio Emilia, Via
Pietro Vivarelli 10, Modena 41125, Italy
* Corresponding author. Tel.: +39-059-205-6278; fax: +39-059-205-6126. E-mail address:
alberto.vergnano@unimore.it

Abstract An engineering design process consists of a sequence of creative, inno-


vative and routine design tasks. Routine tasks address well-known procedures and
add limited value to the technical improvement of a product, even if they may re-
quire a lot of work. In order to focus designers' work on added-value tasks, the pre-
sent work aims at supporting a routine task with a Design Archetype (DA). A DA
captures, stores and reuses the design knowledge with a tool embedded in a CAD
software. The DA algorithms guide the designer in selecting the most effective de-
sign concept to deliver the project requirements and then in embodying the con-
cept by configuring a CAD model. Finally, a case study on the definition of a DA
tool for gear design demonstrates the effectiveness of the DA tool.

Keywords: Design Archetype, design knowledge, Computer Aided Design, en-


gineering design, design automation

1 Introduction

An engineering design process is carefully planned with a structure of tasks in or-


der to give more certainty of achieving the given requirements. A number of alter-
native task structures might be available, which makes it difficult to define general
rules [1]. Research on Knowledge Based Engineering (KBE) classifies design
tasks as creative, innovative and routine, [2]. However, in mature technology do-
mains, effective task structures are known and many more tasks become routine.
The design variables, their variation ranges and the knowledge necessary for their
definition are all directly instantiable from existing technical solutions. These rou-
tine tasks can be aided or even automated by KBE applications, [3], which may
generally unload designers and focus their work on added-value tasks.
Design experience is recognized as fundamental in enabling suitable design
choices, [4]. KBE can be regarded as a transfer process of design experience from

© Springer International Publishing AG 2017 1131


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_113

senior designers and the documentation of past projects to new or future design
teams. First, the necessary knowledge about the design of products, processes and
manufacturing resources has to be captured and structured. Then, KBE requires
the definition of a model with problem-solving capabilities which aids or even au-
tomates the design choices in the domain of concern. Finally, the model is imple-
mented in a design tool in order to reuse the knowledge in future projects, [5].
KBE applications as design automation tools are conceived to reduce engi-
neering costs, [6]. More advanced modeling and simulation of technical solutions
can be supported by other tools driven by knowledge of physical phenomena, [7].
On the other hand, it is difficult to fully automate the complete design process for
complex subsystems, as for the transmission gears in question, when multiple de-
cisions must be made on solutions from concept to detail design. Researchers have
developed different modeling frameworks to capitalize knowledge for more com-
plex subsystem design, [5,8,9]. However, KBE still finds difficulties in achieving
wide adoption in industry, mainly due to shortcomings in methodological support,
in the transparency and traceability of knowledge and in the standardization of
models, as demonstrated by a recent review, [10].
In the present work, we face these challenges by introducing the Design Arche-
type (DA) tool for knowledge capture, storage and reuse through the company
CAD software, with rules formalized in a user-friendly software tool, [11]. The
DA idea is taken from the ontology, which defines the concepts and relationships
that are used for providing the functionality of a technical solution, [5]. A DA is
conceived as the formalization of the ontology in knowledge rules implemented in
a design tool. Implementing the knowledge rules within a user-friendly software
tool linked to the CAD functionalities fosters the accessibility and usability of the
KBE application, [12].
The paper is organized as follows. After a definition of the concepts of DAs,
we introduce a method for DA development. The next section reports a case study
application of the method on the definition of a DA for planetary gearsets in tractor
transmissions. Finally, we discuss the KBE implementation and draw the conclu-
sions.

2 Development of a Design Archetype

2.1 Design Archetype tool

A DA is a design tool to aid an engineer in the selection of the most suitable con-
cept to address the project requirements and in the embodiment design into a CAD
model. A DA stores the knowledge about the subsystem design in its algorithms
within a software tool linked with the CAD environment, [13,14]. A DA is con-

ceived to be used broadly in design departments and not as the prerogative of ex-
pert developers alone. Since CAD software can be driven by office tools, the bar-
rier to using the DA is lowered by adopting spreadsheet software, capable of for-
malizing the knowledge but also already familiar to any current or future profes-
sional hired by the company.
The DAs are organized in design repositories, in order to effectively keep within
the company the value of the knowledge gained through experience, [15]. The ar-
chitecture of a DA is organized in two layers, [11]. The design requirements are
introduced into the top level, which rules the selection of the best candidate con-
cept. Then, the DA updates the parameters of the concept model and produces a
first-attempt CAD model. The concept dimensioning and verification are traceable
thanks to the technical documentation provided by the DA. The proposed method
to develop a DA follows a systematic approach in order to be general and reusable
across different engineering systems, [16].

2.2 Design task clarification

The first phase of DA development is the retrieval of the necessary infor-
mation from the company database of engineering material. Each subsystem vari-
ant designed in the company must be analyzed with a systematic workflow:
1. make a checklist of the fundamental requirements for the subsystem:
   - layout constraints from the kinematic schemes of the whole system;
   - main parameters that drive the subsystem dimensioning, from design
     statements and requirement lists;
   - other rules and constraints from international standards, company proce-
     dures and best practices;
2. define the possible architectures of working principles:
   - structures of functions and subfunctions from design datasheets;
   - architectures of working principles organized to fulfil the function struc-
     ture, from project reports;
   - review of the working principles with a Failure Modes and Effects Anal-
     ysis (FMEA);
3. describe the features of the working principles with mathematical models:
   - design criteria for the working principles from designer knowledge and
     reports;
   - theories and formulae, boundary conditions, parameter ranges, simplifi-
     cation hypotheses and reference results;
   - fundamental features of the system from 3D CAD models and 2D draw-
     ings;
4. review the gathered knowledge:
   - concept refinement through interviews with senior engineers;
   - improvement of concepts in light of research and development.
1134 M. Peroni et al.

2.3 Top layer organization of the Design Archetype

The top layer of the DA must organize the possible architectures of working prin-
ciples in order to cover the whole range of input requirements. Each architecture is
linked to some subranges, with possible overlaps between the validity domains.
The workflow consists of three phases:
1. analysis of the previous checklist and mathematical models, in order to group the requirements into a few distinctive parameters to be handled by an algorithm for the selection or rejection of the concepts throughout the range;
2. evaluation of these distinctive parameters, in order to define the subranges of validity for the parameters of the working principles;
3. definition of an algorithm that uses the distinctive parameters to address the input requirements to the different architectures of working principles.
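As a minimal sketch, the top-layer selection logic amounts to a range lookup over a distinctive parameter; the architecture names and torque subranges below are hypothetical placeholders, not values from the case study.

```python
# Sketch of the DA top layer: a distinctive parameter (here, an output
# torque requirement) is matched against the validity subranges of the
# working-principle architectures. Names and ranges are invented.

ARCHITECTURES = [
    # (architecture name, validity subrange of the distinctive parameter, Nm)
    ("simple planetary, 3 planets", (500.0, 4000.0)),
    ("simple planetary, 4 planets", (3000.0, 9000.0)),
    ("compound planetary", (8000.0, 20000.0)),
]

def select_architectures(torque_out):
    """Return every architecture whose subrange covers the requirement;
    overlaps between subranges are allowed, as noted above."""
    return [name for name, (lo, hi) in ARCHITECTURES if lo <= torque_out <= hi]

print(select_architectures(3500.0))   # requirement in an overlap region
```

A requirement of 3500 Nm falls in the overlap of the first two subranges, so both candidate architectures are returned for further evaluation.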

2.4 Lower level models for the Design Archetype

At the lower level, each architecture of working principles is embodied by a 3D
CAD model which is the generalization of a verified design solution. The parame-
ters of a model must be adjusted for different input requirements, as linked by the
previous distinctive parameters. An effective rule for the parameters update must
be defined. Three possibilities are discussed here:
- pantograph construction: all the dimensions of the CAD model are simply scaled;
- similarity: the parameters of a model are selectively scaled while keeping one physical relationship constant, such as the kinematic, Hooke, Newton, Froude, Reynolds or Biot relationships [16];
- value interpolation: the parameter values are taken from two or more design variants and used as data points for an interpolation that computes the actual design values.

The pantograph construction is the simplest rule but it rarely works, because
the physical behaviors are ruled by different powers of the physical quantities. The
similarity criterion works quite well, but only if an invariant relationship is assumed as the distinctive parameter driving the selection of the working principle architectures. The distinctive parameters themselves can also be used as rules to scale the models: even if they are not invariant like the similarity relationships, they likewise represent the main system performances by grouping different design variables. The parameters
that cannot be regulated by mathematical laws must follow the value interpolation
criterion. For instance, this criterion is conveniently used for tolerances, surface
finishing, chemical and heat treatments, technology limitations due to cast walls
thickness and tools geometries, predominant company or international standards.
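The pantograph and value-interpolation rules can be sketched in a few lines; the variant data points below are invented for illustration, and simple linear interpolation stands in for whatever interpolation the actual tool implements.

```python
from bisect import bisect_left

def pantograph(params, scale):
    """Pantograph construction: every dimension is simply scaled."""
    return {name: value * scale for name, value in params.items()}

def interpolate_parameter(power_req, variant_powers, variant_values):
    """Value interpolation: linear interpolation between the two
    neighbouring design variants (clamped at the range ends)."""
    if power_req <= variant_powers[0]:
        return variant_values[0]
    if power_req >= variant_powers[-1]:
        return variant_values[-1]
    i = bisect_left(variant_powers, power_req)
    x0, x1 = variant_powers[i - 1], variant_powers[i]
    y0, y1 = variant_values[i - 1], variant_values[i]
    return y0 + (y1 - y0) * (power_req - x0) / (x1 - x0)

base = {"d_p": 120.0, "b": 40.0}            # hypothetical gear dimensions, mm
print(pantograph(base, 1.5))                # {'d_p': 180.0, 'b': 60.0}

powers = [50.0, 100.0, 200.0]               # transmitted power of variants, HP
facewidths = [30.0, 40.0, 55.0]             # facewidth b of those variants, mm
print(interpolate_parameter(150.0, powers, facewidths))  # 47.5
```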

The DA must automatically produce the CAD model for the embodiment of the
principles architecture of the subsystem. The parameters of the CAD models are
linked to the values computed in the cells of a spreadsheet software, according to
the first layer of the DA tool. The DA also provides design guidelines, explaining in detail the concept and embodiment design phases. If required by the specific design process, the DA can also provide conventional verification criteria and possibly generate models for behavioral simulations.

3 Design Archetype of planetary gearset of transmission drives

3.1 Design tasks for the final drive system

The planetary final drive delivers the fundamental function of transferring torque
to the tractor’s wheel, reducing the rotation speed of the wheel axes. The final
drives are of great importance and must deliver high performance: strength, fatigue resistance, low noise and vibration. The final drives currently manufactured are
analyzed following the systematic workflow introduced in Sec. 2.2. The information necessary to define the top level and the concept models of the DA is retrieved from the company PLM environment. The requirements are identified and
linked to the subfunctions of the system. The requirements of the planetary final
drives are classified as follows:
1. Geometry: maximum dimensions, correct meshing of gears;
2. Kinematics: reduction ratio to perform the tractor ground speed;
3. Loads: Surface Load Capacity, Bending Load Capacity;
4. Duration: fatigue strength, wear resistance.

The system is investigated as a structure that connects all the subfunctions through flows of material, energy and information, as shown in Figure 1. This
schematization helps to define the mathematical relations driving the parameters
of the design processes. First, the transmission ratio is defined for each gear mesh-
ing as:

τ_i = (Z_s + Z_r) / Z_r (1)

Fig. 1. Function structure of a gearset.

where Zs and Zr are the numbers of teeth of the sun and the ring respectively. The
second and third parameters, the Safety Factors for the Contact and Bending
stresses are defined for each gear meshing (i.e. sun/planet, planet/ring) as the ratio
between the allowable and design stresses:

SF_C = σ_C,all / σ_C,des = (σ_C,lim · Z_1) / [ Y_1 · K_1 · (F_t / (d_p · b)) · ((τ_i + 1) / τ_i) ] (2)

SF_B = σ_B,all / σ_B,des = (σ_B,lim · Z_2) / [ Y_2 · K_2 · F_t / (b · m_n) ] (3)

where σC,lim and σB,lim are the limit contact and bending stresses of the material, Z1,
Y1 and K1 are geometry and load factors defined in the ISO 6336-2 standard, Z2, Y2
and K2 are geometry and load factors from the ISO 6336-3, Ft is the transverse
load tangential to the reference cylinder of the gears with diameter dp, b is the
facewidth and mn is the normal module. Except τi, which is a design requirement,
all the parameters of (2) and (3) are computed for the design variants in order to
define target values for dimensioning.
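Equations (1)–(3) translate directly into code; the numeric factor values in the usage lines below are arbitrary placeholders, not the ISO 6336 factors or the company's reference values.

```python
def transmission_ratio(Zs, Zr):
    """Eq. (1): transmission ratio from the sun and ring teeth numbers."""
    return (Zs + Zr) / Zr

def sf_contact(sigma_C_lim, Z1, Y1, K1, Ft, dp, b, tau_i):
    """Eq. (2): contact Safety Factor, allowable over design stress."""
    sigma_C_des = Y1 * K1 * (Ft / (dp * b)) * ((tau_i + 1.0) / tau_i)
    return sigma_C_lim * Z1 / sigma_C_des

def sf_bending(sigma_B_lim, Z2, Y2, K2, Ft, b, mn):
    """Eq. (3): bending Safety Factor, allowable over design stress."""
    sigma_B_des = Y2 * K2 * Ft / (b * mn)
    return sigma_B_lim * Z2 / sigma_B_des

tau = transmission_ratio(20, 80)                        # 1.25, illustrative
print(sf_contact(1500.0, 1.0, 1.0, 1.0, 10000.0, 100.0, 40.0, tau))
print(sf_bending(400.0, 1.0, 1.0, 1.0, 10000.0, 40.0, 4.0))
```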
After a first-attempt definition, the reference values for the parameters are reviewed through interviews with senior designers. For example, Figure 2a shows the reference value for the contact Safety Factor as a dashed line, compared with the dots of the design values. The values are normalized by the reference value, due to a non-disclosure agreement with the company. Finally, many other parameters influence the system performance, such as material, surface finish, heat treatments, quality and lubrication. Typical values are assigned together with senior designers.

3.2 Concept layout of the planetary final drive Design Archetype

The design of the final drive can generate countless solution variants for the same
initial requirements. The problem is simplified by fixing simple rules for the pa-
rameters that strongly influence the sizing process, like mn, the width/module ratio
l and the pressure angle α. The variation domain of the parameters is defined by analyzing all the design variants. For example, the variation of mn is reported in Figure 2b as a function of the power to be transmitted; mn is normalized by a characteristic value selected for gears transmitting a power of 100 HP.
Other best practices regarding the gear teeth numbers must be respected, due to geometric and kinematic limits. First, a gearset can be assembled only if the following relations between the gear teeth numbers are fulfilled:

Fig. 2. Design parameters for the final drives: a) Contact Safety Factor and b) gear modules.

Z_p = (Z_r − Z_s) / 2
(Z_s + Z_r) / N_planet = integer (4)

where Zp is the number of teeth of the planets and Nplanet is the number of planets in the gearset. A planetary gearset is shaped by complex geometric and kinematic features that generate an atypical vibrational behavior. The excitation frequencies can be partially neutralized if the planets have an odd number of teeth. The excitation of vibration by the teeth action in a simple epicyclic gear system can be neutralized by a suitable choice of the teeth numbers of the sun and the ring:

Z_s / N_planet ≠ integer
Z_r / N_planet ≠ integer (5)

Another important aspect that limits the number of variants is the interference
between the gears. In fact, to avoid the interference, it has been demonstrated that
the teeth number of the sun must be higher than the minimum value:

Z_S,min = 2 / ( √(τ_sp² + (1 + 2·τ_sp)·sin²α) − τ_sp ) (6)

where τsp is the transmission ratio between the sun and the planet and α is the pressure angle. The design process is further standardized by choosing, for each subsystem, the working principle that best meets the requirements in terms of mechanics, costs and company experience. In particular, some constraints are
introduced about the number of planets, the type of planetary gearset (simple or
compound), the architecture of pins (cantilever or simply supported), the basic ar-
chitecture of the carrier.
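A feasibility filter over candidate teeth numbers, combining Eqs. (4)–(6), can be sketched as follows; the definition of τ_sp as Z_p/Z_s is an assumption, and the example teeth numbers are illustrative only.

```python
import math

def min_sun_teeth(tau_sp, alpha):
    """Eq. (6): minimum sun teeth to avoid interference (alpha in rad)."""
    return 2.0 / (math.sqrt(tau_sp**2 + (1.0 + 2.0 * tau_sp) * math.sin(alpha)**2)
                  - tau_sp)

def feasible(Zs, Zr, n_planet, alpha):
    """Combine the assembly (Eq. 4), vibration (Eq. 5) and interference
    (Eq. 6) conditions into a single check."""
    if (Zr - Zs) % 2 != 0:            # Eq. (4): Zp = (Zr - Zs)/2 must be integer
        return False
    if (Zs + Zr) % n_planet != 0:     # Eq. (4): assembly condition
        return False
    if Zs % n_planet == 0 or Zr % n_planet == 0:
        return False                  # Eq. (5): avoid synchronous excitation
    Zp = (Zr - Zs) // 2
    tau_sp = Zp / Zs                  # assumed sun/planet meshing ratio
    return Zs >= min_sun_teeth(tau_sp, alpha)

alpha = math.radians(20.0)
print(feasible(17, 79, 3, alpha))    # all three conditions satisfied
print(feasible(18, 78, 3, alpha))    # rejected: Zs divisible by the planet count
```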

3.3 Integration of CAD models with knowledge stored in spreadsheets

The DA algorithms are stored in spreadsheets which process the input requirements, such as kinematic conditions (input power, torque and speed), minimum safety factor and life, transmission ratio, interface dimensions (center distance), material and lubricant oil features, and compute the dimensions of all the components, such as gear internal and external diameters, gear facewidths and teeth geometry.
The spreadsheets are embedded in the parametric CAD models so that the de-
sign process can be completely managed through the CAD software. The model
generation process consists in opening the template model as defined by the con-
cept layout, opening the embedded spreadsheets, updating the required inputs and
reference parameters, launching the calculations and regenerating the model that
will respect all the physical laws described in Sec. 3.1 as well as the limitations introduced in Sec. 3.2. The process relies on user-friendly software; the result is a simple and light model, easy to manage since it is made of few features, yet compliant with a very complex theory. As a consequence, it can be used for concept design and feasibility studies in order to define the best configuration.
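The generation workflow can be summarized as a plain function pipeline; the spreadsheet and CAD operations below are stand-ins, since the actual office-tool and CAD integration is product-specific.

```python
def run_calculations(sheet):
    """Stand-in for the spreadsheet formulas: here only the ring pitch
    diameter d = m_n * Z_r is computed."""
    return {"d_ring": sheet["m_n"] * sheet["Z_r"]}

def regenerate(template, dims):
    """Stand-in for the CAD regeneration: attach the computed dimensions."""
    model = dict(template)
    model["dimensions"] = dims
    return model

def generate_model(requirements, template, reference_params):
    sheet = dict(template["embedded_spreadsheet"])  # open the embedded sheet
    sheet.update(requirements)                      # update the required inputs
    sheet.update(reference_params)                  # update reference parameters
    dims = run_calculations(sheet)                  # launch the calculations
    return regenerate(template, dims)               # regenerate the model

template = {"embedded_spreadsheet": {"m_n": 4.0, "Z_r": 80}}
print(generate_model({"Z_r": 90}, template, {})["dimensions"])  # {'d_ring': 360.0}
```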

Fig. 3. Workflow of the design process for the generation of the model.

This model will be used as the starting point for the following design phase, in which the component dimensions will be optimized with more powerful and specific tools, and more details and auxiliary components will be introduced.

4 Discussion and conclusions

The DA approach is evaluated in the definition of a design tool for the planetary
gearset for tractor transmissions. The systematic methodology for knowledge capture proved effective. The analysis of the final drives currently manufactured and of their design documentation made it possible to identify the fundamental parameters, as linked to the functions and subfunctions of the system. Reference values are defined as targets for dimensioning the solutions.
The problem is simplified by adopting few possible architectures of working
principles according to design best practices and by making assumptions on kine-
matics and dynamics parameters that strongly influence the sizing process. The
formalization of knowledge is transparent thanks to an easy to use office tool. The
DA tool processes the requirements, delivering the dimensions of all the compo-
nents. These parameters are used to automatically generate a CAD model as a de-
sign concept, for the designer to proceed with the detail design phase.
The DA application for the planetary drive system formalizes acknowledged
best practices and designers' experiences. The DA organizes the design process
with a traceable sequence of tasks. The DA tool is currently used in the company
to automate the design tasks for planetary drive variants. The knowledge is not formalized into an international standard; however, the CAD model and the datasheet tool are internal standards within the company software environment. The DA is also open to the integration of innovations.

Future work can add verification criteria to the DA, such as dynamic analyses for the case study, and provide updated black-box models to be used in model-based simulation environments for interactive design verification.

Acknowledgments The authors gratefully acknowledge CNH for the financial support and Dr.
Michele Forte, Ing. Monica Morelli and the whole CNH Driveline Design Team, for the valuable
technical support.

References

1. Di Angelo L., Di Stefano P. An evolutionary geometric primitive for automatic design synthe-
sis of functional shapes: The case of airfoils. Advances In Software Engineering, 2014, 67,
164-172.
2. Gero J.S. Design Prototypes: A Knowledge Representation Schema for Design. AI Magazine,
1990, 11(4), 26-36.
3. La Rocca G. Knowledge based engineering: Between AI and CAD. Review of a language
based technology to support engineering design. Advanced Engineering Informatics, 2012,
26(2), 159-179.
4. Göker H.M. The effect of experience during design problem solving. Design Studies, 1997,
18(4), 405-426.
5. Studer R., Benjamins V.R. and Fensel D. Knowledge Engineering: Principles and methods.
Data & knowledge engineering, 1998, 25(1), 161-197.
6. Chung J.C., Hwang T.S., Wu C.T., Jiang Y., Wang J.Y., Bai Y. and Zou H. Framework for in-
tegrated mechanical design automation. Computer-Aided Design, 2000, 32(5), 355-365.
7. Chapman C. B. and Pinfold M. The application of a knowledge based engineering approach to
the rapid design and analysis of an automotive structure. Advances in Engineering Software,
2001, 32(12), 903-912.
8. Skarka W. Application of MOKA methodology in generative model creation using CATIA.
Engineering Applications of Artificial Intelligence, 2007, 20(5), 677-690.
9. Rezayat M. Knowledge-based product development using XML and KCs. Computer-Aided
Design, 2000, 32(5-6), 299-309.
10. Verhagen W.J.C., Bermell-Garcia P., Van Dijk R.E.C. and Curran R. A critical review of
Knowledge-Based Engineering: An identification of research challenges. Advanced Engi-
neering Informatics, 2012, 26(1), 5-15.
11. Peroni M., Vergnano A., Leali F. and Forte M. Design Archetype of Transmission Clutches
for Knowledge Based Engineering. In International Conference on Innovative Design and
Manufacturing, ICIDM, Auckland, New Zealand, January 2016.
12. Liening A. and Blount G.N. Influences of KBE on the aircraft brake industry. Aircraft Engi-
neering and Aerospace Technology, 1998, 70(6), 439-444.
13. Chandy K.M. Concurrent program archetypes. In IEEE Scalable Parallel Libraries Confer-
ence, Mississippi State, US, October 1994, 1-9.
14. Eilouti B.H. Design knowledge recycling using precedent-based analysis and synthesis mod-
els. Design Studies, 2009, 30(4), 340-368.
15. Regli W.C. and Cicirello V.A. Managing digital libraries for computer-aided design. Com-
puter-Aided Design, 2000, 32(2), 119-132.
16. Pahl G., Beitz W., Feldhusen J. and Grote K.H. Engineering design: a systematic approach,
Springer-Verlag London, 2007.
The Role of Knowledge Based Engineering in
Product Configuration

Giorgio COLOMBO1, Francesco FURINI1 and Marco ROSSONI1*


1 Politecnico di Milano, Dipartimento di Ingegneria Meccanica, via La Masa 1, 20156
Milano, Italy
* Corresponding author. Tel.: +39-02-2399-8292; fax: +39-02-2399-8202. E-mail address:
marco.rossoni@polimi.it

Abstract Digital design and manufacturing are critical drivers of competitiveness, but only a few companies and organizations have the capability to support digitalization across the whole Product Lifecycle. In several cases the information flow is
zation across the whole Product Lifecycle. In several cases the information flow is
discontinuous, the roles and the issues are not properly defined, the tools are het-
erogeneous and not integrated in the company organization. An approach that con-
siders an appropriate data and information organization, an efficient internal or-
ganization and the availability of integrated software tools that are implementing
the industrial best practices, could innovate important and critical aspect of the in-
dustrial processes. This paper gives an overview of the main themes related to
Knowledge Management in industrial context, focusing on product configuration
process. The current role of the knowledge in product configuration will be dis-
cussed. Then, a brief overview on Knowledge-based Engineering will be present-
ed. Regarding Knowledge Based methodology, acquisition and formalization
techniques and tools will be analyzed. Finally, an application focused on assembly
lines configuration will be presented.

Keywords: Product Configuration, Automatic Configuration Process, Knowledge Based Engineering, Knowledge Formalization

1 Introduction

“Creating a knowledge society in Europe is a necessity if we want to remain competitive in the global economy and sustain our prosperity… If we want to sustain
our European way of life, and we want to do so in an environmentally-responsible
way, we will have to engineer a paradigm shift so that we gradually move from the
resource-based, post 2nd World War economy to a knowledge-based economy.”
Janez Potočnik, former Commissioner for Research, Science and Innovation of the European Union, said these words at the Conference on Structural Funds in Warsaw (13 February 2006).

© Springer International Publishing AG 2017 1141
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_114
1142 G. Colombo et al.

These words assume even more significance today, after the financial
and economic crisis. The skills and knowledge developed during the last century are essential resources of industrial companies: this “heritage” plays a strategic role with respect to the production capabilities of the emerging countries, limiting the effects of their highly competitive production in the global market. It is fundamental to maintain, consolidate and improve this know-how, using proper methodologies and tools to work better and faster.
From this point of view, the information management techniques can provide
powerful methodologies and tools to realize computer applications able to assist
the human experts to carry out fundamental activities for the companies, such as
the product configuration and costs estimation. This paper gives an overview of
the main themes related to knowledge management in industrial context, focusing
on product configuration process. The current role of the knowledge in product
configuration will be discussed. Then, a brief overview on Design Automation
methods, focusing on Knowledge-based Engineering, will be presented. Regard-
ing Knowledge Based methodology, acquisition and formalization techniques and
tools will be analyzed. Finally, an application related to assembly lines configura-
tion will be presented.

2 Product Configuration

Product configuration can be defined as a special design activity: given a set of customer requirements and a product family description, the configuration task is to find a valid and completely specified product structure among all the alternatives that a generic structure describes [1].
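Framed this way, configuration is a search over the alternatives described by a generic structure. A toy sketch, with an invented gearbox family and exact-match constraints, is:

```python
from itertools import product

FAMILY = {                           # generic structure: option -> alternatives
    "ratio": [10, 16, 25],
    "mounting": ["foot", "flange"],
    "motor_kW": [1.5, 3.0, 5.5],
}

def configure(requirements):
    """Return every completely specified variant of the family that
    satisfies the customer requirements (exact-match constraints here)."""
    keys = list(FAMILY)
    variants = (dict(zip(keys, combo)) for combo in product(*FAMILY.values()))
    return [v for v in variants
            if all(v[k] == val for k, val in requirements.items())]

matches = configure({"mounting": "flange", "motor_kW": 3.0})
print(len(matches))   # 3: one valid variant per available ratio
```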
Sabin and Weigel [2] state that the product configuration process consists in
providing a complete description of a product variant according to customer’s re-
quirements. A configurator is a system that performs this process: it should allow a designer to engineer a product satisfying the customer’s requirements and the standards of the specific domain, even if the required product has not been developed before. Today, the development of product configurators is still an open issue for the scientific community [3].
One of the themes of this paper is the knowledge required when an expert performs complex industrial activities like product configuration, and how it can be represented in a computer system to assist or replace humans in certain situations. In this context, it is possible to define “knowledge” as the set of contents and cognitive processes necessary to elaborate a solution that satisfies specific initial requirements. Modeling those contents and processes is an open challenge in
computer science, which contributed to the development of the Artificial Intelli-
gence. The modern computer techniques are perfectly suitable for the management
of enormous quantities of data and information, but they are still less adequate to
represent cognitive activities. The “expert” is the main actor of the product con-
figuration process; he or she usually owns “general knowledge and skills”, for ex-
ample, communication skills and the capability to understand documents of different types.

The Role of Knowledge Based Engineering … 1143

Moreover, he or she has extensive technical-scientific experience and specific industrial know-how related to products and processes, as described in [4].
The product knowledge concerns the architecture of the product itself: it is infera-
ble at high level from the assembly representation and/or BOM. The process
knowledge concerns the sequence of the activities necessary in our case to config-
ure a product, with the definition of the inputs and outputs, resources, tools, con-
trols and responsibilities of each activity. The product configuration integrates the
execution of some complex activities (e.g. the choice of a part, customer require-
ments analysis, selection of existing parts, evaluation of different accessories, …)
and a detailed elaboration of the product architecture. In some industrial contexts (for example, companies producing components for power transmissions, oil and gas plants, industrial fans, manufacturing and assembly plants) the role of product configuration is crucial for competitiveness. Improving this activity requires acquiring and formalizing knowledge, mainly the tacit one (i.e., the knowledge stored in the experts’ brains), defining the “best practices”, and developing tools to aid the expert or automate the process, so as to avoid loss of knowledge and know-how. In the next section, methods for the acquisition and formalization of knowledge will be discussed, with the objective of saving and transferring knowledge.
Traditional approaches to product configuration are not able to satisfy the current needs of the companies. KBE (Knowledge Based Engineering) systems can profitably help to solve this problem, both from a commercial and a technical point of view, by simplifying and automating, at least partially, the configuration process [5].

3 Knowledge Based Engineering

Among the objectives of the industrial organizations, a dynamic and active man-
agement of the technical knowledge plays a relevant role. The availability of intelligent systems able to assist or replace human experts, suggesting the most reliable solutions, is a strategic aspect of primary importance for the success of the
company. In this view, in the last decades several research activities have been
conducted in order to develop intelligent systems focused on different activities in
the product lifecycle development process. The IT methodologies and tools used
for this purposes are coming from the domain of Artificial Intelligence,
CAD/PLM, and mathematics. Expert systems, agents, parametric models coupled
with other programming languages, graphs and several others techniques have
been used for the realization of prototypes able to manage some of the tasks done
by human experts during the product lifecycle, from the conceptual design to the
production, post-sales assistance and maintenance. One of the most relevant contributions in this field is provided by Knowledge Based Engineering (KBE). It is a methodological approach, and a category of tools for the development of applications, originated by the Object-Oriented methodology, focusing on an abstract model of
product and components. The UML class diagrams presented in the last section are object-oriented models. KBE is a system based on object-oriented tools, aimed at the modeling and representation of the knowledge of a specific domain. In litera-
ture, different definitions of KBE are reported. An appropriate one is: a “computerized system that uses the knowledge about a determined domain to find the solution of a problem in the same domain. The solution is the same as that reachable by an expert in the same domain” [6]. It is important to highlight the difference between KBE applications and tools: an application is a software system developed to solve a specific design problem, for example to automatically design a specific machine family. Currently, the authors are applying this methodology to the domain of product configuration and cost estimation, where the need to “work better and faster”, limiting the costs and time of the process, is most important.

4 Acquisition and Formalization of the Knowledge

The development of KBE software for product configuration needs the acquisition, formalization and representation of the knowledge used by a human expert. The key players in acquisition and formalization are the experts of the product and process, not the IT technicians. This statement is very important because the focus will be on the description and consolidation of the company best practices, rather than on methods and applications.
As stated before, an organization often does not manage its knowledge in an optimal way: it is usually transmitted orally and sometimes not shared. During the acquisition step the knowledge is gathered and organized, enabling its reuse in future activities. The acquisition requires the arrangement of documents (e.g. book parts, manuals, scientific and technical publications, norms, catalogues, drawings, CAD models, notes and sketches): this concerns the explicit knowledge. The acquisition of the tacit knowledge is more complex; it requires interviews with experts that explore strategic, non-formalized aspects. For these reasons, it is important to find ways and languages to extract information from the experts more efficiently. The knowledge engineer is an emerging professional figure, able to operate in these practices. The knowledge has to be acquired from all the experts, from the highest levels, regarding the sequence of certain activities, down to the details regarding specific technical choices. The results of these activities must be expressed in documents. Techniques such as mind maps facilitate the digital storage of all the acquired documents [7].
Knowledge management by means of documents, in traditional or digital format, is a “static” type of management. It often requires the direct intervention of experts for the retrieval of the proper information, its acquisition and its application to the specific case. Computer techniques allow the knowledge to be managed in a “dynamic” way: a software tool searches for the solution, which is then proposed to the expert (assisted design) or directly implemented (Automatic Configuration). The development of an application with such characteristics
is based on a proper representation of the knowledge with computer techniques. In fact, from the state of the art [8, 9], it is reasonable to consider applications for automatic configuration limited to specific products or product families, rather than to the development of arbitrary products.
The development of a software application for assisted or automatic configura-
tion requires the representation of information and knowledge with proper IT tools
[10]. The contents of the technical documents need to be translated in order to be
implemented easily by means of a computer. Natural language is not an efficient tool to reuse and share technical information and knowledge. Hence, a translation using proper languages is needed; this procedure is called “formalization”. Different languages, mostly graphical, have been developed by several researchers; for example, the flow chart for the documentation of algorithms.
The experiences of the authors in the field of Design Automation led to consid-
er two main graphical languages for the formalization of process and product
knowledge. The formalization of product architecture could be done by using the
“Class Diagram” of the Unified Modeling Language (UML) [11]. The concept of
class is used to represent an elementary component (e.g. a screw) rather than a
complex one (e.g. an engine). The attributes correspond to the parameters of the
component (e.g. the type, the diameter and the length of the screw or the number
of cylinders and valves for the engine). The methods permit operations to be executed using the previous parameters (e.g. computation of the engine power).
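The screw/engine example can be transliterated directly from the UML class diagram into code; the attribute values and the power computation P = T·ω below are illustrative.

```python
import math

class Screw:
    """Elementary component: attributes map to the UML class attributes."""
    def __init__(self, kind, diameter_mm, length_mm):
        self.kind = kind
        self.diameter_mm = diameter_mm
        self.length_mm = length_mm

class Engine:
    """Complex component, with a method operating on its parameters."""
    def __init__(self, cylinders, valves, torque_Nm, speed_rpm):
        self.cylinders = cylinders
        self.valves = valves
        self.torque_Nm = torque_Nm
        self.speed_rpm = speed_rpm

    def power_kW(self):
        # P = T * omega, with omega converted from rpm to rad/s
        return self.torque_Nm * self.speed_rpm * 2.0 * math.pi / 60.0 / 1000.0

engine = Engine(cylinders=4, valves=16, torque_Nm=200.0, speed_rpm=3000.0)
print(round(engine.power_kW(), 1))   # 62.8
```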
The modelling of the product configuration process is usually performed with IDEF0 diagrams (http://www.idef.com). All the activities involved in the configuration process are represented in a hierarchical structure of layers, from the top general level to the most detailed one. IDEF0 diagrams are easily understandable; this characteristic makes them a good tool for sharing and spreading information among experts with different backgrounds (for example, experts in product development or software developers).
The theme of the acquisition and formalization of knowledge in industrial organizations and technical domains is complex and articulated, and would need a deeper study. Currently, activities in this research field are focused on other interesting approaches, especially ontologies.

5 Application of KBE to the product configuration

A meaningful example of KBE in product configuration and quotation is presented in this section. Several manufacturing companies produce standard products by combining part families, as in the case of power transmissions
(joints, gearboxes, etc.). Other companies produce and sell using the Engineering-to-Order (ETO) approach, for example producers of manufacturing systems and machine tools. Both situations require product configuration, followed by the definition of the economic offer for the customer.

Configuration and quotation are complex activities that today use computer systems for direct interaction with the customer (such as web sites with digital catalogues). The process from the request for quotation to the order confirmation is one of the strategic processes for industrial competitiveness and efficiency. In several cases the information flow is not continuous, roles and issues are not properly defined, and the tools are heterogeneous and not integrated. An approach that considers an appropriate data and information model, an efficient internal organization and the availability of integrated software tools implementing the industrial best practices could innovate this important and critical aspect of the industrial processes.
The case study was proposed by an important industrial partner and we are de-
veloping a software prototype based on KBE approach and tools. The application
deals with the automatic configuration of assembly lines for automotive domain.
Figure 1 shows the information flows and the general structure of the configurator. The information contained in the request for quotation is the input of the application. It is processed by the “Customer Requirement Processing” module. Thanks to a set of rules, the application selects all the options related to the regional characteristics: they concern both the customer (e.g. local suppliers)
Fig. 1 Information flows involved in KBE configurator

and the country in which the assembly line is going to be installed (e.g. safety
standards, electric energy frequency and voltage, and so on). Furthermore, the
type of product being assembled (e.g. a cylinder head) drives the list of tasks (not
ordered) that have to be performed to obtain the final product. Then, the “KBE to
PLM interface” module extracts a subset of information stored in the company da-
tabases (i.e. PLM) useful to the product configuration. A set of rules allows the
The Role of Knowledge Based Engineering … 1147

“Tasks Sequencing and Available Resources Selection” module to aggregate the resources able to perform each specific assembly task (e.g. it defines a workstation by selecting a robot, an end effector, control equipment, and so on). The result is a matrix in which the rows are the tasks (not yet ordered) and the columns are the resources (i.e. aggregations of parts). Then, the application performs the task sequencing (i.e. the list of tasks is put in order) and the Assembly Line Balancing (ALB), which assigns a unique resource to each single process while taking technological constraints into account, depending on the required throughput and sizing any inter-operational buffers. During this stage, a multi-criteria optimization could be performed to achieve an optimal design, and a Discrete Event Simulation (DES) validates the results. Finally, the “Visualization/Reporting” module generates the BOM of the machines and equipment, produces 3D CAD models (Figure 2) of the plant and 2D drawings, and performs the cost assessment.
The application represents an interesting example of the potential connected to the development of intelligent applications in a complex industrial process that integrates different functions [12]. The application is still under development.
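A minimal sketch may help fix ideas about the sequencing-and-balancing stage described above. It is not the partner application: the task names, durations and precedence pairs below are invented, sequencing is a plain topological sort, and balancing is a simple greedy grouping under a cycle-time limit (the real module adds multi-criteria optimization and DES validation).

```python
from collections import deque

def sequence_tasks(tasks, precedence):
    # Order tasks so that every (before, after) precedence pair is
    # respected (Kahn's topological sort).
    indegree = {t: 0 for t in tasks}
    for _, after in precedence:
        indegree[after] += 1
    queue = deque(t for t in tasks if indegree[t] == 0)
    ordered = []
    while queue:
        task = queue.popleft()
        ordered.append(task)
        for before, after in precedence:
            if before == task:
                indegree[after] -= 1
                if indegree[after] == 0:
                    queue.append(after)
    return ordered

def balance_line(sequenced, durations, cycle_time):
    # Greedily group the sequenced tasks into workstations so that no
    # station load exceeds the cycle time imposed by the throughput.
    stations, current, load = [], [], 0.0
    for task in sequenced:
        if current and load + durations[task] > cycle_time:
            stations.append(current)
            current, load = [], 0.0
        current.append(task)
        load += durations[task]
    if current:
        stations.append(current)
    return stations

# Invented cylinder-head assembly tasks (durations in seconds).
durations = {"place head": 20, "press guides": 35, "fit valves": 25, "leak test": 30}
precedence = [("place head", "press guides"), ("place head", "fit valves"),
              ("press guides", "leak test"), ("fit valves", "leak test")]
ordered = sequence_tasks(list(durations), precedence)
stations = balance_line(ordered, durations, cycle_time=60)
```

With a 60 s cycle time the four tasks are grouped into two workstations, each loaded at 55 s; the task-resource matrix of the text would then be queried to pick a concrete resource for each station.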

6 Conclusions

This paper has given an overview of the main themes related to knowledge management in an industrial context, focusing on the product configuration process. As said before, there is a need for a paradigm shift from resource-based to knowledge-based companies. The main concepts of Knowledge Based Engineering have been briefly discussed, and two fundamental aspects of Knowledge Management, acquisition and formalization, have been analyzed. Then, the most relevant issues related to the integration of “intelligent” applications with the company infrastructure and with knowledge sharing have been argued. The development of an application that allows assembly lines to be configured automatically proves the suitability of the KBE approach and proposes a generic framework to foster knowledge sharing across the different functions of a company. Furthermore, this approach encourages the “first time right” solution, which leads to cost and lead-time reduction.

Fig. 2 Example of 3D model automatically created by the KBE application.

A relatively new approach for distributed and cooperating knowledge-based engineering systems is based on ontologies. Ontology-based tools allow people or software agents to share a common understanding of the structure of information, to make domain assumptions explicit, and to perform intelligent search and retrieval on the internet.

References

1. Männistö T., Peltonen H. and Sulonen R. View to product configuration knowledge modelling and evolution. In AAAI 1996 Fall Symposium on Configuration, Vol. 2, Portland, August 1996, pp. 111-118 (AAAI Press).
2. Sabin D. and Weigel R. Product configuration frameworks - a survey. IEEE Intelligent Systems, 1998, 13(4), 42-49.
3. Zhang L. L. Product configuration: a review of the state-of-the-art and future research. International Journal of Production Research, 2014, 25(21), 6381-6398.
4. Ishino Y. and Jin Y. Acquiring engineering knowledge from design processes. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 2002, 16(2), 73-91.
5. Felfernig A., Hotz L., Bagley C. and Tiihonen J. Knowledge-based configuration: From research to business cases, 2014 (Newnes).
6. Stokes M. Managing engineering knowledge - MOKA: methodology for knowledge based engineering applications, 2001 (Professional Engineering Publishing).
7. Eppler M. A comparison between concept maps, mind maps, conceptual diagrams, and visual metaphors as complementary tools for knowledge construction and sharing. Information Visualization, 2006, 5(3), 202-210.
8. Wang H., La Rocca G. and van Tooren M. J. L. A KBE-enabled design framework for cost/weight optimization study of aircraft composite structures. In International Conference of Computational Methods in Sciences and Engineering, ICCMSE’14, Vol. 1618, Athens, April 2014, pp. 394-397 (AIP Publishing).
9. Colombo G., Morotti R., Regazzoni D. and Rizzi C. An approach to integrate numerical simulation within KBE applications. International Journal of Product Development, 2002, 20(2), 107-125.
10. Sainter P., Oldham K., Larkin A., Murton A. and Brimble R. Product Knowledge Management within Knowledge Based Engineering Systems. In Proceedings of the ASME International Design Engineering Technical Conference and the Computer and Information in Engineering Conference, IDETC/CIE’00, Baltimore, September 2000.
11. Gomaa H. Software modeling & design: UML, use cases, patterns, and software architectures, 2011 (Cambridge University Press).
12. Ascheri A., Colombo G., Ippolito M., Atzeni E. and Furini F. Feasibility of an assembly line layout automatic configuration based on a KBE approach. In International Conference on Innovative Design and Manufacturing, ICIDM’14, Montreal, August 2014, pp. 324-329.
Section 8.2
Industrial Design and Ergonomics
Safety of Manufacturing Equipment:
Methodology Based on a Work Situation Model
and Need Functional Analysis
Mahenina Remiel FENO1, Patrick MARTIN2*, Bruno DAILLE-LEFEVRE3, Alain ETIENNE2, Jacques MARSOT3, Ali SIADAT2
1 Arts et Métiers (ENSAM) Aix-en-Provence campus, LSIS, 2 cours des Arts et Métiers, 13617 Aix-en-Provence, France
2 Arts et Métiers (ENSAM) Metz campus, LCFC, 4 rue Augustin Fresnel, 57078 Metz, France
3 Institut national de recherche et de sécurité (INRS), Work Equipment Engineering Department, 1 rue du Morvan, 54519 Vandœuvre-lès-Nancy cedex, France
* Corresponding author: Patrick Martin. Tel.: +(33) 3 87 37 54 65; fax: +(33) 3 87 37 54 70. E-mail address: patrick.martin@ensam.eu
Abstract: The aim of “integrated prevention” is to conduct a preliminary risk analysis in order to achieve a lower level of risk in the design of future work equipment. Despite the many safety documents that exist, many companies, particularly SME/SMIs, do not yet apply these safe design principles. Integration of safety in the design process is mainly based on the individual knowledge or experience of the designers and is not conducted in any formalized way. In order to address this problem, this paper presents a methodology for engaging stakeholders in a dynamic dialogue, together with a framework, so that they may jointly define the information necessary for implementing safe design principles during the functional specification. The proposed methodology has been validated on an industrial case.
Keywords: work situation, integrated prevention, requirement specification, need functional analysis, safe design

1. Introduction
The concept of “integrated prevention” has been widely shared by European
countries since the 1990s (Figure 1). It consists of applying safe design principles
as early as possible in the design process. The aim is to conduct a preliminary risk
analysis in order to achieve a lower level of risk in the design of future work
equipment.
Despite the many safety documents that exist (e.g., design instructions, guides and standards), many companies, particularly SME/SMIs, do not yet apply these safe design principles correctly. This is largely because the different participants in the design process (engineers, technicians, project leaders) are not prevention specialists and lack appropriate methods and tools. As a result, it is difficult for them to make the correct choices in a timely manner without penalizing the project cost or delaying project completion. Consequently, integration of safety in the design process is mainly based on the individual knowledge or experience of the

© Springer International Publishing AG 2017 1151


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_115
1152 M.R. Feno et al.

designers and is not conducted in any formalized way [2]. Safety requirements are
usually addressed in formulaic sentences such as “the equipment should respect
regulations and standards” or “should be safe, ergonomic and easy to use” etc. As
a result, prevention issues and technical requirements are often handled separately
and the safety problems are often dealt with at the end of the project once the con-
cepts and technical solutions have already been defined. At this point, the
measures implemented are mainly corrective, merely to satisfy the regulations.
This cannot be considered to constitute true safety integration, which takes into
account the future activity of the operators, including “reasonably foreseeable
misuse” [3].

[Figure 1 is a flowchart: starting from the usage limits (safety target) and the estimated risk level, each identified hazard is treated first by intrinsic prevention (removing the hazard), then by individual or collective protection, and finally by warning notices, iterating until the required safety level is reached.]

Figure 1. Risk reduction process according to NF EN ISO 12100 [1]
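The priority logic of the flowchart can be read as a short loop. The sketch below is only illustrative: ISO 12100 does not define numeric risk values, so the initial risk, the reduction amounts and their assumed additivity are invented for the example.

```python
# Priority order of risk-reduction measures, following the flowchart above.
MEASURES = ("intrinsic prevention", "individual or collective protection", "warning notice")

def reduce_risk(initial_risk, reductions, safety_target):
    # Apply the available measures in priority order until the residual
    # risk meets the usage limit (safety target). `reductions` maps each
    # available measure to the risk reduction it achieves (assumed additive).
    residual, applied = initial_risk, []
    for measure in MEASURES:
        if residual <= safety_target:
            break
        if measure in reductions:
            residual -= reductions[measure]
            applied.append(measure)
    return residual, applied

residual, applied = reduce_risk(
    initial_risk=10,
    reductions={"intrinsic prevention": 5, "individual or collective protection": 3},
    safety_target=4)
```

The point captured by the loop is the hierarchy itself: protection is considered only for the risk that intrinsic prevention could not remove, and warnings only for what protection leaves over.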

In response to this problem, the following methodology engages stakeholders in a dynamic dialogue so that they may together define the information necessary for implementing safe design principles during the functional specification.
Safety of Manufacturing Equipment … 1153

2. State of the art


A number of publications concerning safety integration at the specification stage recommend considering health, safety and ergonomics as design objectives that should be specified in the requirements document. To do so, specifications should go beyond the safety recommendations contained in standards and take into account the predictable use of the work equipment, for instance by analyzing the activities of the operators of similar machinery [4] to [8].
Need Functional Analysis (NFA) is a well-known, standardized methodological tool [9] that can support the specification stage. While a number of studies have highlighted the benefits of functional analysis in risk prevention because of its pluridisciplinary approach [10], others have described its limitations with regard to its ability to specify different contexts of use and future user activities [11].
MOSTRA (Work situation model) resulted from previous INRS research on
safety integration in design [12]. The specific objective of this model is to help de-
signers to take into account different contexts of use and future user activities.
MOSTRA is based on the concept of work situations according to a systemic
model described by Guillevic [13], and uses the entities involved in safe working
practices. Figure 2 shows the different concepts that designers typically deal with
(e.g., system, function, technical solution, consumables) and MOSTRA allows
them to consider those concepts that mainly concern the users, the tasks to be per-
formed, and the associated risks (for example, dangerous zones, hazards, danger-
ous events, or safety measures).

Figure 2 Simplified view of MOSTRA [13]



The model cannot manage the design process by itself; in order to exploit it, it must be used in conjunction with traditional design tools. Through such a combined approach, the relevance of the methodology is assured by the logical use of the traditional tools, while data consistency is provided by MOSTRA.

3. Specification methodology for safe design


In order to achieve our goal, we decided to use the “MOSTRA” model to form
a link between the functions identified with NFA and the work-situation parame-
ters needed for the risk assessment.
3.1. NFA and safety requirements
Safety requirements may be integrated into the functional analysis at three possible levels, and the choice among them can lead to different results:
• General constraints: as enacted by EN 1325-1 [9]. Although this is necessary, it is not sufficiently detailed and may lead the designer to develop prevention apart from the technical and functional requirements.
• Function: this approach is relevant only when the objective is to design a safety-related system. Integration at the functional level also leads designers to specify prevention separately from the functional requirements.
• Function performance criteria: the goal is to identify all the parameters which have a direct impact on safety. The functional decomposition of the system is then used to define the future user tasks on the work equipment.
We adopt this last approach in our methodology. The “user/designer” should be guided to obtain a complete picture of a design task. Although the expressed needs naturally provide the foundation on which to focus design efforts, there are other important criteria that the user may not even perceive, such as safety issues. Otto and Wood [14] define these as latent specifications (needed, but not always expressed by the customer). To capture them, it is necessary to ask what the possible work situations are and which entities are involved for each function. The second stage of the NFA method therefore needs to be divided into two phases: description and characterization.
3.2. Description step
The description phase should be carried out by a work team (designers, users, project leader) with the help of a structured and easy-to-use questionnaire which collects all the information, including latent information. At this point it is necessary to decide whether it is better to:
• Directly use the MOSTRA links to build the questionnaire and gather information about work situations such as “Environment”, “User task”, “Work team”, etc.
• Use a tool such as “5Ws and an H”, which is often used in industrial problem solving [15]. The work team must answer “What”, “Who”, “Where”, “When”, “Why” and “How” the function is accomplished. This tool offers an intuitive, descriptive and imaginative way to describe the work situation, because its basic question prompts generate answers in natural language.
First, an exploratory test was conducted so that these two approaches could be compared. A case study of band saw machines for the food industry was chosen, and two study groups were formed. Each group was composed of two technical designers and an ergonomist, each of whom had the same level of knowledge of the case study. In both questionnaires the participants were asked to specify four functions (F1: set up the blade; F2: remove the blade; F3: cut meat; F4: clean the machine). The first team started with the functional analysis and the “5Ws and an H” questionnaire, while the second started with the MOSTRA-based questionnaire.
Table 1. Functional chart – industrial application

Function: to receive and to place parts to manufacture, from the uphill machine line to the milling unit. Each criterion is followed by the MOSTRA objects it involves (see legend).

WHAT
• Geometry: deformable and non-rectilinear part [C]
• Maximum dimensions: (2x200) x 20 x 12000 mm (width, thickness, length) [C]
• Minimum dimensions: compatibility with the existing conveyor and clamping system [S, C]
• Maximum weight: about 750 kg (62 kg/m) [C]
• Surface finish: no slippery parts, for a good grip [C]
• Stability of parts: homogeneous part with an easily identifiable center of gravity [C]
• Room temperature [EV]
• Initial state: parts positioned on the conveyor [C, S]
• Final state: machining position [C]
• Precision of the placement: +/- 2 mm [C]

WHO
• Machine: long parts (automatic configuration) [S, C, FM]
• Operator: short parts (manual command configuration) [WT, C, FM]
• Operator: short and long parts for the clamping system [WT, C, UT]

WHERE
• From the uphill conveyor to the manufacturing area of the milling machine [S]

WHEN
• Before the milling cycle [UT]

HOW
• Machine: long parts automatically positioned by the uphill conveyor according to the entered command [S, C, FM]
• Operator: short parts manually positioned by the operator (on sight) on the conveyor, up to the position of the laser dead stop and the clamping system [WT, C, UT, S]
• Visibility needed from the milling control panel while positioning manually, to see the parts through the conveyor of the uphill machine line and the laser dead stop [UT, FM, S]
• Accessibility of the operator to the milling control panel during manual operations [UT, S]
• Operator position: standing in front of the control panel, with visibility for the positioning [WT, IM, UT]
• Automatic mode: 1 m/s [FM, UT]
• Manual mode: <0.5 m/s [FM, UT]
• No handling by the operator [S, UT]

Legend: Consumable (C), System (S), Work Team (WT), Environment (EV), User Task (UT), Tool (T), Intervention Mode (IM), Functioning Mode (FM)

This test showed that the questions of the “5Ws and an H” questionnaire overlap those of the MOSTRA-based one. We therefore recommend using the “5Ws and an H” questionnaire for the description step; the MOSTRA-based questionnaire is then used during the characterization step. A chart was created to orient the group discussion towards achieving our objectives. An industrial case study was then performed (Table 1) with a company that designs and manufactures both specialized and standard machines with several optional functions (drilling, stamping, and sawing machining transfer lines) for working with steel beams.
3.3. Characterization step
The objective of the characterization step is to define the performance criteria that characterize each previously identified entity. Each performance criterion, especially the health, safety and ergonomic aspects, should be measurable, testable or verifiable at each successive step of the development process [14]. To achieve this, it is first necessary to associate one or several MOSTRA objects with each description, according to what it characterizes. To do this, the “5Ws and an H” answers were mapped to MOSTRA objects; the MOSTRA-based questionnaire then allows completion and verification of the coherence of the data with regard to the function concerned. These associations allow identification of the main work situations in which the function is effective. The structure used to define the work situations is illustrated in Table 2.

Table 2. Data structure of main work situations – industrial case

F: Receive and place parts to manufacture from the uphill machining transfer line to the milling unit.
|_ WS1: Automatic positioning for long parts
   |_ UT1: Placing parts
   |_ C1: Long parts
   |_ S4: Conveyor of the line
   |_ FM2: Automatic mode
|_ WS2: Manual positioning for short parts
   |_ UT1: Placing parts
   |_ WT: Operator
   |_ IM1: Standing posture
   |_ C2: Short parts
   |_ FM1: Manual mode
|_ WS3: Manual command of the clamping
   |_ UT5: Clamping the parts
   |_ C1: Long parts
   |_ C2: Short parts
   |_ S2: Clamping system
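A tree like the one in Table 2 maps naturally onto a small data model. The following sketch is hypothetical: the class and attribute names are ours, and only the entity codes and labels are taken from Tables 1 and 2.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    # A MOSTRA entity: User Task (UT), Consumable (C), System (S),
    # Functioning Mode (FM), Work Team (WT), Intervention Mode (IM), ...
    code: str
    label: str
    values: dict = field(default_factory=dict)  # criterion values, added in the final step

@dataclass
class WorkSituation:
    code: str
    label: str
    entities: list

# WS1 from Table 2, with two illustrative criterion values from Table 1.
ws1 = WorkSituation("WS1", "Automatic positioning for long parts", [
    Entity("UT1", "Placing parts"),
    Entity("C1", "Long parts", {"maximum weight": "about 750 kg"}),
    Entity("S4", "Conveyor of the line"),
    Entity("FM2", "Automatic mode", {"speed": "1 m/s"}),
])
```

Structured this way, each work situation can be traversed entity by entity during the iterative risk analysis, with the attached values serving as inputs to the assessment.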

The final step is to add a quantitative or qualitative value to each criterion. These can be either predefined attributes of the MOSTRA UML model (task name, duration, work team, intervention mode) or specific parameters (initial/final state, speed …). This step facilitates and enhances the risk analysis, which should be carried out iteratively as the NFA progresses. Within the framework of this study we used the IDAR® method developed by CETIM [16]. Based on the EN ISO 12100 standard approach, this method provides a specifically user-centered and human-safety oriented analysis, which matches our objectives.

3.4. Discussion
We describe the solutions implemented with regard to the following function: to receive and to place parts to manufacture from the uphill machining transfer line to the milling unit (Table 1). Depending on the part length, two operating modes were initially defined by the industrial partner: manual and automatic. In answering the “5Ws and an H” questions, the designers realized that the technical solutions retained for these two operating modes were not entirely satisfactory from a safety point of view. For short parts, the operator needs to control and simultaneously visualize the part’s positioning, and the current location of the control panel leads to an uncomfortable position for the operator. In addition, the transferring wheels were designed for parts longer than 300 mm, but when answering the “What” question it emerged that some customers produce shorter parts (250 mm). In these situations, the operator has to handle the parts manually, risking a hand being crushed between the part (25 kg) and the transferring wheels.
Another safety issue highlighted by the proposed methodology concerned the possible interactions during the loading/unloading phase. It quickly became apparent that the end-user would perform this operation during production time, when the machine was operating in automatic mode for periods of several minutes that did not require the intervention of the operator. This working practice comes under the definition of “reasonably foreseeable misuse” in the Machinery Directive and must also be taken into account by the designer; this was not the case in the initial design. As the preparation area was located close to the conveyor, its access was also prevented by the safety device. However, it seems highly likely that this device would be bypassed at some time in the future due to productivity constraints.

4. Conclusion
The aim of this research work is to use the “user/designer” pair to define the information necessary (intermediary objects [17]) for integrating safety requirements at the specification stage. Our hypothesis is to integrate safety requirements as performance criteria for each function, and not as specific functions or general requirements; in other words, to specify that each function should be safe. We suggest using:
1. Need Functional Analysis, which is used to identify all the functions of a future product (work equipment in our case).
2. An intuitive and descriptive tool such as “5Ws and an H” to define, for each function, the usage-based criteria, including safety criteria.
3. The MOSTRA work situation model to organize and capitalize these data. This model was specifically developed to support safety integration at the design stage.
In addition to the specific benefits of traditional functional analysis (e.g., saving time in the subsequent design-process steps, the possibility of capitalizing the results of the analysis, etc.), the proposed approach creates a common basis for both NFA and risk analysis. The first industrial application yielded relevant results: unsafe work situations were identified that had not been detected in the original design by the industrial partner. However, this case study only allowed validation of the potential benefits from a designer’s point of view: data was mainly provided by the designers, and little data was supplied by the final user.
This work has been performed in the framework of the dual laboratory between INRS and ENSAM/LCFC (safe design of work situations: functional requirements, equipment design, workplace management).

References
1. NF EN ISO 12100: Safety of machinery —General principles for design —Risk assessment
and risk reduction, CEN, Bruxelles, 2010
2. Fadier E., De La Garza C. (2006) - Safety design: Towards a new philosophy. Safety Sci-
ence, 44, Issue 1, 2006, pp. 55-73
3. Directive 2006/42/EC of the European Parliament and of the Council of 9 June 2006 on the approximation of the laws of the Member States relating to machinery, Official Journal, L 157, pp. 24-86, 2006
4. Prudhomme G., Zwolinski P., Brissaud D. Integrating into the design process the needs of
those involved in the product life-cycle Journal of Engineering Design, vol. 14-3, 2003, pp.
333-353.
5. Sagot J.C., Gouin V., Gomes S. Ergonomics in product design: safety factor. Safety Sci-
ence, 41, 2003, pp. 137-154.
6. Ghemraoui R., Mathieu L., Tricot N. Design method for systematic safety integration, CIRP Annals - Manufacturing Technology, 58, 2009, pp. 161-164
7. Moraes A.S.P., Arezes P. M., Vasconcelos R., From ergonomics to design specifications:
contributions to the design of a processing machine in a tire company, IEA 2012: 18th World
congress on Ergonomics - Designing a sustainable future, IOS Press 2012, pp. 552-559
8. Darses F., Wolff M., How do designers represent to themselves the users' needs?, Applied
Ergonomics 37(6), 2006, pp. 757-764.
9. EN 1325-1 Value management, value analysis, functional analysis vocabulary. Value analy-
sis and functional analysis, CEN, Bruxelles, 1997.
10. Marsot J., Claudon L. Design and Ergonomics - Methods and Tools for integrating ergonom-
ics at the design stage of hand tools. International Journal of Occupational Safety and Ergo-
nomics, 10(1), 2004, pp.11-21.
11. Fadier E., Neboit M. Essai d’intégration de l’analyse ergonomique de l’activité dans
l’analyse de la fiabilité opérationnelle pour la conception: approche méthodologique. Actes
du colloque «Recherche et Ergonomie», Toulouse, 1998, pp. 61-66.
12. Hasan R., Bernard A., Ciccotelli J., Martin P., Integrating safety into the design process: el-
ements and concepts relative to the working situation. Safety Science - Special issue « Safety
in design », 41(2-3), 2003, pp. 155-180.
13. Guillevic C., Psychologie du travail, Éditions Nathan, collection Fac Psychologie, Paris, 1991, 225 p.
14. Otto K., Wood, K., Product Design: Techniques in Reverse Engineering and New Product
Development, Prentice Hall, Upper Saddle River, NJ 07458, 2001
15. Tapan K. Bose, (2010) - Total Quality of Management. Chapter 10. Basic Decision-making
and Problem-solving Tools. Pearson Education India. ISBN: 978-8-131-70022-8.
16. Falconnet – Dequaire E., Meleton L. (2001) - IDAR®: une méthode d'analyse des risques
dans le cadre de la directive Machines, CETIM, Senlis, 2001, 164 p.
17. Boujut JF, Blanco E. Intermediary Objects as a Means to Foster Co-operation in Engineering
Design. Journal of computer supported collaborative; 2002. 12 (2):205-219.
Identifying sequence maps or locus to represent
the genetic structure or genome standard of
styling DNA in automotive design

Shahriman Zainal Abidin1*, Azlan Othman2, Zafruddin Shamsuddin2, Zaidi Samsudin2, Halim Hassan2 and Wan Asri Wan Mohamed2
1 Industrial Design Department, Universiti Teknologi MARA, Malaysia
2 Styling Department, Perusahaan Otomobil Nasional Sdn Bhd (PROTON), Malaysia
* Corresponding author. Tel.: +6-035-544-2750; fax: +6-035-544-2790. E-mail address: shahriman.z.a@salam.uitm.edu.my

Abstract This paper discusses the need to identify rules that can be used as a unified point of reference for styling DNA in automotive design. At present, there is no promising model or framework that can be used in a design methodology for styling DNA. In view of this, an inquiry research activity was carried out among selected members of the Malaysian public and styling designers at PROTON. Findings from the study indicated that the perceptions of designers correlate positively with the preferences of Malaysian users. Furthermore, the results showed that designers consistently tried to create a concept via sketches based on a specific area of the car. Consequently, sequence maps or loci have been established in this research, and the challenge revealed is how they can be used as a starting point to build or create characters as embodied agents of the character traits concerning the brand and identity of a car design.

Keywords: Automotive design; genetic structure; genome standard; sequence maps; styling DNA

1 Introduction

© Springer International Publishing AG 2017 1159
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_116
1160 S.Z. Abidin et al.

Research on “Styling DNA” in automotive design has become important. For car design in particular, in the conceptual phase of development, styling designers give specific consideration to the visual appearance of the design [1]. Styling DNA deals with the creation of the brand image of an organization through the character traits that represent the deoxyribonucleic acid, or DNA, of the car. DNA in this context is defined as a molecule that encodes the genetic instructions employed in the growth of a physical form or product [2] (see Figure 1). The basic element of DNA is the genome. In design, a genome is the whole “life form” set of genes in the DNA [2]. By definition, it is one haploid set of chromosomes containing the genes or, more broadly, the genetic material of an organism. In genetic terminology, the term refers to a full set of chromosomes as well as all the inheritable traits of an organism. A genetic position is known as a locus (plural loci), and it contains the chromosomes in the rank required to build and maintain that life form. Scientifically, for human DNA, there are about 8 to 16 sequence maps or loci that represent a person’s genetic structure or genome standard [3]. Most of the design research on genomes explores sequences, maps, chromosomes, assemblies and annotations. Moreover, new car styling DNA shows a potential departure from the current model and has the potential to bring us closer towards a design aim. Notionally, there are about 3 to 6 sequence maps depending on the brand and identity of the car, for instance Volvo’s value-based design cues (soft nose and grille, V-shaped bonnet, shoulder line, tail lights, third side windows, and flowing line) [4]. However, until today, there is no clear evidence that the terminology of styling DNA is being used correctly in automotive design.

Fig. 1. Styling DNA interpretation for car design

The basic elements and properties of product form in relation to Styling DNA in both industrial design and engineering design are Visual Elements (VE). VE consist of point, line, plane or surface, and volume [5, 6]. Designers manipulate these elements and properties in creating form, an activity also known as "Formgiving." A Formgiving approach based on industrial design is a form evolution of artistic visual elements [5]. For engineering design, meanwhile, form evolution is based on technical principle solutions [6]. However, two elements, "Syntactic" and "Semantic," contribute to the issues that form studies raise for Styling DNA in automotive design. Syntactic, or form syntactics, deals with the structure and composition of the visual elements of the physical appearance [7]. Among the issues of Syntactic in design are: 1) Terminology (i.e., form elements and form entities); 2) Laws of form (i.e., explainable in terms of structured or controlled and unstructured or uncontrolled); 3) Interpretation (i.e., not only in art and design but also in other disciplines, especially engineering, mathematics, the physical sciences, and the humanities); and 4) The perception of Gestalt (i.e., appreciation of visual appearance in design). Meanwhile, Semantic is the study of meanings [8]. Among the issues of Semantic in design are: 1) Meaning (i.e., qualities of the interpreter); 2) Forms communicated (i.e., meanings through signs); 3) Perception of a product (i.e., can induce meaning); 4) Cultural context of products; and 5) Incorporation of both the practical and the aesthetic functions of a product.

Identifying sequence maps or locus to represent … 1161
Since the elements of syntactic and semantic seem measurable for this study, we decided to investigate how they can be used as variables and as a frame for establishing the sequence map, or locus, that represents the genetic structure of the genome standard in automotive design. The study is based on Malaysian design influences, and PROTON has been chosen as a case example.

2 Research Objective

The main objective of this research is to identify the sequence map, or locus, that represents the genetic structure of the genome standard of styling DNA in automotive design, based on the Malaysian design context. Here, we pose three research questions for the study: RQ1. The ambiguous characteristics of styling DNA in designers' sketching processes of Malaysian design will lead to a natural variety in output. We refer to this phenomenon as "consistency." Thus, how do designers assess styling DNA through their sketching assignments against the intended achievement? RQ2. Designers choose which elements of styling DNA to sketch rather than transforming them uniformly. We refer to this phenomenon as "selectivity." Thus, we are interested in understanding what types of elements designers sketch. What are the characteristics (character traits) of these elements? RQ3. Designers may sketch only to a partial degree ("completeness") in the styling DNA process. How, then, are styling DNA elements treated by designers with respect to completeness?

3 Method

The framework of this research is based on the stages of the Design Research Methodology (DRM) [9] (see Figure 2). The DRM emphasizes several factors: 1) The need to formulate success criteria as well as measurable criteria (for example, the role of the Criteria Definition stage is to identify the aims that the research is expected to fulfill, as well as the focus of the research project); 2) The need to focus Descriptive Study I on finding the factors that contribute to or prevent success; 3) The need to focus the Prescriptive Study on developing support that addresses those factors likely to have the most influence; and 4) The need to enable evaluation of the developed support (Descriptive Study II).
1162 S.Z. Abidin et al.

Fig. 2. A Design Research Methodology framework (adapted from Blessing et al., 1998)

4 Results/Discussion

4.1 Positive correlation of public and designers’ perceptions

Table 1. Results of Pearson Correlation Test

Results from the study indicated positive correlations between public and designers' perceptions. Based on 300 respondents, correlations ranged from .258** to .480** for significant items such as the MAS logo, Keris, KLCC, Wau, and Bunga Raya (Hibiscus), which represented the characteristics of Malaysian brand and identity (see Table 1). The Pearson correlation showed that all items, representing 1) Icon, sign, and symbol (MAS logo); 2) Object and artifact (Keris); 3) Building and architecture (KLCC); 4) Art, decoration, culture, heritage, and costume (Wau); and 5) Natural resources (Bunga Raya), are significant at the 0.01 level (2-tailed).
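The reported test can be reproduced in outline. The sketch below, using illustrative placeholder ratings rather than the study's 300-respondent data, computes a sample Pearson r in plain Python:

```python
# Sketch of the Pearson correlation test reported in Table 1.
# The rating vectors are illustrative placeholders, not the study's data.
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical Likert-style ratings of one item (e.g., the MAS logo)
public_rating = [4, 5, 3, 4, 5, 2, 4, 3, 5, 4]
designer_rating = [4, 4, 3, 5, 5, 2, 3, 3, 4, 4]

r = pearson_r(public_rating, designer_rating)
print(round(r, 3))  # a positive correlation between the two rating vectors
```

A two-tailed significance test at the 0.01 level would additionally compare the statistic t = r·sqrt((n−2)/(1−r²)) against the t-distribution with n−2 degrees of freedom.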

4.2 Manual sketch exercise complementary to identifying the styling DNA of car design

In general, early-stage development processes are shown to be characterized by divergent and explorative processes. By understanding how 5 designers (1st experiment) generate design variation (influenced by the 5 items in Table 1) at the superior, intermediate, and detail form levels, improvements could be suggested to enhance the styling DNA of the car, thus introducing ambiguity and variance. The analysis of form structure level for the three-quarter front view and three-quarter rear view sets, based on heuristic evaluation of all elements indicated by 10 designers (2nd experiment) (see Figures 3 and 4), showed that designers placed their emphasis at the intermediate (features) and detail (components) form levels. However, there is no evidence that designers gave emphasis to the superior (Gestalt) level in the assessment of the styling DNA of the car.
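The heuristic evaluation behind Figures 3 and 4 amounts to tallying which form-structure level each indicated element belongs to. A minimal sketch, using a hypothetical annotation list rather than the study's data:

```python
# Sketch of the form-structure-level tally behind Figures 3 and 4.
# The annotated elements are hypothetical, not the study's data.
from collections import Counter

# Each sketched element is assigned to a form-structure level:
# "superior" (Gestalt), "intermediate" (features), or "detail" (components).
annotations = [
    ("shoulder line", "intermediate"),
    ("grille", "intermediate"),
    ("headlamp", "detail"),
    ("badge", "detail"),
    ("door handle", "detail"),
    ("overall stance", "superior"),
]

emphasis = Counter(level for _, level in annotations)
print(emphasis.most_common())
```

With such a tally, the reported pattern (emphasis at the intermediate and detail levels, little at the superior level) shows up directly in the counts.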

Fig. 3. Analysis of form structure levels for the three-quarter front view set based
on heuristic evaluation of all elements indicated by designers

Fig. 4. Analysis of form structure levels for the three-quarter rear view set based
on heuristic evaluation of all elements indicated by designers

For the research questions, the specific findings were as follows:

RQ1. The ambiguous characteristics of styling DNA in designers' sketching processes of Malaysian design will lead to a natural variety in output. We refer to this phenomenon as "consistency." Thus, how do designers assess styling DNA through their sketching assignments against the intended achievement?
• Designers' sketching processes for styling DNA characteristically show a natural variety in consistency; the assigned task did not produce an identical result every time in locating the sequence map, or locus, of the genetic syntactic properties of styling DNA.
• For target performance, i.e. the ability of the designer to realize intent, performance varied considerably among designers and between assignments, as shown in this research. The introduction of ambiguity into the sketching process is, of course, a natural source of inspiration and variety towards establishing the syntactic properties of the styling DNA of the car.
• Reflective thinking led to new interpretations and presented opportunities for new solutions in the process of sketching on styling DNA as performed by the designers. In fact, designers introduced elements of a vertical character (i.e. divergent) in hermeneutic processes, a characteristic not found in the structured process.

• As proposed in this paper, a major difference among designers lies in the recognition and consideration of the purpose of form elements in the sequence maps, or locus, representing the genetic structure or genome standard of the syntactic properties of the styling DNA of the car. As suggested by Figures 3 and 4, the utility of form elements increases with greater level of detail; hence, at the other form levels, the utility is low.

RQ2. Designers choose which elements of styling DNA to sketch rather than transforming them uniformly. We refer to this phenomenon as "selectivity." Thus, we are interested in understanding what types of elements designers sketch. What are the characteristics (character traits) of these elements?
• This finding implies that designers, in fact, choose which elements to turn into form elements in the sequence maps, or locus, of the genetic syntactic properties of the styling DNA of the car, rather than transforming uniformly.
• The visual purpose is shaped and described by the main motif, which represents expressive characteristics and defines the typology of the product within the styling DNA sequence map.
• Most form transformation (represented by the generation of similarities and inconsistencies) occurs at the intermediate and detail levels of the syntactic properties of the styling DNA of car design.
• In the initial phases of exploring form elements in the sequence maps, or locus, representing the genetic structure or genome standard of the syntactic properties of the styling DNA of car design, utility is not of primary importance. Rather, the search for new stylistic themes, embodying new design formats and generating novel representations, an activity that may be far removed from the focus on utilitarian function, is of core interest.

RQ3. Designers may sketch only to a partial degree ("completeness") in the styling DNA process. How, then, are styling DNA elements treated by designers with respect to completeness?
• Our findings show that designers, in fact, design only to a partial degree, exhibiting a low level of completeness in sketch transformation towards elements in the sequence maps, or locus, representing the genetic structure or genome standard of the syntactic properties of the styling DNA of the car. Moreover, designers do not focus on creating a character for the traits. For example, the line indicating the split line between the bonnet and bumper is only transformed properly with regard to the embodiment of character concerning the brand and identity of the car.

5 Concluding Remarks

The research indicated that the "features" and "components" of form structure contributed to the form structure level of the chromosome. They are required to build and maintain the life form as well as all the inheritable traits of a car design for styling DNA. There are two contributions of this research. The first is the conceptual contribution of a cross-disciplinary and multidisciplinary approach towards understanding styling DNA and reliability, together with qualitative data on sketching activities. The second is the empirical contribution, such as visual information on styling DNA in design, functional reliability, and the adoption of academic research results into practice through industry cooperation. Future research could explore and develop algorithms for styling DNA that can control intuitive form features, which may have an arbitrary structure. Other research methods, such as "simulation trials" in the empirical analysis, should also be considered in a future study. A further question for the next stage of research is how to create the character as an embodiment agent of character traits concerning the brand and identity of the car design.

Acknowledgments

This research is gratefully supported by Universiti Teknologi MARA (Grant number: 600-RMI/RAGS 5/3 (128/2014)), Perusahaan Otomobil Nasional Sdn Bhd (PROTON), and the Ministry of Higher Education, Malaysia.

References

1. Tovey, M.: "Styling and design: intuition and analysis in industrial design." Design Studies, 18(1), 5-31 (1997)
2. Abidin, S.Z., Othman, A., Shamsuddin, Z., Samsudin, Z., Hassan, H.: "The challenges of developing styling DNA design methodologies for car design." Proceedings of E&PDE14, 16th International Conference on Engineering and Product Design Education – Design Education & Human Technology Relations, Enschede, DS78-2, 738-743 (2014)
3. Nusbaum, C., et al.: "DNA sequence and analysis of human chromosome 8." Nature, 439, 331-335 (2006)
4. Karjalainen, T.M.: "It looks like a Toyota: Educational approaches to designing for visual brand recognition." International Journal of Design, 1(1), 67-81 (2007)
5. Akner-Koler, C.: "Three-dimensional visual analysis." Stockholm: Reproprint (2000)
6. Muller, W.: "Order and meaning in design." Utrecht: Lemma Publishers (2001)
7. Warell, A.: "Design Syntactics: A Functional Approach to Visual Product Form." Göteborg: Chalmers University of Technology (2001)
8. Monö, R.: "Design for product understanding." Stockholm: Liber (1997)
9. Blessing, L., Chakrabarti, A., Wallace, K.M.: "An Overview of Descriptive Studies in Relation to a General Design Research Methodology." In: Frankenberger, E., Badke-Schaub, P., Birkhofer, H. (eds.): Designers - the key to successful product development. Germany: Springer-Verlag (1998)
Generating a user manual in the early design
phase to guide the design activities

Xiaoguang SUN1*, Rémy HOUSSIN1,2, Jean RENAUD1, Mickaël GARDONI1,3

1 INSA de Strasbourg, LGECO, 24 bd de la Victoire, 67084 Strasbourg Cedex, France
2 Université de Strasbourg, 3-5 rue de l'Université, 67000 Strasbourg, France
3 ÉTS / LIPPS, Montréal, H3C 1K3, Québec, Canada
* Corresponding author. Tel.: +33669503151; fax: +33368854972. E-mail address: remy.houssin@insa-strasbourg.fr

Abstract: In order to improve product performance, this paper provides specific guidance for designers to take product usage information into account in the early design phase. Firstly, the influence of design modification on the design process is presented. Secondly, by reviewing some helpful studies, the primary reasons why design modifications occur after the prototyping phase are stated. Thirdly, we propose that a user manual can be created before the prototyping stage to direct the design activities. We define the design process on 3 levels: the function level, the task level, and the behavior level. At the function level, designers decompose the high-level function into sub-functions down to the elementary level, where all elementary functions are arranged in a pre-determined order. At the task level, designers allocate each elementary function to the machine, to the user, or to an interaction between the two. At the behavior level, designers study the interrelationships between the user's behavior and the structure's behavior to examine whether the interactions meet the required performance or not. Finally, a simple case verifies the effectiveness and practicability of the proposed method.

Keywords: Design Process; Function; Task; Behavior; User manual.

1 Introduction

Most mechanical engineering designs involve various considerations (safety, cost, availability, ecology, etc.), and designers should deal with these considerations in proper proportion [1]. The object of our study can be a machine, a production system, a piece of equipment, or a simple product. Usually, users intervene in product testing after the prototyping phase, and some modifications may be required. Such a change involves modifying the product or some correlated documents. It can take
© Springer International Publishing AG 2017 1167


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_117
place at any phase of the whole product development process. Basically, it may be due to various reasons, such as market response, imperfect customer and user requirements analysis, lack of method and tool support, etc. [2], as shown in Figure 1. Usually, the costs of a design change are very high, and some additional equipment and procedures may be brought into the system that affect product performance. The impact of early design changes on the entire product development process is smaller than that of late changes, and the cost of a change rises with development time. Consequently, the sooner a needed design alteration is discovered, the better; this is significant for the whole product development process.
Fig. 1. Design modifications in the V-model (descending branch: Customer Requirements Engineering, System Requirements Engineering, System Architecture Engineering, Subsystem Design, Prototype; ascending branch: Unit Testing, Integration Testing, System Testing, Acceptance Testing; modification drivers: market, product positioning, customer/user requirements, product development means, support tools, etc.).

Nevertheless, modifications sometimes cannot solve the problems: simply changing the structure of the product rarely resolves the contradictions between product and user. Some useful procedures are brought into the system merely to ensure an acceptable interaction between product and user; however, users may find that the result poorly supports the way they work. The main reason is that most current design studies stagnate at the functional analysis level, without examining the global system's behavior (system and user); technical solutions cannot solve all design problems. The user aspect is an important factor in product operation, yet few practicable methods exist for designers to take all these factors into account in design activities [3]. This is also one of the main reasons for design modifications. This paper concentrates on improving product performance by taking product usage information into account during the early design phase, in order to reduce the design changes that may occur after the prototyping stage.

2 State of the art

Up to now, much existing work on estimating user experience during the design phase has been reviewed by Chaffin [5], and digital human models (DHM) have received the most attention for virtual manufacturing designs from an ergonomic perspective [5, 6, 7, 8]. A DHM allows the designer to simulate user action, ability, and performance and to find potential problems, which can save a lot of time and money during the design and prototype testing phases [5]. However, this method also has several limitations. For instance, the simulation is usually carried out at the end of the design process. At this point, the detail design has been finished and the product's model (CAD model) is in the issued state. If the simulation results are not satisfactory, designers have to spend a lot of time and energy modifying the product model again. Moreover, simulation must rely on real user motion data, and data collection takes a lot of time and energy. Other studies show that human mental and sensory aspects are also crucial for improving performance and reducing errors. In order to reduce the workload of the operator, Naderpour et al. [9] built an abnormal situation modeling (ASM) method to help operators handle problems in abnormal situations (intensive and strenuous mental activities). This method can effectively decrease the operator's mental stress and error rates in operation. However, its knowledge-updating function still requires improvement.
2.1. Function, Structure and Behavior
The ultimate goal of most engineering design is to create an artifact, product, system, or process that carries out a function or functions to meet customer needs. The Function-Behavior-Structure (FBS) framework was first proposed by Gero [10], who pointed out that design activities involve a set of links among function, behavior, and structure. The function can be considered as meeting the customer requirements; it shows the purposes of the product (system). The structure can be defined as the units of the product; it expresses the internal and external states of a physical element. The behavior is a property (rotation, rolling, vibration movements, etc.) of the structure and/or the user. Actually, behaviors are the real outputs of the system (product): they reveal the physical characteristics of the product and reflect product performance, whereas a function is only a desire [11]. Unfortunately, these behaviors are studied only from a technical point of view, in order to verify reliability and potential problems in the detailed design phase, not from the user's perspective.
A general problem that most designers are confronted with is that there may be a gap between the desired performance and the actual performance. For instance, design activities are carried out according to customer requirements to design an organized object that runs reliably and efficiently. However, when the design is finished, the prototype may not run as efficiently as expected; some dangerous phenomena may even be generated, the users might find it difficult to support the way they work, and the human-machine interface may provide a poor match with the needs of the task [12]. These problems may be attributable to the following: (1) users intervene in prototype testing only in the last design phase; (2) after the prototyping phase, some additional equipment (sensors, protection devices, etc.) may be added for security purposes. This additional equipment usually prevents some dangerous phenomena effectively during product operation, but it may interrupt the continuity of device operation and thereby reduce productivity; (3) the design process is iterative, and especially for a complex product there are no mature methods for designers to fully take user information into account during the early design phase.

Although taking product use factors into account during the design stage is strictly required by European directives 98/37/EC and 2006/42/EC [13, 14], technical solutions are freely selected by designers in accordance with the specified requirements. Moreover, the lack of method support makes it difficult to consider product usage information during the design phase.
2.2. Behavioral Design Approach
Usually, the customer is the person who purchases the product, not the person who actually uses it. The customer takes notice of the product's function, productivity, cost, efficiency, etc., while the end user may pay more attention to the product's reliability, security, usability, and operability. This means that designers need to transform not only the customer requirements but also the user requirements into product performance.

The Behavioral Design Approach was proposed by SUN et al. [3] to help designers find potentially hazardous phenomena and zones of the product (system) during the early design stage by analyzing the interaction between the user's behavior and the structure's behavior. They held that the system's function can be achieved by a technical solution (automatic function) or by the user (manual function). Designers find the structure for the automatic function in accordance with some principles, and the manual function is performed by the user. Then, the designer obtains the structure's behavior from the structure's tasks and the user's behavior from the user's tasks. If the interaction between the user's behavior and the structure's behavior does not ensure the required performance, designers should make some modifications. A behavioral design approach (BDA) software tool was developed by SUN et al. [3] as a simple case to validate the applicability of this method. However, many problems still demand further research: for example, how to categorize disparate users (e.g. specialists, experienced users, normal users, and newcomers), how the designer could acquire more knowledge of the user, how designers should use this method and by which means, and so on.

3 Considering the product usage information during the early design phase

As discussed above, one of the main contributing factors behind these problems is the human and usage aspect. In order to take product usage information into account during the design phase, the designer should know how the user operates the product (system).

3.1. The role of the user manual


When the user first gets access to the product, how does the user know how to operate it? If the user has seen a similar product before, he or she will follow that experience. Otherwise, the user has no choice but to turn to the user manual or user guide. Actually, most users prefer combining the knowledge in their minds to turning to the user guide. Usually, a user model [16] develops in the user's mind during interaction with the product (system). It is a mental model of how the user thinks he or she operates the product (system), and it depends completely on how much knowledge the user has acquired. There are some variances between the designer's conceptual model and the user's mental model because they are at different knowledge levels. The best situation is that the user can operate the product (system) correctly in accordance with the designer's conceptual model without any guidance (user manual, user guide, etc.). This may be possible for a simple product (system); however, a user guide is necessary for a complicated product (system).

Usually, a product and a user manual are provided together to the customer, and the product receives more attention than the user manual. The user manual is usually created by an expert at the end of the design process (Figure 2). Actually, a good user manual not only helps the user operate the product in a correct and efficient way, but also helps the enterprise save a large amount of staff training and customer service cost. However, user manuals tend to be less helpful than they should be. Even the best manuals cannot be counted on: many users do not read them.

Fig. 2. User manual is created at the end of the design process.

3.2. Generating a user manual during the early design phase


The best approach to design work is to write the user manual first (or in parallel), and then carry out the design activities by following the manual [16]. While the product is being designed, potential users can simultaneously test the manual and the model of the system, giving important design feedback on both. If the user manual can be created during the design phase: (1) a much more satisfactory resolution can be devised; (2) designers can learn how the user operates the product and develop a conceptual model suitable for the user; and (3) some unfavorable or dangerous phenomena (from the user's perspective) can be avoided, as shown in Figure 3.

Fig. 3. Generating a user manual during the design stage.


We propose to take the user's factors into consideration by creating a user manual to direct the design activities during the early design stage. If (1) all elementary functions are completed in a determined order, (2) all tasks are completed in a determined order, and (3) all entities are completed in a determined structure, then the designer can create a user manual during the design stage.
Normally, a product's (system's) function can be divided into many sub-functions, and each sub-function may be subdivided further. Each sub-function can be fulfilled by a task. To achieve a function, the task may be allocated to the machine, to the user, or to an interaction between the two. Therefore, designers cannot immediately classify a function as automatic (allocated to the machine) or manual (allocated to the user). Here, we define the design activities on 3 levels: the function level, the task level, and the behavior level, as shown in Figure 4.

Fig. 4. The function-to-behavior process: function level → task level → behavior level

At the function level, designers first determine the overall goal of the system. It can be described as "What does the system want to achieve?", without yet being concerned with how to achieve that goal. Designers decompose the high-level function into elementary functions using the function analysis method; an elementary function can only be performed by a single entity. Finally, to accomplish the high-level function, the set of elementary functions must be arranged in a pre-determined order, since information generated by one function will be an input to other functions.
At the task level, the question "how is the overall goal of the system achieved?" should be answered. Each elementary function can be fulfilled by a task; designers allocate each task to the machine (technical task), to the user (socio-technical task), or to an interaction between the two (interactive task) in accordance with the Structured Analysis and Design Technique (SADT) [17], as shown in Figure 5 (a). Each task (whether technical, socio-technical, or interactive) is carried out with some resources and under some constraints, has an input, and outputs links to another task. Designers analyze the interaction with each related task to verify the overall task order and hierarchy. At this level, the link between tasks and the order and hierarchy of each task are determined using the Structure Tree Method.
Fig. 5. Structured Analysis and Design Technique (SADT): (a) the generic activity box, with input, activity, control (sensor), and resources (user, mechanical devices); (b) the tap example: turn on the water → wash hands → turn off the water, carried out by the user and the mechanical devices
At the behavior level, designers find the structure for each technical task. Meanwhile, the task object and the user behavior are identified. Here, we use the behavioral design approach [3] to analyze the interaction between the user's behavior and the structure's behavior. If this interaction does not ensure the needed performance, designers have to make some modifications: they can modify the structure, go back to the task level to reallocate the task to the user or the machine, or go back to the function level to modify the function decomposition.
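The three levels can be sketched as a minimal data model: ordered elementary functions are allocated to task types, and a behavior-level check either accepts each allocation or sends it back for reallocation. The function names, allocations, and performance criterion below are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of the function -> task -> behavior levels of Figure 4.
# Function names, allocations, and the performance check are illustrative.
from dataclasses import dataclass

@dataclass
class ElementaryFunction:
    name: str
    order: int         # pre-determined execution order
    allocation: str    # "technical", "socio-technical", or "interactive"

# Function level: the high-level function "wash hands" decomposed and ordered
wash_hands = [
    ElementaryFunction("turn on the water", 1, "interactive"),
    ElementaryFunction("wash hands", 2, "socio-technical"),
    ElementaryFunction("turn off the water", 3, "socio-technical"),
]

# Behavior level: an assumed performance requirement -- no task allocated to
# the user may rely on the user's memory (cf. the Alzheimer tap case below).
def meets_performance(func, user_must_remember):
    return func.allocation == "technical" or not user_must_remember

for f in sorted(wash_hands, key=lambda f: f.order):
    if not meets_performance(f, user_must_remember=(f.name == "turn off the water")):
        f.allocation = "technical"  # back to the task level: reallocate to machine

print([(f.name, f.allocation) for f in wash_hands])
```

Here the check fails only for "turn off the water", so that task is reallocated to the machine (e.g. an automatic shut-off) while the other allocations survive.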

4 The case study

Nowadays, poor designs are commonplace around the world, especially for special user groups. For instance, Alzheimer's patients (Alzheimer's is a chronic neurodegenerative disease) cannot turn on the tap to wash their hands after using the bathroom; they do not know how to use the tap (Figure 6), even when guidelines have been put in front of them, because they have lost the ability to learn (Figure 7). They still need help from the staff, which in effect increases the staff's workload.

Fig. 6. Tap for Alzheimer patients.



Fig. 7. Guidelines proposed by the staff.


In the designer's conceptual model, it takes three steps to use this kind of tap: 1) rotate the knob clockwise to discharge hot water and anticlockwise to discharge cold water; 2) get water; 3) rotate the knob in the opposite direction to turn off the tap. We carried out an investigation with the staff who work at the Alzheimer's Disease Association. The patients do not know to turn the knob, press the knob, or stretch out their hands toward the sensor to get water, because they do not know where to put their hands. They only know how to use the old classical tap with a handle. More importantly, they have lost the ability to learn and are always forgetful. Therefore, the user's (Alzheimer's patient's) mental model is: 1) they want to find the handle; 2) they do not find the handle; 3) they cannot turn on the tap. The contradictions between the designer's conceptual model and the user's mental model are obvious.
Designers should follow the way the Alzheimer's patients use the tap; otherwise, the result will be a poor design. According to the user's (Alzheimer's patient's) mental model, we propose the following model for designers: 1) turn the handle to the right to discharge the water; 2) wash hands; 3) finished (patients often forget to turn off the tap).
Therefore, the conceptual user manual for Alzheimer's patients should follow three main steps: 1) turn the handle to the right to discharge the water; 2) wash hands; 3) the handle is turned back automatically after the patient leaves, as shown in Figure 5 (b). In this way, designers also obtain other useful information that benefits the design activities; for example, the tap must have a handle.
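Under this scheme, the conceptual user manual falls out of the task allocation: only tasks performed by the user (socio-technical or interactive) become instructions, in their pre-determined order. A minimal sketch with the tap tasks as assumed data:

```python
# Sketch: deriving the conceptual user manual from the task allocation.
# The task list mirrors the tap example; names are illustrative assumptions.
tasks = [
    {"order": 1, "allocation": "socio-technical",
     "name": "turn the handle to the right to discharge the water"},
    {"order": 2, "allocation": "socio-technical", "name": "wash hands"},
    {"order": 3, "allocation": "technical",  # automatic: no instruction needed
     "name": "turn the handle back"},
]

def manual_steps(tasks):
    """Keep only tasks the user performs, in their pre-determined order."""
    user_tasks = [t for t in sorted(tasks, key=lambda t: t["order"])
                  if t["allocation"] in ("socio-technical", "interactive")]
    return [f"{i}) {t['name']}" for i, t in enumerate(user_tasks, start=1)]

for step in manual_steps(tasks):
    print(step)
```

The automatic shut-off never appears as an instruction, matching the requirement that the patients not be asked to remember to turn off the tap.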

5 Conclusion and future work

In this paper, we propose to take the user's factors into consideration by creating the user manual to direct the design activities during the early design stage. We define the design activities on 3 levels: the function level, the task level, and the behavior level. At the function level, designers decompose the high-level function into sub-functions down to the elementary level. At the task level, designers allocate each elementary function to the machine, to the user, or to an interaction between the two. At the behavior level, designers analyze the interaction between the user's behavior and the structure's behavior. The case we presented shows the problems that result from not considering the user factor, and we identified a conceptual solution.
This method provides a new vision for designers carrying out design activities. It allows designers to find, before the prototyping stage, potential problems that may occur during the product use phase. Our future work focuses on improving our method and applying it in practice to help designers carry out their design work (developing software or providing a systematic approach). We will also evaluate design schemes by using the behavioral design approach. We look forward to new contributions in the broad areas under this topic.

References
1. Juvinall R C, Marshek K M. Fundamentals of machine component design. New York: John
Wiley & Sons, 4-14, (2006).
2. WAN L, GUO G, DAI H. Realization of Engineering Change in PLM System. Journal of
Chongqing University (Natural Science Edition), 28(1), 112-115, (2003)
3. Huichao Sun, Rémy Houssin, Mickael Gardoni, et al. Integration of user behavior and
product behavior during the design phase: Software for behavioral design approach. Inter-
national Journal of Industrial Ergonomics, 43(1), 100-114, (2013)
1176 X. Sun et al.

4. Kolich M, Taboun S.M. Ergonomics modelling and evaluation of automobile seat comfort.
Ergonomics, 47(8), 841-863, (2004)
5. Chaffin D B. Improving digital human modelling for proactive ergonomics in design. Ergo-
nomics, 48(5), 478-491, (2005)
6. Julian Faraway, Matthew P. Reed. Statistics for Digital Human Motion Modeling in Ergo-
nomics. Technometrics, 49(3), 277-290, (2007)
7. Jung, K., Kwon, O., & You, H. Development of a digital human model generation method
for ergonomic design in virtual environment. International Journal of Industrial Ergonom-
ics, 39(5), 744-748, (2009)
8. Magistris G D, Micaelli A, Savin J, et al. Dynamic digital human models for ergonomic
analysis based on humanoid robotics techniques. International Journal of the Digital Hu-
man, 1(1), 81-109, (2015)
9. Mohsen Naderpour, Jie Lu, Guangquan Zhang. An abnormal situation modeling method to
assist operators in safety-critical systems. Reliability Engineering and System Safety 133,
33-47, (2015)
10. Gero, J. S. Design prototypes: a knowledge representation schema for design. AI Magazine
11(4), 26-36, (1990).
11. David G. Ullman. The Mechanical Design Process, Fourth Edition. McGraw-Hill, New
York, (2010)
12. Martin Maguire. Socio-technical systems and interaction design-21st century relevance.
Applied Ergonomics 45(2), 162-170, (2014)
13. Directive 98/37/EC of the European Parliament and of the Council of 22 June 1998 on the
approximation of the laws of the Member States relating to machinery. OJ L207, p. 1-46
(23.7.1998).
14. Directive of the European Parliament and of the Council of 17 May 2006. On machinery,
and amending Directive 95/16/EC (recast). OJ L157, p. 24-86 (9.6.2006).
15. Cordero, Cristina Alén, and José Luis Muñoz Sanz. Measurement of machinery safety lev-
el: European framework for product control: Particular case: Spanish framework for market
surveillance, Safety Science, 47(10), 1285-1296, (2009).
16. Norman, Donald A. The design of everyday things. Basic books, 190-191, (2002).
17. Ahmed F, Robinson S, Tako A A. Using the structured analysis and design technique
(SADT) in simulation conceptual modeling. Proceedings of the 2014 Winter Simulation
Conference. IEEE Press, 1038-1049, (2014).
Robust Ergonomic Optimization of Car
Packaging in Virtual Environment

Antonio LANZOTTI1, Amalia VANACORE1*, Chiara PERCUOCO1

1 Department of Industrial Engineering, University of Naples Federico II
* Tel.: +39-081-768-2930; fax: +39-081-768-2187. E-mail: amalia.vanacore@unina.it

Abstract. Ergonomic design of automotive seats is a very challenging task whose
results may directly influence driver comfort and safety. Seat comfort can be
improved by identifying car-packaging solutions that allow an optimal driver’s
posture. In order to reduce the time and cost for testing, ergonomic analysis is car-
ried out in virtual reality (VR) environment with digital human models (DHM)
that can be used to simulate the anthropometric variability of a target population
of users and thus verify the robustness of design solutions with respect to the an-
thropometrical noise factor. In this paper we illustrate a case study concerning the
comfort improvement of a minicar packaging set up via robust ergonomic design
(RED) with digital human models. The aim is the identification of the optimum
levels for the seat control parameters that minimize the driver’s comfort loss with
respect to a preferred posture. The approach adopted for the analysis of data ob-
tained from the virtual experiments is based on the joint generalized linear model-
ing of mean and dispersion of the driver’s postural comfort loss (i.e. the ergonom-
ic response of interest).

Keywords: Robust Ergonomic Virtual Design; Postural Comfort Loss; General-


ized Linear Models; Seat Comfort Improvement

1 Introduction

Robust Design (RD) is a well-known systematic, statistics-based methodology for
designing products whose performance is least affected by variations due, for
example, to diversity in product components (i.e. inner noise) and/or diversity in
the external environment (i.e. outer noise). Pioneered in the 1980s by Dr. Genichi
Taguchi at the AT&T Bell Laboratories [1, 2], the RD methodology has been
extensively used in industry to improve the performance of many processes and
products.
In the field of ergonomic design there are several successful applications of
Taguchi methods [3]; in particular, RD methods have become widely adopted in
automotive industries because of their cost- and time-efficiency for quality
improvement in a vehicle development process. Many of these applications concern
the vehicle seat design, aiming at minimizing the driver's discomfort by finding
design solutions that are as insensitive as possible to anthropometric
variability, which, in this specific context of application, can be considered one
of the most critical noise factors to deal with.
In this paper we further develop the Robust Ergonomic Design (RED) strategy
proposed in [4], which aims at effectively planning comfort improvement
experiments in a virtual reality (VR) environment using Digital Human Models
(DHMs) in order to identify the optimum levels for the seat control parameters
that minimize the driver's comfort loss with respect to a preferred posture [5].
The RED strategy can be schematized in three main phases. The first phase aims
at developing a (physical or virtual) prototype of the system or product consid-
ered. The prototype should be made considering all the system/product require-
ments, including interactions with the users, and should allow for the analysis of
the main ergonomic features. The second phase concerns the identification of a
relevant ergonomic response for the system/product under study as well as the
main design parameters and noise factors impacting on it. The ultimate aim of this
phase is to predict and evaluate the best setting of the design parameters (with re-
spect to the ergonomic response) that provides a robust solution with respect to the
action of noise factors. Finally, the third phase aims at improving the optimal pa-
rameter settings to get a satisfactory level of performance of the system consid-
ered.
The data analysis tools adopted in [4] allow investigating only the impact of
design factors on the mean system response, neglecting their effects on the
variation of the system response.
Since, in comfort improvement applications, achieving high precision by
minimizing the dispersion is as important as getting the mean on target, in this
paper the strategy adopted for the analysis of data is based on the joint
generalized linear modeling of the mean and dispersion of the driver's postural
comfort loss, which is defined as the ergonomic response of interest.
The whole strategy is fully generalizable to ergonomic improvement contexts
other than the one illustrated in the case study. However, the discomfort index
representing the system's ergonomic response can be applied to improve postural
(dis)comfort only when an ideal or preferred posture can be identified.
The paper is organized as follows: in the second section a description of the main
design factors of minicar packaging is provided; the VR experimental setup is de-
scribed in the third section; in the fourth section the strategy for the analysis of da-
ta obtained from comfort improvement experiments is fully described; the results
of robust optimization for the minicar packaging are summarized in the fifth sec-
tion; finally conclusions are drawn in the sixth section.

2 Minicar packaging definition

The primary focus in minicar packaging is the driver's workstation. The driver's
package refers to the locations and adjustment ranges of the steering wheel and
seat with respect to the pedals, but also encompasses the physical locations of con-
trols and displays with which the driver interacts [7].
The starting point for the overall layout and packaging is the so-called Seating
Reference Point (SRP) that defines the position of the occupant in the vehicle. The
SRP describes the hip point of a 50th percentile manikin, specifically the pivot
point of the torso and upper legs. Other important reference points used as
standard practice to define the driver's position are the H30 (the hip point
measured relative to the vehicle floor), the BOF (Ball of Foot: the ball joint of
the foot) and the AHP (Accelerator Heel Point: the position of the heel while
placed on the accelerator).
Following designer considerations and the SAE packaging guidelines, in [4] the
following five design parameters for minicar packaging setup were identified as
potentially affecting driver comfort:
A. Lumbar Prominence
B. H30;
C. Seat Cushion Angle;
D. Steering Wheel to BOF;
E. Seat Track Angle.

Fig. 1. Design parameters chosen in the pre-design experimental phase

The initial levels of the above design factors (in bold in Table 1) were chosen on
the basis of designer remarks, the evaluation of best-in-class vehicles, and
initial trials in the virtual environment carried out to avoid unrealistic
postures. The angular positions (parameters C and E) are positive clockwise.

Table 1. Description of design parameters and levels

Design Factor               Level -1   Level 1   Dimensions
A. Lumbar Prominence        1.3        10        cm
B. H30                      40.0       50        cm
C. Seat Cushion Angle       0          13.9      degrees
D. Steering Wheel to BOF    35.7       60        cm
E. Seat Track Angle         -0.9       6         degrees

3 Comfort improvement experiments with Digital Human Models (DHMs)

Comfort improvement experiments were aimed at identifying the optimum levels for
the design parameters in order to minimize the driver's comfort loss with respect
to a preferred posture identified by the optimal joint angles defined in [5] and re-
ported in Table 2.

Table 2. Summary statistics (values in degrees) of preferred joint angles as defined in [5]

J   Joint angle         Min   Modal value   Modal value   Max
                              for males     for females
1   Neck inclination    30    47            44            66
2   Arm flexion angle   19    50            40            75
3   Elbow angle         86    128           113           164
4   Trunk-thigh angle   90    101           99            115
5   Knee angle          99    121           117           138
6   Foot-calf angle     80    93            92            113

For a specific design setting, the preferred driving posture, and thus the values
recorded at the driver's joint angles, depend on the driver's anthropometrical
characteristics [5-10]. In order to obtain a robust solution for the minicar packaging design, the
anthropometric variability must be considered as a noise factor in the experimental
design and accounted for when evaluating (dis)comfort. For these reasons, in [4],
the experiments — performed with digital human model (DHM) Jack by UGS —
were planned according to a cross-array with an inner array defined as a two level
full factorial (25) design and an outer array with only four runs corresponding to
four levels of the anthropometrical noise factor (i.e. 5th, 50th percentile of stature
over the female user population and 50th, 95th percentile over the male user popula-
tion).
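The cross-array described above can be enumerated in a few lines (an illustrative sketch, not the authors' experimental code; the manikin labels are shorthand for the four stature percentiles):

```python
# Cross-array layout: inner array = 2^5 full factorial over design
# factors A-E; outer array = four anthropometric manikins (5th and 50th
# female percentile, 50th and 95th male percentile).
from itertools import product

factors = ["A", "B", "C", "D", "E"]
inner_array = list(product([-1, 1], repeat=len(factors)))   # 32 design runs
outer_array = ["F05", "F50", "M50", "M95"]                  # noise levels

runs = [(setting, manikin) for setting in inner_array for manikin in outer_array]
print(len(inner_array), len(runs))   # 32 design settings, 128 virtual trials
```

Each of the 128 (setting, manikin) pairs corresponds to one posture simulation in the virtual environment.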
A tool of the Jack software, the Enhanced SAE Packaging Guidelines, interactively
gives the possibility to choose the car packaging for the market segment of
potential users. The packaging setting enables the generation of reproducible
postures of the manikin through the Posture Prediction tool, based on research of
the University of Michigan Transportation Research Institute (UMTRI) [7].
The anthropometrical variables sex and stature are easily managed in the Posture
Prediction software. After the sex, percentile, and posture of the virtual manikin
are fixed, the joint angles are measured through the Comfort Assessment tool,
based also on the results of Porter and Gyi [5].
Therefore, in order to test a generic design solution (defined by the i-th
experimental treatment in the inner array), it is necessary to:
• build the digital mock-up of the minicar corresponding to the tested design
setting;
• import the obtained digital mock-up into the virtual environment;
• choose the anthropometrical dimensions of the DHM;
• accommodate the DHM on the vehicle;
• analyze the DHM's joint angles.
The (dis)comfort response for the i-th (minicar packaging) design solution can be
expressed as total expected comfort loss (i.e. Weighted Comfort Loss; WCL) with
respect to m preferred joint angles (e.g. referring to Table 2, m is set equal to 6)
and, assuming n levels for the noise factor, it is expressed as follows:

y_i = WCL_i = Σ_{j=1}^{m} Σ_{l=1}^{n} L_j(h_l) w_l                        (1)

where L_j(h_l) is a quadratic comfort loss function estimated at the observed
j-th joint angle for the l-th level of the noise factor, h_l; the w_l are the
corresponding weights, set according to the discretization algorithm proposed in
[11]. For a fixed design setting, the WCL accounts for the anthropometric
variability affecting the joint angles.
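Equation (1) can be sketched in code as follows (an illustrative sketch with hypothetical angles, unit loss constants, and equal weights; the actual loss functions and weights follow [5] and [11]):

```python
# Weighted Comfort Loss (WCL) for one design setting, following Eq. (1):
# a quadratic loss per joint angle, summed over joints and noise levels.

def quadratic_loss(angle, preferred, k=1.0):
    """Quadratic comfort loss for one joint angle (k is a placeholder constant)."""
    return k * (angle - preferred) ** 2

def wcl(observed, preferred, weights):
    """observed[l][j]: angle of joint j under noise level l;
    preferred[j]: preferred angle of joint j; weights[l]: noise-level weight."""
    return sum(
        w * sum(quadratic_loss(a, p) for a, p in zip(row, preferred))
        for w, row in zip(weights, observed)
    )

# Toy example: m = 2 joints, n = 2 noise levels
observed = [[100.0, 118.0], [96.0, 122.0]]
preferred = [99.0, 117.0]   # e.g. trunk-thigh and knee modal values (Table 2)
weights = [0.5, 0.5]
print(wcl(observed, preferred, weights))   # → 18.0
```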

4 Data analysis strategy for comfort improvement experiments

The analysis of data obtained from comfort improvement experiments aims at
identifying the best (in terms of WCL) and the most robust (with respect to
anthropometric variability) setting of minicar packaging design parameters.
In this paper we pursue the objective of minimizing the variability of the
(dis)comfort response as well as moving its mean to the target (i.e. the nominal
value for the expected comfort loss is zero) by adopting the joint modeling of
mean and dispersion via generalized linear models, GLMs [12]. A GLM generalizes
linear regression by allowing the linear model to be related to the response
variable via a link function and by allowing the magnitude of the variance of
each measurement to be a function of its predicted value.
The basic idea of applying GLMs in the context of robust design is to replace
Taguchi's performance measures with two distinct models, one for the mean and one
for the dispersion [13]. We briefly introduce the general aspects of GLMs useful
in this specific field of application.
Under the assumption that the WCL for the i-th minicar packaging design solution,
WCL_i, is distributed according to a density function belonging to the
exponential family, its expected value and variance can be expressed as follows:

E[WCL_i] = μ_i                                                            (2)

Var[WCL_i] = φ_i V(μ_i)                                                   (3)

In (3) the variance of WCL_i is the product of two components: the variance
function V(μ_i), which is functionally dependent on the mean μ_i, and a
dispersion parameter φ_i, which does not depend on the mean of WCL_i.
The mean and dispersion models for WCL_i for the i-th (minicar packaging) design
solution can be expressed as:

η_i = g(μ_i) = X_i β                                                      (4)

ζ_i = h(φ_i) = X_i γ                                                      (5)

where g(·) and h(·) are two link functions, which provide the relationships
between the linear predictors and the mean and dispersion of WCL_i, respectively,
and X_i is the i-th row of the model matrix.
The models (4) and (5) are interlinked: the deviance component from the model
for the mean becomes the response for the dispersion model, and the inverse of
the fitted values for the dispersion model gives the prior weights for the mean
model.
A joint modeling of mean and dispersion allows identifying the relevant factors
for the two models and then finding the setting of experimental factors that
minimizes the variance while holding the mean on target.
The adopted strategy for analysis is articulated into four main phases:
1. selection of Variance Function (VF) and Link Function (LF) for the mean
and dispersion models;
2. selection of relevant design factors to be included in the mean and disper-
sion models;
3. identification of the mean and dispersion models by iteratively re-weighted
least squares;
4. identification of the best setting of the significant main design factors and
second order interactions.
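The interlinked fitting loop of phase 3 can be sketched on a toy example (a numpy-only illustration under simplifying assumptions — synthetic data, one mean factor and one dispersion factor — not the authors' implementation):

```python
# Joint mean/dispersion modelling: an identity-link mean model fitted by
# weighted least squares and a log-link Gamma-type dispersion model fitted
# by IRLS on the squared residuals (the deviance components), with the
# inverse fitted dispersions fed back as prior weights.
import numpy as np

def wls(X, y, w):
    """Weighted least squares: minimize sum_i w_i (y_i - X_i b)^2."""
    sw = np.sqrt(w)
    return np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

def gamma_glm_log(X, d, n_iter=25):
    """IRLS for a Gamma GLM with log link (unit working weights)."""
    gamma = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ gamma
        mu = np.exp(eta)
        z = eta + (d - mu) / mu          # working response
        gamma = np.linalg.lstsq(X, z, rcond=None)[0]
    return gamma

rng = np.random.default_rng(1)
Xm = np.column_stack([np.ones(64), rng.choice([-1.0, 1.0], 64)])  # mean: 1, B
Xd = np.column_stack([np.ones(64), rng.choice([-1.0, 1.0], 64)])  # disp: 1, AD
y = 11.0 + 1.2 * Xm[:, 1] + rng.normal(scale=np.exp(0.5 * Xd[:, 1]), size=64)

w = np.ones(64)
for _ in range(10):                      # a few cycles usually suffice
    beta = wls(Xm, y, w)
    d = (y - Xm @ beta) ** 2             # deviance components -> dispersion response
    gamma = gamma_glm_log(Xd, d)
    w = 1.0 / np.exp(Xd @ gamma)         # prior weights = inverse fitted dispersion

print(beta.round(2), gamma.round(2))
```

With the heteroscedastic noise above, the loop recovers an intercept near 11 for the mean model while the dispersion model absorbs the variance heterogeneity.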

5 Results

The data analysis has been carried out coherently with the strategy described in
the previous section.
The elements chosen for the mean model are a constant VF and the identity LF,
whereas for the dispersion model the selected VF is the identity function and the
selected LF is the logarithmic function. These choices are summarized in Table 3.

Table 3. Components of the models

                   Variance Function   Link Function
Mean Model         1                   μ_i
Dispersion Model   φ_i                 log φ_i

For each model, a half-normal probability plot has been used in order to identify
the relevant effects. The half-normal plot shows, in increasing order, the
absolute values of the estimated effects on the vertical axis against their
corresponding half-normal scores on the horizontal axis. The insignificant
effects lie along the solid straight line, whereas the dashed line represents a
significance upper limit that makes the significant effects easy to detect as
those falling above it.
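The plot coordinates are straightforward to compute; a sketch (illustrative, not the authors' code; the "AE" value is a hypothetical null effect added for contrast):

```python
# Half-normal plot coordinates (Daniel's plot): absolute effect estimates
# sorted increasingly, paired with half-normal quantile scores.
from statistics import NormalDist

def half_normal_coords(effects):
    """Pair each sorted |effect| with its half-normal score."""
    abs_sorted = sorted(abs(e) for e in effects)
    k = len(abs_sorted)
    scores = [NormalDist().inv_cdf(0.5 + 0.5 * (i - 0.5) / k)
              for i in range(1, k + 1)]
    return list(zip(scores, abs_sorted))

# A few effect magnitudes from Table 4 plus a hypothetical null effect "AE"
effects = {"B": 1.1829, "C": 0.7547, "D": 1.3949, "BD": 2.672, "AE": 0.05}
for score, effect in half_normal_coords(effects.values()):
    print(round(score, 3), effect)
```

Effects whose |value| rises well above the near-linear trend of the smallest points (here BD) are the candidates for significance.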

Fig. 2. Half Normal plot for the model of the mean



Fig. 3. Half Normal plot for the model of the dispersion

The half-normal plots in Fig. 2 and Fig. 3 highlight that D and BD may be
significant effects for the mean model, whereas AD, AB, and CE may be significant
effects for the dispersion model. Since the half-normal plot may misclassify
significant effects as null effects, this graphical tool has been supplemented
with a stepwise test procedure, which confirmed the results of the half-normal
plot for the dispersion model and identified two further significant effects
(i.e. B and C) for the mean model. The results of this double selection procedure
allowed the mean and dispersion models to be defined as follows:

μ̂_i = β̂_0 + β̂_B X_B + β̂_C X_C + β̂_D X_D + β̂_BD X_BD                      (6)

log φ̂_i = γ̂_0 + γ̂_AD X_AD + γ̂_AB X_AB + γ̂_CE X_CE                        (7)
The parameter estimates for the mean model in (6) and dispersion model in (7)
have been obtained by iteratively re-weighted least squares (10 cycles sufficed)
and they are reported in Table 4.

Table 4. Analysis for the minicar packaging data

Mean Model                                   Dispersion Model

Factor   β̂        Std. error   t value      Factor   γ̂        Std. error   z value
1        11.0374   0.1618       68.192       1         0.8257   0.25          3.303
B        1.1829    0.1618       7.3088       AD        1.7025   0.25          6.809
C        0.7547    0.1449       5.2070       AB       -0.5174   0.25         -2.0695
D        1.3949    0.1618       8.6184       CE       -0.6921   0.25         -2.7686
BD       2.672     0.1618       16.508

The values of the mean and dispersion of WCL obtained for the solution identified
by the best setting of the design parameters are reported in Table 5, together
with the initial setting and the corresponding WCL. The advantage of the
identified design solution over the initial setting is evident.

Table 5. Initial and best settings and related response (expected WCL and its dispersion) values

Initial setting                          Initial response
A=-1; B=-1; C=1; D=-1; E=-1              11.16

Model                Best setting                     Response
Mean Model           B=1; C=-1; D=-1; BD=-1           7.40
Dispersion Model     AD=-1; AB=1; CE=1                0.124
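As a consistency check, the best-setting responses in Table 5 can be reproduced from the coefficient estimates in Table 4 by evaluating models (6) and (7) at the best settings:

```python
# Reproducing the Table 5 best-setting responses from the Table 4 estimates.
import math

# Mean model (6) at the best setting B=1, C=-1, D=-1, BD=-1
mean_wcl = 11.0374 + 1.1829 * 1 + 0.7547 * (-1) + 1.3949 * (-1) + 2.672 * (-1)

# Dispersion model (7) at the best setting AD=-1, AB=1, CE=1
dispersion = math.exp(0.8257 + 1.7025 * (-1) - 0.5174 * 1 - 0.6921 * 1)

print(round(mean_wcl, 2), round(dispersion, 3))   # → 7.4 0.124
```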

6 Conclusions

This paper focuses on the analysis of data from comfort improvement experiments
based on the joint modeling of the mean and dispersion of a discomfort index
(i.e. the WCL). The adopted approach overcomes some criticisms raised against
Taguchi's signal-to-noise ratio functions by:
• characterizing the control factors in terms of their prevalent action on the
mean or on the dispersion, which enables tuning the mean response to the target
without affecting the dispersion, and vice versa;
• relaxing the classical linear model assumptions, which may be too restrictive
and unrealistic for the experimental context of interest (i.e. normality of the
system performance response to be optimized, additivity of the effects of the
experimental factors, homoscedasticity).
For the specific application, the results confirm, in terms of relevant factors
and interactions, the results obtained from previous analyses of the same
experimental data [4], but in addition they provide relevant information on the
variation of the response (i.e. the WCL) for the best setting. Moreover, compared
to the results of the previous analyses, the best minicar packaging setting
identified by the adopted joint GLMs produces a slight increase in the mean of
WCL (7.40%) but at the same time a substantial reduction of its dispersion
(96.68%). The results of the case study refer to a mixed population of users with
a gender composition coefficient equal to 50%; future work will investigate the
robustness of the optimal design solution for different gender composition
coefficients.

References

1. Taguchi, Genichi. Introduction to quality engineering: designing quality into products and
processes. 1986, Asian Productivity Organization.
2. Taguchi, G, Chowdhury, S. & Wu, Y., Taguchi’s Quality Engineering Handbook, 2005,
John Wiley & Sons, Hoboken.
3. Ling C., Taguchi Method for Ergonomic Design in International Encyclopedia of Ergonom-
ics and Human Factors, Second Edition, 2006, vol. 3, 3372-3375.
4. Lanzotti, A., Robust design of car packaging in virtual environment, International Journal on
Interactive Design and Manufacturing (IJIDeM), 2008, 2(1), 39-46.
5. Porter, J., Gyi D., Exploring the optimum posture for driver comfort, International Journal of
Vehicle Design, 1998, 19(3), 255-266.
6. Schmidt, S., Amereller, M., Franz, M., Kaiser, R., Schwirtz, A, A literature review on opti-
mum and preferred joint angles in automotive sitting posture, Applied ergonomics, 2014,
45(2), 247-260.
7. Parkinson, M., Reed, M., Optimizing vehicle occupant packaging, SAE Transactions: Jour-
nal of Passenger Cars–Mechanical Systems, 2006, 115(6), 890-901.
8. Kyung, G., Nussbaum, M. A., Specifying comfortable driving postures for ergonomic design
and evaluation of the driver workspace using digital human models, Ergonomics, 2009,
52(8), 939-953.
9. Reed, M. P., Manary, M. A., Flannagan, C. A., Schneider, L. W., A statistical method for
predicting automobile driving posture, Human Factors: The Journal of the Human Factors
and Ergonomics Society, 2002, 44(4), 557-568.
10. Park, S.J., Kim, C.B., Kim, C.J., Lee, J.W., Comfortable driving posture for Koreans, Inter-
national Journal of Industrial Ergonomics, 2000, 26(4), 489-497.
11. Lanzotti, A., Vanacore, A., An efficient and easy discretizing method for the treatment of
noise factors in Robust Design, The Asian Journal on Quality, 2007, 8(3), 188-197.
12. Dobson, A. J., Barnett, A., An introduction to generalized linear models, 2008, CRC press.
13. Lee, Y., Nelder, J. A., Robust design via generalized linear models, Journal of Quality Tech-
nology, 2003, 35(1), 2-12.
Human-centred design of ergonomic
workstations on interactive digital mock-ups

Margherita Peruzzini1*, Stefano Carassai1, Marcello Pellicciari1, Angelo Oreste
Andrisano1

1 Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia, via
Vivarelli 10, 41125 Modena, Italy
* Corresponding author. Tel.: +39 059 2056259; fax: +39 059 2056129. E-mail address:
margherita.peruzzini@unimore.it

Abstract Analysis of human-related aspects is fundamental to guarantee workers'
wellbeing, which directly limits errors and risks during task execution,
increases productivity, and reduces cost [1]. In this context, virtual prototypes
and Digital Human Models (DHMs) can be used to simulate and optimize human
performance in advance, before the creation of the real machine, plant or
facility. The research defines a human-centred methodology and advanced Virtual
Reality (VR)
technologies to support the design of ergonomic workstations. The methodology
considers both physical and cognitive ergonomics and defines a proper set of met-
rics to assess human factors. The advanced virtual immersive environment creates
highly realistic and interactive simulations where human performance can be an-
ticipated and assessed from the early design stages. Experimentation is carried out
on an industrial case study in the pipe industry.

Keywords: Human-Centred Design; Ergonomics; Sustainable Manufacturing;


Digital Human Model; Virtual Reality.

1 Introduction

In recent years much attention has been paid to sustainable products and
processes, mainly focusing on environmental and economic aspects to limit costs
and resource consumption [2-3]. Although several studies have demonstrated how
strongly industrial plants' costs and productivity depend on human physical and
mental efficiency, on hazardous positions and uncomfortable tasks, and how much
the risk of workers developing musculoskeletal disorders costs the company [4],
the role of human factors in sustainability has been largely underestimated so
far. It has been demonstrated that improving human-related aspects in industry is
fundamental to guarantee workers' safety and wellbeing, which directly affect

company productivity, consumed resources, and expenses [1]. As for the evaluation
of environmental and cost performance during the design stages, a huge literature
has been developed in recent years [5-7]; in contrast, for the early assessment
of human factors there are no structured and robust practices in industry. Due to
processes, and the lack of industrial practices in this direction, the present paper
focuses on the early assessment of human performance during manufacturing and
operation processes. It aims to provide an early assessment of physical and cogni-
tive ergonomics from the conceptual design stages, where cost for engineering
changes is lower and the innovation potential is higher. In particular, the paper de-
fines a human-centred methodology to be applied on digital mock-ups by interac-
tive, immersive and real-time virtual simulation. Within the virtual environment,
tasks can be simulated and, consequently, the workstation design optimized ac-
cording to ergonomics principles.

2 Related works

Human-Centred Design (HCD) consists of the application of human factors
information (i.e., physical, psychological, social, and biological human
characteristics) to the design of tools, machines, systems, tasks, jobs, and
environments, for safe, comfortable, and effective human use [8]. The analysis of
human factors in manufacturing is a novel research trend, but it is proving
extremely important for optimizing human performance, health, and safety. Indeed,
numerous studies have recently demonstrated the great economic impact of
work-related musculoskeletal disorders for both companies and societies, all over
the world [9-10]. From the literature review, two main aspects emerged:
1) human-related topics are still poorly introduced in industrial contexts;
2) the majority of works are verification and validation studies, where human
factors are measured at the end of the design process, after system development,
on real plants or machines by traditional approaches. Traditional approaches are
based on the observation of individual
workers performing their jobs in order to detect unnatural postures and define cor-
rective actions according to physical ergonomic guidelines. Different tools have
been defined during the years, based on the posture observation and assessment of
physical exposures (i.e., NIOSH, OWAS, RULA, and REBA) [11]. Analyses are
usually carried out on real workstations, when operators are already working on
them, and corrective actions are taken late, so they are limited and expensive.
Nowadays, virtual prototypes can validly reproduce industrial workplaces with
high fidelity, before their real creation, and simulate human performance by
digital manikins replicating human actions. Different digital models are today
included in several commercial software toolkits, from SAFEWORK (by Dassault
Systèmes) to JACK (by Siemens), available in the Siemens Tecnomatix products, up
to specific software for particular applications (SANTOS, 3DSSPP or the AnyBody
Modeling System) [12]. Human models can also be coupled with multi-body and
finite element analyses to obtain quantitative information on human-object
interaction [13].
have some limitations:
• time-consuming, since preparing realistic and reliable simulations usually
takes time and significant effort;
• low realism, due to low aesthetic rendering and low realism of the simulated
scene (e.g. lack of physical object behavior, lack of collisions, etc.);
• lack of cognitive ergonomic evaluation, since they only include postural
models and do not assess the subjective impressions of real workers.
In this context, VR technologies provide immersive environments that enable us-
ers to navigate the virtual scene and interact with products or systems [14]. Reach,
visibility, and visual inspection are possible using an immersive modality. The so-
called user experience, intended as the compendium of physical efforts and sub-
jective perceptions (e.g., predispositions, expectations, needs, motivation, mood,
etc.) that generates the human response according to the characteristics of the de-
signed system (i.e., complexity, purpose, usability, functionality, etc.) and the con-
text of use [15], is simulated by real-time task execution. However, a VR scene is
not enough to efficiently assess the user experience, since it needs to be
coupled with a proper protocol analysis for the assessment. Recent studies have
also proposed Augmented Reality (AR) environments that enable the user to
interactively see virtual and physical objects and to access additional data in
real time for different applications (i.e., service and maintenance) [16].
Although the results confirmed the feasibility and exploitability of such methods
from a technological point of view, they do not support system design focused on
the human experience.

3 The human-centred design methodology

In order to support the early assessment of human factors for industrial
workstations on virtual models, a human-centred methodology is firstly defined.
final response as the combination of both physical and cognitive aspects and de-
fines a set of evaluation metrics to elicit the user experience. Such metrics can be
measured on both traditional prototypes (physical mock-ups) and interactive and
immersive virtual mock-ups, where all the aspects characterizing the real experi-
ence lived by users can be properly reproduced. The novelty of the present study
is the adoption of the protocol analysis coupled with immersive virtual prototyp-
ing for the specific investigation of industrial workstations. It exploits 3D model-
ing tools and a VR laboratory to create an immersive and interactive digital envi-
ronment where users can simulate in real-time their tasks on an interactive
workstation model. The VR set-up used is described in the following paragraph.
The proposed method can be summarized in six main steps:
1190 M. Peruzzini et al.

1. Creation of a functional 3D mock-up of the workstation according to the
process requirements (e.g., plant layout, activity scheduling, etc.);
2. Definition of a set of design and task parameters: design parameters refer to
the workstation features (e.g., height of the working surface from the
worker's feet, distance of the working point from the worker's body,
inclination of the working planes, etc.) and their range of variation, while
task parameters refer to specific conditions (e.g., weights to move,
intermediate positions, etc.);
3. Real time simulation of task execution within an immersive and interactive
VR environment, with the involvement of workers and experts;
4. Assessment of human factors by the investigation of both objective and sub-
jective metrics, as described in the protocol analysis (Table 1);
5. Definition of the most critical postures and tasks and recognition of the most
critical features of the workstation affecting the human factors’ assessment;
6. Design optimization by acting on the design parameters in order to limit the
impact on human factors and promote the system ergonomics.
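Conceptually, steps 3 to 6 form a closed loop: simulate, assess, flag critical metrics, adjust the design parameters, and repeat. A minimal sketch of that loop is given below; all function names, the comfort model, and the parameter values are purely illustrative stand-ins, not the authors' actual tooling.

```python
CRITICAL = 3.0  # on the inverse 1-5 Likert scale, scores above 3 are critical


def assess_metrics(design):
    # Illustrative stand-in for steps 3-4: comfort degrades as the working
    # surface moves away from a (hypothetical) nominal 1050 mm height.
    return {"comfort": 1.0 + abs(design["surface_height_mm"] - 1050) / 100}


def adjust_design(design, critical):
    # Illustrative stand-in for step 6: nudge the surface toward 1050 mm.
    step = 50 if design["surface_height_mm"] < 1050 else -50
    return {"surface_height_mm": design["surface_height_mm"] + step}


def optimize_workstation(design, max_iterations=10):
    """Steps 3-6 as a loop: simulate/assess, flag critical metrics, redesign."""
    for _ in range(max_iterations):
        scores = assess_metrics(design)                               # step 4
        critical = {m: s for m, s in scores.items() if s > CRITICAL}  # step 5
        if not critical:
            break
        design = adjust_design(design, critical)                      # step 6
    return design, scores


final_design, final_scores = optimize_workstation({"surface_height_mm": 700})
```

In this toy run the surface is raised in 50 mm increments until the comfort score falls out of the critical band, mirroring how the design parameters of step 2 drive the optimization of step 6.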

Table 1. Protocol analysis for human factors’ assessment.

Analysis | Metrics | Unit of meas. | Data collection techniques
Posture | Physical comfort | RULA and REBA score (no.) | Ergonomic analysis; Heuristic evaluation
Occlusion | Visibility | View cones (deg.) | Ergonomic analysis; Heuristic evaluation
Occlusion | Accessibility | Distance between the user and reached zones (mm) | Ergonomic analysis; Heuristic evaluation
Mental load | Ease of use | Requests of support (no.); Action sequence steps (no.) | Ergonomic analysis; Heuristic evaluation
Mental load | Mental workload | Fatigue or distraction behaviors (no.) | Ergonomic analysis; Heuristic evaluation
Interaction | Feedback | Visual, tactile and auditory feedbacks (no.) | Ergonomic analysis; Heuristic evaluation
Interaction | Interaction support | Affordances (no.); Task completion ratio (user/expert) (s) | Ergonomic analysis; Heuristic evaluation
Interaction | Information quality | Errors frequency (%) | Ergonomic analysis; Heuristic evaluation
Emotions | Satisfaction in use | Subjective impression score (no.) | Heuristic evaluation; Interview

The assessment of human factors is based on Norman's model of interaction
[17], assuming that any system with which the user interacts determines the user's
actions and communicates with humans by means of two different types of features:
"affordances", which stimulate a precise action in the user (e.g., a handle that
suggests the action of handling), and "synaesthesias", which stimulate visceral sen-
sations, emotions, memories and mental associations related to the human affec-
tive sphere. Five different analyses are defined: Posture, Occlusion, Mental Load,
Interaction, and Emotions. Physical human responses are assessed by Postural and
Occlusion analyses; affordance support, which stimulates the behavioral response
by promoting specific actions to be taken related to task efficiency and effective-
ness, is measured by Mental Load and Interaction analyses; finally, synaesthesias
influence, which affects information comprehension and satisfaction in use, is
measured by Emotions analysis. For each analysis, a set of evaluation metrics has
been defined (Table 1). Regarding data collection, postural data are calculated by
combining DHM tool exports and post-processing in Excel and Matlab; user
behaviors and reactions, instead, are investigated by heuristic evaluation and
interview, both widely used in HCD. Heuristic evaluation is based on direct ob-
servation of users to capture moment-to-moment interactions, verbal comments, and non-
verbal communication (gestures, expressions, etc.). Interviews retrieve infor-
mation directly from users, who correlate their preferences with the metric values.
An inverse 1-5 point Likert scale is used to score each metric indicator
in both heuristic evaluations and interviews (1 = good, 5 = bad). Experts
in ergonomics are involved to observe users and evaluate the specified metrics.
Collected data are analyzed according to the existing international regulations (i.e.
UNI EN ISO 9241-210, ISO/DIS 10075-1, UNI EN 894) [8, 18-19].
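As a worked example of the scoring just described, the data reduction can be sketched as follows. The metric names and per-posture values are taken from Table 2, but the three-value lists are only a subset (the paper averages over 20 postures), so the averages below intentionally differ from the published "All P" column.

```python
from statistics import mean


def aggregate(scores_per_metric):
    """Average inverse-Likert scores (1 = good, 5 = bad) per metric and
    flag as critical any metric whose average exceeds 3."""
    averaged = {m: round(mean(vals), 2) for m, vals in scores_per_metric.items()}
    critical = [m for m, v in averaged.items() if v > 3]
    return averaged, critical


# Sample per-posture scores (postures P1, P7, P15 of the grinding task).
avg, crit = aggregate({
    "Perceived comfort": [4.3, 4.1, 2.4],
    "Visibility":        [4.0, 4.0, 1.5],
    "Accessibility":     [2.0, 2.0, 1.5],
})
```

With these three postures, Perceived comfort and Visibility fall in the critical band (score above 3), while Accessibility does not, matching the pattern reported for the full 20-posture average.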

4 The VR immersive and interactive set-up

The research exploits the facilities of the ViP Lab (Virtual Prototyping Lab) of the
Modena Technopole, where a VR set-up allows creating high-fidelity virtual pro-
totypes of the entire production line and the specific workstations, in order to sim-
ulate them into an immersive and interactive environment. The adopted set-up
consists of a Stewart large-volume display (6x2 meters) with rear projection, two
high-performance Barco Galaxy NW-7 projectors, Volfoni Edge RF active stereo
glasses, a high-quality Vicon optical tracking system (with 8 cam-
eras), two Nintendo Wiimote devices for interactive navigation, a Denon AVR
sound system with Dolby surround, and advanced toolkits for system modeling
(CATIA and DELMIA by Dassault Systèmes) and immersive virtual simulation
and human modeling (IC.IDO by ESI Group [20]). Such a set-up allows high-
quality stereoscopic and immersive virtual simulation to reproduce virtual objects
into a 1:1 scale, create highly realistic simulations, and validate static and dynamic
behaviors during preliminary design. The large volume display also supports col-
laborative design activities. As for digital mock-up creation, the 3D models of
the industrial plant and the workstations can be realized using different 3D
CAD tools and exchange formats (e.g., CATIA V5, JT, PLMXML, Pro/Engineer, SolidWorks, etc.)
and imported directly into IC.IDO software. After that, the interactive scene is
created within IC.IDO by defining the “touchable” objects and the environmental
constraints (e.g., collisions, gravity, etc.). Multiple scenes can be created with dif-
ferent properties. The user can interact directly with the virtual objects by
Wiimote devices; in addition, he/she can be linked to an avatar that reproduces
his/her movements in real time. This is achieved by connecting points or
devices tracked by the Vicon system with the avatar joints. For instance, the
Wiimotes can be linked to the avatar's hands or to the handled devices (e.g., screw-
driver, gauge, etc.) while markers fixed on the user's arms, legs, wrists and feet can be
linked to the same avatar’s body parts.

5 The industrial case study

The industrial case study has been developed in collaboration with Tenaris S.A., a
leading global manufacturer of steel pipe products for the energy industry. The study
focuses on the optimization of the quality control workstation, where workers vis-
ually and dimensionally check pipes by different tasks (e.g., cleaning the pipe sur-
faces with compressed air and water, controlling the quality of threads, deburring
the black crest, grinding some surfaces, measuring the final dimensions). Pipes
can vary in diameter and position, and the worker can stand or sit depending on
the specific task. According to the company requirements, analyses focused on
grinding and control with gauge. CATIA and DELMIA toolkits were used to
model and simulate the plant layout and the workstation design features. Subse-
quently, the virtual scenes were prepared in IC.IDO considering the real sequence
of actions and interaction capabilities with the workstation items (device, pipe,
etc.). Finally, the user (simulating the real worker) was equipped with tracked ste-
reo glasses and devices (i.e., Wiimotes), which represent the handled tools, and
linked with the virtual manikin in IC.IDO. In this way, the user can directly inter-
act with the virtual scene, according to the real production constraints and events,
while the virtual manikin moves according to his/her positions (Figure 1).

Fig. 1. Task simulation (e.g. grinding): virtual manikin (A) and real user experience within the
VR set-up (B)
Human-centred design of ergonomic … 1193

Two experts in ergonomics were involved for users’ observation, interviews and
ergonomic analysis, to carry out the human factors’ assessment according to the
proposed protocol. 20 users were involved in the simulation. Table 2 shows the
summary of the results (average on 20 users) for some postures (P), on the original
workstation design. Scores in Table 2 are expressed according to an inverse 5-
point scale: they combine the results from ergonomic analysis and heuristic evalu-
ation, and interview to users as far as satisfaction metric is concerned, according
to the protocol presented in Table 1. For each posture (P), the score indicates the
average ergonomic performance for the specific metric. The last column (All P) is
the average on the 20 postures analyzed. Scores higher than 3 represent critical val-
ues and are marked in red. From Table 2, it can be inferred that grinding is par-
ticularly stressful for users due to uncomfortable positions (3,50 score on aver-
age), poor visibility (3,10 score on average), and mental workload (3,40 score on
average). Furthermore, information quality and satisfaction in use could also be
improved (respectively 2,95 and 2,70 scores on average). Consequently, ease
of use (2,60 score on average) could also be optimized. As for the second task, con-
trol with gauge, the critical issues concern the perceived comfort (4,20 score on
average) and the mental workload (3,00 score on average).
According to the experimental results, the workstation design was optimized in
order to improve the user experience as measured by the proposed protocol. Atten-
tion was paid to the critical issues that emerged from Table 2, and the workstation
was re-designed by the following actions:
- the height of the pipe was increased;
- the distance to the pipe was reduced;
- the sequence of actions to be carried out was simplified (each task was
reduced to a maximum of 4 actions).

Table 2. Assessment of human factors within the VR immersive set-up (average on 20 users).

Analysis | Evaluation metrics | GRINDING (2 kg): P1 | P7 | P15 | All P | CONTROL WITH GAUGE (1 kg): P1 | P2 | P4 | All P
Posture | Perceived comfort | 4,3 | 4,1 | 2,4 | 3,50 | 5 | 4,5 | 4 | 4,20
Occlusion | Visibility | 4 | 4 | 1,5 | 3,10 | 2 | 2,5 | 2,4 | 2,25
Occlusion | Accessibility | 2 | 2 | 1,5 | 1,80 | 1,5 | 2 | 2 | 1,80
Mental Load | Ease of use | 2,9 | 3,2 | 1,5 | 2,60 | 1 | 3,6 | 3 | 2,55
Mental Load | Mental workload | 3,6 | 4 | 2,5 | 3,40 | 2 | 3,6 | 3,3 | 3,00
Interaction | Feedback | 2,6 | 2,8 | 1 | 2,10 | 2 | 2,8 | 2,2 | 2,30
Interaction | Interaction support | 2,4 | 2,7 | 1 | 2,00 | 2 | 3 | 2,6 | 2,50
Interaction | Information quality | 3,4 | 3,5 | 2 | 2,95 | 2 | 2,6 | 2,4 | 2,30
Emotional | Satisfaction in use | 3 | 3 | 2,2 | 2,70 | 2 | 3,1 | 2,7 | 2,60

Table 3 shows the comparison between the scores obtained on the original design
and the new design. In particular, for grinding, the actions taken reduced the
impact on human factors in terms of perceived comfort (from 3,50 to 2,50 score
on average), visibility (from 3,10 to 1,60 score on average) and mental workload
(from 3,40 to 2,50 score on average). Other metrics, such as information quality
and ease of use, were also positively affected by the design changes. By
measuring the process performance, the positive impact of such ergonomic
improvements is quantified in terms of time (-10%), productivity rate (+5%) and costs
(-12%). The design changes also positively affected the second task, control with
gauge. In particular, the perceived comfort (from 4,20 to 3,00 score on average)
and the mental workload (from 3,00 to 2,50 score on average) were improved, but
visibility, ease of use, feedback, interaction support and satisfaction in use
were also improved. In this case too, the new solution improved the ergo-
nomic performance and consequently the process sustainability in terms of lower
execution time (-12%), higher productivity (+8%) and lower global production
costs (-15%).

Table 3. Result comparison between original design (D1) and optimized design (D2) (average on
20 users).

Analysis | Evaluation metrics | GRINDING: D1 | D2 (new) | CONTROL WITH GAUGE: D1 | D2 (new)
Posture | Perceived comfort | 3,50 | 2,50 | 4,20 | 3,00
Occlusion | Visibility | 3,10 | 1,60 | 2,25 | 1,60
Occlusion | Accessibility | 1,80 | 1,80 | 1,80 | 1,80
Mental Load | Ease of use | 2,60 | 2,50 | 2,55 | 2,50
Mental Load | Mental workload | 3,40 | 2,50 | 3,00 | 2,50
Interaction | Feedback | 2,10 | 2,10 | 2,30 | 2,00
Interaction | Interaction support | 2,00 | 2,00 | 2,50 | 2,30
Interaction | Information quality | 2,95 | 2,50 | 2,30 | 2,30
Emotional | Satisfaction in use | 2,70 | 1,50 | 2,60 | 1,80
Sustainability | Execution time* | 30 sec | -10% | 45 sec | -12%
Sustainability | Productivity* | NDA** | +5% | NDA** | +8%
Sustainability | Production costs* | NDA** | -12% | NDA** | -15%
*simulated values, to be confirmed on plant; **data cannot be published due to NDA (Non-Disclosure Agreement)
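The before/after comparison can be checked mechanically. The values below are the grinding-task rows of Table 3, and "critical" again means an inverse-Likert score above 3.

```python
# Grinding-task scores from Table 3: original design (D1) vs optimized (D2).
d1 = {"Perceived comfort": 3.50, "Visibility": 3.10, "Mental workload": 3.40}
d2 = {"Perceived comfort": 2.50, "Visibility": 1.60, "Mental workload": 2.50}

# Improvement is the drop in the inverse-Likert score (larger = better).
improvement = {m: round(d1[m] - d2[m], 2) for m in d1}

# After redesign, no metric should remain in the critical band (> 3).
still_critical = [m for m, v in d2.items() if v > 3]
```

All three formerly critical grinding metrics move below the critical threshold, which is consistent with the process-level gains reported in the text.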

6 Conclusions

The research combined a protocol to assess human factors with an advanced VR
set-up to support the design of ergonomic manufacturing workplaces in the early
design stages on digital mock-ups. The protocol is based on the evaluation of physi-
cal and cognitive aspects of the user experience, while the VR set-up allows creat-
ing highly realistic and interactive virtual environments where users can interact
with the digital workstation. An industrial case study is presented. Experimental
results demonstrated how improving the workstation ergonomics leads
to higher process sustainability, and how the joint action of a proper methodolo-
gy and a suitable VR set-up can effectively support companies in designing ergo-
nomic workstations. Future work will focus on verifying the results achieved
by simulation on the real plant and on defining a set of design guidelines for sustain-
able manufacturing.

Acknowledgments The authors wish to thank ESI Group (www.esi-group.com) and
Tenaris (www.tenaris.com) for their valuable collaboration.

References

1. MECSD, The Middle East Centre for Sustainable Development, 2010.


2. Chang, D., Leeb, C.K.M. and Chen, C.H. Review of life cycle assessment towards sustaina-
ble product development, Journal of Cleaner Production, 83, 2014, pp. 48-60.
3. Finkbeiner M., Schau E., Lehmann E. and Traverso M. Towards life cycle sustainability as-
sessment, Sustainability, 2(10), 2010, pp. 3309-3322.
4. Zink, K. J. From industrial safety to corporate health management. Ergonomics, 48(5), 2005,
pp. 534-546.
5. Kicherer A., Schaltegger S., Tschochohei H. and Ferreira Pozo B. Eco-efficiency, combining
life cycle assessment and life cycle costs via normalization, Int. J. Life Cycle Assessment,
12(7), 2007, pp.537–543.
6. Parent J., Cucuzzella C. and Revéret, J.P. Impact assessment in SLCA: sorting the sLCIA
methods according to their outcomes, Int. J. Life Cycle Assessment, 15(2), 2010, pp.164-171.
7. Peruzzini M. and Germani M. Design for sustainability of product-service systems, Int. J. Ag-
ile Systems and Management, 7(3/4), 2014.
8. ISO 9241-210, Ergonomics of human system interaction - Part 210: Human-centered design
for interactive systems, International Organization for Standardization (ISO), 2009.
9. Maudgalya, T., Genaidy A. and Shell R. Productivity-quality-cost-safety: a sustained ap-
proach to competitive advantage e a systematic review of the National Safety Council’s case
studies in safety and productivity, Human Factors and Ergonomics in Manufacturing, 18(2),
2008, pp.152-179.
10. Fisk, M. People, Planet, Profit: how to embrace sustainability for innovation and business
growth, Kogan Page Limited, London, 2010.
11. Li G. and Buckle, P. Current techniques for assessing physical exposure to work-related mus-
culoskeletal risks, with emphasis on posture-based methods, Ergonomics, 42(5), 1999, pp.
674-695.
12. Chaffin D.B. Human motion simulation for vehicle and workplace design, Human Factors
and Ergonomics in Manufacturing, 17(5), 2007, pp.475-484.
13. Volpe Y., Governi L. and Furferi R. A computational model for early assessment of padded
furniture comfort performance, Human Factors and Ergonomics In Manufacturing, 25 (1),
2015, pp. 90-105.
14. Hu B., Ma L., Zhang W., Salvendy G., Chablat D., Bennis F. Predicting real-world ergonom-
ic measurements by simulation in a virtual environment, Int. J. Industrial Ergonomics, 41,
2011, pp.64-71.
15. Hassenzahl M. and Tractinsky, N. User experience - a research agenda, Behaviour and In-
formation Technology, 25(2), 2006, pp. 91-97.
16. De Marchi L., Ceruti A., Marzani A., Liverani A. Augmented Reality to Support On-Field
Post-Impact Maintenance Operations on Thin Structures, Journal of Sensors, Volume 2013
(2013), Article ID 619570.
17. Norman D.A. The Design of Everyday Things, New York, Doubleday, 1988.
18. ISO/DIS 10075-1, Ergonomic principles related to mental work-load - Part 1: General con-
cepts, terms and definitions, International Organization for Standardization (ISO), 2015.
19. UNI EN 894, Safety Of Machinery - Ergonomics Requirements For The Design Of Displays
And Control Actuators, Italian National Organization for Standardization, 2009.
20. IC.IDO, accessed in April 2016, available at: https://www.esi-group.com/software-
solutions/virtual-reality/icido
Ergonomic-driven redesign of existing work
cells: the “Oerlikon Friction System” case

Alessandro NADDEO1*, Mariarosaria VALLONE1, Nicola CAPPETTI1,
Rosaria CALIFANO1, Fiorentino DI NAPOLI2
1 Department of Industrial Engineering, University of Salerno
2 Oerlikon METCO Friction Systems consultant
* Corresponding author. Tel.: +39-089-964061. E-mail address: anaddeo@unisa.it

Abstract The application of ergonomic principles to the design of processes,
workplaces and organizations is not only a way to respond to legal requirements
but also an indispensable premise for any company pursuing a sound business
strategy. This paper shows a cheap and effective method to perform the ergonomic
analysis of worker postures in order to optimize productivity and obtain the high-
est ergonomic ratings. Evaluations were performed for the 5th, 50th and 95th percen-
tiles according to the OCRA and NIOSH methods of biomechanical risk assessment.
The results highlighted the need for improvements. A virtual simulation using
DELMIA® software and the use of workers’ checklists drew attention to problems
causing significant physical stress, as identified by ergonomic tools. An ergonom-
ic/comfort-driven redesign of the work cell was carried out, and CaMAN® soft-
ware was used to conduct a final comfort-based analysis of the worst workstation
in the work cell.

Keywords: Ergonomic evaluation, Redesign, OCRA Index, NIOSH analysis,
Comfort rating.

1 Introduction

Ergonomic data, in the same way as technological, aesthetic and productive data,
has become an indispensable element of product design. In industrial environ-
ments, ergonomic factors are taken into account in product and process develop-
ment, as they are key components of the human-machine interface.
During the last two decades, the market has been impacted by several laws and
Standards (e.g. EN ISO 14738, 2002 [1], ISO 11226/2000 [2] and EN 1005-
3/2009 [3]) that set geometric parameters for machine design, the aim of which is
to improve workers’ safety. The three-part ISO Normative series 11228 [4,5,6]

© Springer International Publishing AG 2017 1197
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_120
deals with ergonomics in the manual handling of objects. For plant layout and
process design, ISO 11228-3 is the most frequently applied standard, as it deals
with risk evaluation in cases that require repetitive movement. Risk evaluation is
based on two procedures: first, an initial screening of the ISO standards checklist;
second, a detailed evaluation procedure based on international standard methods
of ergonomic analysis, such as RULA [7], REBA [8], LUBA [9], STRAIN
INDEX [10], OCRA [11], OREGE [12] and others, with preference given to
OCRA [13,14]. These standards act as a reference point for developing ergonomi-
cally driven design/redesign methods for processes and plant layouts.
The most frequently used method for in-process ergonomic analysis is based on
the following three steps: 1) direct and indirect (videotaped) observations of users
and workplaces [15,16]; 2) information collection regarding work cells and work
cycles [17,18,19]; and 3) data analysis and ergonomic ratings synthesis. Digital
human models allow engineers to simulate the worker and the work environment
during the execution of a task [14]. They allow the appropriate use of tools by
workers to be verified, as well as the workers' ability to sustain these uses ac-
cording to their anthropometric characteristics. Such models allow us to perform
efficient ergonomic analyses, in terms of both design efforts and development
times.
This paper focuses on workers’ postures (types and times), which have been di-
rectly linked to an observed rise in musculoskeletal disorders. Several studies cite
the need to understand the behavior of joints in terms of Range of Motion (ROM)
[20,21,22], neutral (zero) positions (defined as those allowing the maximum state
of comfort [21]), and Comfort Range Of Motion (CROM) [23,24]. Postural analy-
sis in the current study was performed in a way that does not affect the efficiency
of the job/task in question – for instance, the cheap and effective method based on
4D photogrammetry, as shown in [16,25]. Cameras and video recorders are the
least expensive tools for acquiring data and calculating positions, as they make it
possible to avoid using markers that may be invasive and alter test results. Produc-
tivity is not modified by the use of video recorders, and information is easily ex-
tracted from the tapes. Finally, a comfort analysis was conducted using CaMAN®
software [26] to assess upper-limb comfort in the redesigned work cell. This test
case was carried out at the Oerlikon Friction Systems company.

2 Method

The study of a work activity should consider risk factors and variables relating
to the work in question. It should be both detailed and able to summarize the work
from a single perspective. It is therefore important to identify the cycles and repre-
sentative periods of a repetitive task that characterize the work, in order to de-
scribe and quantify these cycles and periods and evaluate the risk factors involved.
There are several methods of biomechanical risk assessment. Those utilized by
the present study are the OCRA checklist and the NIOSH index. The OCRA
checklist [27] has been used as an instrument to develop a risk map that defines
risk zones for workstations. The checklist allows risk levels to be assessed in the
following three bands: green (no risk), yellow (possible or slight risk) and red
(high risk). For “mura-muri” operations (i.e. non-repetitive tasks), the ergonomic
analysis was performed using the NIOSH method [28]. This method is aimed at
evaluating actions relating to load lifting and transportation. Specifically, the
method is able to determine the so-called “recommended weight limit” (RWL) for
each act of lifting.
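The RWL referred to here comes from the revised NIOSH lifting equation, RWL = LC x HM x VM x DM x AM x FM x CM, with load constant LC = 23 kg. A sketch of the computation is given below; the frequency (FM) and coupling (CM) multipliers are read from the standard's lookup tables and passed in directly, and the input values are illustrative, not the case-study measurements (which yielded RWL = 6.522 kg against a 1.5 kg load).

```python
LC = 23.0  # load constant, kg


def rwl(h_cm, v_cm, d_cm, a_deg, fm, cm):
    """Recommended weight limit per the revised NIOSH lifting equation."""
    hm = min(1.0, 25.0 / h_cm)                      # horizontal multiplier
    vm = 1.0 - 0.003 * abs(v_cm - 75.0)             # vertical multiplier
    dm = 0.82 + 4.5 / d_cm if d_cm > 25 else 1.0    # travel-distance multiplier
    am = 1.0 - 0.0032 * a_deg                       # asymmetry multiplier
    # fm (frequency) and cm (coupling) come from the standard's tables
    return LC * hm * vm * dm * am * fm * cm


# Illustrative lift: 40 cm horizontal reach, 75 cm start height, 50 cm
# travel, no trunk rotation, infrequent lifting, good coupling.
limit = rwl(h_cm=40, v_cm=75, d_cm=50, a_deg=0, fm=1.0, cm=0.95)
lifting_index = 1.5 / limit  # actual load / RWL; <= 1 means acceptable
```

As in the case study, a lifting index well below 1 indicates no appreciable risk for the lifted mass.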
The proposed ergonomic analysis method consists of the following steps: mod-
eling the work cell in SolidWorks; analysis of videos and photos to measure the
joints’ angles using Kinovea®; using measured angles as an input for DELMIA®
simulation software; evaluation of simulation results; synthesis of results and for-
mulation of improvement hypotheses for the work cell by redesign.
All the ergonomic analyses in the present study were based on videos and pos-
tural analysis carried out for the 5th, 50th and 95th percentiles of southern-European
males (only male workers are employed in the compa-
ny's manual work cell). A photogrammetric acquisition of most workers' postures
was made with the use of cameras. The pictures were analyzed, and the angles of
the joints (shoulder flexion/extension, shoulder abduction/adduction, elbow flex-
ion, radio-ulnar deviation, frontal neck flexion, trunk flexion) were detected using
Kinovea® software rel. 0.8.7 (see Figure 1). The geometric-zero position was the
reference for the detection and measurement of joint angles. In addition, two video
recorders were used to analyze the timing of tasks performed by the workers.
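The angle measurement performed manually in Kinovea® amounts to computing, from 2D photo coordinates, the angle at a joint between the two adjacent body segments. A minimal sketch, with purely illustrative marker coordinates:

```python
import math


def joint_angle(a, b, c):
    """Angle at vertex b (in degrees) between rays b->a and b->c,
    e.g. shoulder (a), elbow (b), wrist (c) for elbow flexion."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang


# Illustrative photo coordinates: shoulder (0, 0), elbow (0, -30),
# wrist (25, -30) gives a right-angle elbow.
elbow_flexion = joint_angle((0, 0), (0, -30), (25, -30))
```

The same vertex computation applies to any of the joints listed above (shoulder, elbow, wrist, neck, trunk), always measured against the geometric-zero reference position.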

Fig. 1. Examples of photo processing using Kinovea®.

3 Test case

The Oerlikon Friction Systems company produces synchronization rings for motor
vehicles. Our study was performed in the so-called “blue area”, in which several
manual tasks are performed by workers. The working day is eight hours long, with
a 10-minute break every two hours and a 30-minute lunch break. At the end of the
shift, there is a 10-minute rest period. The work cycle is composed of 32 cyclical
operations involving standard equipment. Each shift contains about 315-320 cy-
cles, and each cycle lasts from 70-80 seconds, with an average duration of 76 se-
conds. Examples of “mura-muri” (non-repetitive) operations are: 1) to take, open
and mark the box with inside- and outside-facing rough rings; 2) to replace the
abrasive paper used for sanding; 3) to print and place labels to signify a completed
pallet; 4) to remove empty pallets; and 5) to deposit and place completed pallets.

Fig. 2. Work cell layout: a 3D SolidWorks model.

The work cell (Figure 2) is a double-cone cell: it produces a synchronization ring
with two coatings – an internal and an external glued carbon strip. The main job
tasks are: a) sandblasting the surface of the ring; b) pre-gluing the heating plate; c)
thermo-pressing to create adhesion between the carbon strips and the rings; d)
smoothing to eliminate excess; e) measurement control, performed by a compara-
tor.
The OCRA checklist gave the following results:
5th percentile: high risk (score = 29.06) for right-side limbs and very mild risk
(score = 8.84) for left-side limbs;
50th percentile: high risk (score = 23.37) for right-side limbs and acceptable risk
(score = 2.53) for left-side limbs;
95th percentile: high risk (score = 23.37) for right-side limbs and acceptable risk
(score = 3.79) for left-side limbs.
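These scores can be mapped onto risk bands with commonly cited OCRA-checklist thresholds. The exact cut-offs below are an assumption (they are not given in the paper, and published sources vary slightly), but they reproduce the labels reported above.

```python
def ocra_band(score):
    """Map an OCRA checklist score to a risk band (assumed thresholds)."""
    if score <= 7.5:
        return "acceptable"   # green
    if score <= 11.0:
        return "very mild"    # yellow / borderline
    if score <= 14.0:
        return "mild"
    if score <= 22.5:
        return "medium"
    return "high"             # red


# The paper's reported checklist scores for the three percentiles.
bands = {s: ocra_band(s) for s in (29.06, 8.84, 23.37, 2.53, 3.79)}
```

Both right-side scores fall in the high band, the 5th-percentile left side is very mild, and the remaining left-side scores are acceptable, matching the assessment in the text.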
Analysis of the cycle and of the work tasks shows that the differences in rela-
tive risk were due to the presence of the three mechanical presses (pre-gluing, as-
sembly and measurement), which require an incorrect movement of the shoulder joint
(the arm has to be raised above the shoulder). The high level of risk
is due mainly to the action performed by the pre-gluing press, which requires a
high level of physical effort, as confirmed by the operators themselves. The re-
quired force was measured during the pre-gluing operation, and results were sig-
nificant for the 5th percentile workers, due to the height of the press.
Due to the layout and positioning of the machines, workbenches and tools, the
NIOSH analysis was insensitive to the percentile in question. The calculated RWL
was 6.522 Kg – an amount higher than the value of the mass to be lifted (1.5 Kg).
This implies that there is no risk to the operator. However, despite the fact that the
vertical distance is at the limit of the permitted value (170 cm vs 175 cm), the fre-
quency with which the operation is repeated (four times per shift) gave us a posi-
tive result.
Based on these results, a process simulation was created. To ensure a realistic
simulation, about 120 intermediate positions along the work cycle were included.
Analysis of these results revealed some interesting information about the cur-
rent task schedule. In the case of the 5th percentile simulation:
- The heating plate is too high for the workers, who are forced to raise their arms
excessively, resulting in high flexion in this limb;
- The lever of the pre-gluing press is too high for the workers, who can reach the
pommel only with full extension of their right-hand fingers;
- The ramp located to the left of the pre-gluing stand is too high, forcing workers
to move to the left to avoid touching the stand with their arm when it extends to
place the ring in the top of the ramp;
- The stand for smoothing operations is too high, forcing workers into excessive
flexion and abductions in their right arm.
In the case of the 95th percentile simulation:
- The Magnum presses have insufficient space to insert the ring by hand; the
worker knocks against the tool;
- During the smoothing operation, the worker’s right arm assumes bad positions to
avoid touching the measurement device;
- During the measurement phase, with the machine on the left, the left hand is un-
able to insert the ring under the machine, which is blocked due to the presence of
the finished parts roller assembly.
During our simulations, small discrepancies between measured angles and simu-
lated angles were detected. These were due to:
- The morphology of the dummy. Despite being created using real anthropometric
properties, there are differences between the dummy and the bodies of the work-
ers;
- The software is unable to process the deformation of the skin/flesh for different
postures;
- The measurement process is affected by imprecision in the photographic process
(parallax, perspective, hidden parts, etc.).
Despite the presence of such deviations (up to 8%), the numerical/experimental
correlation showed some very promising results.

4 Ergonomic-driven redesign

Having completed the ergonomic analysis and simulation and watched the video,
we formulated a checklist that focused on certain aspects of the process. Our aim
was to highlight improvements that could be implemented in the future.
Our results showed the need to redesign the pre-gluing press – an improvement
previously indicated in the OCRA ergonomic checklist, along with the need to re-
distribute breaks throughout the working day.
Analyses showed that the use of the small press causes the operator to experi-
ence a “strong” stress level that receives a score of five on the Borg scale [29].
The pre-gluing press is used to adhere the outside facing to the lateral outside sur-
face of the ring. The operator grasps the lever to lower it, applying a closing force.
To ensure good contact between the two elements, he then applies an additional
force (driving force) that completes the compression of the cylinder on the ring.
The closing force was measured using a mechanical dynamometer. Its value was
about 15 newtons, meaning a small amount of effort was required. However, it was
not possible to use the dynamometer to measure the driving force, as this is a
moment (torque) rather than a simple force, and the perceived effort depends on the
worker's physical condition and anthropometric characteristics. When watching
the video recordings, we noticed that operators created this force by hanging from
the lever, using their bodyweight as an active force. This force was measured as
the change in weight on the operator's feet (measured using a household scale) while act-
ing on the lever, and the perceived effort was evaluated via a questionnaire. It
was clear that operators nearing the end of the working day tended to use their
bodyweight to apply the force with greater comfort. While this method of meas-
urement does not provide the exact value of the force, it nevertheless gives an in-
teresting indication. The measured force was between 52 and 79 newtons. Also of
note was that while performing the action with a single movement of the upper
limb, the effort was not perceived by the operator as a single force but as the sum
of the forces involved, as shown in Table 1 below.

Fig. 3. Examples of virtual simulated tasks.

Table 1. Minimum and maximum values of closing/driving/perceived forces.

Size | Closing force (N) | Driving force (N) | Perceived force by operator (N) | Perceived force by operator (kg)
Minimum value | 15 | 52 | 67 | 6.83
Maximum value | 15 | 79 | 94 | 9.58

The strength limit accepted by UNI EN 1005-2 (2004) [30] is 25 kg.
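The last column of Table 1 is simply the sum of the closing and driving forces converted to kilograms with g = 9.81 m/s². A quick check, which also confirms both cases are well below the 25 kg limit:

```python
G = 9.81         # gravitational acceleration, m/s^2
LIMIT_KG = 25.0  # strength limit per UNI EN 1005-2 (2004)


def perceived_kg(closing_n, driving_n):
    """Perceived force = closing + driving force, converted from N to kg."""
    return round((closing_n + driving_n) / G, 2)


# Minimum and maximum measured cases from Table 1.
low, high = perceived_kg(15, 52), perceived_kg(15, 79)
within_limit = high < LIMIT_KG
```

The computed 6.83 kg and 9.58 kg reproduce the table's values, so even the worst measured case uses less than 40% of the allowed limit.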


It was hence not necessary to alter the use and the design of the press. Rather,
modifications were made to the existing structure to make the operation more
comfortable. To enable the operator to “hang” from the press, the lever was
equipped with a handle. In addition, the inclination of the lever was changed to fa-
cilitate this “hanging” action. Figure 4 shows the lever before and after the rede-
sign.

Fig. 4. Pre-gluing model lever before and after redesign.

The re-modeled lever was attached to the existing structure. This design allows
the operator to pull the lever down with less effort, and to assume a better posture
when the lever is lowered. The organization and distribution of breaks during the
work shift was in line with the standard. Breaks of 10 minutes were scheduled
every two hours, in accordance with the 5:1 work/recovery ratio. However, since
operators perceived the effort associated with using the press as “strong”, some
changes may be considered necessary.
The workers began to feel fatigue in the last hour of their shift. The number of
breaks could hence be increased. One solution may be to add two breaks of 10
minutes each – one in the fifth hour and another in the seventh. As such, the last
hours of work would be less tiring. Figure 5 gives two examples of how breaks
could be reorganized and redistributed to give workers more rest time. However,
this change may cause a decrease in productivity (by decreasing the net duration
of the work shift by 20 minutes in a working day, the number of pieces produced
would fall from 636 to 610). Before making any changes to the reorganization of
the breaks, it is therefore necessary to assess their compatibility with the compa-
ny's production rhythm and production targets.
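The productivity impact can be estimated with a back-of-the-envelope calculation. The sketch below assumes an 8-hour shift with two existing 10-minute breaks (i.e. 460 minutes of net working time, an assumption of ours not stated explicitly) and infers the cycle time from the 636-piece daily output:

```python
# Back-of-the-envelope productivity impact of two extra 10-minute breaks.
# Assumption (ours): 8-hour shift minus two existing 10-min breaks = 460 net min.
NET_SHIFT_MIN = 8 * 60 - 2 * 10   # assumed net working minutes before the change
PIECES_BEFORE = 636               # daily output reported above

cycle_min = NET_SHIFT_MIN / PIECES_BEFORE             # inferred minutes per piece
pieces_after = int((NET_SHIFT_MIN - 20) / cycle_min)  # 20 min more break time

print(pieces_after)  # 608, close to the 610 figure quoted above
```

The result (608 pieces) is consistent with the order of magnitude of the 636-to-610 drop quoted in the text.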
1204 A. Naddeo et al.

Fig. 5. Recovery time distributions.

A simulation was performed to check the redesigned pre-gluing and the redis-
tribution of recovery times. It was carried out in the new conditions for all three
percentiles, and the OCRA Index was recalculated.

Table 2. OCRA Index scores before and after redesign and redistribution of recovery times.

Percentile   Side    Before   After
5°           Right   29.06    10.30
             Left     8.84     7.45
50°          Right   23.37     8.51
             Left     2.53     2.13
95°          Right   23.37     8.51
             Left     3.79     2.13

As is clear from Table 2, the indexes dropped significantly. The number of breaks increased from two to four, and the Borg value for physical effort fell from five to four after redesigning the lever of the pre-gluing press. A comfort
assessment of the redesigned pre-gluing press was also made using CaMAN®
[26] software. Comfort indexes for each joint were calculated for the following
two tasks: lifting the lever (task 1 in Table 3) and lowering the lever (task 2 in Ta-
ble 3). A comparison of the indexes was made before and after redesign.

Table 3. Comparison of comfort indexes before and after redesign.

                                      Before redesign     After redesign
                                      Task 1   Task 2     Task 1   Task 2
Neck       Flexion                     9.98     9.98       9.98     9.98
           Lateral flexion             9.90     9.90       9.90     9.90
Shoulder   Frontal flexion             5.74     6.80       5.95     6.97
           Add/abduction              10.00    10.00       5.28    10.00
Elbow      Flexion/extension           7.10     3.70       5.92     1.00
           Pronation/supination        5.70     5.70       9.55     6.36
Wrist      Flexion/extension           9.54     9.51       4.41     9.79
           Radial/ulnar deviation      2.90     2.90       1.20     4.40
Global index                           7.61     7.31       6.52     7.30
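The global index values in Table 3 are consistent with a plain arithmetic mean of the eight joint indexes; the sketch below reproduces them on that basis (this averaging rule is our inference from the table values, not a description of the CaMAN® internals):

```python
# Global comfort index reproduced as the arithmetic mean of the eight joint
# indexes -- an inference from the Table 3 values, not necessarily the actual
# CaMAN formula.
def global_index(joint_indexes):
    return sum(joint_indexes) / len(joint_indexes)

# Joint indexes for Task 1 (lifting the lever), before and after redesign
before_task1 = [9.98, 9.90, 5.74, 10.00, 7.10, 5.70, 9.54, 2.90]
after_task1 = [9.98, 9.90, 5.95, 5.28, 5.92, 9.55, 4.41, 1.20]

print(round(global_index(before_task1), 2))  # 7.61, as in Table 3
print(round(global_index(after_task1), 2))   # 6.52, as in Table 3
```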

The table shows that some of the comfort indexes decrease. This is because the type of movement required by the lever is somewhat constrained: after changing the lever's handle, some joints assume different postures, leading to variations in the comfort indexes. However, the machine still complies with all standards and laws relating to operators' health and safety.

5 Conclusions

The function of the ergonomic evaluation was to analyze the entire cycle of a work cell. The aim was to allow workers to carry out all tasks with fewer incongruous and/or awkward postures. In this paper, the use of the pre-gluing press was identified as the workstation causing workers the most difficulty in terms of effort. However, since the perceived effort was still in accordance with existing standards, the redesign did not change the way the press was used; rather, it modified and improved the existing structure, allowing workers to perform the task with less effort.
Our low-cost method of analysis, combined with powerful simulation methods, allowed us to identify the critical issues easily. We were then able to solve them by virtually redesigning the work cycle.
As this is a repetitive process, a re-organization of the pauses was considered
useful. Along with the redesign of the press arm, this was a way to improve the
entire cycle to benefit the operator in terms of comfort and ergonomics. It also
provided greater rest time, with workers experiencing less fatigue as a result.

Acknowledgments The research work reported here was made possible by collaboration with
Oerlikon METCO Friction Systems, production site of Caivano (Italy).

References

1. UNI EN ISO 14738:2009, Safety of machinery – anthropometric requirements for the design
of workstations at machinery.
2. ISO 11226:2000, Ergonomics - Evaluation of static working postures.
3. UNI EN 1005-3:2009, Safety of machinery – Human physical performance – part3.
4. ISO 11228-1, Ergonomics - Manual handling - 1: Lifting and carrying.
5. ISO 11228-2, Ergonomics - Manual handling - 2: Pushing and pulling.
6. ISO 11228-3, Ergonomics - Manual handling - 3: Handling of low loads at high frequency.
7. McAtamney L. and Corlett E.N. RULA: a survey method for the investigation of work-
related upper limb disorders, 1993, Applied Ergonomics, 24(2).

8. Hignett S. and McAtamney L. Rapid Entire Body Assessment (REBA), 2000, Applied Ergo-
nomics. 31, Issue 2, 201-205.
9. Kee D. and Karwowski W. LUBA: an assessment technique for postural loading on the upper
body based on joint motion discomfort and maximum holding time, 2001, Applied Ergonom-
ics, 32(4), 357-366.
10. Moore J.S. and Garg A. The strain Index: a proposed method to analyze jobs for risk of distal
upper extremity disorders, 1995, Am Ind Hyg Assoc J., 56(5):443-58.
11. Occhipinti E. and Colombini D. Proposta di un indice sintetico per la valutazione
dell’esposizione a movimenti ripetitivi degli arti superiori (Ocra index), 1996, Medicina del
Lavoro, vol. 87, n. 6, pp. 526-548.
12. Valentin L., Gerling A. and Aptel M. Validité opérationnelle d'OREGE (Outil de Repérage et
d'Evaluation des Gestes), 2004, Laboratoire de Biomécanique et d'Ergonomie: Nancy(FR).
13. D'Oria C., Naddeo A., Cappetti N. and Pappalardo M. Postural analysis in HMI design: an
extension of OCRA standard to evaluate discomfort level, 2010, Journal of Achievements in
Materials and Manufacturing Engineering. 39, 60-70, ISSN 1734-8412.
14. Annarumma M., Pappalardo M. and Naddeo A. Methodology development of human task
simulation as PLM solution related to OCRA ergonomics analysis. In IFIP 2008 Int. Fed. Inf.
Process. 277, 19-29, doi:10.1007/978-0-387-09697-1.
15. Vallone M., Naddeo A., Cappetti N. and Califano R. Comfort Driven Redesign Methods: An
Application to Mattresses Production Systems, The Open Mechanical Engineering Journal,
2015, 9, 492-507.
16. Naddeo A., Barba S. and Ferrero Francia I.F. Propuesta de un nuevo método no invasivo para
el análisis postural con aplicaciones de fotogrametría 4d. In CIBIM 2013, XI Congreso Ibero-
Americano de Ingegnieria Mecanica, 11-14 Nov. 2013, La Plata, Argentina.
17. Di Pardo M., Riccio A., Sessa F., Naddeo A. and Talamo L. Methodology development for
ergonomic analysis of work cells in virtual environment, 2008, SAE Technical Papers, DOI:
10.4271/2008-01-1481.
18. Todisco V., Vallone M., Clemente V. and Califano R. The effect of wearing glasses upon
postural comfort perception while using multi-tasking electronic devices in sitting position. In
“Advances in Human Factors and Ergonomics” Conference, 27-31/ July 2016, Orlando
(USA).
19. Volpe Y., Governi L., Furferi R. A computational model for early assessment of padded fur-
niture comfort performance. 2015, In Manufacturing, 25 (1), pp. 90-105.
20. Thompson Jon C. Netter's Concise Atlas of Orthopaedic Anatomy, 2001, Publisher: Saun-
ders.
21. Apostolico A., Cappetti N., D’Oria C., Naddeo A. and Sestri M. Postural Comfort Evalua-
tion: Experimental Identification of Range of Rest Posture for human articular joints, 2013,
Int J Interact Des Manuf, pp.1-14, doi: 10.1007/s12008-013-0186-z.
22. Naddeo A., Apicella M., Galluzzi D. Comfort-Driven Design of Car Interiors: A Method to Trace Iso-Comfort Surfaces for Positioning the Dashboard Commands, 2015, SAE Technical Papers, 2015-April, DOI: 10.4271/2015-01-1394.
23. Fagarasanu M., Kumar S. and Narayan Y. Measurement of angular wrist neutral zone and
forearm muscle activity, 2004, Clinical Biomechanics 19, 671-677.
24. Christensen H.W. and Nilsson N. The ability to Reproduce the Neutral Zero Position of the
Head. 1999, Journal of Manipulative and Physiological Therapeutics, 22(1):26-28.
25. Naddeo A., Cappetti N. and Ippolito O. Dashboard reachability and usability tests: A cheap
and effective method for drivers' comfort rating. 2014, SAE Technical Papers, DOI:
10.4271/2014-01-0455.
26. Naddeo A., Cappetti N. and D'Oria C. Proposal of a New Quantitative Method for Postural
Comfort Evaluation, 2015, International Journal of Industrial Ergonomics 48: 25-35.
27. Colombini D., and Occhipinti E. The OCRA Method (OCRA Index and Checklist). Updates
with special focus on multitask analysis, 2008, in AHFE 2008, Las Vegas (USA). ISBN 978-
1-60643-712-4.

28. NIOSH Manual of Analytical Methods (NMAM™), 4th ed.


29. Borg G. Psychophysical Bases of Perceived Exertion. 1982, Medicine and Science in Sports
and Exercise (14), 377-381.
30. UNI EN 1005-2, Safety of machinery – Human physical performance – Part 2: Manual handling of machinery and component parts of machinery.
Section 8.3
Image Processing and Analysis
Error control in UAV image acquisitions for 3D
reconstruction of extensive architectures

Michele Calì1*, Salvatore Massimo Oliveri1, Gabriele Fatuzzo1 and Gaetano Sequenzia1

1 Dipartimento di Ingegneria Elettrica, Elettronica e Informatica, V.le A. Doria, 6 – 95125 Catania (Italy)
*Corresponding author: Michele Calì, Tel.: +39.095.738.2400; fax: +39.095.33.79.94. E-mail
address: mcali@dii.unict.it

Abstract This work describes a simple, fast, and robust method for identifying,
checking and managing the overlapping image keypoints for 3D reconstruction of
large objects with numerous geometric singularities and multiple features at dif-
ferent lighting levels. In particular a precision 3D reconstruction of an extensive
architecture captured by aerial digital photogrammetry using Unmanned Aerial
Vehicles (UAV) is developed. The method was experimentally applied to survey
and reconstruct the 'Saraceni' Bridge at Adrano (Sicily), a valuable example of
Roman architecture in brick of historical/cultural interest. The variety of features
and different lighting levels required robust self-correlation techniques which
would recognise features sometimes even smaller than a pixel in the digital images
so as to automatically identify the keypoints necessary for image overlapping and
3D reconstruction. Feature Based Matching (FBM) was used for the low lighting
areas like the intrados and the inner arch surfaces, and Area Based Matching
(ABM) was used in conjunction to capture the sides and upper surfaces of the
bridge. Applying the SIFT (Scale Invariant Feature Transform) algorithm during capture helped find distinct features invariant to position, scale and rotation, robust to affine transformations (changes in scale, rotation, size and position) and lighting variations, and particularly effective for image overlapping.
Errors were compared with surveys by total station theodolites, GPS and laser sys-
tems. The method can facilitate reconstruction of the most difficult to access parts
like the arch intrados and the bridge cavities with high correlation indices.

Keywords: Architectural reconstruction, Photogrammetry, Feature Based Match-


ing, Area Based Matching, SIFT algorithm.

© Springer International Publishing AG 2017 1211


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_121

1 Introduction

The acquisition of photographic images from Unmanned Aerial Vehicles (UAVs) is widely used for the 3D reconstruction, through photogrammetry, of large-scale objects, natural environments, architectural works and industrial installations. When such objects have numerous geometrical singularities, multiple variously illuminated features and continually changing perspectives, reconstruction is particularly complex. This work usually has two distinct phases: the acquisition of the photographic images from the UAV and their processing with dedicated software. The precision of 3D reconstruction using digital photogrammetry depends mainly on image definition (resolution, micro-contrast and granularity) and on correct image overlapping [1-3]. A shortage of images with effective common keypoints forces further acquisitions or their integration with other images [4].
This work describes a method which can identify and check image keypoints right from the in-flight acquisition, thereby reducing the number of images and flights necessary to guarantee an optimal 3D reconstruction. The method indicates the criteria for choosing the acquisition frame rate depending on lighting and detail complexity, and the most appropriate algorithms for processing the images. The method's efficiency was evaluated during the reconstruction of the 'Saraceni' Bridge (Adrano, Sicily), which has numerous singularities, diverse features and different lighting levels. The bridge, in fact, is over 60 m long and on average over 6 m wide, supported by four arches of varying geometry and size. The main arch apex is over 13 m above the river surface; the bridge is also characterized by foundations, piles, abutments, ornamental parapets and decorative components in various types of stone with different surface finishes and lighting conditions. In this application, Feature Based Matching (FBM) [5] was used in low lighting for the arch intrados and inner walls, whereas Area Based Matching (ABM) was used alongside FBM to acquire the sides and upper areas of the bridge. The SIFT algorithm [6-9] was used to find keypoints robust to affine transformations (scale changes, rotations, sizes and positions) and lighting variations.

2 Methods

The variety of geometry and the diverse lighting levels in the various parts of the bridge called for photographic survey strategies and correlation techniques able to recognise features sometimes even smaller than a pixel within the digital images. This recognition is necessary for overlapping the images and reconstructing the bridge in 3D. The number of common keypoints and features shared between two forward-overlapping images is the main indicator of success in achieving a reconstruction of the required precision.
Automatically identifying common keypoints and features in images depends on the effectiveness of the algorithm, but also on the overlap percentage between the two photographs. To guarantee the best forward overlap of two successive images, and having established the most appropriate characteristics for the UAV platform, the waypoint grid for acquiring the photographs was chosen taking into account the drone sweeps appropriate for the bridge [10-12].

2.1 UAV Platform

An amateur hexacopter drone with a LiPo 4S battery (4000 mAh, 14.4 V, 57.6 Wh) providing over 20 min of autonomy was used for the survey (Fig. 1 (a)). The control board was an Ardupilot APM 2.6 with Arducopter 3.1.5 flying software and a PC Mission Planner groundstation. The board was equipped with a gyroscope, an accelerometer, a magnetometer and a barometer, which supplied the processor with 3D data about position and acceleration, and with electronic speed controllers (ESC, HobbyKing F30) for the brushless motors.

Fig. 1. (a) UAV platform; (b) Gimbal and GoPro Hero 4 Black Edition action camera; (c) batteries and camera; (d) LCD screen on the radio control.

The drone was also equipped with uBlox LEA6H GPS. Beneath the drone on a
'Gimbal' support was a Canon Powershot S100 digital camera (80-6400 ISO). The
Gimbal support allowed the camera to rotate along 3 axes (pitch, roll, yaw) controlled by a digital board and manoeuvred by brushless motors which could damp-
en any drone shifts keeping the camera immobile. On the Gimbal an action camera
(GoPro Hero 4 Black Edition) was also added (Fig. 1 (b)). The other UAV plat-
form and camera characteristics are shown in Tables 1 and 2.

Table 1. UAV platform.

Technical specification    Value/Typology
Frame                      Hexacopter
Engine                     T-Motors 2216
Engine size [mm]           Φ27.8 x 34
Engine weight [g]          75
Idle current @10 V [A]     0.04
Batteries                  LiPo 4S – 4000 mAh
Max energy [Wh]            57.6
Rotors                     Nylon 10x5 pitch
GPS                        uBlox LEA6H
Flying software            Arducopter 3.1.5
UAV weight [g]             1120

Table 2. Canon Powershot S100 parameters.

Parameter              Value
Sensor                 CMOS, 12.1 MP, 1/1.7”
Focal length [mm]      24-120
ISO sensitivity        80-6400
Lens range             f/2.0 – f/5.9
Pixel size [μm]        1.86
Burst shooting [fps]   2.3
Weight [g]             198

The camera has a 7.44 x 5.58 mm solid-state sensor which also records radiometric data, so the radiometric content of the image pixels could be checked. A video transmitter provided real-time shots on a 7” LCD monitor incorporated into the radio control unit (Fig. 1 (d)).

2.2 Mission planner and waypoint acquisition grid

The UAV platform and its GPS uBlox LEA6H with software Pix4Dmapper pro-
vide many possible flight plans. Among them, the photographic surveys obtained
via rectangular grids (Fig. 2 (a)) and elliptical orbits (Fig. 2 (b)) proved the most
appropriate and effective.

Fig. 2. Mission planner: (a) rectangular grid; (b) elliptical orbit; (c) flying beneath the arch.

Photographs of the upper and lateral parts of the bridge used these grids at various heights (15, 20, 30, 40 and 50 m), flying automatically with great precision. Photographing
the insides of the arches meant flying under them. This was indispensable because
the main arch is over the River Simeto and inaccessible by land (Fig. 2 (c)). The
flight and landing precision during acquisition missions was due to GPS uBlox
LEA6H triangulating with orbiting satellites [13-17].
By identifying in each mission a discrete number of equidistant points at constant pitch, which the Groundstation PC Mission Planner software converts into geo-referenced coordinates, a waypoint acquisition grid is obtained (Fig. 3), and a photograph is acquired from each of its points [18-19]. The number of points in the grid is a function only of the acquisition height.

Fig. 3. Waypoint acquisition grid: (a) rectangular; (b) elliptical.

The elliptical orbit acquisitions are shot with the camera inclined to the hori-
zontal at an angle which varies with altitude to optimise the shot line-up. The best
compromise between distortion, definition and image overlap percentage is ob-
tained when the camera angle is -45°, sweep distance is 5m and height 30m. Even
in these conditions, image overlap varies somewhat (40% - 66%) (Fig. 4 (a)).

Fig. 4. Overlap percentage: (a) with elliptical orbit; (b) with rectangular grid.

The best results in terms of overlap percentage uniformity and the consequent
high definition of the points cloud model are obtained with a rectangular acquisi-
tion grid (Fig. 4 (b)). With a constant sweep distance of 5m and varying camera
inclination from -45° (furthest points) to -90° to the vertical, frontal and side over-
lap values better than 66% (Fig. 5 (a)) are obtained in the two mutually perpen-
dicular directions. These values guarantee that each point in the 3D model is
available in at least 3 different shots (Fig. 5 (b)).

Fig. 5. (a) Rectangular grid at 30m; (b) a keypoint in 3 forward overlaps.
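The footprint/overlap geometry behind these grid choices can be sketched with a pinhole-camera model. The numbers below are purely illustrative: the S100's 24-120 mm range is a 35 mm-equivalent focal length, and camera tilt and terrain relief are ignored.

```python
# Illustrative pinhole-model relation between height, sensor geometry, grid
# spacing and overlap (not the paper's actual numbers, which include tilt).
def ground_footprint(height_m, sensor_mm, focal_mm):
    """Footprint of a nadir shot along one sensor axis (pinhole model)."""
    return height_m * sensor_mm / focal_mm

def overlap_fraction(footprint_m, spacing_m):
    """Forward (or side) overlap between consecutive shots spacing_m apart."""
    return max(0.0, 1.0 - spacing_m / footprint_m)

fp = ground_footprint(30, 7.44, 24)  # e.g. 30 m height, 7.44 mm sensor, 24 mm focal
print(fp, overlap_fraction(fp, 5))   # footprint ~9.3 m, overlap ~46%
```

The same two relations show why a tighter sweep distance or a greater height raises the overlap percentage.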

2.3 Active control and real time overlap detection

As soon as an image is acquired, the Feature Based Matching and Area Based Matching algorithms, using Least Squares Matching in the Pix4Dmapper software, identify in real time the number of keypoints and features in common with the backward overlap. If the overlap is greater than a set threshold, the UAV platform moves towards the next waypoint; otherwise it acquires an extra image at the midpoint between the two following waypoints. The backward overlap value is calculated for the new image as well, and if necessary the UAV platform goes back to acquire other intermediate images along the planned trajectory to guarantee the set overlap threshold between all pairs of forward-overlapping images. Thus, the total number of acquired images is greater than or equal to the number set in the waypoint acquisition grid. Acquiring at constant height allows a manual setting of the 24 mm focal length, thereby avoiding autofocus, which lengthens or shortens the focal length and in turn hinders overlap.
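The in-flight control logic above can be sketched as follows. Function names and the toy distance-based overlap model are our own; the real system computes the overlap from FBM/ABM keypoint matches in Pix4Dmapper.

```python
# Illustrative sketch of the in-flight overlap check with midpoint insertion
# (our own names; the real overlap comes from FBM/ABM keypoint matching).
import math

def plan_acquisitions(waypoints, overlap, threshold):
    """Fly the waypoint list; whenever the backward overlap with the previous
    shot falls below the threshold, insert an extra shot at the midpoint."""
    shots = [waypoints[0]]
    for wp in waypoints[1:]:
        pending = [wp]  # waypoints still to be reached, nearest on top
        while pending:
            nxt = pending[-1]
            if overlap(shots[-1], nxt) >= threshold:
                shots.append(pending.pop())
            else:
                # extra image at the midpoint between the last shot and the target
                mid = ((shots[-1][0] + nxt[0]) / 2, (shots[-1][1] + nxt[1]) / 2)
                pending.append(mid)
    return shots

# Toy overlap model: decays linearly with distance (purely illustrative)
def overlap(a, b):
    return max(0.0, 1.0 - math.hypot(a[0] - b[0], a[1] - b[1]) / 10.0)

grid = [(0, 0), (5, 0), (10, 0)]
print(len(plan_acquisitions(grid, overlap, 0.66)))  # 5 shots: one midpoint per leg
```

With this toy model, each 5 m leg fails the 66% threshold and gets exactly one midpoint shot, so the total acquired images exceed the planned waypoints, as described above.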

3 Data Processing

Applying the SIFT algorithm [5-6] during the acquisition phase provides keypoints robust to affine transformations (scale changes, rotations, etc.) and lighting variations. In particular, it solves the problem of repeated lighting changes in the various parts of the bridge (external areas, inside the arches, internal areas) without resorting to an excessively dense acquisition grid.
Matching works by recognising radiometric values in the images, such as greyscale levels, or simple features (points, lines, areas): one small image portion, the template, is held fixed on an image while the other, the search matrix, is moved until it coincides with the first [7]. This correlation process refines the position of the matches to sub-pixel level with a precision of 0.5 pixel, i.e. less than a micron (pixel size 1.86 μm).
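A toy version of this area-based search is sketched below, using a plain sum-of-squared-differences score at whole-pixel positions (the real pipeline uses Least Squares Matching with sub-pixel refinement):

```python
# Toy Area Based Matching: slide a template over a search window and keep the
# position with the lowest sum of squared differences (SSD). Illustrative only;
# the actual pipeline refines matches to sub-pixel precision.
def match_template(template, search):
    th, tw = len(template), len(template[0])
    best_ssd, best_pos = None, None
    for i in range(len(search) - th + 1):
        for j in range(len(search[0]) - tw + 1):
            ssd = sum((search[i + di][j + dj] - template[di][dj]) ** 2
                      for di in range(th) for dj in range(tw))
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (i, j)
    return best_pos

search = [[0, 0, 0, 0],
          [0, 0, 9, 8],
          [0, 0, 7, 6],
          [0, 0, 0, 0]]
print(match_template([[9, 8], [7, 6]], search))  # (1, 2)
```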
The survey and reconstruction produced a massive point cloud which was difficult to handle with commercial software. By iterative contraction of vertex pairs, an acceptable approximation is obtained, creating a slimmer mathematical model. The decimation algorithm contracts a pair of vertices (v1, v2) into one, respecting the following conditions for each pair:
(v1; v2) is an edge;
||v1 − v2|| < t, where t is a threshold parameter.
Decimation allows a five-fold reduction in the number of polygons and vertices of the flat polygon mesh model obtained from the point cloud (from 1338439 to 261413 polygons; from 678117 to 135623 vertices). Since the polygonal mesh is generated by an algorithm based on Delaunay triangulation, any loss of definition is negligible.
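A minimal sketch of this edge-contraction rule is shown below (our own simplified implementation on a toy vertex/edge list; the actual decimation operates on the full mesh and re-triangulates via Delaunay):

```python
# Simplified sketch of vertex-pair contraction: every edge (v1, v2) with
# ||v1 - v2|| < t collapses into a single vertex at the midpoint.
import math

def decimate(vertices, edges, t):
    parent = list(range(len(vertices)))  # union-find over vertex indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    verts = [list(v) for v in vertices]
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb and math.dist(verts[ra], verts[rb]) < t:
            # contract the pair: move the survivor to the midpoint
            verts[ra] = [(x + y) / 2 for x, y in zip(verts[ra], verts[rb])]
            parent[rb] = ra
    reps = {find(i) for i in range(len(vertices))}
    return [tuple(verts[r]) for r in sorted(reps)]

# The short edge (length 0.1 < t) collapses; the long one survives
print(len(decimate([(0, 0), (0.1, 0), (5, 0)], [(0, 1), (1, 2)], t=1.0)))  # 2
```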

4 Results

In the elliptical orbit acquisitions, the longitudinal parts of the bridge have a low percentage of overlap (Fig. 4 (a)); the optimum was obtained with a camera angle of -45°, a height of 30 m and a sweep distance of 5 m. Increasing the number of acquisitions along the elliptical trajectory raises the overlap percentage, but without significant improvements in reconstruction quality. The study determined the best combination of grid typology, frequency and number of acquisitions. Table 3 summarises the results, measuring acquisition effectiveness as mean overlap percentage and mean number of keypoints per image.

Table 3. Overlap percentage in different waypoint acquisition grids.

W.P. acquisition grid   Height (m)   Mean overlap percentage (%)   Mean keypoints per image
Rectangular grid        30           67                            33856
Rectangular grid        40           58                            28608
Elliptical orbit        30           53                            25986
Elliptical orbit        40           48                            23283

The best results arise from the rectangular grid at a height of 30 m, taking 11 photographs for each side of the grid at the set sweep distance. Fig. 6 shows the recon-
struction phases and the complete 3D reconstruction of the bridge.

Fig. 6. Phases in the 3D reconstruction: (a) sparse 3D points cloud; (b) shaded polygonal mesh; (c) nurbs surface; (d) mapped texture on 3D geometric surface.

The quality of the bridge reconstruction, in terms of integrity and detail, was evaluated by comparing the decimated 3D reconstruction with total station, GPS and laser surveys. Errors in the total length and in partial longitudinal and transverse measurements of the deck and sides were evaluated with 15 pairs of points Ai and Bi (Fig. 7 (a)). The errors in the transverse Ai-Bi measure (the width between the outer upper edges of the parapets) have a mean value (μ) close to zero and a standard deviation of less than 3 cm. Fig. 7 (b) shows the error cumulative distribution function (cdf) versus the Gauss cumulative distribution function (mean value μ = 0 and standard deviation σ = 3 cm). Since the cdf is centered on 0 at the 0.5 (50%) point, as in this case, there are no systematic errors.
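The no-systematic-error check can be sketched as follows (the error values are made-up and symmetric, purely for illustration; `NormalDist` stands in for the fitted Gauss cdf):

```python
# Sketch of the systematic-error check: if the empirical error cdf equals 0.5
# at zero (and tracks the Gaussian cdf), there is no systematic bias.
from statistics import NormalDist

def ecdf(sample, x):
    """Empirical cumulative distribution function of the errors at x."""
    return sum(1 for s in sample if s <= x) / len(sample)

# Illustrative transverse errors in cm (made-up, symmetric about zero)
errors = [-4.1, -2.6, -1.2, 1.0, 2.3, 4.4]

gauss = NormalDist(mu=0, sigma=3)  # the reference Gauss cdf (mu = 0, sigma = 3 cm)
print(ecdf(errors, 0.0), gauss.cdf(0.0))  # 0.5 0.5 -> no systematic error
```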

Fig. 7. (a) Comparing the reconstructed plant profiles with the surveyed plant profiles; (b) Error cumulative distribution function vs Gauss cumulative distribution function.

Similarly, the profile error of the arches was evaluated for the first north-east arch (south side) (Fig. 8 (a)). The errors in the radial measure are evaluated in terms of arithmetic mean (μ) and standard deviation (σ). The arithmetic mean is close to zero and the standard deviation is less than 1 cm, so the differences are completely negligible. Fig. 8 (b) shows the error cumulative distribution function versus the Gauss cumulative distribution function (mean value μ = 0 and standard deviation σ = 1 cm). Also in this case there are no systematic errors.

Fig. 8. (a) Comparing the reconstructed profile with the south side of the north-east surveyed arch; (b) Error cumulative distribution function vs Gauss cumulative distribution function.

5 Conclusion

This work has illustrated an error control methodology for UAV image acquisition aimed at the 3D reconstruction of an extensive architecture characterised by numerous geometrical singularities and multiple features with different lighting levels. The
technique establishes the flight plan type and the most effective acquisition grid
for various types of object. These photographic survey strategies together with the
techniques of Feature Based Matching (FBM) and Area Based Matching (ABM)
as well as SIFT algorithm revealed the percentages of forward overlapping re-
quired to guarantee the target reconstruction precision.
The method can identify the threshold values for image overlap which guarantee enough keypoints in forward overlaps that are robust to affine transformations (scale changes, rotations, sizes and positions) and lighting variations. The
opportune choice of flight maps, acquisition grids and acquisition thresholds can
avoid the need to repeat or integrate other acquisition images.
The experimental survey of the ‘Saraceni’ Bridge at Adrano has shown how effec-
tive this methodology can be in the highly accurate 3D reconstruction of an archi-
tecturally complex structure.

References

1. Barazzetti L., Remondino F., Scaioni M., Brumana R. Fully automatic UAV image-based sen-
sor orientation. International Archives of Photogrammetry, Remote Sensing and Spatial In-
formation Sciences, Vol. 38(1), 2012. ISPRS Commission I Symposium, Calgary, Canada.
2. Nocerino E., Menna F., Remondino F., Saleri R. Accuracy and Block Deformation Analysis in
Automatic UAV and Terrestrial Photogrammetry - Lesson Learnt – In: ISPRS Annals of Pho-
togrammetry, Remote Sensing and Spatial Information Sciences, Volume II-5/W1, 2013.
3. Bitelli G., Girelli V.A., Tini M.A., Vittuari L., 2004. Low-height aerial imagery and digital
photogrammetrical processing for archaelogical mapping. Proceedings of ISPRS 2004 ISSN
1682-1777, Istanbul, Turkey.
4. Liénard J., Vogs A., Gatziolis D. & Strigu, N. Embedded, real-time UAV control for im-
proved, image-based 3D scene reconstruction. Measurement, 81, 2016. pp. 264-269.
5. Cheng Gong and Junwei Han. A survey on object detection in optical remote sensing images
ISPRS Journal of Photogrammetry and Remote Sensing 117 (2016). pp. 11-28.
6. Lowe, David G. Object recognition from local scale-invariant features. Proceedings of the In-
ternational Conference on Computer Vision 1999 pp. 1150–1157.
7. Lowe D. G. Distinctive Image Features from Scale-Invariant Keypoints. International Journal
of Computer Vision 60 (2) 2004. pp. 91–110.
8. Zhang Q et al. Matching of images with projective distortion using transform invariant low-
rank textures. Journal of Visual Communication and Image Representation 38 (2016): 602-
613.
9. Lamis G., Draa A. and Chikhi S.. An ear biometric system based on artificial bees and the
scale invariant feature transform. Expert Systems with Applications 57 (2016): 49-61.
10. Javier F. L. and Gutiérrez-Alonso G.. Improving archaeological prospection using localized
UAVs assisted photogrammetry: An example from the Roman Gold District of the Eria River
Valley (NW Spain). Journal of Archaeological Science: Reports 5 (2016): 509-520.
11. Takimoto R. Y. et al. 3D reconstruction and multiple point cloud registration using a low
precision RGB-D sensor. Mechatronics (2015).
12. Minglei L. et al. Reconstructing building mass models from UAV images. Computers &
Graphics 54 (2016): 84-93.
13. Ceruti A. et al. An Integrated Software Environment for UAV Missions Support. No. 2013-
01-2189. SAE Technical Paper, 2013.
14. Ceruti A., Liverani A. and Marzocca P. A 3D User and Maintenance Manual for UAVs and
Commercial Aircrafts Based on Augmented Reality. No. 2015-01-2473. SAE Technical Pa-
per, 2015.
15. Li, Meng, et al. 3D human motion retrieval using graph kernels based on adaptive graph con-
struction. Computers & Graphics 54 (2016): 104-112.
16. Bagassi S., Francia D., Persiani F. Preliminary study of a new uav concept: the variable ge-
ometry vehicle. International Congress of the Aeronautical Sciences 2013.
17. De Crescenzio F. et al. A first implementation of an advanced 3d interface to control and su-
pervise uav missions. Teleoperators and Virtual Environments 18.3 (2009): pp. 171-184.
18. Longuet-Higgins H. A computer algorithm for reconstructing a scene from two projections
Nature, vol. 293, no. 10, 1981.
19. Anand A., and Venkatesh K. S. Planar epipolar constraint for UAV navigation. Signal Pro-
cessing, Computing and Control (ISPCC), 2015 International Conference on. IEEE, 2015.
Accurate 3D reconstruction of a rubber
membrane inflated during a Bulge Test to
evaluate anisotropy

Michele Calì1*, Fabio Lo Savio1

1 Dipartimento di Ingegneria Industriale, Viale A. Doria, 6 – 95125 Catania (Italy)
*Corresponding author: Michele Calì, Tel.: +39.095.738.2400; fax: +39.095.33.79.94. E-mail
address: mcali@dii.unict.it

Abstract This paper describes a methodology for carrying out an accurate me-
chanical characterization of an amorphous hyperelastic rubber-like material (car-
bon black filled natural rubber) by a custom-made experimental setup for bulge
testing. Generally, during sample testing, the slight anisotropy of the internal polymer structures, primarily due to the calendering process, is neglected. This meth-
odology is able to evaluate these effects. A hydraulic circuit inflates a thin disk of
rubber blocked between two clamping flanges with adjustable flow rate, thus con-
trolling the speed of deformation of the sample. The device has a sliding crossbar,
which moves proportionally as the membrane inflates. A stereoscopic technique is
able to capture with pixel precision and identify the strain on a silk-screen grid
printed on the upper surface of the sample. For each acquisition step, the epipolar
geometry of the image pairs is represented in a single absolute reference system
integral to the experimental setup. The acquired images are processed using geo-
metrical algorithms and different filters. In this way an extremely precise 3D re-
construction of the sample is created during the bulge test. Slight anisotropic be-
haviors due to the rubber calendering process have been observed and measured
since the first steps of the bulge test, where the strains are minimal and principal
strain direction in equibiaxial tension test are determined.

Keywords: Stereoscopic Method, Hyperelastic Materials, Calendered Rubber, Transverse Isotropy, Equibiaxial Bulge Test.

1. Introduction

In recent years, studies have examined how molecular structure and micro-domain orientation in polymers, and especially in elastomers, significantly affect their mechanical and chemo-physical characteristics, as well as their weathering resistance [1-2]. Similar studies relate the mechanical properties of
© Springer International Publishing AG 2017 1221


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_122
biological tissues to their histology [3]. Elastomers are widely used in industry for hydraulic or pneumatic drive units, hydraulic hoses, vibration dampers, and pneumatic and hydraulic shock dampers [4]. These materials exhibit large deformations with a highly nonlinear stress-strain behaviour. The standard testing methods for mechanical analysis, as required by ASTM D412/D695, and dynamic mechanical analysis (DMA), provide valuable information about their mechanical properties and polymer chain movement on a molecular scale. Of these tests, the Bulge Test subjects the sample to an equibiaxial stress state to determine the hyperelastic constitutive model constants [5-6]. Since it is based on defining the different density functions of deformation energy, it can reproduce the quasi-static behaviour of elastomers. In this test, due to the axial symmetry of the experimental configuration, the stress and strain states are always considered axisymmetric. However, as in oriented or semi-crystalline polymers and even in amorphous polymers, there may be slight anisotropy of the internal polymer structures, primarily due to the calendering process which produced them. The method for measuring the sample strain and the experimental test apparatus described in this paper can accurately measure this anisotropy during failure and creep tests.

2. Transversely isotropic elastomers

As is known, the behavior of hyperelastic materials, such as elastomers, is represented in terms of a strain energy density [7] which, in turn, is written as a function of the invariants of the right Cauchy-Green tensor. However, such models are unable to take into account the anisotropy that may be induced by the manufacturing process of calendering [8]. For this reason, Robisson [9] investigated the effect of the calendering process on SBR by uniaxial tension tests, while Diani et al. [8] studied this effect on natural rubber filled with silica particles by using uniaxial and biaxial tension tests. As already mentioned, the strain energy for isotropic materials is a function of the invariants of the right Cauchy-Green deformation tensor [7]: all three (I1, I2, I3) if the material is compressible, and only the first two (I1, I2) if the material is incompressible, i.e., if it satisfies the kinematic condition I3 = 1.
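As a minimal numerical sketch of these invariants (the deformation gradient below is an illustrative value, not experimental data):

```python
import numpy as np

# Illustrative volume-preserving equibiaxial stretch (NOT experimental data):
# in-plane stretch lam, out-of-plane 1/lam^2, so that det F = 1.
lam = 1.2
F = np.diag([lam, lam, 1.0 / lam**2])

C = F.T @ F  # right Cauchy-Green deformation tensor

I1 = np.trace(C)
I2 = 0.5 * (np.trace(C)**2 - np.trace(C @ C))
I3 = np.linalg.det(C)  # equals 1 for an incompressible (isochoric) deformation

print(I1, I2, I3)
```

For any isochoric deformation the third invariant evaluates to 1, which is why only I1 and I2 enter the strain energy of an incompressible rubber.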
On the other hand, Itskov and Aksel [10] proposed a set of orthotropic and transversely isotropic strain energy functions proved to be polyconvex [11] and coercive, satisfying the condition of the stress-free natural state. These functions matched the experimental data of Diani et al. [8] on calendered rubber sheets, revealing transverse isotropy with respect to the calendering direction. In this paper, we propose that a transversely isotropic material can, to a first approximation, be treated as a case of plane stress (σ3 = τ13 = τ23 = 0), referring to a solid with one dimension much smaller than the other two. With 1 and 2 as the main directions of symmetry, and writing separately the equation ε3 = S31·σ1 + S32·σ2, then:
(1)

by inverting (1) we can get the reduced stiffness matrix:

(2)

where invariants of the tensor Q are:

(3)

and, for a transversely isotropic material with respect to the z-axis (det Q = 1), we
have:

(4)

From eqs. (3) and (4), with experimental values for τ12 and γ12 obtained from additional pure shear tests performed by the authors, the elastic and tangential modulus values for the carbon black filled natural rubber are obtained.
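Eqs. (1)–(2) themselves are not reproduced in this extraction. Under the stated plane-stress assumption, a standard compliance form consistent with the symbols used here (the ν terms are the usual Poisson ratios, not given explicitly in the text) would be:

```latex
% A plausible form of Eq. (1): plane-stress compliance relation
\begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \gamma_{12} \end{pmatrix}
=
\begin{pmatrix}
  1/E_1 & -\nu_{21}/E_2 & 0 \\
 -\nu_{12}/E_1 & 1/E_2 & 0 \\
  0 & 0 & 1/G_{12}
\end{pmatrix}
\begin{pmatrix} \sigma_1 \\ \sigma_2 \\ \tau_{12} \end{pmatrix}
% whose inversion yields the reduced stiffness matrix [Q] of Eq. (2):
\qquad \boldsymbol{\sigma} = [Q]\,\boldsymbol{\varepsilon},
\quad [Q] = [S]^{-1}.
```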

3. Experimental Setup

3.1 Test samples

The test samples were 180 mm × 180 mm (Fig. 1a) and made from a single sheet of carbon black filled natural rubber (Fig. 1b). Fig. 1c shows the calendering (CD) and transverse (TD) directions. The silk-screen printed grid on the sample (Fig. 1c) consists of five concentric circumferences, a small central circle whose centre d coincides with the top of the dome, and seventy-three equidistant segments (meridians) radiating out from d.

Fig. 1. (a) Sample sizes; (b) whole sheet of natural rubber; (c) zoom of the grid.

3.2 Testing device

The experimental test apparatus can run creep or failure tests on sheets of material undergoing equibiaxial stress in bulge tests [6]. The device also has a sliding crossbar, holding two fixed-focus cameras, which moves proportionally as the dome inflates. Thus, the image size increase is due solely to dome inflation and not to the point d approaching the camera lenses. Movement is regulated by an optical system integral to the crossbar (Fig. 2b). It includes a laser diode and a photodiode arranged laterally to the dome, along the tangent at d. When the laser beam intercepts the inflating dome, the system electronically signals the crossbar to shift upwards, stopping when the sensor spots the beam again.
Fig. 2. (a) Testing Device; (b) Movement Optical System; (c) Detail of the imprinted grid.

The stepper motor and drive chain shift the crossbar vertically by 0.125 mm/step, with a 0.03% error in focal length. This ensures a negligible focusing error and a very accurate 3D reconstruction; in particular, the precision matches the size of the captured pixels (37.7 μm). The photodiode (Hamamatsu Si S5973-01) has a detector diameter Ø = 0.2 mm, so the laser beam (diameter Ø = 1 mm) does not influence the spatial resolution of the vertical motion (0.2 mm).
Creep is obtained by keeping the pressure within the bulge chamber constant by means of a pressure regulator. In the creep test, after a transient inflation (1 s), the sample is kept at a constant pressure of 0.75 bar long enough for complete relaxation.
In the failure test the pressure increases in steps of 6 s until the sample breaks; the pressure grows linearly at a rate of 660 Pa/s up to sample failure.
The failure and creep tests were carried out on 3 mm thick samples (Fig. 1a) held between two drilled flanges on a cylindrical vessel suitably designed for pressure testing. As shown in Fig. 2, the vessel consists of a 100 mm diameter die cavity of specific depth, capable of inflating a sheet of material into a right circular cylindrical die using pressurized gas. The tests were filmed by two monochrome video cameras (DMK 23G445) with a GigE interface, a Sony CCD ICX445ALA 1/3" sensor with a resolution of 1280 × 960 pixels, and an HF35HA-1B 1:1.6/35 mm lens.
In the creep test, the video cameras acquire continuously at 30 Hz during inflation and at 0.033 Hz (2 images/min) during the relaxation phase. The acquisition frequency during the failure test is 1 Hz. These sampling frequencies provided correct evaluations of the phenomena studied.

3.3 Calibration and image pre-processing

The video cameras were stereoscopically calibrated with the Camera Calibration Toolbox, which uses 3D target calibration based on the pinhole model [12]. The three intrinsic parameters (focal length, image centre and distortion factors) and the two extrinsic factors (rotation and translation vectors), which define the position of the right camera with reference to the left camera, are given in Table 1. These parameters are obtained by acquiring images of a flat grid (15 mm × 15 mm with 1 mm squares) variously oriented with respect to the video cameras (Fig. 3).

Table 1. Stereo calibration parameters after optimization.

Intrinsic parameters Left camera Right camera


Focal Length f0 [px.] [10535.54÷10626.09] [10451.10÷10534.34]
Principal point cc [px.] [1290.33÷626.85] [1605.89÷869.49]
Distortion factors kc [1.12 -73.28 0.017 0.015 0.00] [0.29 -12.38 0.01 0.014 0.00]
Extrinsic parameters Value
Rotation vector [0.02077 -0.11465 -0.00699]
Translation vector [38.24327 -0.02457 2.24076]
A camera separation of less than 10% of focal length (f0) ensures the whole grid
is acquired. Lighting is provided vertically from a single source.

Fig. 3. Calibration grids.

Image pre-processing during the tests was carried out with Matlab® toolkits, applying convolution, median and edge-detection filters to the images (Fig. 4).


Fig. 4. Images filters: (a) original image; (b) after convolution filter; (c) after median filter.

The convolution filter, which highlights details, was required to sharpen the grid edges during the test. The filter can recoup the light intensity attenuated by the deformations (Fig. 4b). A kernel [m × n] whose nucleus contains an impulse function transforms image pixels into the weighted sum of the values of [m × n] input image pixels. Applying a convolution filter to the image introduces some 'noise', so subsequently a smoothing filter is applied, which replaces each pixel with the median value of the neighbouring pixels, eliminating the noise and balancing the image. Next, an edge-detection filter is applied, which precisely determines the marker and edge positions by identifying both dark-to-light and light-to-dark transitions (Fig. 5a). Based on the Roberts cross operators, the filter calculates the digital gradient modulus and is therefore relatively insensitive to directional orientation [13].
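The Roberts cross computation described above can be sketched in a few lines (plain NumPy; the toy image and function name are illustrative):

```python
import numpy as np

def roberts_modulus(img):
    """Gradient modulus via the two 2x2 Roberts cross kernels.

    Being a modulus, the result is largely insensitive to edge
    orientation, as noted in the text.
    """
    img = img.astype(float)
    # Diagonal differences over each 2x2 neighbourhood
    gx = img[:-1, :-1] - img[1:, 1:]   # kernel [[1, 0], [0, -1]]
    gy = img[:-1, 1:] - img[1:, :-1]   # kernel [[0, 1], [-1, 0]]
    return np.hypot(gx, gy)

# Toy image: a dark-to-light vertical step edge
step = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))
edges = roberts_modulus(step)
print(edges)
```

Along the step the two diagonal differences are ±1, so the modulus peaks at √2 on the edge and is zero in the flat regions.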
The set of adjacent pixels with a light transition identifies an edge. The pixels common to two edges identify markers, i.e. the intersections between the 73 meridians and 5 parallels. The edge-detection filter, identifying light transitions from light to dark and vice versa, can identify four markers for each meridian-parallel intersection. Fig. 5a shows the eight light transitions generated at the intersection of one meridian with two parallels. Sweeping image pixels from inside to outside anticlockwise, and selecting those with a double dark-to-light transition, homothetic grid markers are obtained, from which deformation can be evaluated (Fig. 5b). Fig. 5c shows two homothetic markers in successive images (10th and 11th test steps).

Fig.5. (a) Light transitions: dark to light (red point) and light to dark (blue point); (b) homothetic
markers; (c) markers on the grid.

4. 3D Grid Reconstruction

To reconstruct the grid in 3D, the origin coincides with the centre of the clamping flange, the vertical z-axis passes through the mean point between the focal centres of the two video cameras, the x-axis is oriented rightwards along the median plane of the test machine chassis, and the y-axis is perpendicular to x and z (Fig. 2c). The xz-plane contains the centres OL and OR of the video cameras and, at t = 0, the centre d of the grid. The sample is oriented so that CD lies along the x-axis. The 73 meridians are numbered anti-clockwise starting from 1 at the x-axis. The back-projection rays of the image pairs acquired simultaneously by the two video cameras reconstruct the 3D positions of the markers. Fig. 6 and Table 2 show the main geometric variables of the acquisition system.

Table 2. Geometric Parameters.

Geometric Parameter Value


Inter-camera distance b 40 mm
Focal Length 10600 px
Focusing Range 250 mm ÷ ∞
Minimum distance from sample 400 mm

Fig. 6. Epipolar stereographic reproduction.

Symmetrically placed about the z-axis of the reference system, the video cameras have the same focal length (fL = fR) and optical axes ZL and ZR. Given the generic coordinates UL, VL and UR, VR of point P on the left and right focal planes respectively, then:

UL = f · X/Z (5)
UR = f · (X-b)/Z (6)

Distance d between these two projected points enables the calculation of depth, i.e. the distance Z between point P and the stereo vision system, by means of:

d = UL – UR = f · b/Z (7)
Z = f · b/d (8)
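Plugging in the nominal values of Table 2 (f ≈ 10600 px, b = 40 mm), Eqs. (7)–(8) can be checked numerically (a small sketch with hypothetical function names):

```python
# Depth from stereo disparity, Eqs. (7)-(8): d = f*b/Z and Z = f*b/d.
# Nominal parameters from Table 2 (focal length in pixels, baseline in mm).
f_px = 10600.0   # focal length f
b_mm = 40.0      # inter-camera distance b

def depth_from_disparity(d_px):
    """Distance Z (mm) of point P from the disparity d = UL - UR (px)."""
    return f_px * b_mm / d_px

def disparity_from_depth(z_mm):
    """Inverse relation, Eq. (7): d = f*b/Z (px)."""
    return f_px * b_mm / z_mm

# At the minimum working distance of 400 mm (Table 2) the disparity is:
print(disparity_from_depth(400.0))   # 1060.0 px
print(depth_from_disparity(1060.0))  # 400.0 mm
```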

Fig.7 shows a block scheme of the reconstruction process.

Fig.7. Reconstruction phases.

5. Results and Discussion

In Fig. 8 are shown graphs of the tests performed: (a) and (b) show stress and
strain as a function of the time for the creep bulge test, respectively. In (c) and (d)
are reported the stress-strain curves for the failure bulge test and the shear stress-
strain curves for the planar test (pure shear), respectively. In (b), (c) and (d) can be
seen pairs of curves in calendering and transverse directions.


Fig. 8. (a,b) Stress and strain as a function of the time for the creep bulge test; (c) Stress-strain
curve for the failure bulge test; (d) Shear stress-strain curve for planar test (Pure Shear test).
So far, the grid markers have been 'tracked' during the bulge test. Each grid marker is associated with a set of Cartesian coordinates, which accurately reconstruct the spatial form and position of the grid at each test step:

(9)

where index i denotes the parallel (i=1,..,5), j the meridian (j=1,..,73) and t the instant of the test (Fig. 1c).
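Eq. (9) itself is lost in this extraction; consistent with the index description above, it presumably defines the tracked marker coordinate set, something like:

```latex
% A plausible reconstruction of Eq. (9), not the original typeset formula:
P_{i,j}(t) = \bigl\{\, x_{i,j}(t),\; y_{i,j}(t),\; z_{i,j}(t) \,\bigr\},
\qquad i = 1,\dots,5, \quad j = 1,\dots,73 .
```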

Fig. 9. Inflated membrane 3D reconstruction: (a) shaded polygonal mesh; (b) wireframe grid.
Using the markers, an interpolated polygonal mesh surface was constructed (Fig. 9a) [14-16]. In both the creep and failure tests, the edges of the surface's transverse sections, corresponding to the parallels, reveal slightly anisotropic behaviour of the material.
The pseudo-hemispherical shape of the surfaces is echoed in the strain calculation, which is a function of the meridian along which it is carried out. This is highlighted in the polar diagrams of radial and tangential strains (Fig. 10), where the lowest strain values occur in the calendering direction and the highest in the transverse direction. This leads to a failure crack that, in all tests, always appeared along the calendering direction (Fig. 11).

Fig. 10. The polar deformation diagram.
Fig. 11. Failure crack parallel to CD (x-axis).

The edges of the surfaces' transverse sections, which correspond to the parallels, approximate ellipses whose eccentricity depends on the height z and on the test step. This eccentricity rises steadily in both the creep and failure tests, clearly identifying the preferred failure direction from the very first test steps.
The stress and strain values, measured by the 3D reconstruction in the failure bulge test and by the DIC technique in the pure shear test, were inserted in eq. (2). Assuming a value of 0.5 for the Poisson ratios of the tested natural rubber [17], eqs. (3) and (4) led to the following values for the elastic and tangential moduli:

E1 = 1.105 MPa; E2 = 3.417 MPa; G = 0.495 MPa (Transverse Direction)

E1 = 1.277 MPa; E2 = 3.950 MPa; G = 0.393 MPa (Calendering Direction)

6. Conclusion

This work illustrates an accurate methodology to evaluate the deformation in bulge-tested carbon black filled natural rubber. Using stereoscopic surveys, the shifts of the screen-printed grid markers on the sample were acquired. The accurate 3D reconstruction of the pseudo-hemispherical sample geometry provided an effective evaluation of the effects of the slight anisotropy primarily due to the calendering process. In particular, it was noted that, right from the beginning of the bulge test, when the deformations were still moderate, the material showed small anisotropic behaviours, which resulted in a preferential failure direction in the equibiaxial tension tests. The results obtained agree well with those present in the literature.

References

1. Urayama K. Network Topology–Mechanical Properties Relationships of Model Elastomers.


Polymer Journal. (2008). 40(8). 669-78.
2. Mark J.E., Erman B., Eirich E.F. Science and Technology of Rubber, second ed. Academic
Press. 1996.
3. Natali A.N., Audenino A.L., Artibani W., Fontanella C.G., Carniel E.L., Zanetti E.M. Bladder
tissue biomechanical behavior: Experimental tests and constitutive formulation. J Biomech.
2015. 48(12). 3088-96.
4. Calì, M., Sequenzia, G., Oliveri, S.M., & Fatuzzo, G. Meshing angles evaluation of silent
chain drive by numerical analysis and experimental test. Meccanica 51.3 (2016) pp. 475-489.
5. Sasso M, Palmieri G., Chiappini G., Amodio D. Characterization of hyperelastic rubber-like
materials by biaxial and uniaxial stretching tests based on optical methods. Polymer Testing.
2008. 27(8). 995-1004.
6. Capizzi G., La Rosa G., Lo Savio F., Lo Sciuto G. Creep Assessment in Hyperelastic Material
by 3D Neural Network Reconstructor using Bulge Testing. Advanced Methods of the Theory
of Electrical Engineering. Trebíc, Czech Republic. Sept 2015.
7. Rivlin R.S. Large Elastic Deformations of Isotropic Materials: I. Fundamental Concepts. II.
Some Uniqueness Theorems for Pure Homogeneous Deformations Philos. Trans. R. Soc.
London, Ser. A. (1948) 240(835). 459-90.
8. Diani J., et al. Directional model for isotropic and anisotropic hyperelastic rubber-like materials. Mechanics of Materials. (2004). 36(4). 313-21.
9. Robisson A. Comportement Visco-Hyperélastique Endommageable d’Elastoméres SBR et
PU: Prévision de la Durée de Vie en Fatigue. PhD Thesis Ecole Nationale Supérieure des
Mines de Paris, France. 2000.
10. Itskov M., Aksel N. A class of orthotropic and transversely isotropic hyperelastic constitutive
models based on a polyconvex strain energy function. International Journal of Solids and
Structures. (2004). 41. 3833-48.
11. Ball J.M. Convexity conditions and existence theorems in non-linear elasticity. Archive for
Rational Mechanics and Analysis (1977). 63(4). 557-611.
12. Zhang Z. A Flexible New Technique for Camera Calibration. Technical Report MSRTR-98-
71, Microsoft Research, December 1998.
13. Atkinson, K., Close Range Photogrammetry and Machine Vision. Whittles Publishing, 2001.
Egels Y., Kasser, M. Digital Photogrammetry. CRC Press, 2001.
14. Koch R. 3–D Surface Reconstruction from Stereoscopic Image Sequences, Computer Vision,
1995. Proceedings of the Fifth International Conference, Cambridge, MA, 109-14.
15. Lanzotti A., Renno F., Russo M., Russo R., Terzo M., Virtual prototyping of an automotive
magnetorheological semi-active differential by means of the reverse engineering techniques,
Engineering Letters, 2015, 23(3), 01, pp. 115-124.
16. Martorelli M., Lepore A., Lanzotti A., Quality Analysis of 3D Reconstruction in Underwater
Photogrammetry by Bootstrapping Design of Experiments, International Journal of Mechan-
ics, ISSN: 1998-4448, Vol. 10, 2016, pp.39-45.
17. Omnès B., et al. Effective properties of carbon black filled natural rubber: Experiments and
modeling. Composites Part A: Applied Science and Manufacturing. (2008). 39(7). 1141-49.
B-Scan image analysis for position and shape
defect definition in plates

Donatella CERNIGLIA, Tommaso INGRASSIA, Vincenzo NIGRELLI and Michele SCAFIDI*

Dipartimento di Ingegneria Chimica, Gestionale, Informatica e Meccanica, Università degli Studi di Palermo, Palermo, Italy
*Corresponding author. Tel.: +39-3400075659; E-mail address: michele.scafidi@unipa.it

Abstract The definition of the size, shape and location of defects in a mechanical component is of extreme importance in the manufacturing industry in general, particularly in high-tech applications and in applications where the structural failure of mechanical components can become dangerous. In this paper, a laser-UT system has been used to define the position and shape of internal defects in aluminum plates. An infrared pulsed laser is used to generate ultrasonic waves at one point of the plate, and a CW laser interferometer is used as receiver to acquire, at another point of the plate, the out-of-plane displacements due to the ultrasonic waves. The method consists of acquiring a B-scan map on which some information on the defects in the mechanical component is visible. By storing the characteristics of the wave reflected by the defect and acquired in the B-scan, the defect can be detected and drawn. Acquiring from the B-scan the times of arrival of the waves reflected by the defect allows large parts of the defect shape to be defined. The times of arrival are obtained from the B-scan by analyzing the colour variations due to the wave reflected by the defect. Experiments performed from both sides of the plate allow the defect to be drawn in a virtual image of the plate section, from which the defect shape and position can be determined.

Keywords: B-scan image analysis; defect definition; NDE; laser Ultrasonic Testing.

1 Introduction

Laser Ultrasonic Testing (UT) systems are becoming more common among the Non-Destructive Evaluation (NDE) techniques thanks to the possibility of carrying out non-contact inspections [1-4]. One of the most important advantages of this technique is related to the use of high-frequency ultrasonic waves that allow the
© Springer International Publishing AG 2017 1233


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_123
detection of defects with very fine spatial resolution. Moreover, laser UT systems can be effectively used for remote inspections, with no influence of contact conditions, and, if proper delivery optics are used to guide the laser beam, they can operate also in hostile environments.
The presence of defects, corners and curved surfaces modifies the wave propagation, causing reflection and mode conversion. The waves resulting from different sources (i.e. reflected or converted waves) can interfere with each other, generating very complex patterns. For this reason, the inspection of complex structures by means of laser UT systems can become extremely hard. Nevertheless, by knowing the analytical models of wave propagation in solid structures, the experimental layout can be designed to optimize the post-processing analysis.
With this purpose, laser UT systems can be used to automate the scanning procedure and to make a rapid acquisition of the ultrasonic data by creating B-scan maps in real time [1]. The analysis of the B-scan image allows determining in an automated way the presence of defects in the tested component, as well as the characteristics of the defects. A technique of particular importance in the analysis of defects in plates is the Time Of Flight Diffraction (TOFD) [5-10]. This technique allows determining the presence of cracks in the material and defining the position and length of the crack even in components of complex geometry [11-14]. To facilitate the automation of the analysis, TOFD B-scan maps with selected wave types can be obtained by optimizing the layout of the laser system [15].
The aim of this work is the automatic definition of the size, shape and position of defects in plates of known thickness. A virtual image of the section of the analyzed component is created by an algorithm that locates the defects, drawing the boundary shape of each defect. The analysis has been made from both sides of the panel, and the defect has been drawn by overlaying the results obtained from the two B-scans acquired from the two sides.

2 Description of Ultrasonic system and Laser Lay-out

The laser system used in this work, shown in Fig. 1, consists of:
- an IR Nd:YAG pulsed laser at 100 mJ for the ultrasonic wave generation, with a convergent lens to focus the laser on the panel surface;
- a CW laser interferometer at 1 W and 532 nm as receiver system for the out-of-plane displacement measurement;
- a motorized linear micro-slide.
The pulsed laser generates wideband ultrasonic waves by nanosecond pulses in the ablation regime. The pulsed laser produces a trigger signal to start the out-of-plane surface displacement measurement by means of the laser receiver. A time-varying analog signal, proportional to the instantaneous nanometric displacement, is produced by the laser receiver by means of a multi-channel random quadrature interferometer set-up [16, 17]. A compromise between laser energy and material surface has
to be found, since the laser transmission is more efficient on opaque surfaces, whereas the laser receiver works better on shiny surfaces.
The relatively high cost of the equipment hardware, mostly due to the laser receiver, can limit the market uptake of this system, although the method is quite sensitive in the detection of micro-defects [1, 2] and in the defect drawing.

Fig. 1. Laser system: (A) IR Nd:YAG pulsed laser for ultrasonic wave generation; (B) CW laser
interferometer receiver; (C) motorized linear micro-slide; (D) sample.

In the application here considered, the wave generation and detection are made on
the same side of a panel of thickness T at a defined distance D (see Fig. 2). The la-
ser source generates longitudinal, shear and surface waves in the ablation regime
whose angular dependence is reported in ref. [5].

Fig. 2. Cross-section of a plate with indication of laser source, laser receiver and wave paths in
presence of a defect.

As shown in Fig. 2, the longitudinal wave, L-wave, travels just below the surface of the plate; the longitudinal back-wall wave, LL-wave, reflected by the opposite plate surface in accordance with Snell's law, travels with an orientation θ that depends on the distance D. The LR-wave path is also shown in Fig. 2; the LR-wave is the longitudinal wave reflected by a point of the defect boundary. Since the longitudinal wave velocity vL is about twice the shear and surface wave velocities [18], and considering that the L-wave path (length D) is the shortest between the generation point and the receiver point, the L-wave is the first that reaches the receiver. To obtain a good laser system layout and optimize the laser wave acquisition, θ = 45° has been chosen [15]; then, the distance D between the two lasers is
2T and the LL-wave path is D√2 long. By using this laser layout, the LR-waves with a path no longer than D√2 can be easily stored and elaborated.
For each wave acquisition, the first LR-wave recorded by the laser receiver is the one with the shortest path; that is, the reflection of the longitudinal wave takes place at the nearest defect boundary point.
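The D√2 figure follows from elementary geometry: with θ = 45° each leg of the back-wall path crosses the thickness T at 45°, so:

```latex
% Back-wall (LL-wave) path length for \theta = 45^\circ:
D = 2\,T\tan 45^\circ = 2T, \qquad
L_{LL} = 2\,\frac{T}{\cos 45^\circ} = 2T\sqrt{2} = D\sqrt{2}.
```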
To better define the dimension and the position of the defect, the analysis has been
made using the data obtained by scanning both sides of the plate. To maintain the
same reference for both scans, the left edge of the plate has been considered as the
zero position for both sides, as shown in Fig. 3.

Fig. 3. Cross-section of the plate and the laser lay-out for the front and rear scan with indication
of reference position.

The plate used in the experiments is a T = 10 mm thick aluminum plate. The distance between the lasers has been set at D = 20 mm. To simulate a defect, a circular hole with d = 2.5 mm diameter was drilled in the plate; the position of the centre of the hole is xg = 185 mm, yg = 2.5 mm.

3 B-Scan acquisition and analysis

The B-scan map is shown in false colours to highlight the peak-to-peak amplitude of the signals. Similarly to the case of the waves diffracted by the crack tip [7, 8, 14], the perturbation shows a parabola-like pattern in the B-scan. In general, the shape of the defect affects the shape of the perturbation; therefore, by analyzing the latter, the defect characteristics can be determined.
Figure 4 shows the B-scan image of the front panel scan made from xG = 157 to 191 mm (for a total of 34 mm) at a 0.1 mm scan step. The figure also reports the indications of the wave-fronts relative to the LL-wave, LR-wave and L-wave.
Although the interruption of the LL-wave visible on the B-scan can be effectively used to determine the position and size of the defect [15], in the present paper only the LR-wave has been considered. The procedure to determine the position and the shape of the defect consists of:
- determining the times of arrival of the LR-wave for each step;
- knowing the longitudinal wave velocity vL, determining the length of the path covered by the LR-wave;
- determining the points of the plate section that can be "reflection points";
- determining the points of the plate section not contained in any defect.


Fig. 4. B-Scan image of the front panel scan with wave-front indications.

3.1 Determination of the LR-wave times of arrival and of the path length

The acquisition of the time of arrival is not simple due to the overlapping of the LR-wave and the L-wave in the central part of the B-scan. To highlight the presence of the LR-wave, a "subtraction" of the waves acquired in absence of defects has been operated. In particular, a simulated B-scan, built by considering the average of the signals in the external parts of the B-scan (absence of perturbations), has been subtracted from the acquired B-scan. The resulting map is shown in Fig. 5. The L-wave has been "removed" in Fig. 5, so that the presence of the LR-wave is clearer and the determination of the time of arrival, by picking the points directly on the map, is simpler. To acquire the time of arrival of the LR-wave, a suitable number of points has been picked (more points were picked where the wave-front in the B-scan is curved) and an interpolation by a cubic spline has been operated to obtain a continuous curve of the times of flight. The map of Fig. 5 is shown with the indication of the selected points and of the interpolation curve.
Knowing the velocity vL of the longitudinal wave, the length of the path LL(xG) covered by the LR-wave can be determined [15].
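A sketch of this step (SciPy cubic spline; the picked points and the nominal aluminium velocity are illustrative, not the measured data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative picked points: generation abscissa x_G (mm) vs. LR-wave
# time of arrival (microseconds). NOT the experimental values.
x_picked = np.array([160.0, 170.0, 180.0, 185.0, 190.0])
t_picked = np.array([7.2, 6.4, 5.9, 5.8, 6.1])

toa = CubicSpline(x_picked, t_picked)  # continuous curve of times of flight

v_L = 6.32  # nominal longitudinal wave velocity in aluminium, mm/us

def path_length(x_g):
    """Length L_L(x_G) covered by the LR-wave at scan position x_g."""
    return v_L * toa(x_g)

print(path_length(182.5))
```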
Fig. 5. B-Scan image resulting from the simulated B-Scan subtraction with the indication of the
time of arrival points and the relative interpolation curve.

3.2 Determination of the reflection points

As seen in Fig. 2, the path of the LR-wave is composed of two parts: the first goes from the generation point to the reflection point, the second from this point to the receiver. Since the initial direction of the LR-wave and the defect shape and position are unknown, the lengths of the two parts of the path cannot be determined separately. Under these conditions, all the points for which the sum of the distances from two fixed points (the generation and receiver points) is LL(xG) could be reflection points. In particular, all the points on the semi-ellipse having the generation and receiver points as foci, and the sum of the distances from them equal to LL(xG), could be reflection points [15]. Since the generation and receiver points are on the x-axis (Fig. 2), the generic equation of the ellipse containing the possible reflection points is:
on the x-axis (fig. 2), the generic equation of the ellipse having the possible reflec-
tion points is:

$$\frac{(x - x_c)^2}{a^2} + \frac{y^2}{b^2} = 1 \qquad (1)$$

where a and b are, respectively, the major and minor semi-axes of the ellipse,

$$a = \frac{L_L(x_G)}{2}; \qquad b = \sqrt{\left(\frac{L_L(x_G)}{2}\right)^2 - \left(\frac{D}{2}\right)^2} \qquad (2,3)$$

and xc is the abscissa of the center of the ellipse (the central point between the two
foci at distance D). Finally, for each scan step, knowing xG and calculating LL(xG),
the ellipse passing through the reflecting point of the defect can be drawn.
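Equations (1)-(3) translate directly into a small helper that, given the generation and receiver abscissas and the measured path length, returns the ellipse parameters for one scan step. The numeric values in the example are illustrative, not taken from the experiment.

```python
import numpy as np

def ellipse_params(x_G, x_R, L):
    """Semi-axes and centre abscissa of the ellipse of possible
    reflection points (Eqs. 1-3): foci at the generation point x_G
    and at the receiver point x_R on the x-axis, with the sum of the
    distances from the foci equal to the path length L = L_L(x_G)."""
    D = abs(x_R - x_G)                            # distance between the foci
    a = L / 2.0                                   # major semi-axis, Eq. (2)
    b = np.sqrt((L / 2.0) ** 2 - (D / 2.0) ** 2)  # minor semi-axis, Eq. (3)
    x_c = (x_G + x_R) / 2.0                       # centre between the foci
    return a, b, x_c

# Illustrative values in mm (assumed, not from the experiment)
a, b, x_c = ellipse_params(x_G=180.0, x_R=190.0, L=30.0)
```

Note that the expression under the square root is positive whenever L exceeds the focal distance D, which is guaranteed geometrically since the reflected path is always longer than the direct generation-receiver segment.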
B-Scan image analysis for position and shape … 1239

All the points of the section lying inside the ellipse cannot be points of the defect,
because they are closer to the focal points; they can therefore be marked as points
that do not belong to the defect. Starting from a section image with all pixels
colored black and coloring white all the pixels inside the ellipses determined for
each step, the pixels remaining black define the defect.
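The carving procedure just described can be sketched as follows; the image size and ellipse parameters are placeholders, since the pixel calibration of the section image is not given in the text.

```python
import numpy as np

def carve_section(shape, ellipses):
    """Build the virtual section image: start with all pixels black
    (True) and whiten (False) every pixel lying inside one of the
    ellipses; the pixels left black delimit the defect.
    `ellipses` is a list of (a, b, x_c) tuples in pixel units, with
    y measured from the scanned surface (semi-ellipse, y >= 0)."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    defect = np.ones(shape, dtype=bool)
    for a, b, x_c in ellipses:
        inside = (x - x_c) ** 2 / a ** 2 + y ** 2 / b ** 2 < 1.0
        defect &= ~inside
    return defect

# Placeholder geometry: two overlapping ellipses from two scan steps
section = carve_section((64, 128), [(40.0, 35.0, 60.0), (42.0, 36.0, 64.0)])
```

Each scan step removes one elliptical region; as steps accumulate, the intersection of the "outside" regions converges onto the defect boundary facing the scanning line.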
Operating from one side of the panel, only half of the defect can be defined, since
only the boundary points facing the scanning line can be reflection points (see
Fig. 2). Fig. 6 shows the section image after the application of the procedure from
the front side of the panel. Applying the same procedure from the rear side of the
panel, the entire boundary of the defect has been drawn. The result is shown in
Fig. 7.

Fig. 6. Section image after front side scanning.

Fig. 7. Section image after scanning from both sides.

The coordinates xg, yg of the center of gravity of the defect have been determined
from the figure. The results of the analysis are: xg = 184.7 mm (−0.3 mm); yg = 2.3
mm (−0.2 mm). The shape of the defect is almost circular, except for the boundary
points along the central x-axis of the hole. To limit this distortion in the defect
determination, the LL-wave analysis can be used [15], but this aspect is not
considered in this paper.
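Once the binary section image is available, the center of gravity follows as the mean position of the black pixels; a minimal sketch, assuming square pixels of known size:

```python
import numpy as np

def centre_of_gravity(defect, dx=1.0, dy=1.0):
    """Centre of gravity (x_g, y_g) of the black (True) pixels of the
    section image; dx, dy convert pixel indices to mm (assumed known
    from the scan step and sampling rate)."""
    ys, xs = np.nonzero(defect)
    return xs.mean() * dx, ys.mean() * dy
```

With the calibration factors set from the scan geometry, this returns the (xg, yg) pair reported above directly from the carved image.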

4 Conclusions

A new procedure of analysis of the B-scan image obtained by laser UT for NDE
has been presented in this paper. It has been shown that, using an appropriate
layout optimized to detect the longitudinal wave reflected by the opposite side of
the panel, it is possible to define the main characteristics of the defects. In
particular, the position and the shape of the defect can be determined. The analysis
of the perturbation in the B-scan allows determining the shape and the position of
the defect, building a virtual section image. The definition of the shape of the
defect is achieved by measuring the time of arrival of the wave reflected by the
defect from the B-scan, drawing the corresponding ellipses on the section image,
and marking the area inside the ellipses as non-defective. The research requires the
automation of the picking of the wave-front points from the B-scan, although the
results obtained are sufficiently in agreement with the defect created in the
aluminum panel used as a case study. The next step of the research is the
integration of the LL back-wall and LR reflected wave analyses to determine the
size of the defect.

References

1 D. Cerniglia, M. Scafidi, A. Pantano, J. Rudlin, Inspection of additive-manufactured layered
components, Ultrasonics, 2015, 62, 292-298.
2 J. Rudlin, D. Cerniglia, M. Scafidi, Inspection of laser powder deposited layers, Proceedings
of 52nd Annual Conference of the British Institute of Non-Destructive Testing, BINDT 2013,
Telford (UK), September 2013.
3 D. Cerniglia, B.B. Djordjevic, Ultrasonic detection by photo-EMF sensor and by wideband
air-coupled transducer, Research in Nondestructive Evaluation, 2004, 15, 111-117.
4 D. Cerniglia, N. Montinaro, V. Nigrelli, Detection of disbonds in multi-layer structures by
laser-based ultrasonic technique, Journal of Adhesion, 2008, 84(10), 811-829.
5 C.B. Scruby, L. Drain, Laser Ultrasonics: Techniques and Applications, Adam Hilger,
Bristol, 1990.
6 A.N. Sinclair, J. Fortin, B. Shakibi, F. Honarvar, M. Jastrzebski, M.D.C. Moles,
Enhancement of ultrasonic images for sizing of defects by time-of-flight diffraction, NDT&E
International, 2010, 43, 258-264.
7 P.A. Petcher, S. Dixon, A modified Hough transform for removal of direct and reflected
surface waves from B-scans, NDT&E International, 2011, 44, 139-144.
8 P.A. Petcher, S. Dixon, Parabola detection using matched filtering for ultrasound B-scans,
Ultrasonics, 2012, 52, 138-144.
9 T. Merazi-Meksen, M. Boudraa, B. Boudraa, Mathematical morphology for TOFD image
analysis and automatic crack detection, Ultrasonics, 2014, 54, 1642-1648.
10 M.G. Silk, The transfer of ultrasonic energy in the diffraction technique for crack sizing,
Ultrasonics, 1979, 17, 113-121.
11 S.K. Nath, K. Balasubramaniam, C.V. Krishnamurthy, B.H. Narayana, Reliability assessment
of manual ultrasonic time of flight diffraction (TOFD) inspection for complex geometry
components, NDT&E International, 2010, 43, 152-162.
12 S.K. Nath, Effect of variation in signal amplitude and transit time on reliability analysis of
ultrasonic time of flight diffraction characterization of vertical and inclined cracks,
Ultrasonics, 2014, 54, 938-952.
13 T. Ingrassia, V. Nigrelli, Design optimization and analysis of a new rear underrun protective
device for truck, Proceedings of the 8th International Symposium on Tools and Methods of
Competitive Engineering, TMCE 2010, Ancona (Italy), April 2010, 713-725.
14 A. Ferrand, M. Darmon, S. Chatillon, M. Deschamps, Modeling of ray paths of head waves
on irregular interfaces in TOFD inspection for NDE, Ultrasonics, 2014, 54, 1851-1860.
15 M. Scafidi, D. Cerniglia, T. Ingrassia, 2D size, position and shape definition of defects by
B-scan image analysis, Frattura ed Integrità Strutturale, 2015, 9, 622-629.
16 B. Pouet, A. Wartelle, S. Breugnot, Multi-detector receiver for laser ultrasonic measurement
on the run, Nondestructive Testing and Evaluation, 2011, 26(3-4), 253-266.
17 B. Pouet, A. Wartelle, S. Breugnot, Recent progress in multi-channel laser-ultrasonic
receivers, AIP Conference Proceedings, 2012, 1430(31), 259-266.
18 J. Krautkramer, H. Krautkramer, Ultrasonic Testing of Materials, Springer-Verlag Berlin
Heidelberg GmbH, New York, 1977.
Author Index

A Biancolini, Marco Evangelos, 537


Abidin, Shahriman Zainal, 1159 Bici, Michele, 289
Achard, Victor, 501 Biedermann, Anna Maria, 951, 1083
Aguilar, Fernando J., 881 Blanchard, Antoine, 1023
Aguilar, Manuel A., 881 Blanco, José L., 881
Ahmed, Ahmed, 853 Blázquez Parra, E. Beatriz, 941
Aifaoui, Nizar, 81 Bluntzer, Jean-Bernard, 861
Aldanondo, M., 1114 Bonazzi, Enrico, 1013
Alix, Thecle, 111 Bonisoli, Elvio, 665
Ambu, R., 777 Bonnot, Vivien, 313
Andrés Díaz, José R., 697 Borchi, Francesco, 621
Andrisano, Angelo Oreste, 1187 Bosch-Mauchand, M., 101
Anselmetti, Bernard, 993 Brentegani, Andrea, 1131
Anwer, Nabil, 191, 241, 223 Bricogne, Matthieu, 829
Arancón, D., 923 Brino, Marco, 665
Arbelaez, Alejandro, 25 Broggiato, Giovanni B., 289
Argenti, Fabrizio, 621 Brognara, L., 457
Arias, Agustin, 811 Brown, Christopher A., 271
Arroyave-Tobón, Santiago, 1003 Bruno, Fabio, 157, 353
Brun, Xavier, 639
B Brutto, Mauro L.O., 555
Baldini, N., 457 Buonamici, Francesco, 841
Ballu, Alex, 1023, 1053
Barattin, Daniela, 373 C
Barbagallo, R., 167, 611 Cagin, Stéphanie, 45
Barbieri, Loris, 157, 353 Cahuc, Olivier, 647
Barone, Sandro, 91, 405, 437, 415 Calabretta, Michele, 709
Baronio, Gabriele, 905 Calamaz, Madalina, 647
Barrenetxea, Lander, 397 Calì, Michele, 167, 585, 1221, 1211, 675
Belhadj, Imen, 55 Califano, Rosaria, 1197
Belkadi, Farouk, 139, 871 Caligiana, Gianni, 329
Benama, Youssef, 111 Cammarata, A., 71, 611
Benamara, Abdelmajid, 55 Campana, Francesca, 289
Benito-Martín, M.A., 923 Cannizzaro, Luigi, 575
Bernard, Alain, 139, 871 Capizzi, Giacomo, 789
Berselli, Giovanni, 655 Caporaso, T., 479
Betancur, Esteban, 25 Cappetti, Nicola, 799, 1197
Biagini, Massimiliano, 621 Carassai, Stefano, 1187

© Springer International Publishing AG 2017 1241


B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9

Carfagni, Monica, 387, 621, 841 Etxaniz, Olatz, 811


Carniel, Xavier, 687 Eynard, Benoît, 223, 687, 829
Casanova, Juan A., 881
Castillo Rueda, Francisca, 941 F
Cavallaro, Daniela, 709 Fadon, Fernando, 767
Cella, Ubaldo, 537, 547 Fadon, Laida, 767
Cerniglia, Donatella, 1233 Fantini, M., 425, 457
Ceron, Enrique, 767 Fatuzzo, Gabriele, 167, 675, 1211
Ceruti, Alessandro, 727 Favi, Claudio, 63
Chen, Jing-tao, 631 Feno, Mahenina Remiel, 1151
Cherfi, Z., 101 Fernández, Ismael, 881
Chindamo, Daniel, 341 Fernández-Vázquez, Aranzazu, 363, 1083
Chirol, Clément, 501 Fichera, G., 71
Cicconi, Paolo, 1097 Filippi, Stefano, 373, 905
Coco, Salvatore, 789 Fischer, Xavier, 45
Colombo, Giorgio, 1141 Fleche, Damien, 861
Concheri, Gianmaria, 213 Francia, Daniela, 329
Contreras López, Miguel A., 697 Frizziero, Leonardo, 597, 727
Corral-Bobadilla, Marina, 447, 489 Furet, Benoît, 321
Correia, Nuno, 1053 Furferi, Rocco, 819
Coudert, T., 1114 Furini, Francesco, 1141
Cremonini, Marco, 597
Cristovao, Claudia, 1053 G
Cucinotta, Filippo, 91, 547, 509 Gadaleta, Michele, 655
Curto, M., 425 Gadola, Marco, 341
Gallego-Alvarez, Javier, 961
D Gallo, Alessandro, 353
Dagnes, Nicole, 747 García Ceballos, María L., 697
Daidie, Alain, 501, 517 Gardoni, Mickaël, 1167
Daille-Lefevre, Bruno, 1151 Garnier, Sébastien, 321
Darnis, Philippe, 647 Gaudy, Rémy, 853
De Crescenzio, F., 251, 425, 457 Geneste, L., 1114
De La Morena-De La Fuente, Eduardo, 757 Genovesi, Andrea, 655
De Martino, G., 479 Gerbino, Salvatore, 201
De Napoli, Luigi, 35, 353 Germani, Michele, 63, 1097, 1107
de Vaujany, Jean-Pierre, 739 Giallanza, Antonio, 575
Dekhtiar, Jonathan, 829 Giannese, Michele, 437
Delos, Vincent, 1003 Gimena, Faustino, 973, 981
Deneux, Dominique, 891 Gimena, Lázaro, 973, 981
Denoix, Henri, 993 Gitto, J-P., 101
Desfontaines, Vincent, 321 Giurgola, Stefano, 1063
Di Angelo, L., 1033, 1043 Goikoetxea, Nestor, 811
Di Gironimo, G., 479 Gomes, Rui, 1053
Di Ludovico, M., 479 Gómez-Jáuregui, Valentín, 719, 915
Di Napoli, Fiorentino, 1197 Goñi, Mikel, 973, 981
Di Paola, Francesco, 555 Gonzaga, Pedro, 973, 981
Di Stefano, P., 1033, 1043 Gorozika, Jokin, 397
Doutre, Pierre-Thomas, 233 Governi, Lapo, 819
Durupt, Alexandre, 223, 829 Grandvallet, Christelle, 281
Gritti, Giovanni, 341
E Groth, Corrado, 537
Elipe, María, 1083 Guillemot, Mady, 639
Es-Soufi, Widad, 1123 Guingand, Michèle, 739
Etienne, Alain, 1151 Guivarch, I., 101

H Marsot, Jacques, 1151


Hamieh, Ahmed, 891 Martin-Doñate, Cristina, 119, 961
Hassan, Halim, 1159 Martínez-Calvo, María Ángeles, 447, 489, 923,
Hattali, Lamine, 271 1073
Houssin, Rémy, 1167 Martin, Patrick, 1151
Humbert, Gaël, 639 Martorelli, Massimo, 201, 585
Mathieu, Luc, 241
I Matthews, Geoffrey S., 5
Ingrassia, Tommaso, 15, 261, 469, 555, 1233 Mehdi-Souzani, Charyar, 191
Íñiguez-Macedo, Saúl, 447, 489 Mejía-Gutiérrez, Ricardo, 25, 147
Iturrate, Mikel, 397 Meneghello, Roberto, 213
Mercado-Colmenero, Jorge Manuel, 119
K Mimoso, Pedro, 1053
Keimasi, Safa, 241 Minguez, Rikardo, 811
Khatib, Ahmad A.L., 861 Mitrouchev, Peter, 631
Kheder, Maroua, 81 Mohamed, Wan Asri Wan, 1159
Kiritsis, Dimitris, 829 Mollo, Fabrizio, 157
Kirytopoulos, K., 1114 Montes Tubio, Francisco, 941
Moos, Sandro, 747
L Morabito, A.E., 777, 1033, 1043
Lacasa, Enrique, 129 Morais, Fabio, 933
Ladrón de Guevara Muñoz, M. Carmen, 941 Moreno, José C., 881
Lagresle, Charly, 739 Morretton, Elodie, 233
Laheurte, Raynald, 647 Motyl, Barbara, 905
Landi, Daniele, 1097 Mouton, Serge, 1023
Landon, Yann, 313 Muñoz López, Natalia, 951
Lanzotti, Antonio, 201, 479, 1177 Muriana, José Angel Moya, 119
Lapini, Alessandro, 621 Muzzupappa, Maurizio, 157, 353
Larat, Bertrand, 853
Lartigue, Claire, 271, 303 N
Le Duigou, Julien, 223 Naddeo, Alessandro, 799, 1197
Le, Van Thao, 181 Nalbone, L., 469
Leali, Francesco, 1013, 1131 Neri, Paolo, 415, 437
Liverani, Alfredo, 329, 727 Ngo, Thanh-Nghi, 871
Lizcano, Piedad Eliana, 719 Niandou, Halidou, 1023
Lo Sciuto, Grazia, 789 Nigrelli, Vincenzo, 261, 469, 509, 1233
López-Forniés, Ignacio, 363 Noppe, Eric, 687
Lostado-Lorza, Rubén, 447, 489, 1073 Noterman, Didier, 639
Lucchi, F., 251
Lucena-Muñoz, Fermín, 961 O
Oliveri, S.M., 167, 611, 675, 1211
M Olivi, Andrea, 341
Mahdjoub, Morad, 861 Orlandi, Stefano, 341
Maillard, Arnaud, 687 Osorio-Gómez, Gilberto, 25
Maleki, Elaheh, 139 Otero, César, 719, 915
Manchado, Cristina, 719, 915 Othman, Azlan, 1159
Mancuso, Antonio, 15, 527, 555
Mandil, Henri Paris Guillaume, 181 P
Mandolini, Marco, 63 Pailhés, Jérôme, 147
Marannano, Giuseppe, 575 Paladino, Giorgio, 15
Marano, D., 71 Panari, Davide, 1013
Marcolin, Federica, 747 Paoli, Alessandro, 405, 437, 415
Marin, Philippe, 233 Paquet, Elodie, 321

Paramio, Miguel Angel Rubio-, 119 Sanz-Adán, F., 447, 923, 1073
Paredes, Manuel, 501, 565 Sanz-Peña, I., 1073
Patalano, Stanislao, 201 Sanz, Rosana, 129
Pellicciari, Marcello, 655, 1187 Savignano, Roberto, 405
Percuoco, Chiara, 1177 Savio, Fabio Lo, 1221
Pérez Delgado, Belén, 697 Savio, Gianpaolo, 213
Peroni, Mariele, 1131 Savoretti, Andrea, 1107
Perry, Nicolas, 111 Scafidi, Michele, 1233
Peruzzini, Margherita, 655, 1187 Segonds, Stéphane, 313
Peverada, Franco, 341 Seguy, Sébastien, 565
Phan, Nguyen Duy Minh, 303 Sequenzia, G., 167, 611, 675, 1211
Piancastelli, Luca, 597 Serrano Tierz, Ana, 951
Pierre, Laurent, 993 Sfravara, Felice, 91, 547, 509
Piratelli-Filho, Antonio, 191 Shamsuddin, Zafruddin, 1159
Pisciotta, D., 469 Shikler, Raphael, 789
Pitarresi, G., 527 Siadat, Ali, 1151
Ponchet Durupt, A., 101 Silio, Delfin, 767
Porretto, Mario, 575 Sinatra, R., 71
Pourroy, Franck, 233, 281 Sitta, Alessandro, 709
Prati, D., 71 Solaberrieta, Eneko, 397
Prudhomme, Guy, 233, 281 Somovilla-Gómez, Fátima, 447, 489, 1073
Puggelli, Luca, 1063 Speranza, Domenico, 585, 747, 905
Sun, Xiaoguang, 1167
Q Sylla, A., 1114
Qiao, Lihong, 241
Qiu, Donghai, 565 T
Quinsat, Yann, 271, 303 Tahon, Christian, 891
Tarallo, A., 479
R Tartamella, C., 261
Raffaeli, Roberto, 1107 Teissandier, Denis, 1003
Ramos, Francisco J., 881 Thanwerdas, Rémi, 517
Razionale, Armando Viviano, 405, 415, 437 Thomnn, G., 933
Razzoli, Roberto, 655 Tobalina-Baldeon, Daniel, 489
Renaud, Jean, 1167 Tobalina, D., 1073
Renna, Marco, 709 Tornincasa, Stefano, 665, 747
Renzi, Cristina, 1013 Trigui, Moez, 55, 81
Ricotta, V., 261, 469 Trinca, G.B., 527
Ríos-Zapata, David, 147 Tumino, D., 527
Ritou, Mathieu, 321 Tu Pham, Minh, 639
Rizzuti, Sergio, 35
Rodriguez, Emmanuel, 517 U
Rojas-Sola, José Ignacio, 757 Uberti, Stefano, 341
Romano, Matteo, 341 Uccheddu, Francesca, 387, 819
Rossoni, Marco, 1141
Roucoules, Lionel, 853, 1123 V
Rouetbi, Oussama, 993 Vallone, Mariarosaria, 1197
Rowson, Harvey, 829 Vanacore, Amalia, 1177
Russo, Anna Costanza, 1097 Vareilles, E., 1114
Vergnano, Alberto, 1013, 1131
S Vezzetti, Enrico, 747
Sagot, Jean-Claude, 861 Vignat, Frédéric, 233, 281
Samsudin, Zaidi, 1159 Villa, Valerio, 905
Santamaría-Peña, J., 923, 1073 Violante, Maria Grazia, 747
Santolaya, José Luis, 129 Vitolo, Ferdinando, 201

Volpe, Yary, 819, 841, 1063 Y


Vo, Thanh Hoang, 233 Yahia, Esma, 1123
Yan, Xingyu, 1023
W Yousfi, Wadii, 647
Wang, Cheng-gang, 631
Werba, Christine, 933 Z
Zhang, Yicha, 139
X Zhu, Zuowei, 241
Xiao, Jinhua, 223 Zuazo, Inaki, 811
