
BOOK OF ABSTRACTS

ESCO 2018
6th European Seminar on Computing
Pilsen, Czech Republic
June 3 - 8, 2018
This is a publication of the University of West Bohemia (Pilsen, Czech Republic).
ESCO 2018
6th European Seminar on Computing

Editors: Pavel Solin (University of Nevada)
Pavel Karban (University of West Bohemia)
Jaroslav Kruis (Czech Technical University)
Publisher: University of West Bohemia
Univerzitní 8, 306 14 Plzeň
Czech Republic
Year: 2018

Contact Information
Mailing address:
ESCO 2018 Conference
FEMhub Inc.
2295 Titleist Ct
Reno, NV 89523
U.S.A.
E-mail: conference@esco2018.femhub.com
Web page: http://esco2018.femhub.com/
Phone: 1-775-848-7892

GDPR
You might be photographed during the conference. The photographs will be shared only with
conference participants. If you have any questions or concerns, please contact the organizing
committee.


Main Thematic Areas


Computational electromagnetics and coupled problems; Fluid-structure interaction and
multiphase flows; Computational chemistry and quantum physics; Computational civil and
structural engineering; Computational biology and bioinformatics; Computational geometry
and topology; Hydrology and porous media flows; Wave propagation and acoustics; Climate
and weather modeling; Petascale and exascale computing; GPU and cloud computing;
Uncertainty quantification; Open source software.

Application Areas
Theoretical results as well as applications are welcome. Application areas include, but are
not limited to: Computational electromagnetics, Civil engineering, Nuclear engineering,
Mechanical engineering, Nonlinear dynamics, Fluid dynamics, Climate and weather modeling,
Computational ecology, Wave propagation, Acoustics, Geophysics, Geomechanics and rock
mechanics, Hydrology, Subsurface modeling, Biomechanics, Bioinformatics, Computational
chemistry, Stochastic differential equations, Uncertainty quantification, and others.

Scientific Committee
– Valmor de Almeida (Oak Ridge National Laboratory, Oak Ridge, USA)
– Zdenek Bittnar (Faculty of Civil Engineering, CTU Prague)
– Alain Bossavit (Laboratoire de Genie Electrique de Paris, France)
– John Butcher (Auckland University, New Zealand)
– Antonio DiCarlo (University Roma Tre, Rome, Italy)
– Ivo Dolezel (Czech Technical University, Prague, Czech Republic)
– José Luis Galán García (Universidad de Málaga, Spain)
– Stefano Giani (University of Nottingham, UK)
– Mahdi Homayouni (Azad University, Isfahan, Iran)
– Pavel Karban (University of West Bohemia, Pilsen, Czech Republic)
– Tzanio Kolev (Lawrence Livermore National Laboratory, USA)
– Darko Koracin (Desert Research Institute, Reno, USA)
– Dmitri Kuzmin (University of Erlangen-Nuremberg, Germany)
– Stephane Lanteri (INRIA, Sophia-Antipolis, France)
– Shengtai Li (Los Alamos National Laboratory, Los Alamos, USA)
– Alberto Paoluzzi (University Roma Tre, Rome, Italy)
– Francesca Rapetti (University of Nice, France)
– Eugenio Roanes-Lozano (Universidad Complutense de Madrid, Spain)
– Sascha Schnepp (ETH Zurich, Switzerland)
– Irina Tezaur (Sandia National Laboratories, USA)
– Stefan Turek (Technical University of Dortmund, Germany)
Organizing Committee
– Pavel Solin (University of Nevada, Reno, Prague)
– Pavel Karban (University of West Bohemia, Pilsen)
– José Luis Galán García (Universidad de Málaga, Spain)
– Petr Lukas (Charles University, Prague)
– Jaroslav Kruis (Czech Technical University, Prague)
– Eugenio Roanes-Lozano (Universidad Complutense de Madrid, Spain)

Table of Contents

I Keynote Lectures

Efficient Discretizations for Exascale Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21


Tzanio Kolev

Heterogeneous Computing – It’s Here to Stay, and Your Science Will Depend on It . . . 22
Rob Neely

Computation of Distributions of Samples Taken at Random Events . . . . . . . . . . . . . . . . . 23


Krzysztof Podgorski

Computational Modelling Using Outer Loop Techniques, With Applications to


Bio-Mechanics and Fracture Mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
John Whiteman

Adaptive Surrogate Construction in Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26


Barbara Wohlmuth

II Abstracts
A High-Order Elliptic PDE Based Level Set Reinitialisation Method Using a
Discontinuous Galerkin Discretisation With Applications to Topology Optimisation . . 29
Thomas Adams

Optimizing Microwave Bandgap Filter Structures by Minimizing Radiation-Loss . . . . . 30


Jorge Aguilar Torrentera

A Fast Functional Approach to Personalized Menus Generation Using Set Operations . 31


Eugenio Roanes-Lozano and Gabriel Aguilera-Venegas

A Class Experience of Statistics for the Degree of Health Engineering . . . . . . . . . . . . . . . 32


Gabriel Aguilera-Venegas

The Effect of the Seams on a Baseball. Simulations and a Mathematical Model for
the Lift and Lateral Forces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Mario Alberto Aguirre López

On Continuous Inkjet Systems: A Printer Driver for Expiry Date Labels on


Cylindrical Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Mario Alberto Aguirre López

Finite Difference Solutions of 1D Magnetohydrodynamic Channel Flow With


Slipping Walls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Sinem Arslan
Antarctic Ice Shelf-ocean Interactions in High-resolution, Global Simulations Using
the Energy Exascale Earth System Model (E3SM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Xylar Asay-Davis

Server Information Mechanism in a Discrete-time Queueing System . . . . . . . . . . . . . . . . 37


Ivan Atencia-Mckillop

On Univalent Harmonic Mappings With Analytic Parts Starlike of Some Order . . . . . . 38


F. M. Sakar

Finite Element Approximation of an Elliptic Problem With a Nonlinear Newton


Boundary Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Miloslav Feistauer and Ondrej Bartos

Optimizing the Fractionation of Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40


Nourredine Ben Tahar

Optimization of Coke Production From Algerian Oil Residues . . . . . . . . . . . . . . . . . . . . . 41


Nourredine Ben Tahar

DEM Calibration Procedure for Bulk Physical Tests: A Case Study Using the
Casagrande Shear Box Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Ben Turkia Salma

A Discontinuous Galerkin Hp-adaptive Finite Element Method for Accurate Brittle


Crack Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Robert Bird

GPU Solver for SPD Matrices vs. Traditional Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44


Jan Bohacek

The Rational SPDE Approach for Gaussian Random Fields With General Smoothness 45
David Bolin

A Generalized Muffin Tin Augmented (Plane)Wave Method . . . . . . . . . . . . . . . . . . . . . . . 46


Moritz Braun

An IMFES Formulation for the 2D Three-phase Black-oil Equations . . . . . . . . . . . . . . . . 47


Saúl E. Buitrago Boret

Stochastic Simulation Modeling of Quality Assessment, a Forecast Approach in


Maquila Industry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
José Roberto Cantú-González

Simulation of Structural Applications and Sheet Metal Forming Processes Based on


Quadratic Solid-Shell Elements With Explicit Dynamic Formulation . . . . . . . . . . . . . . . . 49
Hocine Chalal

A FEniCS-based Solver to Predict Time-dependent Incompressible Flows . . . . . . . . . . . 50


Alexander Churbanov

An Explicit Algorithm for the Simulation of Multiphase Fluid Flow in the Subsurface 51
Natalia Churbanova

Using a Hybrid Mixing in Fixed-Point Self-Consistent Iterations to Accelerate


Electronic Structure Calculations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Matyáš Novák

Improving the Representation of Solid Ice Mass Flux From Ice Sheets to Ocean in
the Energy Exascale Earth System Model (E3SM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Darin Comeau

Heat Modelling Methods Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54


Miklós Csizmadia

An Iterative Phase-Space Explicit Discontinuous Galerkin Method for Stellar


Radiative Transfer in Extended Atmospheres . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Valmor F. de Almeida

Quadratic Raviart-Thomas Potentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56


Eduardo De Los Santos

A Performance-Portable C++ Implementation of Atmospheric Dynamics . . . . . . . . . . . 57


Andy Salinger and Michael Deakin

On the Distribution of Real Roots in Quadratic and Cubic Random Polynomials: A


Theoretical Numerical Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Irene Sarahi Del Real Vargas

Growth of Crystals in Adiabatic Crystallizers Depending on the Characteristics of


the Raw Material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Carlos Destenave

A Shared-Memory Parallel Multi-Mesh Fast Marching Method for Full and Narrow
Band Re-Distancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Georgios Diamantopoulos

Robust Discrete Laplacians . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61


Antonio DiCarlo

T-H-M-C Modelling of Geothermal Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62


Alain Dimier

Enhancement of the Localization Precision of RTLS Used in the Intelligent


Transportation System in Suburban Areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Marzena Banach

Goal-oriented Anisotropic Mesh Adaptation Method for Linear
Convection-diffusion-reaction Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Vit Dolejsi

Coupling Ultrasonic Wave Propagation With Fluid-Structure Interaction Problem . . . . 65
Bhuiyan Shameem Mahmood Ebna Hai

Non-uniform Grid in Preisach Model for Soft Magnetic Materials . . . . . . . . . . . . . . . . . . 66


Jakub Eichler

Performance Engineering for Tall & Skinny Matrix Multiplication Kernels on GPUs . . 67
Dominik Ernst

Effects of Coupling on Firing Patterns in Thermally Sensitive Neurons . . . . . . . . . . . . . . 68


Gerardo Escalera Santos

A Comparative Study Between D2Q5 and D2Q9 Lattice Boltzmann Scheme for
Mass Transport Phenomena in Porous Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Mayken Espinoza-Andaluz

A Permeability Correlation for a Medium Generated With Delaunay Tessellation


and Voronoi Algorithm by Using OpenPNM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Mayken Espinoza-Andaluz

Optimal Control for the MHD Flow and Heat Transfer With Variable Viscosity in a
Square Duct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Cansu Evcin

Computational Synthesis of Artificial Neural Networks Based on Partial Magma Rings 72


Raul M. Falcón

Numerical Simulation of Two-Phase Flow by the FE, DG and Level Set Methods . . . . 73
Miloslav Feistauer

Numerical Simulation of Flows Through a Radial Turbine . . . . . . . . . . . . . . . . . . . . . . . . . 74


Jiří Fürst

Introducing Probabilistic Cellular Automata. A Versatile Extension of Game of Life . . 75


Gabriel Aguilera-Venegas and José Luis Galán-García

A Discontinuous Galerkin Method for Solving Elliptic Eigenvalue Problems on


Polygonal Meshes With Hp-adaptivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Stefano Giani

Multiscale Hybrid-Mixed Method for the Simulation of Nanoscale Light-Matter


Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Alexis Gobé

Industrial Particle Simulations Using the Discrete Element Method on the GPU . . . . . 78
Nicolin Govender

On the Numerical Solution of Non-Equilibrium Condensation of Steam in Nozzles


and Cascades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Jan Halama

Acceleration of Stochastic Boundary Inverse Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Jan Havelka

Modelling of Chemical Ageing and Fatigue in Rubber and Identification of Parameters 81


Jan Heczko

Coupling of Algebraic Model of Bypass Transition With EARSM Model of Turbulence 82


Jiří Holman

HDG Method for the 3d Frequency-Domain Maxwell’s Equations With Application


to Nanophotonics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Mostafa Javadzadeh Moghtader

A New Mixed Stochastic-Deterministic Simulation Approach for Particle Populations


in Fluid Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Volker John

Comparative Cancer Genomics via Multiresolution Network Models . . . . . . . . . . . . . . . . 85


Rebecka Jörnsten

Towards a Monolithic Discrete Element and Multi-physics Solver Utilising the GPU . . 86
Johannes Joubert

Application of an Intelligent Control on Economics Dynamic System Described by


Differential Algebraic Equation as a New Management Strategy . . . . . . . . . . . . . . . . . . . . 87
Raymundo Juarez-Del-Toro

Application of an Intelligent Control on Economics Dynamic System Described by


Ordinary Differential Equation as a New Management Strategy . . . . . . . . . . . . . . . . . . . . 88
Raymundo Juarez-Del-Toro

Restricted Boltzmann Machine for Binary Patterns Aggregation for Image Object
Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Rafal Kapela

Asphalt Pavement Surface Objects Detection - Denoising Concept . . . . . . . . . . . . . . . . . 90


Rafal Kapela

Induction Brazing Process Control Using Reduced Order Model . . . . . . . . . . . . . . . . . . . . 91


David Pánek and Pavel Karban

A Look at the Challenges Of, and Some Solutions To, Evaluating Next-generation
Earth System Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Joseph H. Kennedy

Numerical Modelling of Newtonian Fluids in Bypass Tube . . . . . . . . . . . . . . . . . . . . . . . . . 93


Radka Keslerova

A Pseudo Cell Approach for Hanging Nodes in Unstructured Meshes . . . . . . . . . . . . . . . 94


Margrit Klitz

Sensor Failure Detection in Self-testing Navigation System . . . . . . . . . . . . . . . . . . . . . . . . . 95
Krzysztof Kolanowski

Distributed Implicit Discontinuous Galerkin MHD Solver . . . . . . . . . . . . . . . . . . . . . . . . . 96


Lukas Korous

Applicability and Comparison of Surrogate Techniques for Modeling Selected


Heating Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Vaclav Kotlan

Time Integration of Hydro-Mechanical Model for Bentonites . . . . . . . . . . . . . . . . . . . . . . . 98


Tomáš Koudelka

A Self-calibrating Method for Heavy Tailed Data Modeling. Application in


Neuroscience and Finance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Marie Kratz

Efficient Assembly of BEM Matrices Using ACA on Distributed Systems . . . . . . . . . . . . 100


Michal Kravčenko

Quick Method of Creation of 3D Treatment Volume Margin. . . . . . . . . . . . . . . . . . . . . . . 101


Zuzanna Krawczyk

CT Data Segmentation Based on Reference Skeleton Model . . . . . . . . . . . . . . . . . . . . . . . 102


Zuzanna Krawczyk

Homogenization of Masonry and Heterogeneous Materials on PC Clusters . . . . . . . . . . . 103


Tomáš Krejčí

Electro-Mechanical FEM Simulations With “General Motion” . . . . . . . . . . . . . . . . . . . . . . 104


Fritz Kretzschmar

AR/VR at Google . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105


Vladimir Krneta

Loss Approximation in Induction Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106


Miklós Kuczmann

Temperature and Frequency Dependence of Hysteresis Characteristics . . . . . . . . . . . . . . 107


Miklós Kuczmann

Multiplicative Schwarz Method for Asynchronous Temporal Integration of Governing


Equations for Transport Processes in Porous Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Michal Kuraz

Improving Algorithms for Particle Simulation on Modern GPUs . . . . . . . . . . . . . . . . . . . 109


Hermann Kureck

eXtended Particle System (XPS) - High-Performance Particle Simulation . . . . . . . . . . . 110


Hermann Kureck

Cuts for 3D Magnetic Scalar Potentials: Visualizing Unintuitive Surfaces Arising
From Trivial Knots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Alex Stockrahm and Valtteri Lahtinen

A Novel Weighted Likelihood Estimation With Empirical Bayes Flavor . . . . . . . . . . . . . 112


Tomasz Kozubowski

Fractional Modeling of Anomalous Diffusion in Plant Cells . . . . . . . . . . . . . . . . . . . . . . . . . 113


Raúl Lamadrid Chico

On Multi-channel Stochastic Networks With Markov Control . . . . . . . . . . . . . . . . . . . . . . 114


Hanna Livinska

Air Pollution Estimation Based on the Intensity of Received Signal in 3G/4G Mobile
Terminal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Grazia Lo Sciuto

Optimization of Error Indicators for SOLD Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116


Petr Lukas

Modelling of Large Deforming Fluid Saturated Porous Media Using Homogenization


Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Vladimír Lukeš

On the Robustness of Hierarchical Bayesian Models for Uncertainty Quantification


in Inverse Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Aaron Luttman

Analysis and Optimisation of Inductive-Based Wireless Power Charger . . . . . . . . . . . . . . 119


Dániel Marcsa

Quantifying the Skill of Sea Ice Simulations in Earth System Models Using a
Variational ICESat-2 Emulator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Andrew Roberts and Wieslaw Maslowski

Autoscaling Localized Reduced Basis Methods With pyMOR and EXA-DUNE . . . . . . 121
René Milk

Analysis for Carbon Combustion and Energy Loss by Chemical Reactions Into a
Rotatory Cement Kiln by CFD: A Study Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Marco Antonio Merino-Rodarte

Comparing Hysteresis Characteristics Using Finite Element Method (FEM) With


Different Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Zoltan Nemeth

Performance Predictions for Storm-Resolving Simulations of the Climate System . . . . . 124


Philipp Neumann

High-order Wave-based Laser Absorption Algorithm for Hydrodynamic Codes . . . . . . . 125
Jan Nikl
Tuning the Electronic and Magnetic Properties of ReS2 by Lanthanide Dopants
Ions: A First Principles Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Kingsley Onyebuchi Obodo
A Computational Approach to Confidence Intervals and Testing for Generalized
Pareto Tail Index Based on Greenwood Statistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Anna Panorska
Sensitivity Analysis of Droplets Distribution to Test Conditions in Wind-tunnel
Icing Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Giulio Gori and Gianluca Parma
Ice Sheet Initialization and Uncertainty Quantification of Sea-level Rise . . . . . . . . . . . . . 129
John Jakeman and Mauro Perego
DEM GPU Simulations With Convex and Non-convex Particles for Railway Ballasts . 130
Patrick Pizette
Using Deep Learning for Detection and Correction of Speech Based on Human Emotions . 131
Mateusz Póltorak and Janusz Pochmara
Spatial Wave Size for Gaussian Random Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Krzysztof Podgorski
Geometric Multigrid Methods for Darcy–Forchheimer Flow in Fractured Porous Media 133
Andrés Arrarás and Laura Portero
An Anisotropic Adaptive, Particle Level Set Method for Moving Interfaces . . . . . . . . . . 134
Juan Luis Prieto
Improving Data Imputation for High Dimensional Datasets . . . . . . . . . . . . . . . . . . . . . . . . 135
Neta Rabin
Anisotropic Goal-oriented Error Estimates for HDG Schemes . . . . . . . . . . . . . . . . . . . . . . 136
Ajay Mandyam Rangarajan
Two Approaches for the Potential/field Problem With High Order Whitney Forms
and New Degrees of Freedom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Francesca Rapetti
Mechanical Modeling of Edema Formation Applied to Bacterial Myocarditis . . . . . . . . . 138
Ruy Freitas Reis
Impact of Vegetation on Dustiness Produced by Surface Coal Mine in North Bohemia. 139
Hynek Řezníček
A Knowledge-Based System for DC Railway Electrification Verification . . . . . . . . . . . . . 140
Eugenio Roanes-Lozano

A Brief Reflection About Iterative Sentences and Arithmetic . . . . . . . . . . . . . . . . . . . . . . 141
Eugenio Roanes-Lozano

Using Extensions of the Residue Theorem for Improper Integrals Computations


With CAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
José L. Galán-García and Pedro Rodríguez-Cielos

Reiterated Homogenization of Flows in Deforming Double Porosity Media . . . . . . . . . . . 143


Eduard Rohan

Multi-model Approach for Rotor Dynamics in Helicopter and Wind Turbine Simulation 145
Melven Röhrig-Zöllner

Dimensionality Reduction of Hybrid Rocket Fuel Combustion Data . . . . . . . . . . . . . . . . . 146


Alexander Rüttgers

Machine Learning Techniques for Global Sensitivity Analysis in Earth System Models 147
Cosmin Safta

On Initial Coefficient Inequalities for New Subclasses of Meromorphic Bi-univalent


Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
F. Müge Sakar

Concatenation Operator for Piecewise-defined Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 149


José Alfredo Sánchez de León

Accelerating Multivariate Simulation Using Graphical Processing Units With


Applications to RNA-seq Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
A. Grant Schissler

Phase-field Formulation of Brittle Damage With Application on Laminated Glass


Beams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Jaroslav Schmidt

Polyharmonic Splines Generated by Multivariate Smooth Interpolation . . . . . . . . . . . . . 152


Karel Segeth

DRBEM Solution to MHD Flow in Ducts With Thin Slipping Side Walls and
Separated by Conducting Thick Hartmann Walls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
P. Senel

Simulating Complex Shaped Particles With DEM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154


Eva Siegmann

TiGL, an Open Source Computational Geometry Library for Parametric Aircraft


Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Martin Siggel

Statistical Test for Fractional Brownian Motion Based on Detrending Moving
Average Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Grzegorz Sikora

A Novel Approach for Detecting Unexpected Model Results in E3SM Climate Model . 158
Balwinder Singh

Calculation of Linear Induction Motor Features by Detailed Equivalent Circuit


Method, Taking Into Account Non-linear Electromagnetic and Thermal Properties 159
Fedor Sarapulov

Numerical Analysis of Induction Heating by 3D Modeling . . . . . . . . . . . . . . . . . . . . . . . . . 160


Vaclav Kotlan

Industry 4.0 Requires Education 4.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161


Pavel Solin

Generalized Tellegen’s Theorem and Its Applications in System Theory . . . . . . . . . . . . . 162


Milan Stork

Numerical Solution of Nonlinear Aeroelastic Problems Using Linearized Approach


and Finite Element Approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Petr Sváček

Gradient Methods of Training and Generalization Ability of a Biological Neural


Network Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Aleksandra Świetlicka

Non-intrusive Parameter Identification of Transport Processes . . . . . . . . . . . . . . . . . . . . . 165


Jan Sýkora

Parallel, High Performance, Fuzzy Logic Systems Realized in Hardware . . . . . . . . . . . . . 166


Tomasz Talaśka

Towards Performance-Portability of the Albany/FELIX Land-Ice Solver to New and


Emerging Architectures Using Kokkos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Irina Tezaur

The Jacobi-Davidson Eigensolver on GPU Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168


Jonas Thies

Fast, Flexible Particle Simulations: An Introduction to MercuryDPM . . . . . . . . . . . . . . . 169


Anthony Thornton

Simulation of Chloride Migration in Cracked Concrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170


Pavel Trávníček

Finite Volume Methods for Numerical Simulation of the Discharge Motion Described
by Different Physical Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Jaroslav Fort and David Trdlicka

Constrained Derivative-free Optimization of a Two Shaft Generic Turbofan Engine . . . 172
Anke Tröltzsch

Modelling of Quasistationary Ionic Transport in Fluid Saturated Deformable Porous


Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Jana Turjanicová

A Discrete Element Sea-Ice Model for Climate Applications . . . . . . . . . . . . . . . . . . . . . . . 174


Adrian Turner

Two-level Schemes for Solving Transient Problems of Barotropic Fluid . . . . . . . . . . . . . . 175


Petr Vabishchevich

Estimation of Parameters by Theory of Inverse Problems and Search Metaheuristics


for the Inversion of the Zoeppritz Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Gerardo Alfredo Vargas-Contreras

A Computational CSP Model for Generation Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . 177


Felipe Díaz

Multiobjective Optimisation of a Wave Energy Farm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178


Isabel Villalba and Felipe Díaz

Treatment of Grounding Line Migration for Efficient Paleo-ice Sheet Simulations . . . . . 179
Lina Von Sydow

Adaptive Markov Chain Monte Carlo Methods in Infinite Dimensions . . . . . . . . . . . . . . . 180


Jonas Wallin

MercuryCG - From Discrete Particles to Continuum Fields . . . . . . . . . . . . . . . . . . . . . . . . 181


Thomas Weinhart

Discrete Element Simulation Based Investigation Into Statistical Inference Problems


for SAG Mill Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Daniel N. Wilke

A Geometric Partitioning Scheme for the Direct Parallel Solution of Steady CFD
Problems on Staggered Grids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Sven Baars

How to Recognize Anomalous Diffusion? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184


Agnieszka Wylomanska

Efficient Evaluation of Space-Time Boundary Integral Operators on SIMD Architectures 185


Jan Zapletal

Algorithmic Patterns for H-matrices on Many-core Processors . . . . . . . . . . . . . . . . . . . . . 186


Peter Zaspel

Sensitivity Analysis of a Non-Ideal Expanding Flow to Perturbations of the Design
Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Giulio Gori and Marta Zocca

Part I

Keynote Lectures
Efficient Discretizations for Exascale Applications

Tzanio Kolev
Lawrence Livermore National Laboratory
tzanio@llnl.gov

Abstract

Efficient exploitation of exascale architectures requires rethinking the numerical algorithms
used in many large-scale applications. These architectures favor algorithms that expose
ultra-fine-grain parallelism and maximize the ratio of floating point operations to
energy-intensive data movement. One of the few viable approaches to achieving high efficiency
in the area of PDE discretizations on unstructured grids is to use matrix-free/partially-assembled
high-order finite element methods, since these methods can increase the accuracy and/or
lower the computational time due to reduced data motion.
In this talk I will report on our work in the Center for Efficient Exascale Discretizations
(CEED), a co-design center in the US Exascale Computing Project that is focused on the
development of next-generation discretization software and algorithms to enable a wide range
of finite element applications to run efficiently on future hardware. CEED is a research
partnership involving 30+ computational scientists from two US national labs and five
universities, including members of the Nek5000, MFEM, MAGMA, OCCA and PETSc projects.
Topics of discussion will include recent progress in CEED packages and applications,
new miniapps, benchmarks and API libraries developed in the project, and our efforts in
scalable unstructured adaptive mesh refinement, matrix-free linear solvers and high-order
data analysis and visualization.

References
1. Center for Efficient Exascale Discretizations, http://ceed.exascaleproject.org/.

Heterogeneous Computing – It’s Here to Stay, and Your
Science Will Depend on It

Rob Neely
Lawrence Livermore National Laboratory
neely4@llnl.gov

Abstract

For the past decade, hundreds or even thousands of talks and papers in the HPC world of
scientific computing have introduced their research with a paragraph outlining the pervasive
changes taking place in HPC architectures, and the impact that will have on application
developers. Those authors were right – there is serious change happening in how we approach
HPC unlike anything seen in the past 25 years, driven by power constraints with traditional
CPU-based clusters. But if you were to read that corpus of papers, you might still come
away feeling that confusion abounds in how to approach the problem. We can hopefully all
agree that more powerful HPC enables better science – the ultimate end goal we all seek. This
author has yet to see a compelling argument that, once we attain a certain level of computing,
whether it’s exascale, zettascale, or beyond, we as domain scientists will cease looking
beyond the horizon of what we can currently do, feel satisfied as a community, and
peacefully sit back and declare an end to decades of relentless computing progress. While
much uncertainty still exists in exactly how the next several decades will play out, this talk
will argue that one certainty we can count on is that heterogeneous computing is here to stay,
that you must embrace it, and there’s no time like the present. In this talk, I will lay out the
story of how scientists at Lawrence Livermore National Laboratory came to this conclusion,
the difficulties that it entailed, the strategy we have pursued and continue to refine, and some
initial results from our GPU-based system Sierra that has convinced us that we’re on the
right path. I will conclude with some opinions on where HPC is headed in the next decade,
and how you as application and algorithm developers can and should begin now to prepare
yourself for the inevitable.

References
1. J. R. Neely. Heterogeneous Computing – It’s Here to Stay, and Your Science Will Depend on It. ESCO’18
Keynote Presentation.

22
Computation of Distributions of Samples Taken at Random
Events

Krzysztof Podgorski
Lund University
Krzysztof.Podgorski@stat.lu.se

Abstract
Analyzing statistical distributions of variables observed at random events of a stochastic
process or field is important in many practical problems. As a theoretical problem it has a
long history, starting with the pioneering work of Kac (1943) and Rice (1944-45). Through
further theoretical development, the generalized Rice formula approach has been adopted to
derive long-run distributions of characteristics taken at random events. While the approach,
in principle, is applicable in a quite general setup, even for Gaussian models there are
some critical computational bottlenecks. They arise from integral formulas that involve
high-dimensional multivariate joint distributions, which can be either close to singular or
have a complex correlation structure in high dimensions.
The problems can be best visualized for random surfaces, and the Gaussian sea surface
model can serve as a classical example of empirical context. For example, one can ask about
the statistical distribution of wave sizes, in particular how large waves are distributed or how
steep they are. The dynamical evolution of the shapes and its statistical properties can be
analyzed through random velocities defined on the moving random surface. A method of
measuring three-dimensional spatial wave size can also be introduced, and statistical
distributions of the size characteristics can be derived for Gaussian sea surfaces. All these
distributions lead to integral formulas involving continuous random fields, and their
computation requires an efficient computational approach.
Matters become computationally harder still if the underlying model is non-Gaussian; an
example of a non-Gaussian model is a moving-average process driven by non-Gaussian noise.
Slepian models, which describe the distributional form of a stochastic process observed
at level crossings, are a convenient way of representing the distributions at random events.
A Slepian model can be used for efficient simulation of the behavior of a random process
sampled at level crossings. However, the effective utilization of such a model requires
producing samples, which, especially in the non-Gaussian case, becomes a non-trivial
computational problem. In some important cases, it can be solved by clever conditional
sampling in the spirit of the Gibbs sampler. A practical application is the analysis of
mechanical vehicle responses to a stochastic road surface.
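As a minimal illustration of the kind of computation involved (our sketch, not the author's methods), Rice's formula gives the expected rate at which a stationary zero-mean Gaussian process X with spectral moments λ0 = Var X(t) and λ2 = Var X′(t) upcrosses a level u as (1/2π)·sqrt(λ2/λ0)·exp(−u²/(2λ0)); the snippet below checks this against a simulated random-phase sum of sinusoids (all parameters made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Spectral simulation: an approximately Gaussian stationary process built
# from K random-phase sinusoids; the amplitudes fix the spectral moments.
K = 200
omega = rng.uniform(0.5, 2.0, K)           # angular frequencies
amp = np.full(K, np.sqrt(2.0 / K))         # equal power per component
phi = rng.uniform(0.0, 2.0 * np.pi, K)     # independent random phases

lam0 = np.sum(amp**2) / 2.0                # variance of X  (moment 0)
lam2 = np.sum(amp**2 * omega**2) / 2.0     # variance of X' (moment 2)

t = np.arange(0.0, 10000.0, 0.05)
x = np.zeros_like(t)
for k in range(K):                         # accumulate to keep memory small
    x += amp[k] * np.cos(omega[k] * t + phi[k])

u = 0.5                                    # crossing level
upcross = np.count_nonzero((x[:-1] < u) & (x[1:] >= u))
rate_obs = upcross / (t[-1] - t[0])

# Rice's formula for the expected upcrossing rate of level u.
rate_rice = np.sqrt(lam2 / lam0) / (2.0 * np.pi) * np.exp(-u**2 / (2.0 * lam0))
print(rate_obs, rate_rice)                 # the two rates agree closely
```

The observed and theoretical rates typically agree to within a few percent over this time horizon.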

References
1. M. Kac. On the average number of real roots of a random algebraic equation. Bull. Amer. Math. Soc.
vol. 49, 314-320, 1943.
2. S. O. Rice. The mathematical analysis of random noise. Bell Syst. Tech. J., vol. 23, no. 3, pp. 282–332,
1944, vol. 24, no. 1, pp. 46–156, 1945.

23
Computational Modelling Using Outer Loop Techniques,
With Applications to Bio-Mechanics and Fracture Mechanics

John Whiteman
Institute of Materials and Manufacturing, Brunel University London, UK
john.whiteman@brunel.ac.uk

Abstract

The Grand Challenge Problems facing the world are now so complicated that their successful
treatment using combined computational and experimental techniques often requires the use
of computational schemes of Outer Loop form. By this we mean that the overall computation
has an initial part where traditional numerical methods such as finite elements are employed
in a Primary calculation. This is followed by another Secondary (outer loop) computation,
where results from the primary computations are used in other numerical schemes to solve
an additional part of the overall problem. Typically the outer loop parts can require the solv-
ing of inverse problems or the use of stochastic analyses. We first illustrate the use of outer
loop schemes with an application for the acoustic localisation of coronary artery stenoses.
For contrast we outline a problem to find the tensile strength of glass panels with surface
fractures. Coronary Artery Disease (CAD) due to stenoses (partially blocked coronary
arteries) is a worldwide problem, ideally requiring non-invasive methods for its detection [1].
Our approach exploits the fact that blood flow past a stenosis becomes disturbed, creating
abnormal variations in wall shear stresses that give rise to acoustic waves. These can be
measured on the chest surface with sensors, giving a non-invasive means of CAD diagnosis.
To test the hypothesis we use a cylindrical phantom of tissue mimicking viscoelastic mate-
rial (TMM) to represent the chest. Combined use is then made of computational modelling
and experimentation; the computation treats the primary (forward) problem giving waves
on the phantom surface. With this output mathematical inverse problems are then solved to
locate wave sources within the phantom. In the primary problem, high-order spectral finite
element discretisations in space and high-order discontinuous Galerkin finite element
discretisations in time are used; the linear systems of each time step are decoupled, see [2].
The localisation of the wave sources in the outer loop calculations is done using the Matlab
fminsearch function. All
the computational results are compared with others produced experimentally. In the second
problem we aim to calculate the tensile strength of glass panels. This can be done using
methods of computational mechanics when the panel surfaces are perfect, and similarly us-
ing computational fracture mechanics when the characteristics of surface flaws are known.
Each of the above cases constitutes a primary computation. However, when the
characteristics of the flaws are not known, stochastic methods have to be employed following
the fracture computations to provide the probability of failure of the panel. This last step is
the outer loop calculation.
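The outer-loop localisation step can be illustrated in miniature (our toy, not the authors' code): recover a point-source position from simulated source-sensor distances by Nelder-Mead minimisation, with scipy.optimize.minimize playing the role of Matlab's fminsearch; the sensor layout, source position and noise level below are all made up:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical setup: 8 sensors on a unit circle around the phantom and a
# wave source whose position we recover from noisy source-sensor distances.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
sensors = np.column_stack([np.cos(angles), np.sin(angles)])
true_src = np.array([0.3, -0.2])
dists = np.linalg.norm(sensors - true_src, axis=1) + rng.normal(0.0, 1e-3, 8)

def misfit(p):
    """Least-squares mismatch between predicted and 'measured' distances."""
    return np.sum((np.linalg.norm(sensors - p, axis=1) - dists) ** 2)

# Nelder-Mead is the derivative-free simplex method behind fminsearch.
res = minimize(misfit, x0=np.zeros(2), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-14})
print(res.x)   # close to the true source (0.3, -0.2)
```

With millimetre-scale distance noise the recovered position lands well within one percent of the true source.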

24
References
1. J. Semmlow and K. Rahalkar. Acoustic Detection of Coronary Artery Disease. Annu. Rev. Biomed. Eng.
9 (2007) 449-469.
2. T Werder and K Gerdes and D Schoetzau and C Schwab. hp-discontinuous Galerkin Time Stepping
for Parabolic Problems. Comp. Meth. Appl. Mech. Engrg. 190, (2001) 6685-6708.

25
Adaptive Surrogate Construction in Simulation

Barbara Wohlmuth
Technical University of Munich
wohlmuth@ma.tum.de

Abstract

Surrogate models can significantly reduce the cost of compute-intensive simulations. In this
talk, we discuss several examples ranging from large-scale finite element approximations to
parameter-dependent settings and to stochastic inversion. Introducing surrogates is, in
general, a trade-off between accuracy and cost. In stochastic inversion such errors pollute the
predictions, and in finite element approximations they pollute the overall discretization error.
To gain computational speed-up and efficiency, the different error terms have to be balanced.
Here we propose low-cost error control mechanisms based on hierarchical structures and
adjoint techniques. Starting with low-fidelity surrogates and enhancing the local
approximation properties of the surrogate allows us to construct a series of multi-fidelity
surrogates. Alternatively, we can use the error indicators as a switch between the low- and
high-fidelity models. We focus on two different scenarios, for which we discuss the motivation
and theory as well as some performance aspects. Numerical examples, including a large-scale
Stokes-type simulation and an orthotropic vibroacoustics setting, demonstrate the potential
of surrogates in compute-intensive simulations. In large-scale simulations, a low-cost
surrogate can replace the more computation- and/or communication-intensive PDE operator.
In stochastic inversion, the adaptive strategy allows for accurate predictions under
uncertainty at a much smaller computational cost than uniform refinement.
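The switching idea can be caricatured as follows (our illustration with made-up scalar models and tolerance; the real setting involves PDE operators): a hierarchical error indicator, the difference between two surrogate fidelities, decides pointwise whether the cheap surrogate suffices or the high-fidelity model must be called:

```python
import numpy as np

# High-fidelity "model" (stand-in for an expensive PDE solve) and two
# surrogate fidelities: truncated Taylor polynomials of exp(x).
def high_fidelity(x):
    return np.exp(x)

def surrogate_lo(x):           # low-fidelity surrogate
    return 1.0 + x

def surrogate_mid(x):          # enriched surrogate, next hierarchy level
    return 1.0 + x + 0.5 * x**2

xs = np.linspace(-1.0, 1.0, 21)
tol = 0.05

# Hierarchical error indicator: difference between two surrogate levels.
indicator = np.abs(surrogate_mid(xs) - surrogate_lo(xs))

# Switch: keep the cheap surrogate where the indicator is small,
# fall back to the high-fidelity model elsewhere.
use_hifi = indicator > tol
values = np.where(use_hifi, high_fidelity(xs), surrogate_lo(xs))

exact = high_fidelity(xs)
print(np.sum(use_hifi), np.max(np.abs(values - exact)))
```

Only a fraction of the points trigger the expensive model, yet the overall error stays at the level of the tolerance.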

References
1. S. Mattis and B. Wohlmuth. Goal-oriented adaptive surrogate construction for stochastic inversion.
arXiv:1802.10487, to appear in CMAME.
2. M. Parente and S. Mattis and S. Gupta and C. Deusner and B. Wohlmuth. Efficient parameter
estimation for a methane hydrate model with active subspaces. arXiv:1801.09499.
3. S. Bauer and M. Mohr and U. Rüde and J. Weismüller and M. Wittmann and B. Wohlmuth.
A two-scale approach for efficient on-the-fly operator assembly in massively parallel high performance
multigrid codes. Applied Numerical Mathematics, 122 (2017) 14-38.
4. T. Horger and B. Wohlmuth and L. Wunderlich. Reduced basis isogeometric mortar approxi-
mations for eigenvalue problems in vibroacoustics. Model Reduction of Parametrized Systems, Springer,
91–106, 2017.
5. S. Bauer and D. Drzisga and M. Mohr and U. Rüde and C. Waluga and B. Wohlmuth. A
stencil scaling approach for accelerating matrix-free finite element implementations. arXiv:1709.06793.

26
Part II

Abstracts
A High-Order Elliptic PDE Based Level Set Reinitialisation
Method Using a Discontinuous Galerkin Discretisation With
Applications to Topology Optimisation

Thomas Adams, Stefano Giani, William Coombs


Department of Engineering, Durham University, Lower Mountjoy, South Road, Durham,
DH1 3LE, UK
thomas.d.adams@durham.ac.uk, stefano.giani@durham.ac.uk,
w.m.coombs@durham.ac.uk

Abstract
Level set reinitialisation methods are a group of methods which allow one, at any iteration
during the solution of a level set evolution problem, to rebuild the level set function such that
it becomes a close approximation to a signed distance function, which is often necessary to
ensure stability. Reinitialisation is, however, considered a necessary evil, as it both increases
the computational expense of the problem and can reduce the accuracy of the parent method
through shifts in the position of the zero isocontour of the level set function. The aim of this
work is to advance the level set methodology through the adoption of a discontinuous
Galerkin (DG) discretisation. DG methods have a number of advantages over continuous
Galerkin (CG) finite elements, including trivial parallelisation and hp-adaptivity; one can
therefore reduce the time requirements of expensive problems while achieving high-order
accuracy, which is particularly desirable in the context of a level set reinitialisation method,
as it may suffice to remedy some of the previously stated issues. A number of the preferred
reinitialisation methods do not trivially translate to DG; see, for example, previous work on
the geometric method [1], fast marching methods [2], fast sweeping methods [3], and the
hyperbolic PDE based reinitialisation method [4]. Where these methods have been applied
successfully to discontinuous problems, they have often not been shown to achieve the
desired high-order accuracy. In this work we present a
fully DG level set method, with emphasis on an optimally convergent level set reinitialisation
method based on the solution of an elliptic PDE, a CG solution to which is presented in [5].
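For intuition on the signed distance property targeted by reinitialisation (our generic sketch, not the paper's DG scheme): a signed distance function satisfies the eikonal property |∇φ| = 1, which a distorted level set function with the same zero isocontour does not:

```python
import numpy as np

# Grid and a circular interface of radius 0.5.
n = 101
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.hypot(X, Y)

phi_distorted = r**2 - 0.25    # same zero isocontour, not a distance function
phi_reinit = r - 0.5           # exact signed distance to the circle

def grad_norm(phi, h):
    """Pointwise |grad phi| from central finite differences."""
    gx, gy = np.gradient(phi, h)
    return np.hypot(gx, gy)

h = x[1] - x[0]
# Away from the centre (where the distance function is non-smooth),
# the reinitialised field satisfies |grad phi| ~ 1; the distorted one does not.
mask = r > 0.2
g_re = grad_norm(phi_reinit, h)[mask]
g_di = grad_norm(phi_distorted, h)[mask]
print(np.max(np.abs(g_re - 1.0)), np.max(np.abs(g_di - 1.0)))
```

Both fields change sign across the same circle, but only the reinitialised one keeps unit gradient magnitude, which is what stabilises the parent level set evolution.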

References
1. R. Saye. High-order methods for computing distances to implicitly defined surfaces. Comm. App. Math.
Com. Sc. 9 (2014) 107-141.
2. M. Sussman and M. Hussaini. A Discontinuous Spectral Element Method for the Level Set Equation.
J. Sci. Comput. 19 (2003) 479-500.
3. Y. Zhang and S. Chen and F. Li and H. Zhao and C. Shu. Uniformly Accurate Discontinuous
Galerkin Fast Sweeping Methods for Eikonal Equations. SIAM J. Sci. Comput. 33 (2011) 1873-1896.
4. R. Mousavi. Level Set Method for Simulating the Dynamics of the Fluid-Fluid Interfaces Application of
a Discontinuous Galerkin Method. PhD Thesis, Technische Universität Darmstadt, (2014).
5. C. Basting and D. Kuzmin. A minimization-based finite element formulation for interface-preserving
level set reinitialization. Computing 95 (2013) 13-25.

29
Optimizing Microwave Bandgap Filter Structures by
Minimizing Radiation-Loss

Jorge Aguilar Torrentera, Moisés Hinojosa Rivera, Javier Morales Castillo


Universidad Autónoma de Nuevo León
torrenteraj@yahoo.com, hinojosamoises@yahoo.fr,
tequilaydiamante@yahoo.com.mx

Abstract

Defected Ground Structures (DGS) have become very interesting options for filter implementations
as the bandgap effect ensures a single-pole (Butterworth-type) filter over wide frequencies. A
less known characteristic of DGS is that the slot etched in the ground plane works as a radi-
ator (antenna) giving rise to unwanted energy coupled to nearby components at frequencies
beyond resonance. In this work, for the first time, the approach of minimal radiation-loss is
introduced to develop a new bandgap cell based on the Defected Microstrip Structure (DMS)
concept; a microstrip defect is included in the ground plane to further disturb the shield cur-
rent distribution. The complementarity between the DGS and DMS cells is evaluated in a new
unit structure that maintains the single pole response of DGS while providing better ground-
ing and diminished radiation losses. Full-wave electromagnetic simulations were performed to
investigate the grounding and radiation effects; current density distributions and induced
fields in the etched defects are carefully studied using electromagnetic modeling. Unwanted
couplings are reduced, improving the filter response specified by lumped-circuit models. The
proposed cell scheme was analyzed through measured prototypes, confirming frequency
responses with improved attenuation levels.
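For reference, the "single-pole (Butterworth-type)" response mentioned above is the first-order low-pass magnitude |H(f)| = 1/sqrt(1 + (f/f_c)²); the snippet below (a generic textbook formula with a made-up cutoff frequency, not a DGS model) evaluates it and its −3 dB point:

```python
import numpy as np

def butterworth_mag(f, fc, n=1):
    """Magnitude response of an n-th order Butterworth low-pass filter;
    n = 1 is the single-pole response a DGS bandgap cell approximates."""
    return 1.0 / np.sqrt(1.0 + (f / fc) ** (2 * n))

fc = 2.4e9                        # hypothetical cutoff frequency, Hz
f = np.logspace(8, 11, 301)       # 0.1 GHz .. 100 GHz
H = butterworth_mag(f, fc)

# At f = fc the response is 1/sqrt(2), i.e. the -3 dB point.
h_fc = butterworth_mag(np.array([fc]), fc)[0]
print(20.0 * np.log10(h_fc))      # ~ -3.01 dB
```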

References
1. A. Tirado-Mendez. Improving Frequency Response of Microstrip Filters Using Defected Ground and
Defected Microstrip Structures. Progress in Electromagnetics Research C 13 (2010) 77-90.
2. D. Ahn. A Design of the Low-Pass Filter Using the Novel Microstrip Defected Ground Structure. IEEE
Transactions on Microwave Theory and Techniques 49 (2001) 86-93.
3. A. Tirado-Mendez. A Proposed Defected Microstrip Structure (DMS) Behavior for Reducing Rectan-
gular Patch Antenna Size. Microwave and Optical Technology Letters 43 (2004) 481-484.

30
A Fast Functional Approach to Personalized Menus
Generation Using Set Operations

Eugenio Roanes-Lozano
Instituto de Matemática Interdisciplinar. Depto. de Álgebra, Geometrı́a y Topologı́a.
Universidad Complutense de Madrid, Spain
eroanes@mat.ucm.es

José Luis Galán-Garcı́a, Gabriel Aguilera-Venegas


Depto. de Matemática Aplicada, Universidad de Málaga, Spain
jlgalan@uma.es, gabri@ctima.uma.es

Abstract

The authors developed some time ago a rule-based expert system (RBES) [1] devoted to
preparing personalized menus at restaurants according to allergies, religious constraints, likes
and other diet requirements, as well as product availability. A first version was presented at
the “Applications of Computer Algebra 2015” (ACA’2015) conference [2] and an improved
version at the “5th European Seminar on Computing” (ESCO 2016) [3]. Preparing
personalized menus can be especially important when traveling abroad and facing unknown
dishes in a menu. Some restaurants include icons in their menus regarding their adequateness
for celiacs or for vegetarians and vegans, but this information is not always complete, as it
does not consider, for instance, personal dislikes or uncommon allergies. The tool previously
developed can obtain, using logic deduction, a personalized menu for each customer,
according to the precise recipes of the restaurant and taking into account the data given by
the customer and the ingredients out of stock (if any). Now a new approach has been
followed, using functions and set operations, and the speed has been increased by three
orders of magnitude, allowing huge menus to be dealt with instantly. Both approaches have
been implemented in the computer algebra system Maple and are exemplified using the same
recipes in order to compare their performance.
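The set-operation idea can be sketched in a Python analogue (the actual system is implemented in Maple, and all dish and ingredient names below are made up): a dish is admissible exactly when its ingredient set is disjoint from the union of the customer's forbidden ingredients and the out-of-stock ones:

```python
# Hypothetical recipes: each dish maps to its set of ingredients.
recipes = {
    "omelette": {"egg", "butter", "salt"},
    "salad": {"lettuce", "tomato", "olive oil"},
    "cake": {"flour", "egg", "sugar", "milk"},
}

def personalized_menu(recipes, forbidden, out_of_stock):
    """Return the dishes whose ingredients avoid both the customer's
    forbidden set (allergies, diet, dislikes) and the out-of-stock set."""
    banned = forbidden | out_of_stock
    return {dish for dish, ingredients in recipes.items()
            if not (ingredients & banned)}

menu = personalized_menu(recipes, forbidden={"milk"}, out_of_stock={"lettuce"})
print(sorted(menu))   # ['omelette']
```

Because each dish is a single set intersection test, the whole menu is filtered in one pass, which is the kind of speed-up the set-based reformulation delivers.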

References
1. E. Roanes-Lozano and L. M. Laita and A. Hernando and E. Roanes-Macı́as. An Algebraic
Approach to Rule Based Expert Systems. RACSAM Rev. R. Acad. Cien. Serie A. Mat. 104/1 (2010)
19-40. doi: 10.5052/RACSAM.2010.04.
2. E. Roanes-Lozano and J. L. Galán-Garcı́a and G. Aguilera-Venegas.
Computer Algebra-based RBES personalized menu generator (Abstract).
http://math.unm.edu/ aca/ACA/2015/Nonstandard/RoanesLozano.pdf.
3. E. Roanes–Lozano and J. L. Galán–Garcı́a and G. Aguilera–Venegas. A prototype of a RBES for
personalized menus generation. Appl. Math. Comput. 315 (2017) 615–624. doi: 10.1016/j.amc.2016.12.023.

31
A Class Experience of Statistics for the Degree of Health
Engineering

Gabriel Aguilera-Venegas, José Luis Galán-Garcı́a, M. Ángeles Galán-Garcı́a,
Yolanda Padilla-Domı́nguez, Pedro Rodrı́guez-Cielos
University of Malaga
gabri@ctima.uma.es, jlgalan@uma.es, magalan@ctima.uma.es,
ypadilla@ctima.uma.es, prodriguez@uma.es

Abstract

This talk is about the experience of teaching Statistics for the degree of Health Engineering
over the last four years.
At the beginning of the course, a statistical experiment on meditation is proposed to
the students:
– The first two minutes of every class are spent in a very short meditation.
– The students choose whether a control group is used and, in that case, decide the
proportion of this group. In any case, the students decide the statistical variables and the
statistical hypothesis tests that are going to be studied.
– The data of the variables are taken at the beginning and at the end of the course for every
student.
– The students carry out individual and collective work on regression and on statistical
hypothesis tests comparing the variables before and after meditation, or the control group
vs. the students participating in the meditations.
The motivation of the students in the experiment has been very high. The marks of the
students involved in the experiment are clearly higher than the other students’ marks.
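A natural instance of the before/after hypothesis tests described is a paired t-test; the numpy-only sketch below uses synthetic numbers, not the course's real measurements:

```python
import numpy as np

def paired_t(before, after):
    """Paired t statistic for H0: mean difference = 0, plus degrees of freedom."""
    d = np.asarray(after, float) - np.asarray(before, float)
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Synthetic example: a variable measured on 10 students before and after
# the course (made-up numbers with a clear positive shift).
before = [6.1, 5.8, 7.0, 6.5, 5.9, 6.3, 6.8, 6.0, 6.4, 6.2]
after = [6.9, 6.5, 7.4, 7.1, 6.6, 7.0, 7.3, 6.8, 7.1, 6.9]

t, dof = paired_t(before, after)
print(t, dof)   # clearly positive t: evidence of an increase
```

The statistic is then compared against the t distribution with the given degrees of freedom to obtain a p-value.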

References
1. W. Navidi. Statistics for Engineers and Scientists. McGraw-Hill Education; 4th Edition. (2014).

32
The Effect of the Seams on a Baseball. Simulations and a
Mathematical Model for the Lift and Lateral Forces.

Mario Alberto Aguirre López, Javier Morales Castillo, Francisco Javier Almaguer
Martı́nez
Autonomous University of Nuevo Leon
marioal1906@gmail.com, tequilaydiamante@yahoo.com.mx,
francisco.almaguermrt@uanl.edu.mx

Filiberto Hueyotl Zahuantitla


Cátedra CONACyT-UNACH
filihz@gmail.com

Abstract

The trajectory of a baseball depends on the forces acting on the ball. Within these forces,
those caused by the asymmetry of the seams are so unpredictable that erratic movements are
produced in real trajectories of baseballs with slow or no rotation. Most research on such
effects consists of experimental measurements in wind tunnels. There are no simulations of
the process, and only one phenomenological model explaining those forces has been reported
in the literature. We present an analysis of the surface (lift and lateral) forces produced by
different seam configurations and their connection with the behavior of the ball’s boundary
layers, considering a typical professional baseball that does not spin or spins so slowly that
no Magnus force is produced, at normal air conditions. Numerical tests were carried out by
solving the Navier-Stokes equations with the ZEUS-3D software, using a finite difference
method on a uniform quadratic grid. Surface forces are computed by taking the average of
the pressure along the upper and lower boundary layers. Results are similar to wind tunnel
measurements for different ball velocities, which validates the experimentation. This,
together with the visual information on the boundary layer obtained from the simulations,
permits us to better understand the effect of a single seam, of the set of seams, and of the
interaction between them. In turn, the model for surface forces mentioned above has been
improved from these observations by adding weights to the effect of each seam.
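The surface-force computation described above can be caricatured in two dimensions (our sketch with made-up numbers, not the ZEUS-3D setup): given pressure samples p(θ) on a circular boundary of radius R, the net force per unit length is F = −∮ p n ds, discretised as a sum over equally spaced boundary points:

```python
import numpy as np

def surface_force(p, R):
    """Net force per unit length on a circle of radius R from pressure
    samples p(theta_i) at equally spaced angles: F = -sum p n R dtheta."""
    n = len(p)
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dtheta = 2.0 * np.pi / n
    nx, ny = np.cos(theta), np.sin(theta)   # outward unit normal
    Fx = -np.sum(p * nx) * R * dtheta
    Fy = -np.sum(p * ny) * R * dtheta
    return Fx, Fy

n = 720
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

# Uniform pressure gives no net force; an up/down pressure asymmetry
# (e.g. one boundary layer tripped by a seam) produces lift.
Fx0, Fy0 = surface_force(np.ones(n), R=0.037)
Fx1, Fy1 = surface_force(1.0 + 0.1 * np.sin(theta), R=0.037)
print(Fy0, Fy1)   # ~0 and ~ -0.1 * pi * R
```

The sin θ perturbation integrates to a vertical force of exactly −0.1·π·R per unit length, matching the analytic value of the discretised integral.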

References
1. M.A. Aguirre-López and O. Dı́az-Hernández and F-J. Almaguer and J. Morales-Castillo and
G.J. Escalera Santos. A phenomenological model for the aerodynamics of the knuckleball. Applied
Mathematics and Computation 311 (2017) 58-65.
2. J.P. Borg and M.P. Morrisey. Aerodynamics of the knuckleball pitch: Experimental measurements
on slowly rotating baseballs. American Journal of Physics 82 (2014) 921-927.
3. H. Higuchi and T. Kiura. Aerodynamics of knuckle ball: Flow-structure interaction problem on a
pitched baseball without spin. Journal of Fluids and Structures 32 (2012) 65–77.

33
On Continuous Inkjet Systems: A Printer Driver for Expiry
Date Labels on Cylindrical Surfaces

Mario Alberto Aguirre López, Francisco Javier Almaguer Martı́nez, Javier Morales
Castillo
Autonomous University of Nuevo Leon
marioal1906@gmail.com, francisco.almaguermrt@uanl.edu.mx,
tequilaydiamante@yahoo.com.mx

Orlando Dı́az Hernández, Gerardo Jesús Escalera Santos


Autonomous University of Chiapas
orlandodiaz 22@hotmail.com, gescalera.santos@gmail.com

Abstract

Continuous inkjet systems are commonly used to print expiry date labels for food products.
These systems are designed to print on flat surfaces; however, many food product packages
have a cylindrical shape (e.g. bottled and canned products), which causes an enlargement of
the characters at the ends of the label. In this work, we present an algorithm to correct this
defect by calculating the extra distance that an ink drop travels when the printing surface
approaches an elliptic cylinder. Each charged ink drop is modelled as a solid particle affected
by air drag, Earth’s gravity and the electric field that causes the perturbation of the ink drop
path. The enlargement of the ink drops and the interaction between them are so small that
they can be omitted, and the equations of motion are thus simplified. Numerical results show
the correction of the enlargement mentioned above by varying the electric field along the
width of the label. In addition, the equation and the values of a second electric field, which
corrects the inclination of the printing caused by the way the system operates, are presented.
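The geometric core of the correction, the extra distance a drop travels before reaching a curved rather than flat surface, can be sketched for an elliptic cross-section with semi-axes a (across the label) and b (toward the printhead): measured from the tangent plane at the label centre, the surface recedes by d(x) = b·(1 − sqrt(1 − x²/a²)) at lateral offset x (our simplified geometry with made-up dimensions, not the paper's full model):

```python
import numpy as np

def extra_distance(x, a, b):
    """Extra flight distance at lateral offset x for an elliptic cylinder
    with semi-axes a (lateral) and b (toward the printhead), measured
    from the tangent plane touching the cylinder at x = 0."""
    x = np.asarray(x, float)
    return b * (1.0 - np.sqrt(1.0 - (x / a) ** 2))

# Hypothetical bottle cross-section (metres) and a 20 mm wide label.
a, b = 0.035, 0.030
x = np.linspace(-0.010, 0.010, 11)
d = extra_distance(x, a, b)
print(d)   # zero at the centre, growing toward the label ends
```

Because d grows toward the label ends, compensating the deflection field as a function of x removes the character enlargement there.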

References
1. G.D. Martin and S.D. Hoath and I.M. Hutchings. Inkjet printing - the physics of manipulating
liquid jets and drops. Engineering and Physics-Synergy for Success 105 (2008) 012001.

34
Finite Difference Solutions of 1D Magnetohydrodynamic
Channel Flow With Slipping Walls

Sinem Arslan, Münevver Tezer-Sezgin


Middle East Technical University, Department of Mathematics
arsinem@metu.edu.tr, munt@metu.edu.tr

Abstract

In this study, the fully developed magnetohydrodynamic (MHD) flow is considered in a pipe
along the z-axis under an external magnetic field which is perpendicular to the pipe. So,
the relevant variables, the velocity u and the induced magnetic field b depend on the plane
coordinates x and y. When the flow is considered between two parallel plates (Hartmann flow)
the external magnetic field is perpendicular to the two channel walls and the lateral channel
walls are at infinity. Now, the variations of the velocity and the induced magnetic field are only
with respect to the coordinate y between the plates, [1]. The finite difference method (FDM)
is used to solve the governing equations with different types of boundary conditions, such as the
slip boundary condition for the velocity u and insulated and/or conducting end conditions for
the induced magnetic field b. The numerical results obtained from FDM discretized equations
have been compared with the exact solution whenever it exists (i.e. the no-slip walls) for
the 1D MHD flow between parallel plates. The velocity and the induced magnetic field are
obtained at the mesh points and simulated for each case of boundary conditions. It is observed
from the profiles of the velocity that the velocity magnitude decreases as the Hartmann
number Ha increases, which is the well-known flattening tendency of MHD flow. Also, it is
seen that boundary layers for both the velocity and the induced magnetic field are formed
near the plates when Ha increases. Further, the effects of the increase in both the conductivity
parameter c and the slip length α on the flow and on the induced magnetic field are shown
for several values of Ha. Increases in the slip length and the conductivity parameter
increase the magnitudes of the velocity and the induced current, respectively, both of which
are weakened for large values of Ha. The volumetric flow rate decreases with an increase in the
wall conductivity, whereas it increases with an increase in the slip length. Thus, the FDM,
which is simple to implement, enables one to depict the effects of these types of boundary
conditions on the behaviour of both the velocity and the induced magnetic field at a small
expense.

References
1. U. Müller and L. Bühler. Magnetohydrodynamics in Channels and Containers. Springer, New York,
2001.

35
Antarctic Ice Shelf-ocean Interactions in High-resolution,
Global Simulations Using the Energy Exascale Earth System
Model (E3SM)

Xylar Asay-Davis, Mathew Maltrud, Mark Petersen, Stephen Price, Luke Van Roekel
Los Alamos National Laboratory
xylar@lanl.gov, maltrud@lanl.gov, mpetersen@lanl.gov, sprice@lanl.gov,
lvanroekel@lanl.gov

Abstract

We use global simulations with the U.S. Department of Energy’s Energy Exascale Earth
System Model (E3SM) to explore ice shelf-ocean interactions and their effect on Antarctic
regional climate, focusing on the model’s sensitivity to uncertain parameters. We present the
results from a large number of simulations at modest resolution (∼30 km at the poles, ∼60 km
at mid-latitudes), most of which include static ice-shelf cavities. Based on these simulations,
we attain a tuned moderate resolution state that we use as a control configuration for a
smaller number of (much costlier) sensitivity experiments at higher resolution.
In these simulations, E3SM is configured with active ocean and sea-ice components with
CORE-2 interannual forcing. The simulations do not include an active land-ice component,
so the ice-shelf topography, derived from the Bedmap2 data set, is held fixed in time. The
simulations begin from a common state, spun up for ∼25 years using a set of control parameter
values. Following the work of Nakayama et al. (2017) and Urrego-Blanco et al. (2016), we
vary parameters involved in the following processes: ocean horizontal and vertical mixing; melt
ponds and sea-ice albedo; sea-ice ridging; ice shelf-ocean boundary conditions; and drag at
various component interfaces. From these runs, we determine a subset of parameters that most
strongly affect observable properties in the Antarctic and use these parameters to tune the
model to better match available observations, including mooring-, float- and ship-based ocean
tracers and velocity as well as satellite-derived sea-surface temperature, sea-ice thickness and
concentration, and sub-ice-shelf melt rates.
We conclude by presenting some preliminary results from an ongoing effort to implement
full ice sheet-ocean coupling in E3SM.

References
1. Y. Nakayama and D. Menemenlis and M. Schodlok and E. Rignot. Amundsen and Bellingshausen
Seas simulation with optimized ocean, sea ice, and thermodynamic ice shelf model parameters. J. Geophys.
Res. Oceans, 122 (2017) 6180-6195.
2. J. R. Urrego-Blanco and N. M. Urban and E. C. Hunke and A. K. Turner and N. Jeffery.
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model. J. Geophys.
Res. Oceans, 121 (2016) 2709-2732.

36
Server Information Mechanism in a Discrete-time Queueing
System

Ivan Atencia-Mckillop, José L. Galán-Garcı́a, Gabriel Aguilera-Venegas,
Pedro Rodrı́guez-Cielos, Ma Ángeles Galán-Garcı́a
University of Málaga
iatencia@ctima.uma.es, jlgalan@uma.es, gabri@ctima.uma.es,
prodriguez@uma.es, magalan@ctima.uma.es

Abstract

This paper discusses a discrete-time queueing system in which an arriving customer may
adopt four different strategies depending on the incoming information. Two of them
correspond to an LCFS discipline where displacements or expulsions occur; in the other two,
the arriving customer decides to follow an FCFS discipline or to become a negative customer
eliminating the customer in the server, if any. The different choices of the involved
parameters give this model great versatility, with several special cases of interest.
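One of the four strategies, the negative-customer one, can be fixed in the mind with a toy simulation (our model with made-up parameters, not the paper's analytical treatment): in each slot, a pending service completes with probability s, a customer arrives with probability p and, with probability q, acts as a negative customer removing the customer in the server instead of joining:

```python
import random

def simulate(p, q, s, slots, seed=0):
    """Discrete-time single-server queue: per slot, service completes with
    probability s; an arrival occurs with probability p and, with
    probability q, is a negative customer removing the one in service."""
    rng = random.Random(seed)
    queue = 0
    total = 0
    for _ in range(slots):
        if queue > 0 and rng.random() < s:      # service completion
            queue -= 1
        if rng.random() < p:                    # arrival this slot
            if rng.random() < q:
                if queue > 0:                   # negative customer removes
                    queue -= 1                  # the customer in the server
            else:
                queue += 1                      # ordinary arrival joins
        total += queue
    return total / slots                        # time-averaged queue length

mean_q = simulate(p=0.3, q=0.2, s=0.5, slots=100_000)
print(mean_q)
```

With an effective arrival rate well below the service rate, the simulated queue settles to a small stationary mean, as the analytical model predicts for the stable regime.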

References
1. I. Atencia. A discrete-time queueing system with server breakdowns and changes in the repair
times. Ann. Oper. Res. 235 (2015) 37-49.
2. A. Krishnamoorthy and P. Pramod and S. Chakravarthy. A survey on queues with interruptions.
TOP 22 (2014) 290-320.

37
On Univalent Harmonic Mappings With Analytic Parts
Starlike of Some Order

F. M. Sakar
Batman University
mugesakar@hotmail.com

S. M. Aydoğan
Istanbul Technical University
aydogansm@itu.edu.tr

Abstract

Let f(z) = h(z) + ḡ(z) be a sense-preserving harmonic mapping; then it satisfies the
non-linear elliptic partial differential equation f_z̄ = ω(z)f_z . In the current study the
solution of this equation is investigated by using the subordination method under the
condition ω(z) = g′(z)/h′(z) = b₁(1 − zⁿψ(z))/(1 + zⁿψ(z)), where ψ(z) is analytic and
|ψ(z)| < a (0 < a ≤ 1) in the open unit disc D = {z : |z| < 1}.

References
1. J. Clunie and F. R. Keogh. On starlike and convex schlicht functions. J. Lond. Math. Soc. 35 (1960)
229-233.
2. P. L. Duren. Univalent Functions. Grundlehren der Mathematischen Wissenschaften, Band 259,
Springer-Verlag, New York, Berlin, Heidelberg and Tokyo, 1983.
3. S. Fukui and K. Sakaguchi. An extension of a theorem of S. Ruscheweyh. Bull. Fac. Edu. Wakayama
Univ. Nat. Sci. 29 (1980) 1-3.
4. A. W. Goodman. Univalent Functions. Volume I, II, Mariner Publishing Company, Tampa, Florida,
1983.
5. M. L. Mogra. On a class of starlike functions in the unit disc. J. Indian Math. Soc. 40 (1976) 158-165.

38
Finite Element Approximation of an Elliptic Problem With a
Nonlinear Newton Boundary Condition

Miloslav Feistauer, Ondrej Bartos, Filip Roskovec


Department of Numerical Mathematics, Faculty of Mathematics and Physics, Charles
University, 18675 Prague, Czech Republic
feist@karlin.mff.cuni.cz, ondra.bartosh@seznam.cz, roskovec@gmail.com

Abstract

Many engineering problems can be described using elliptic partial differential equations
equipped with nonlinear Newton boundary conditions, e.g. electrolysis of aluminium or ra-
diation heat transfer problem. We look at a problem which contains (possibly non-integer)
powers in its boundary condition in a two-dimensional polygonal domain. The exact solution
loses regularity near boundary vertices and to some extent also near boundary edges. When
this problem is discretized to seek a piecewise polynomial approximate solution and solved
using a finite element method (FEM) this loss of regularity limits the theoretically expected
order of convergence of the approximate solution to the exact solution [1]. Additionally, the
order of convergence should be divided by the power appearing in the nonlinear boundary
condition [2]. In practice, the FEM often converges at the same rate as if there were no nonlinearity
on the boundary. We attempt to explain this behaviour, to examine whether it is the same when the
error is measured in different Sobolev norms, and to see how it changes when the exact solution
is zero on a large part of the boundary. An interesting phenomenon is that the derivatives of
the approximate solution can converge faster to the exact solution than the function values
themselves.

References
1. Feistauer M. and Roskovec F. and Sandig A.-M.. Discontinuous Galerkin Method for an Elliptic
Problem with Nonlinear Newton Boundary Conditions in a Polygon. IMA, 2017, 00, 1-31.
2. Feistauer M. and Najzar K.. Finite element approximation of a problem with a nonlinear Newton
boundary condition. Numer. Math. 78, 1998, 403-425.

39
Optimizing the Fractionation of Gas

Nourredine Ben tahar


M’hamed Bougara University
benourdz@gmail.com

Abstract

In recent years, a new energy source has worked its way up alongside oil and natural gas:
liquefied petroleum gas (LPG). With its advantages as a clean, transportable energy, LPG has
penetrated sectors as diverse as the residential market, petrochemicals, agriculture, industry
and automotive fuel [1]; Purvin and Gertz estimate that the global market will grow by
about 3.1% per year [2]. LPG occupies a place of great importance in Sonatrach's hydrocarbon
marketing strategy. The evolution of the energy sector now offers better marketing
opportunities; on the other hand, the production of LPG must meet marketing standards, which
is why the operating parameters of the fractionation units must be optimized. Nowadays, the
simulation and optimization of chemical processes require precise knowledge of the equilibrium
properties of the blends over wide ranges of temperatures, pressures and compositions; these
phase equilibria can be measured by various methods. Liquid-vapor equilibrium calculations are
very often performed using cubic equations of state. When these equations of state are applied
to mixtures, molecular interactions are taken into account by a binary interaction coefficient,
called kij, whose choice is very tricky even for simple mixtures [3]. These thermodynamic
models have undergone progressive development since their appearance.
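The role of kij can be illustrated with a short sketch (our illustration, not part of the abstract; the van der Waals one-fluid mixing rule shown here is one common choice for cubic equations of state, and all numerical values are invented):

```python
import numpy as np

def a_mix(x, a, k):
    """Van der Waals one-fluid mixing rule for the attraction parameter:
    a_mix = sum_i sum_j x_i x_j (1 - k_ij) sqrt(a_i a_j),
    where k is the symmetric matrix of binary interaction coefficients k_ij."""
    x, a, k = (np.asarray(v, dtype=float) for v in (x, a, k))
    return float(x @ ((1.0 - k) * np.sqrt(np.outer(a, a))) @ x)

# Illustrative values only (not fitted data) for a two-component blend.
x = [0.4, 0.6]                       # mole fractions
a = [9.39, 13.89]                    # pure-component attraction parameters
k = [[0.0, 0.003], [0.003, 0.0]]     # binary interaction coefficients
print(a_mix(x, a, k))
# With all k_ij = 0 the rule collapses to (sum_i x_i sqrt(a_i))^2.
```

A positive kij lowers the cross-interaction term, which is exactly the knob that must be tuned carefully even for simple mixtures.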
References:
[1] P. O’Connor, L. A. Gerritsen, J. R. Pearce, P. H. Desai, S. Yanik, and A. Humphries.
Hydrocarbon Process. 70(11), 76-84 (1991).
[2] W. L. Nelson. Petroleum Refinery Engineering, 4th Ed. McGraw-Hill Book Company, New York,
1958, 759-810.
[3] James H. Gary and Glenn E. Handwerk. Petroleum Refining: Technology and Economics, 4th Ed.
Marcel Dekker, Inc., New York, Basel, 1999.
[4] J.-P. Wauquier. Le raffinage du pétrole. Tome 1. Pétrole brut. Produits pétroliers. Schémas
de fabrication. Editions Technip, Paris, 1994, p. 385.
[5] P. Leprince. Petroleum Refining, Tome 3: Conversion Processes. Editions Technip, Paris.
[6] R. Sadeghbeigi. Fluid Catalytic Cracking: Design, Operation, and Troubleshooting of FCC
Facilities. Gulf Publishing Company, Houston, TX, 2000.
[7] J. G. Speight. The Chemistry and Technology of Petroleum, 4th Ed. Taylor and Francis Group,
LLC, 2007.
[8] H. Pines. The Chemistry of Catalytic Hydrocarbon Conversions. Academic Press, New York, 1981.
[9] J. G. Speight. The Chemistry and Technology of Petroleum, 3rd Ed. Marcel Dekker, New York, 1999.

References
1. B. Ait Aissa and Ken Otto. Gas simulation of separation. Geneus chemistry 2009.

40
Optimization of Coke Production From Algerian Oil Residues

Nourredine Ben Tahar, Massiva Manseur, Amina Izimmeur, Katia Ghezali


M’hamed Bougara University
benourdz@gmail.com, nbentahardz@yahoo.fr, manseurm@yahoo.fr,
izimmeura@yahoo.fr, kghezali@yaoo.fr

Abstract

Coke quality depends essentially on the nature of the feedstock of the coking process. This
research was performed in order to study the influence of the chemical composition of the
coking feedstock on the yield and quality of coke. For this reason, the coking of the following
feedstocks was carried out: atmospheric residue (RAT), vacuum residue (RSV) and catalytic
cracking residue (RCC), all obtained from an Algerian crude oil. Since oil residues are rich in
strongly polar components, such as asphaltenes, resins and complex structural units (SCU),
which play a role in the formation of coke, and since the dispersion of the latter improves
coke quality, a study of aggregation stability was carried out by adding a stabilizer (oil
extract) to the coking feedstock. The blend (oil extract/RCC) was found to give the best coke
yield. The study consists of the influence . . . ; this is characterized by infrared (IR)
analysis and X-ray diffraction (XRD).
Keywords: coking; oil residue; dispersant; aggregation stability
Keywords: Coking; oil residue; dispersant; aggregation stability

References
1. S. Parkash and P. Le prince and ] E. Totten. George; R. Steven Westbrook; J. Rajesh. Shah
and undefined . Refining processes handbook, Fuels and lubricants Handbook technology properties
performance and testing, . Geneus chemestry 2009.

41
DEM Calibration Procedure for Bulk Physical Tests: A Case
Study Using the Casagrande Shear Box Test

Salma Ben Turkia, Patrick Pizette, Nor-Edine Abriak


IMT Lille Douai, Univ. Lille, Civil Engineering & Environmental Department, France
salma.ben-turkia@imt-lille-douai.fr, patrick.pizette@imt-lille-douai.fr,
nor-edine.abriak@imt-lille-douai.fr

Daniel N. Wilke
Centre for Asset and Integrity Management, University of Pretoria, Pretoria, South Africa
nico.wilke@up.ac.za

Nicolin Govender
Department of Chemical Engineering, University of Surrey, Guildford, UK
govender.nicolin@gmail.com

Abstract
Understanding the behavior of granular materials is critical in a variety of industries, ranging
from powders in pharmaceuticals to gravel and sand in civil engineering applications. The
macroscopic behavior of granular media cannot be fully described by a rheological model;
direct numerical simulation at the particle scale using the Discrete Element Method (DEM)
is therefore required to simulate granular material. DEM models take into account the
micro-mechanical parameters at the particle scale to predict the response at the macroscopic
scale. As the response depends on the material under consideration, calibration of the DEM
is required. The applicability and usefulness of a calibrated discrete element model is highly
dependent on the quality of the calibration process. This calibration process often needs to be
repeated between applications, and even for the same application at sufficiently distinct
parameter subdomains. The current research paper is a scientific exploration of the shear
behavior of an idealized model of granular materials (glass beads) using the Casagrande shear
box test in order to calibrate DEM models. Two DEM contact models were calibrated using a
systematic calibration process developed in this study. The calibration process relies on design
of experiments, radial basis function interpolation and robust optimization strategies [1] to
find suitable parameters. The predictability of the two calibrated DEM models is investigated
as experimental parameters such as the normal force are chosen further and further away from
the parameters used during calibration. The aim of this study is to check the predictability of
the calibrated DEM models, which can provide valuable guidance on which additional laboratory
results are needed to finally obtain a sufficiently general DEM model.
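The surrogate-based calibration loop can be sketched schematically as follows (not the authors' code: the parameter names, the analytic stand-in for the DEM response and the target value are all invented for illustration):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical DEM parameter samples: (sliding friction, restitution).
params = rng.uniform([0.1, 0.1], [0.9, 0.9], size=(40, 2))

def simulated_response(p):
    # Stand-in for one macroscopic DEM output (e.g. peak shear stress);
    # in practice each value would come from a DEM shear-box run.
    return 3.0 * p[..., 0] + 0.5 * p[..., 1] ** 2

responses = simulated_response(params)
target = 1.8   # hypothetical measured value from the physical test

# Radial basis function surrogate over the sampled parameter space.
surrogate = RBFInterpolator(params, responses, kernel='thin_plate_spline')

def misfit(p):
    return float((surrogate(p.reshape(1, -1))[0] - target) ** 2)

# Minimize the misfit between surrogate prediction and measurement.
res = minimize(misfit, x0=np.array([0.5, 0.5]),
               bounds=[(0.1, 0.9), (0.1, 0.9)])
print(res.x, misfit(res.x))
```

In a real calibration each sample is an expensive DEM run, which is precisely why the design of experiments and the cheap interpolant matter.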

References
1. J. Snyman. Practical Mathematical Optimization: An Introduction to Basic Optimization Theory
and Classical and New Gradient-Based Algorithms. Springer (2005).

42
A Discontinuous Galerkin Hp-adaptive Finite Element
Method for Accurate Brittle Crack Modelling

Robert Bird, William Coombs, Stefano Giani


School of Engineering and Computing Sciences, Durham University, Science Site, South
Road, Durham, DH1 3LE, UK.
robert.e.bird@durham.ac.uk, w.m.coombs@durham.ac.uk,
stefano.giani@durham.ac.uk

Abstract

In this paper the discontinuous Galerkin symmetric interior penalty (SIPG) method is used
to determine the configurational force (CF) value at the crack tip for small strain linear
elastic problems. Current methods for calculating the CF are domain dependent, a novel do-
main independent method for calculating the CF has therefore been developed. Additionally,
mesh refinement strategies for problems with cracks are naïve and are only able to achieve
accuracies in the region of 0.01% for the value of the CF at the crack tip [1]. This lack of
accuracy has prompted the derivation of an a posteriori error estimator for SIPG that drives
an hp-adaptive scheme which is simple to incorporate into the SIPG method. Here the error
estimator, which is shown to bound the error in the novel CF calculation, converges
exponentially under hp-adaptivity; hence accuracies for the CF at the crack tip are produced which
up to now have been unobtainable in literature. Further, the efficacy of the error estimator
for improving the accuracy of the CF is verified against the analytical double crack problem,
presented by Westergaard [2], for mode 1, 2 and mixed mode problems. The method is then
used to provide benchmarks for crack tip stress intensity factors for problems with no ana-
lytical solution, additionally domain independence is demonstrated for these more complex
problems.

References
1. R. Denzer, F. J. Barth and P. Steinmann. Studies in elastic fracture mechanics based on
the material force method. Int. J. Numer. Anal. Meth. 58 (2003) 1817–1835.
2. H. Westergaard. Bearing pressures and cracks. SPIE Milestone Series MS. 137 (1997) 18–22.

43
GPU Solver for SPD Matrices vs. Traditional Codes

Jan Bohacek
Montanuniversitaet Leoben
bohacek.jan@gmail.com

Abstract

In our laboratory we have been extensively using the inverse task to reconstruct thermal
boundary conditions at hot surfaces of solid materials. More than a decade of experience and
cooperation with industries has proven our experimental/numerical technique to be reliable
and very accurate. Until now we have also believed our algorithm originally using the line-
by-line method is efficient and fast, regrettably we were wrong. The transient 2D (3D) heat
diffusion in a multi-material sample is the most computationally costly ingredient of the
algorithm. In the present paper, the potential for speeding up our calculations is manifested
by firstly benchmarking it against traditional CFD codes such as OpenFOAM (FDIC) and
ANSYS Fluent (AMG). Secondly, we also present a unique comparison with three in-house
GPU codes, each written by a different PhD student/postdoc of ours. Listed chronologically,
one student pushed his luck with a fully explicit scheme, while the other two bet on implicit
methods namely the line-by-line method in OpenCL and the conjugate gradient method with
the deflated truncated Neumann series preconditioner in CUDA.
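A rough single-threaded sketch of the last approach follows (our generic sketch, not the student's CUDA code; deflation is omitted and the matrix is a generic SPD diffusion stencil):

```python
import numpy as np

def neumann_preconditioner(A, m=2):
    """Truncated Neumann-series approximation of A^-1:
    with D = diag(A) and N = I - D^-1 A, apply
    M^-1 = (I + N + ... + N^m) D^-1  (deflation omitted)."""
    d_inv = 1.0 / np.diag(A)
    N = np.eye(len(A)) - d_inv[:, None] * A
    def apply(r):
        z = d_inv * r
        term = z.copy()
        for _ in range(m):
            term = N @ term
            z = z + term
        return z
    return apply

def pcg(A, b, M_inv, tol=1e-8, maxiter=1000):
    """Preconditioned conjugate gradient method for SPD A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD model problem: the 1D heat-diffusion stencil.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, neumann_preconditioner(A, m=2))
print(np.linalg.norm(A @ x - b))
```

Both the matrix-vector products and the Neumann-series application are embarrassingly parallel, which is what makes this preconditioner attractive on GPUs.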

References
1. M. Pohanka. Technical Experiment Based Inverse Tasks in Mechanics. PhD Thesis (2006) Brno, The
Czech Republic.
2. J. Ondrouskova. Development of inverse task solved by using the optimizing procedures and large
number of parallel threads. PhD Thesis (2015) Brno, The Czech Republic.
3. L. Klimes. Optimization of Secondary Cooling Parameters of Continuous Steel Casting. PhD Thesis
(2015) Brno, The Czech Republic.
4. R. Gupta. GPU acceleration of preconditioned solvers for ill-conditioned linear systems. PhD Thesis
(2015), Delft, The Netherlands.

44
The Rational SPDE Approach for Gaussian Random Fields
With General Smoothness

David Bolin, Kristin Kirchner


Chalmers University of Technology and University of Gothenburg
david.bolin@chalmers.se, kristin.kirchner@chalmers.se

Abstract

A popular approach for modeling and inference in spatial statistics is to represent Gaussian
random fields as solutions to stochastic partial differential equations (SPDEs) of the form
L^β u = W, where W is Gaussian white noise, L is a second-order differential operator, and
β > 0 is a parameter that determines the smoothness of u (Lindgren et al. 2011). However, this
approach has been limited to the case 2β ∈ N, which excludes several important covariance
models and makes it necessary to keep β fixed during inference.
We introduce a new method, the rational SPDE approach, which is applicable for any
β > 0 and therefore remedies the mentioned limitation. The presented scheme combines a
finite element discretization in space with a rational approximation of the function x^(−β) to
approximate u. For the resulting approximation, an explicit rate of strong convergence to
u is derived and we show that the method has the same computational benefits as in the
restricted case 2β ∈ N when used for statistical inference and prediction. Several numerical
experiments are performed to illustrate the accuracy of the method, and to show how it can
be used for likelihood-based inference for all model parameters including β.
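The flavor of such a rational approximation of x^(−β) can be sketched as follows (a generic sinc-quadrature construction, not the specific scheme of the talk; parameters are illustrative):

```python
import numpy as np

def rational_inv_power(x, beta, k=0.35, N=80):
    """Sinc-quadrature rational approximation of x**(-beta), 0 < beta < 1.

    Discretizes x^(-beta) = (sin(pi*beta)/pi) * int_0^inf s^(-beta)/(s+x) ds
    with the substitution s = exp(2y) and a uniform grid y_l = l*k.  The
    result is a rational function of x (a sum of simple poles), the kind of
    object the rational SPDE approach applies to the operator L."""
    y = k * np.arange(-N, N + 1)
    w = 2.0 * k * np.sin(np.pi * beta) / np.pi * np.exp(2.0 * (1.0 - beta) * y)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    # sum_l w_l / (exp(2 y_l) + x), evaluated for every x at once
    return (w / (np.exp(2.0 * y) + x[:, None])).sum(axis=1)

beta = 0.7                       # a non-integer smoothness (2*beta not in N)
xs = np.geomspace(0.1, 100.0, 7)
err = np.max(np.abs(rational_inv_power(xs, beta) - xs**(-beta)) / xs**(-beta))
print(err)
```

Because the approximation is a sum of simple poles, applying it to a discretized operator L reduces to a set of shifted sparse solves, which is what restores the computational benefits of the integer case.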

References
1. F. Lindgren and H. Rue and J. Lindström . An explicit link between Gaussian fields and Gaussian
Markov random fields: the stochastic partial differential equation approach (with discussion). J. Roy.
Statist. Soc. Ser. B Stat. Methodol. 73 (2011) 423–498.

45
A Generalized Muffin Tin Augmented (Plane)Wave Method

Moritz Braun
University of South Africa
moritz.braun@gmail.com

Abstract

Since its invention, the full-potential LAPW method, as implemented for example in the
Wien2k code [1], has been used quite successfully to compute the properties of periodic
structures, i.e. solids. However, this muffin-tin-type method is not very suitable for
non-periodic systems, i.e. molecules. Although one can make the unit cell larger and larger
in order to get results for an isolated molecule, this quickly blows up the number of degrees
of freedom NDOF.
In this contribution we proceed to replace the plane waves in this method by the tensor
products of sine functions in three dimensions, viz.
F_ijk = sin(i q_min x) sin(j q_min y) sin(k q_min z), where q_min = π/L and 1 ≤ i, j, k ≤ N − 1,
in terms of the linear system dimension L, while making use of the Fast Sine Transform[2] for
computational efficiency. These basis functions vanish on the surface of the domain Ω = [0, L]³
and thus make the method suitable for non-periodic systems.
The changes to the LAPW formalism are, that the matrices implementing the boundary
conditions at the surfaces of the atomic spheres will be calculated via numerical integration
instead of using the plane wave expansion formula in terms of spherical harmonics and that the
atomic orbitals will not be obtained at an initial energy together with their energy derivatives
but rather solved for as part of one eigenvalue problem with coefficients for all domains,
i.e. one interstitial domain and NA atomic spheres. The atomic orbitals will be expanded in
terms of real-valued spherical harmonics Rlm , and the radial channel functions in terms of
finite elements as in [3]. The implementation of the boundary conditions in the eigenvalue
problem is discussed in detail and some preliminary results for small molecules will be shown
and compared with results of competing methods.
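The tensor-product sine expansion pairs naturally with the fast sine transform; a small sketch (ours, not the author's implementation) using SciPy's DST-I:

```python
import numpy as np
from scipy.fft import dstn, idstn

# Interior grid of the box [0, L]^3: x_i = i*L/N, i = 1..N-1, so that all
# tensor-product basis functions F_ijk vanish on the boundary.
L, N = 2.0, 16
qmin = np.pi / L
x = np.arange(1, N) * L / N
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

# A sample function to expand -- here a single basis function F_{2,1,3}.
f = np.sin(2*qmin*X) * np.sin(qmin*Y) * np.sin(3*qmin*Z)

# DST-I computes the expansion coefficients in the sine basis ...
c = dstn(f, type=1, norm='ortho')
f_back = idstn(c, type=1, norm='ortho')   # ... and is cheaply invertible.
print(np.max(np.abs(f - f_back)))

# Exactly one coefficient is nonzero: mode (2,1,3) sits at index (1,0,2).
idx = np.unravel_index(np.argmax(np.abs(c)), c.shape)
print(idx)
```

The O(N³ log N) cost of this transform is what replaces the plane-wave expansion machinery in the interstitial region.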

References
1. P. Blaha and K. Schwarz and G.K.H Madsen and D. Vasnicka and J. Luitz. An Augmented Plane
Wave + Local Orbitals Program for Calculating Crystal Properties. Published by Technical University
Vienna, Austria.
2. SciPy documentation. Fourier Transforms (scipy.fftpack). https://docs.scipy.org/doc/scipy/reference/tutorial/fftpack.html.
3. M Braun and K O Obodo . Multi-domain muffin tin finite element density functional calculations for
small molecules. Computers and Mathematics with Applications 74 (2017) 35–44.

46
An IMFES Formulation for the 2D Three-phase Black-oil
Equations

Saúl E. Buitrago Boret


Simón Bolı́var University, Caracas, Venezuela
sssbuitrago@yahoo.es

Abstract

A new approach for solving the PDEs corresponding to the flux equations governing a three-
phase black-oil model in a 2D porous medium is proposed. The 2D physical domain is meshed
using non-rectangular quadrilaterals. The approach for solving the PDEs consists of solving
the total flux implicitly and oil, water and gas saturations explicitly. This formulation avoids
solving a costly second order differential equation in pressure. In the proposed approach,
the total flux is expressed as an asymptotic expansion of ascending powers of the total fluid
compressibility. Contributions to the total flux are obtained from solving first order differential
equations (gradient and divergence operators). Discretizing these operators by finite volume
or finite difference methods, the resulting linear system coefficients are fixed during the whole
simulation. Preliminary numerical results are consistent with the physical interpretation in
the case of one dimensional scenarios.

References
1. S. Buitrago and R. Manzanilla. A fast IMFES formulation for solv-
ing 1D three-phase black-oil equations. Boletı́n AMV, XVI (2) (2009), 65-80,
http://www.emis.de/journals/BAMV/conten/vol16/BAMV XVI-2 p065-080.pdf.
2. S. Buitrago and O. Jiménez. Quadrilateral grid generation with complex internal boundaries using
gradient techniques. Proceedings of the 6th International Conference on Approximation Methods and
Numerical Modelling in Environment and Natural Resources (MAMERN’15). Universidad de Granada,
ISBN: 978-84-338-5783-5, (June 2015) 245-264, http://dx.doi.org/10.13140/RG.2.1.2067.9841.
3. S. Buitrago and G. Sosa and O. Jiménez. An upwind finite volume method on non-orthogonal quadri-
lateral meshes for the convection diffusion equation in porous media. Appl. Anal. 95 (10) (2016) 2203-2223,
http://dx.doi.org/10.1080/00036811.2015.1064520.
4. S. Buitrago and O. Jiménez. Integrated framework for solving the convection diffusion equation on 2D
Quad mesh relying on internal boundaries. Computers & Mathematics with Applications (CAMWA) 74
(1) (2017) 218-228, http://dx.doi.org/10.1016/j.camwa.2017.03.001.

47
Stochastic Simulation Modeling of Quality Assessment, a
Forecast Approach in Maquila Industry

José Roberto Cantú-González


Escuela de Sistemas PMRV, Unidad Acuña, Universidad Autónoma de Coahuila.
roberto.cantu@uadec.edu.mx

Raymundo Juarez-Del-Toro
Facultad de Contadurı́a y Admón. Unidad Torreón. Universidad Autónoma de Coahuila.
r.juarez@uadec.edu.mx

F-Javier Almaguer
Facultad de Ciencias Fı́sico-Matemáticas, Universidad Autónoma de Nuevo León.
francisco.almaguermrt@uanl.edu.mx

Gustavo Roberto Illescas


Fac. De Ciencias Exactas. Universidad Nacional del Centro de la Provincia de Buenos Aires
illescas@exa.unicen.edu.ar

Norman Alexis Cantú-Delgado


Unidad Monterrey, Centro de Investigación y de Estudios Avanzados del IPN
normancantu@hotmail.com

Abstract

Because of the necessity to improve the quality level in manufacturing operations, investments
in process are an imperative initiative to be considered in the yearly plan of the maquila
industry; moreover, the essence of this industry orients its investment efforts to the workforce,
using automation as a support tool, while robotics is rarely included and only in special
operations. In this context, an important concern is usually present: it is necessary to support
the investments in quality assurance, including workforce and error-proofing devices, but the
risk of making a bad investment is high as long as no approximation of the quality performance
indicator for the immediate future period is available. As an alternative to address this
concern, this research work presents a stochastic simulation model based on the behavior of
historic data in order to forecast the quality performance indicator for the immediate future
period in the manufacturing operations of a given maquila industry.
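The core of such a forecast can be sketched as a bootstrap over the historic indicator (our illustrative sketch, not the authors' model; the monthly values below are invented, not plant data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical historic monthly quality indicator (e.g. first-pass yield, %).
history = np.array([96.1, 95.8, 96.4, 97.0, 95.5, 96.2,
                    96.8, 95.9, 96.5, 96.0, 96.7, 96.3])

# Resample the empirical behavior of the historic data many times to build
# a distribution for the indicator in the immediate future period.
n_sims = 10_000
sims = rng.choice(history, size=n_sims, replace=True)

point_forecast = sims.mean()
lo, hi = np.percentile(sims, [5, 95])
print(f"forecast {point_forecast:.2f}%, 90% interval [{lo:.2f}, {hi:.2f}]")
```

The interval, rather than the point forecast alone, is what lets a decision maker weigh the risk of the quality-assurance investment.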

References
1. R. Soderberg, K. Warmefjord, J. S. Carlson and L. Lindkvist. Toward a Digital Twin for real-
time geometry assurance in individualized production. CIRP Annals - Manufacturing Technology 66-1
(2017) 137–140. doi:10.1016/j.cirp.2017.04.038.

48
Simulation of Structural Applications and Sheet Metal
Forming Processes Based on Quadratic Solid-Shell Elements
With Explicit Dynamic Formulation

Hocine Chalal, Farid Abed-Meraim


LEM3, UMR CNRS 7239, Arts et Metiers ParisTech, 4 rue Augustin Fresnel, 57078 Metz,
France
hocine.chalal@ensam.eu, farid.abed-meraim@ensam.eu

Abstract

In the engineering industry, thin structures are commonly used to save material, reduce
weight and improve the overall performance of products. The finite element (FE) analysis
of such thin structural components has become a powerful and useful simulation tool. Over
the last decades, considerable effort has been devoted to the development of 3D FE that
are capable of modeling thin structures both accurately and efficiently [1]. In this regard,
the solid-shell concept proved to be very interesting, due to its multiple benefits [2-4]. More
specifically, solid-shell elements combine the advantages of both structural and continuum
FE. The current contribution proposes quadratic solid-shell elements for the 3D modeling of
thin structures in the context of explicit dynamic analysis. The formulation of these FE is
based on a purely 3D approach, with displacements as the only degrees of freedom. To prevent
various locking phenomena, a reduced-integration scheme is used along with the assumed-
strain method. The resulting formulations are computationally efficient, since only a single
layer of elements with an arbitrary number of through-thickness integration points is required
to model 3D thin structures. The performance of these elements is first assessed through a set
of selective and representative nonlinear benchmark tests. Then, attention is directed to the
simulation of deep drawing processes involving complex non-linear loading paths, anisotropic
plasticity and double-sided contact. The numerical results demonstrate the good performance
of the proposed elements in the modeling of 3D thin structures, using only a single element
layer through the thickness.

References
1. T. Belytschko and L. Bindeman. Assumed strain stabilization of the eight node hexahedral element.
Computer Methods in Applied Mechanics and Engineering 105 (1993) 225-260.
2. F. Abed-Meraim and A. Combescure. SHB8PS-a new adaptive, assumed-strain continuum mechanics
shell element for impact analysis. Computers and Structures 80 (2002) 791-803.
3. A. Legay and A. Combescure. Elastoplastic stability analysis of shells using the physically stabilized
finite element SHB8PS. Int. J. Num. Meth. Engng. 57 (2003) 1299-1322.
4. F. Abed-Meraim and A. Combescure. An improved assumed strain solid-shell element formulation
with physical stabilization for geometric non-linear applications and elastic–plastic stability analysis. Int.
J. Num. Meth. Engng. 80 (2009) 1640-1686.

49
A FEniCS-based Solver to Predict Time-dependent
Incompressible Flows

Alexander Churbanov
Nuclear Safety Institute, Russian Academy of Sciences
achur@ibrae.ac.ru

Abstract

A new trend that has appeared recently in CFD for solving industrial problems is to move away
from commercial general-purpose software in favor of free open source software (FOSS). This
allows one to construct easy-to-use mathematical tools oriented to solving specific problems,
with possible tuning and improvement in the future. The FEniCS (https://fenicsproject.org/)
finite element framework provides a widely employed example of such a complete numerical
toolkit for solving differential equations of various nature including CFD.
The aim of the present work is to discuss a FEniCS-based solver developed for solving the
2D/3D time-dependent incompressible Navier-Stokes equations. It is based on the Douglas-
Rachford algorithm (the method of stabilizing correction/pressure correction method) and
employs the Poisson equation to evaluate the pressure with the following correction of the
intermediate velocity. Special attention is given to the possibility of predicting flows in
channels of arbitrary shape. For this purpose, various types of boundary conditions have been
implemented in the solver (rigid wall, slip wall, symmetry and ”open” boundary conditions).
The Taylor-Hood (P2-P1) mixed elements are employed, as they seem to be the most appropriate
in the sense of fulfilling the LBB condition and of simplicity of use. Three basic formulations
of the convective terms were considered: the advective (non-conservative),
conservative and skew-symmetric (the half-sum of the two previous) forms. This issue may
have essential value for modeling transient flows. Viscous effects were treated via the Cauchy
stress tensor in order to omit any questions raised from using the Laplace format.
To validate the numerical algorithm, the method of manufactured solutions (MMS) was
applied. Namely, the 2D steady-state manufactured vortex solution was used to evaluate
convergence in space for the algorithm. As for time-dependent solutions, the classical 2D
corner flow (proposed by van Kan) was predicted numerically in order to study approximation
in time. Next, flows in 3D ducts of various shapes (circular, rectangular, tube bundles etc.)
have been studied numerically and compared with available analytical or experimental data.
The emphasis here is on efficiently modeling the hydraulics in cores of nuclear reactors and
flows in supplementary equipment. Some preliminary results of modeling turbulent duct flows using
FEniCS have been already published in the paper [1].
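The relation between the three convective forms can be checked symbolically (a generic identity, independent of the solver itself):

```python
import sympy as sp

x, y = sp.symbols('x y')
u1 = sp.Function('u1')(x, y)
u2 = sp.Function('u2')(x, y)
div_u = sp.diff(u1, x) + sp.diff(u2, y)

# First velocity component of each form of the convective term:
advective    = u1*sp.diff(u1, x) + u2*sp.diff(u1, y)      # (u . grad) u
conservative = sp.diff(u1*u1, x) + sp.diff(u2*u1, y)      # div(u (x) u)
skew         = (advective + conservative) / 2             # half-sum form

# conservative - advective = u1 * div(u): the three forms differ only by
# terms proportional to div(u), so they coincide for incompressible flow.
print(sp.simplify(conservative - advective - u1*div_u))   # 0
print(sp.simplify(skew - advective - u1*div_u/2))         # 0
```

At the discrete level div(u) is only approximately zero, which is why the choice between the three forms can matter for transient flows.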

References
1. A.G. Churbanov and P.N. Vabishchevich. Numerical investigation of a space-fractional model
of turbulent fluid flow in rectangular ducts. J. Comput. Phys. 321 (2016) 846-859.

50
An Explicit Algorithm for the Simulation of Multiphase Fluid
Flow in the Subsurface

Natalia Churbanova, Boris Chetverushkin, Marina Trapeznikova


Keldysh Institute of Applied Mathematics RAS
nataimamod@mail.ru, chetver@imamod.ru, mtrapez@yandex.ru

Anastasiya Lyupa
Moscow Institute of Physics and Technology
nastenka.aesc@gmail.com

Abstract

The present work deals with the simulation of multiphase fluid flow in the subsurface using an
original approach based on the analogy with kinetically-consistent finite difference schemes
and the quasigasdynamic system of equations [1]. The proposed model accounts for gravita-
tional and capillary forces as well as possible heat sources. As the temperature of all phases
and of the rock is identical the system of equations includes a single equation of the total
energy conservation.
The phase continuity equation gets a regularizing term and a second order time derivative
with small parameters. The equation type is changed from parabolic to hyperbolic. Numerical
implementation involves a three-level explicit scheme with a rather mild stability condition.
Some large-scale problems (e.g., oil recovery with combustion fronts, phase transitions, com-
plicated functions of the relative phase permeability) require calculations with very small
space steps what strictly constrains the time step. Then explicit schemes can surpass implicit
schemes used in standard solution methods like IMPES or SS. Besides explicit algorithms are
preferable for HPC.
Verification is performed by a drainage test problem [2] concerning two-phase (water-air)
infiltration due to the gravity. A good agreement of numerical results with [2] is obtained. To
investigate an influence of thermal effects the simulation of three-phase (water-oil-air) flow
in a porous medium with a hot water source on the boundary has been fulfilled.
Computations on HPC confirmed high parallelization efficiency of the approach developed.
The work is supported by RFBR (grant No. 16-29-15095-ofi).
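The flavor of the hyperbolic regularization and its three-level explicit update can be sketched on a 1D toy model (ours, not the authors' multiphase code; τ and the grid are illustrative):

```python
import numpy as np

# Toy 1D analogue: tau^2 u_tt + u_t = D u_xx.  The small-parameter second
# time derivative turns the parabolic equation hyperbolic, so an explicit
# scheme only needs dt ~ h (a CFL-type bound) instead of dt ~ h^2.
D, tau = 1.0, 0.05
nx = 51
h = 1.0 / (nx - 1)
xg = np.linspace(0.0, 1.0, nx)
dt = 0.8 * h * tau / np.sqrt(D)                 # mild stability condition

def laplacian(u):
    out = np.zeros_like(u)
    out[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / h**2
    return out

u_old = np.sin(np.pi * xg)                      # u^{n-1}
u = u_old + dt * D * laplacian(u_old)           # parabolic first step, u^n

a = tau**2/dt**2 + 1.0/(2.0*dt)                 # coefficient of u^{n+1}
T, t = 0.05, dt
while t < T:
    rhs = (D * laplacian(u) + (2.0*tau**2/dt**2) * u
           - (tau**2/dt**2 - 1.0/(2.0*dt)) * u_old)
    u_old, u = u, rhs / a                       # three-level explicit update
    u[0] = u[-1] = 0.0
    t += dt

# For small tau the slow mode tracks the parabolic decay exp(-pi^2 D t).
print(u[nx // 2], np.exp(-np.pi**2 * D * t))
```

Every update is a local stencil operation, which is why this class of schemes parallelizes so well on HPC systems.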

References
1. B. Chetverushkin and N. Churbanova and A. Kuleshov and A. Lyupa and M. Trapeznikova.
Application of kinetic approach to porous medium flow simulation in environmental hydrology problems
on high-performance computing systems. Rus. J. Numer. Anal. Math. Modelling 31 N 4 (2016) 187-196.
2. G. F. Pinder and W. G. Gray. Essentials of Multiphase Flow and Transport in Porous Media. John
Wiley & Sons, Inc., Hoboken, NJ (2008) 199-205.

51
Using a Hybrid Mixing in Fixed-Point Self-Consistent
Iterations to Accelerate Electronic Structure Calculations

Matyáš Novák
Department of Mechanics, Faculty of Applied Sciences, University of West Bohemia &
Institute of Physics, The Czech Academy of Sciences
novakmat@fzu.cz

Robert Cimrman
New Technologies - Research Centre, University of West Bohemia
cimrman3@ntc.zcu.cz

Radek Kolman, Jiřı́ Plešek


Institute of Thermomechanics, The Czech Academy of Sciences
kolman@it.cas.cz, plesek@it.cas.cz

Jiřı́ Vackář
Institute of Physics, The Czech Academy of Sciences
vackar@fzu.cz

Abstract
In ab-initio calculations of electronic states within the density-functional theory (DFT) frame-
work, a self-consistent state is sought by a fixed-point iteration, the so called DFT loop. One
of the key components needed for fast convergence is to apply a suitable mixing of new and
previous states in the DFT loop. In this contribution we discuss the standard Broyden [1]
and Pulay [2] mixing algorithms as well as a newly proposed adaptable hybrid scheme that
combines those two approaches in a way that accelerates the convergence. The scheme is used
within our computer implementation of a new robust ab-initio real-space code based on (i)
density functional theory [3], (ii) finite element method and (iii) environment-reflecting pseu-
dopotentials [4] — this approach to solving Kohn-Sham equations and calculating electronic
states, total energy, Hellmann-Feynman forces and material properties brings a new quality
particularly for non-crystalline, non-periodic structures [5].
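On a toy linear problem, the Pulay (DIIS) building block of such a mixing scheme looks like this (a generic textbook sketch, not the authors' hybrid algorithm):

```python
import numpy as np

def pulay_mix(g, x0, alpha=0.3, m=5, tol=1e-10, maxiter=500):
    """Pulay (DIIS) accelerated fixed-point iteration for x = g(x).

    Keeps up to m previous iterates and residuals r_i = g(x_i) - x_i and
    extrapolates from the combination sum_i c_i (x_i + alpha r_i) whose
    combined residual norm is minimal subject to sum_i c_i = 1."""
    xs, rs = [], []
    x = np.asarray(x0, dtype=float)
    for it in range(maxiter):
        r = g(x) - x
        if np.linalg.norm(r) < tol:
            return x, it
        xs.append(x); rs.append(r)
        if len(xs) > m:
            xs.pop(0); rs.pop(0)
        n = len(xs)
        # Lagrange system for min ||sum c_i r_i||^2 s.t. sum c_i = 1.
        B = np.zeros((n + 1, n + 1))
        B[:n, :n] = [[ri @ rj for rj in rs] for ri in rs]
        B[:n, n] = B[n, :n] = 1.0
        rhs = np.zeros(n + 1); rhs[n] = 1.0
        c = np.linalg.lstsq(B, rhs, rcond=None)[0][:n]
        x = sum(ci * (xi + alpha * ri) for ci, xi, ri in zip(c, xs, rs))
    return x, maxiter

# Toy "SCF" fixed point: x = b + M x with a mild contraction M.
rng = np.random.default_rng(1)
M = 0.4 * rng.standard_normal((20, 20)) / np.sqrt(20)
b = rng.standard_normal(20)
g = lambda x: b + M @ x

x_star, iters = pulay_mix(g, np.zeros(20))
print(iters, np.linalg.norm(g(x_star) - x_star))
```

A hybrid scheme of the kind described above would switch between, or blend, updates of this type and Broyden-style quasi-Newton updates depending on the observed convergence behavior.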

References
1. C. G. Broyden. A class of methods for solving nonlinear simultaneous equations. Math. Comp. 19 (1965),
577-593 .
2. P. Pulay. Improved SCF convergence acceleration. J. Comput. Chem., 3: 556–560 (1982).
doi:10.1002/jcc.540030413.
3. R. M. Dreizler and E. K. U. Gross. Density Functional Theory. Springer-Verlag, 1990.
4. J. Vackář and A. Šimůnek. Adaptability and accuracy of all-electron pseudopotentials. Phys. Rev. B
67 (2003) 125113.
5. J. Vackář and O. Čertı́k and R. Cimrman and M. Novák and O. Šipr and J. Plešek. Finite Ele-
ment Method in Density Functional Theory Electronic Structure Calculations. In ”Advances in the Theory
of Quantum Systems in Chemistry and Physics”, Prog. Theoretical Chem. and Phys. (22), Springer, 2011.

52
Improving the Representation of Solid Ice Mass Flux From
Ice Sheets to Ocean in the Energy Exascale Earth System
Model (E3SM)

Darin Comeau
Los Alamos National Laboratory (LANL)
dcomeau@lanl.gov

Abstract

Icebergs represent approximately half of the mass flux from the Antarctic ice sheet to the
ocean, and yet are poorly represented in Earth System Models (ESMs). Calved icebergs
transport freshwater away from the coast and exchange heat with the ocean, thereby affecting
ocean stratification and circulation, with subsequent indirect thermodynamic effects on the
sea ice system. Icebergs also have a direct effect on sea ice through dynamic interaction, as
well as dispersing land-sourced nutrients, the effects of which impact marine biogeochemistry.
Icebergs typically move in a similar direction as sea ice, although much slower, and hence
are an obstacle for the upstream sea ice field. Sea ice ridges behind icebergs, leaving an open
lead in front of the iceberg, facilitating increased sea ice production. ESMs typically spread
freshwater due to icebergs near the coast at the surface, which causes overly thick coastal sea
ice, thereby blocking coastal polynya formation, suppressing ocean overturning, and reducing
sub-ice-shelf melting. Conversely, iceberg freshwater fluxes deposited farther offshore and
at depth enhance vertical mixing, bringing heat to the surface that locally inhibits sea ice
growth. We are developing a parameterization for icebergs in two frameworks, Lagrangian and
Eulerian, within the new Energy Exascale Earth System Model (E3SM) being developed at
the Department of Energy. The Lagrangian framework will be useful in forecasting trajectories
of particular ‘giant’ (>10 nautical miles) iceberg events, which may have highly localized
impacts on ocean and sea ice, while the Eulerian framework allows us to model a realistic
population of Antarctic icebergs without the computational expense of individual particle
tracking. The icebergs will be embedded into the sea ice and ocean components, which are
based on the unstructured grid framework Model for Prediction Across Scales (MPAS), so as
to represent the physically based iceberg processes as well as deposit iceberg fluxes at depth in
the ocean. Future work will also couple the iceberg model to the land and land-ice models to
model calving fluxes that would otherwise be instantly distributed at the surface of coastal
ocean cells, allowing for an end-to-end representation of solid ice mass flux from the ice sheets
through the climate system.

References
1. D. Comeau and A.K. Turner and E.C. Hunke. An Eulerian Iceberg Model in the Energy Exascale
Earth System Model (E3SM). In preparation.

Heat Modelling Methods Comparison

Miklós Csizmadia, Miklós Kuczmann


Department of Automation, Széchenyi István University, Győr, Hungary
csizikem@maxwell.sze.hu, kuczmann@sze.hu

Abstract

The design of electrical machines is a complex and complicated task. Thermal parameters
are as important in the design process of electrical machines as the electrical and mechanical
parameters. A good thermal model can significantly cut down expenditures and save time
and energy. Nowadays, many design software tools are available (e.g. FEM software), but
their licenses are usually very expensive. In some cases, a hot-spot analysis can give us enough
information about the system's thermal behavior (e.g. for the insulation system), so the
thermal model can be simplified. On the other hand, the Finite Element Method, the Finite
Difference Method, or thermal-electrical analog models [1, 2] give us a more complete approach
to simulating the thermal behavior of the system. The last of these can be advantageous
because freeware tools such as LTSpice can be used for it. Transient or stationary simulations
can be performed. The objective of this work is to compare the different methods, focusing
on practical aspects.

References
1. Jack P. Holman. Heat transfer. McGraw-Hill, New York, 2010.
2. L. Imre and J. Barcza. Villamos gépek melegedése és hűtése. Műszaki könyvkiadó, Budapest, 1982.

An Iterative Phase-Space Explicit Discontinuous Galerkin
Method for Stellar Radiative Transfer in Extended
Atmospheres

Valmor F. de Almeida
University of Massachusetts Lowell, Dept. of Chemical Engineering (Nuclear Program), USA
valmor dealmeida@uml.edu

Abstract

A phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar
radiative transfer problems. It allows for greater adaptivity than competing methods without
sacrificing generality. The method is extensively tested on a spherically symmetric, static,
inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and in-
tensities of scattering agreed with asymptotic values. The exponentially decaying behavior
of the radiative field in the diffusive-transparent transition region, and the forward peaking
behavior at the surface of extended atmospheres were accurately captured. The integrodiffer-
ential equation of radiation transfer is solved iteratively by alternating between the radiative
pressure equation and the original equation with the integral term treated as an energy den-
sity source term. In each iteration, the equations are solved via an explicit, flux-conserving,
discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular
to the characteristic curves so that elemental linear algebraic systems are solved quickly by
sweeping the phase space element by element. Two implementations of a diffusive boundary
condition at the origin are demonstrated wherein the finite discontinuity in the radiation
intensity is accurately captured by the proposed method. This allows for a consistent mecha-
nism to preserve photon luminosity. The method was proved to be robust and fast, and a case
is made for the adequacy of parallel processing. In addition to classical two-dimensional plots,
results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all
distinguishing features of the problem studied.

References
1. V. F. de Almeida. An Iterative Phase-Space Explicit Discontinuous Galerkin Method for Stellar Ra-
diative Transfer in Extended Atmospheres. J. Quant. Spectrosc. Radiat. Transfer 196 (2017) 254-269;
https://doi.org/10.1016/j.jqsrt.2017.04.007.

Quadratic Raviart-Thomas Potentials

Eduardo De Los Santos


CI2 MA, Departamento de Ingenierı́a Matemática, Universidad de Concepción, Chile
esantos@ing-mat.udec.cl

Ana Alonso Rodrı́guez


Dipartimento di Matematica, Università degli Studi di Trento, Italia
ana.alonso@unitn.it

Jessika Camaño
Departamento de Matemática y Fı́sica Aplicadas, Universidad Católica de la Santı́sima
Concepción and CI2 MA, Universidad de Concepción, Chile.
jecamano@ucsc.cl

Francesca Rapetti
Dep. de Mathématiques J.-A. Dieudonné, Univ. Côte d’Azur, Nice, France
frapetti@unice.fr

Abstract

We propose an efficient algorithm for the computation of a function of the Raviart-Thomas
finite element space of degree 2, u_h ∈ RT_{h,2}, with assigned divergence g_h ∈ P_{h,1}, the
latter being the space of piecewise linear finite elements. The algorithm is based on graph techniques.
The key point is to notice that, for very natural bases of RT_{h,2} and P_{h,1}, the matrix associated
with the divergence operator is a reduced incidence matrix of a particular graph. Choosing a
spanning tree of this graph, it is possible to identify an invertible square submatrix of the
divergence matrix and to compute the desired potential u_h. The algorithm is an extension
to the three-dimensional space RT_{h,2} of the one proposed in [2] for Raviart-Thomas finite
elements of degree 1 (see also [1]). This procedure can also be used to construct a basis of
the space of divergence-free Raviart-Thomas finite elements of degree two.
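The graph-theoretic core of the method can be sketched on a toy graph: the columns of the reduced incidence matrix that correspond to a spanning tree form a square submatrix with determinant ±1, so a divergence-type system restricted to tree edges is uniquely solvable. A minimal illustration with hypothetical toy data (not the authors' code):

```python
import numpy as np

# Toy graph: 4 nodes, 5 directed edges (tail, head).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n_nodes = 4

# Full node-edge incidence matrix: -1 at the tail, +1 at the head.
A = np.zeros((n_nodes, len(edges)))
for j, (tail, head) in enumerate(edges):
    A[tail, j], A[head, j] = -1.0, +1.0

# Reduced incidence matrix: drop one "root" node to remove the single
# linear dependence among the rows (the rows of A sum to zero).
A_red = A[1:, :]

# Edges 0, 1, 2 form a spanning tree: the corresponding square
# submatrix is invertible, with determinant +-1 (total unimodularity).
tree = [0, 1, 2]
A_tree = A_red[:, tree]
print(round(abs(np.linalg.det(A_tree)), 6))   # 1.0

# Given target "divergences" g at the non-root nodes, a flux u
# supported on the tree edges with A_red @ u = g is found directly.
g = np.array([1.0, -2.0, 0.5])
u = np.zeros(len(edges))
u[tree] = np.linalg.solve(A_tree, g)
```

In the finite element setting the graph is induced by the mesh and the basis choice, but the mechanism — root removal, tree selection, back-substitution along the tree — is the same.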

References
1. A. Alonso Rodrı́guez and J. Camaño and R. Ghiloni and A. Valli. Graphs, spanning trees and
divergence-free finite elements in domains of general topology. IMA J. Numer. Anal., 37 (2017) 1986–2003.
2. P. Alotto and I. Perugia. Mixed finite element methods and tree-cotree implicit condensation. Calcolo,
36 (1999) 233–248.

56
A Performance-Portable C++ Implementation of
Atmospheric Dynamics

Andy Salinger, Luca Bertagna, Andrew Bradley, Michael Deakin, Oksana Guba,
Daniel Sunderland, Irina Tezaur
Sandia National Laboratories
agsalin@sandia.gov, lbertag@sandia.gov, ambradl@sandia.gov,
mdeakin@sandia.gov, onguba@sandia.gov, dsunder@sandia.gov,
ikalash@sandia.gov

Abstract
In this work, we reimplement the atmospheric dynamical core (HOMME) of Energy Exas-
cale Earth System Model (E3SM) so that a single implementation achieves performance on
a variety of HPC architectures. The dynamical core combines an unstructured-grid spectral
element method on the surface of the sphere with a finite difference discretization for vertical
columns and an explicit time integration algorithm to simulate the flow and transport in the
Earth’s atmosphere. Atmospheric dynamical cores are among the most expensive parts of a
climate model, so the new implementation must be of similar – or improved – performance to
the original version on all production architectures for its adoption to be considered. They are
also highly parallelizable, so the new implementation should also strong-scale well. Finally,
next generation HPC architectures are heterogeneous, and the design of subsequent HPC
architectures is still uncertain, so the code base must be designed with principles from com-
puter architecture to achieve reasonable performance, and must be flexible enough to support
tuning small, performance critical kernels to specific targets. To achieve these goals, the new
implementation uses Kokkos, a C++ programming library for on-node parallelism designed
to be a programming model for performance portability across heterogeneous architectures.
We will present our latest results for the performance and portability of our new ver-
sion of the code on a variety of HPC architectures. With Kokkos, the new implementation
achieves performance comparable to the original Fortran implementation, strong-scales bet-
ter, and gives reasonable performance on next generation systems (such as Xeon Phi and
CUDA) without specifically tuning for them. The new code base is better positioned for
achieving performance on upcoming architectures as well. In addition, in refactoring the
code from Fortran into C++, the new implementation benefits from modern software development
infrastructure, including a larger and safer library of data structures and algorithms,
intrinsically safe design patterns, and more features specifically for improving performance
such as explicit vectorization through intrinsics and compile time computations.

References
1. John M. Dennis and Jim Edwards and Katherine J. Evans and Oksana Guba and Peter
H. Lauritzen and Arthur A. Mirin and Amik St-Cyr and Mark A. Taylor and Patrick H.
Worley. CAM-SE: A scalable spectral element dynamical core for the Community Atmosphere Model .
https://doi.org/10.1177/1094342011428142.

57
On the Distribution of Real Roots in Quadratic and Cubic
Random Polynomials: A Theoretical Numerical Review

Irene Sarahi Del Real Vargas, Maria Esther Grimaldo Reyna, Francisco Javier
Almaguer Martínez
Autonomous University of Nuevo Leon
irene.r.vargas@hotmail.com, megr maac@yahoo.com.mx,
francisco.almaguermrt@uanl.edu.mx

Abstract

The problem of determining the distribution of the roots of random polynomials, where the
coefficients of these polynomials are random variables with some distribution, is an interesting
topic that has commonly been studied from a theoretical point of view. However, it can also be
addressed with numerical tools. In this work, a theoretical-numerical review of the probability
distribution of the zeros of quadratic and cubic random polynomials is presented. The coefficients
of the polynomials are independent real uniform random variables. For the quadratic
polynomial two cases are considered: first, the quadratic coefficient has the value one, and
second, the quadratic coefficient is a uniform random variable. Numerical tests were
performed and the results are consistent with the probability distributions found analytically
and, in general, with the known results of random polynomial theory. For the cubic
polynomial, whose coefficients are independent real uniform random variables, the results of
the numerical tests are analyzed and contrasted with the fact that the real roots preferably
accumulate around ±1.
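The clustering claim for the quadratic case is easy to check by Monte Carlo. The sketch below uses coefficients uniform on (-1, 1) and an arbitrary sample size, not necessarily the choices of this work; it exploits the fact that reversing the coefficient order maps each root x to 1/x while leaving the coefficient distribution invariant, so the median of |root| must be 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# All three coefficients of a*x^2 + b*x + c i.i.d. uniform on (-1, 1).
a, b, c = rng.uniform(-1.0, 1.0, size=(3, n))

disc = b**2 - 4.0 * a * c
real = disc >= 0.0                 # keep only polynomials with real roots
sq = np.sqrt(disc[real])
roots = np.concatenate([(-b[real] + sq) / (2.0 * a[real]),
                        (-b[real] - sq) / (2.0 * a[real])])

# |root| and 1/|root| are identically distributed, so the sample
# median of |root| should sit at 1, consistent with clustering near +-1.
print(round(float(np.median(np.abs(roots))), 1))   # 1.0
```

A histogram of `np.abs(roots)` (or of the signed roots) makes the accumulation around ±1 visible directly.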

References
1. C. P. Hughes and A. Nikeghbali. The Zeros Of Random Polynomials Cluster Uniformly Near The
Unit Circle. Compositio Math. 144 (2008) 734–746.

58
Growth of Crystals in Adiabatic Crystallizers Depending on
the Characteristics of the Raw Material

Carlos Destenave
Post graduate student
carlos destenave@penoles.com.mx

Abstract

Adiabatic crystallizers with thermo-compression are of great importance for the crystallization
of Glauber salt, since the rapid loss of solvent from the solution (adiabatic evaporation)
achieves the cooling and therefore the supersaturation. Primary nucleation
occurs by the effect of supersaturation; that is to say, the condensation of a supersaturated
vapor of the liquid phase is only possible after the appearance of microscopic droplets called
nuclei of condensation. Heterogeneous nucleation models are based on a simple dynamic density
functional theory (the phase-field crystal model) for homogeneous and heterogeneous
nucleation. The purpose of the project (a mathematical and empirical model) is to establish the
process conditions needed to obtain a specific crystalline structure. The experimental results agree
with the characteristics of the proposed models, so these models can be used for a regional
solution with a certain degree of confidence.

References
1. J. W. Mullin. Crystallization. 4th edition, 2001, pp. 181-284.

A Shared-Memory Parallel Multi-Mesh Fast Marching
Method for Full and Narrow Band Re-Distancing

Georgios Diamantopoulos, Josef Weinbub


Christian Doppler Laboratory for High Performance TCAD, Institute for Microelectronics,
TU Wien, Austria
diamantopoulos@iue.tuwien.ac.at, weinbub@iue.tuwien.ac.at

Andreas Hössinger
Silvaco Europe Ltd., United Kingdom
andreas.hoessinger@silvaco.com

Siegfried Selberherr
Institute for Microelectronics, TU Wien, Austria
selberherr@iue.tuwien.ac.at

Abstract
A common problem arising in expanding front simulations is to restore the signed-distance
field property of a discretized domain (i.e., a mesh), by calculating the minimum distance
of mesh points to an interface. This problem is referred to as re-distancing and a widely
used method for its solution is the Fast Marching Method (FMM) [1]. In many cases a
particularly high accuracy in specific regions around the interface is required. There, meshes
with a finer resolution are defined in the regions of interest, enabling the problem to be solved
locally with a higher accuracy. Additionally, this gives rise to coarse-grained parallelization,
as such meshes can be re-distanced in parallel. An efficient parallelization approach, however,
has to deal with interface-sharing meshes, load-balancing issues, and must offer reasonable
parallel efficiency for narrow band and full band re-distancing. We present a parallel Multi-
Mesh FMM to tackle these challenges: Interface-sharing meshes are handled in a similar
way as the inter-subdomain communication mechanism presented in [2]. Parallelization is
introduced by applying the pool of tasks concept, implemented using OpenMP tasks. Meshes
are processed by OpenMP tasks as soon as threads become available, efficiently balancing
out the computational load of unequally sized meshes over the entire computation. Our
investigations cover parallel performance of full and narrow band re-distancing as well as
load-balancing capabilities. The resulting algorithm shows a good parallel efficiency, if the
problem consists of significantly more meshes than the available processor cores.
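For readers unfamiliar with the re-distancing step itself, a minimal serial first-order FMM on a single uniform 2-D grid looks roughly as follows. This is a textbook sketch after [1], without the multi-mesh decomposition or OpenMP task machinery of this work:

```python
import heapq
import numpy as np

def fast_marching(ny, nx, sources):
    """First-order Fast Marching on a uniform grid (unit spacing, unit
    speed): computes the distance-like arrival time T with |grad T| = 1,
    expanding outward from the source points via a min-heap."""
    T = np.full((ny, nx), np.inf)
    accepted = np.zeros((ny, nx), dtype=bool)
    heap = [(0.0, i, j) for (i, j) in sources]
    for (_, i, j) in heap:
        T[i, j] = 0.0
    heapq.heapify(heap)
    while heap:
        _, i, j = heapq.heappop(heap)
        if accepted[i, j]:
            continue                      # stale heap entry
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx and not accepted[a, b]:
                # Upwind values in the x- and y-directions
                tx = min(T[a, b - 1] if b > 0 else np.inf,
                         T[a, b + 1] if b < nx - 1 else np.inf)
                ty = min(T[a - 1, b] if a > 0 else np.inf,
                         T[a + 1, b] if a < ny - 1 else np.inf)
                lo, up = min(tx, ty), max(tx, ty)
                if up - lo >= 1.0 or up == np.inf:
                    t_new = lo + 1.0      # one-sided update
                else:                     # two-sided quadratic update
                    t_new = 0.5 * (lo + up + np.sqrt(2.0 - (up - lo)**2))
                if t_new < T[a, b]:
                    T[a, b] = t_new
                    heapq.heappush(heap, (t_new, a, b))
    return T

T = fast_marching(5, 5, [(0, 0)])
print(T[0, 4])   # 4.0 -- exact along a grid line
```

The multi-mesh parallelization of the abstract runs instances of this kind of solver on many meshes concurrently, with the interface-sharing and load-balancing logic layered on top.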

References
1. J. A. Sethian. Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational
Geometry, Fluid Mechanics, Computer Vision, and Materials Science. 2nd ed, Cambridge University Press,
1999.
2. J. Yang and F. Stern. A Highly Scalable Massively Parallel Fast Marching Method for the Eikonal
Equation. Journal of Computational Physics, vol. 332, pp. 333-362, 2017.

Robust Discrete Laplacians

Antonio DiCarlo
CECAM-IT-SIMUL Node
adicarlo@mac.com

Abstract

Computing an approximate solution to the Poisson problem on an n-dimensional Riemannian
manifold (with boundary) is at the heart of a host of numerical applications. Thus, the fact
that no satisfactory discrete approximation to the Laplace-de Rham operators ∆_k is generally
available for any k ≤ n is highly annoying. In a nice paper published ten years ago, Wardetzky
et al. [1] proved that, on a general mesh, it is impossible to construct a discrete Laplacian
sharing four important structural properties possessed by the differential Laplace-de Rham
operator ∆_0 acting on 0-forms (i.e., scalar fields), namely: (i) linearity, (ii) symmetry, (iii)
positivity, and (iv) locality. Of course, any decent approximation to ∆_0 will asymptotically
recover these properties in the limit of infinite mesh refinement. However, one would like to
preserve (most of) them on any coarse mesh.
Building on ideas introduced by DiCarlo et al. [2], I argue that the one requirement to
be dropped—or better, relaxed—is locality, since this renunciation allows us to produce very
robust approximations to ∆_k for all k’s, tolerant of poor-quality meshes. This robustness is
especially beneficial when coping with solid models extracted from big geometric data [3] or
obtained by merging two or more cellular complexes [4]: even if topologically correct, these
meshes typically have inferior metric properties.
Interestingly, the approach in [2] combines well also with the idea of constructing localised
bases by spectral decomposition of a modified Laplacian operator, recently put forward by
Melzi et al. [5].

References
1. M. Wardetzky and S. Mathur and F. Kaelberer and E. Grinspun. Discrete Laplace operators:
No free lunch. Symposium on Geometry Processing. Barcelona, Spain, July 4–6, 2007, Eurographics As-
sociation (2007), 33–37, ISBN: 978-3-905673-46-3.
2. A. DiCarlo and F. Milicchio and A. Paoluzzi and V. Shapiro. Discrete physics using metrized
chains. Symposium on Solid and Physical Modeling. San Francisco, CA, October 5–8, 2009, ACM (2009),
135–145, ISBN/ISSN: 978-1-60558-711-0.
3. A. Paoluzzi and A. DiCarlo and F. Furiani and M. Jiřı́k. CAD models from medical images using
LAR. Comput. Aided Des. Appl. 13:6, 747–759 (2016).
4. A. Paoluzzi and V. Shapiro and A. DiCarlo. Regularized arrangements of cellular complexes.
ArXiv:1704.00142v4 [cs.CG] 25 Aug 2017.
5. S. Melzi and E. Rodolà and U. Castellani and M.M. Bronstein. Localized manifold harmonics
for spectral shape analysis. ArXiv:1707.02596v2 [cs.GR] 2 Nov 2017.

T-H-M-C Modelling of Geothermal Processes

Alain Dimier, Fabian Limberger, Roman Zorn, Olaf Ukelis


Eifer Institute, Karlsruhe Institute of Technology
alain.dimier@eifer.org, fabian.limberger@student.kit.edu,
roman.zorn@eifer.org, olaf.ukelis@eifer.org

Abstract
In geothermal or underground energy storage processes, characterizing the evolution of the
reservoir rock over time, as induced by the circulation of an aqueous phase, is mandatory
in order to optimize and control the process. For reservoir rock characterization,
experiments such as percolation benches can give hints on how the reservoir rock will behave
once subjected to an exogenous fluid circulation. For deep geothermal fluids, most of the time,
apart from injection boundary conditions, which can be accurately defined, some uncertainties
linked to reservoir characterization do exist, possibly induced by the way the reservoir is
processed prior to being put into production. Considering geothermal processes, modelling
can therefore be regarded as the most efficient way to enhance the understanding of the
wells' hydraulics and of their thermal and chemical behaviour. We thus have to address
multiphysical processes whose modelling is sometimes referred to as T-H-M-C modelling. We
will present here the way such processes are tackled in the frame of the etumos platform, where
open source tools like PHREEQC [1] for aqueous geochemistry, and Elmer [2] or OpenFOAM [3]
to handle physical processes like hydraulics, transport of aqueous species, heat transfer and
mechanics, are coupled in a sequential way. Prior to their use within the platform, each of these
tools is wrapped to become a Python shared object with ad-hoc methods enabling
communication between the different tools and with the user. As an example, transport of aqueous
ions induces mineralogy variations impacting porosity, and these porosity variations have
to be sent back to the hydraulic and transport solvers. Python acts here as the glue between
those shared objects and is used all along the modelling, from the definition of the physics
to handle, via the Python-formulated data model, to the post-processing. We will present
two applications of the coupling based on a direct comparison with analytical solutions and
experimental results from the literature. They will be commented on and serve as an
illustration of the phenomenology that can presently be modelled; furthermore, they will serve
as an assessment of algorithm performance in the field of Enhanced Geothermal Systems (EGS).
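The sequential coupling pattern described above can be condensed into a toy operator-splitting loop: a transport step for a dissolved species, a chemistry step that converts dissolution into a porosity update, and a feedback of that update to the transport operator. All names and constitutive laws below are illustrative placeholders, not the etumos platform API:

```python
import numpy as np

nx, dt, dx, v = 50, 0.5, 1.0, 1.0
c = np.zeros(nx)                # aqueous concentration
phi = np.full(nx, 0.3)          # porosity
k_diss = 0.01                   # toy dissolution-rate constant

for step in range(100):
    # Transport step: explicit upwind advection, inflow boundary c = 1.
    c_in = np.concatenate([[1.0], c[:-1]])
    c += dt * v / dx * (c_in - c)
    # Chemistry step: dissolution opens pore space in proportion to the
    # local solute concentration, up to a cap.
    phi = np.minimum(phi + k_diss * c * dt, 0.6)
    # Feedback: in a full T-H-M-C loop the porosity change would update
    # the transport (and hydraulic) operators; v is held fixed here.
```

In the platform, each step would be a call into a wrapped solver (geochemistry, transport, heat, mechanics) exchanging fields through Python, rather than an inline update.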

References
1. D.L. Parkhurst and C.A.J.Appelo. PHREEQC version 3–A. 2013 book 6, chap. A43, 497 p., available
only at http://pubs.usgs.gov/tm/06/a43.
2. Peter Råback and Mika Malinen. Overview of elmer. CSC – IT Center for Science 2017.
3. H. G. Weller and G. Tabor and H. Jasak and C. Fureby. A tensorial approach to computational
continuum mechanics using object-oriented techniques. COMPUTERS IN PHYSICS, VOL. 12, NO. 6,
NOV/DEC 1998.

Enhancement of the Localization Precision of RTLS Used in
the Intelligent Transportation System in Suburban Areas

Marzena Banach
Institute of Architecture and Spatial Planning, Poznan University of Technology, ul.
Nieszawska 13C, 61-021 Poznan, Poland
marzena.banach@erba.com.pl

Rafal Dlugosz
UTP University of Science and Technology, Faculty of Telecommunication, Computer
Science and Electrical Engineering, ul. Kaliskiego 7, 85-796, Bydgoszcz, Poland
rafal.dlugosz@gmail.com

Abstract

In this paper, we focus on the problem of the localization accuracy in real-time locating
systems (RTLS) that can be used for precise positioning of the autonomous cars on the road.
Systems of this type will be based on a concept of ad-hoc networks composed of moving
objects (vehicles) and static devices, mounted in the urban / road infrastructure (RSE –
road side equipment). The vehicle-to-infrastructure (V2I) communication between the nodes
of the network will allow exchanging data, relevant from the safety point of view [1, 2]. It is
anticipated that in the future the V2I technology will become one of the pillars of the
so-called Intelligent Transportation Systems (ITS) used in Smart Cities, as well as in suburban
areas.
In a varying road environment, one can expect different densities of the RSE. The spatial
distribution of these devices may impact the localization precision of the overall system. In
suburban areas, the density will be smaller. As a result, multiple V2I communication sessions
will be required to cross-verify the calculated distances between the vehicles and the RSE
devices. In this paper we present techniques that may enhance the estimation of the
distances in the presence of different noises. The localization uncertainty may depend on
different factors, such as fluctuations of the environment temperature, the precision of on-board
sensors mounted in the cars, etc. [3].
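One reason repeated V2I sessions help is elementary: averaging N independent noisy range measurements shrinks the standard error by a factor of sqrt(N). A toy illustration with hypothetical numbers (not the techniques of this paper):

```python
import numpy as np

rng = np.random.default_rng(1)
true_d = 42.0     # metres, hypothetical vehicle-to-RSE distance
sigma = 0.5       # hypothetical per-measurement ranging noise (metres)

# One measurement per trial vs. the mean of 16 measurements per trial.
single = true_d + sigma * rng.standard_normal(10_000)
averaged = (true_d + sigma * rng.standard_normal((10_000, 16))).mean(axis=1)

# Standard deviation ratio should be close to sqrt(16) = 4.
print(round(single.std() / averaged.std()))   # 4
```

Real RTLS noise is of course not i.i.d. Gaussian (multipath, temperature drift, sensor bias), which is why cross-verification across sessions and devices is needed beyond plain averaging.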

References
1. Bai S.. US-EU V2V V2I Message Set Standards Collaboration. Honda R&D Americas, Inc., 12 December
2013.
2. Dongyao Jia and Dong Ngoduy. Enhanced cooperative car-following traffic model with the combina-
tion of V2V and V2I communication. Elsevier, Transportation Research Part B: Methodological, Vol.90,
DOI: 10.1016/j.trb.2016.03.008, s.172-191, August 2016.
3. Hassan O. and Adly I. and Shehata K.A.. Vehicle Localization System based on IR-UWB for V2I
Applications. 8th International Conference on Computer Engineering & Systems (ICCES), November
2013.

Goal-oriented Anisotropic Mesh Adaptation Method for
Linear Convection-diffusion-reaction Problems

Vit Dolejsi
Charles University, Prague
dolejsi@karlin.mff.cuni.cz

Abstract

We deal with the numerical solution of the linear convection-diffusion-reaction equation using the
discontinuous Galerkin method on anisotropic triangular grids. We derive a posteriori goal-oriented
error estimates taking into account the anisotropy of mesh elements. We start from
the standard dual weighted residual (DWR) formula, and the “weight” terms are further
estimated by a technique from [1]. The higher-order reconstruction is a key aspect of goal-oriented
estimation and adaptation. We use a least-squares reconstruction from [2] which works
reasonably well for all tested polynomial approximation degrees. The resulting error estimates are
employed in the anisotropic mesh adaptation algorithm, where the optimal anisotropy of each
mesh element is sought by a simple adaptive iterative algorithm. The efficiency, accuracy and
robustness of the mesh adaptation algorithm are demonstrated by several numerical experiments.
Finally, an extension to the hp-variant is presented.

References
1. V. Dolejsi. Anisotropic hp-adaptive method based on interpolation error estimates in the Lq -norm. Appl.
Numer. Math., 82 (2014), pp. 80–114.
2. V. Dolejsi and G. May and F. Roskovec and P. Solin. Anisotropic hp-mesh optimization technique
based on the continuous mesh and error models. Comput. Math. Appl., 74 (2017), pp. 45–63.

Coupling Ultrasonic Wave Propagation With Fluid-Structure
Interaction Problem

Bhuiyan Shameem Mahmood Ebna Hai, Markus Bause


Chair of Numerical Mathematics, Faculty of Mechanical Engineering, Helmut Schmidt
University - University of the Federal Armed Forces Hamburg
ebnahaib@hsu-hh.de, bause@hsu-hh.de

Abstract

In the present work, we propose a concept of coupling ultrasonic wave propagation with
the fluid-structure interaction (FSI) problem. First, we investigate the ultrasonic wave propagation
in fluid and solid and at their interface (the WpFSI problem). Further, we extend the study to
account for the fluid-structure interactions, where the associated coupling is one-directional.
The resulting model is referred to as the extended fluid-structure interaction (eXFSI) problem.
In contrast to eXFSI, WpFSI is a strongly coupled problem, which comprises the evolution of
acoustic and elastic waves. The ultimate contribution of the present work is the development
of an efficient and credible solution to the eXFSI problem, which we discuss in great detail. The
numerical solution is based on a combination of Finite Element and Finite Difference methods.
Besides, we follow the monolithic approach, where at each time step the solution of the FSI
problem provides boundary conditions for the WpFSI. An application example discussed here
is the computational support of on-line and off-line Structural Health Monitoring (SHM) systems
for composite materials and lightweight structures.

References
1. B.S.M. Ebna Hai and M. Bause. Finite Element Model-based Structural Health Monitoring (SHM)
Systems with Fluid-Structure Interaction (FSI) Effect. In proceedings of: the 11th International Workshop
on Structural Health Monitoring 2017. In: F-K. Chang and F. Kopsaftopoulos (Ed.), “Structural Health
Monitoring 2017: Real-Time Material State Awareness and Data-Driven Safety Assurance”, DEStech
Publications Inc., USA, Vol. 1, pp 580–587, 2017..
2. B.S.M. Ebna Hai and M. Bause and P. Kuberry. Finite Element Approximation of the eXtended
Fluid-Structure Interaction (eXFSI) Problem. In proceedings of: the ASME Fluids Engineering Division
Summer Meeting, Vol. 1A, No. FEDSM2016–7506, pp V01AT11A001/1–12, Washington, D.C., USA, July
10–14, 2016..
3. B.S.M. Ebna Hai and M. Bause. Finite Element Approximation of Fluid-Structure Interaction (FSI)
Problem with Coupled Wave Propagation. In proceedings of: the 88th GAMM Annual Meeting of the In-
ternational Association of Applied Mathematics and Mechanics 2017. In: PAMM - Proceedings in Applied
Mathematics and Mechanics, WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim, Vol. 17, Issue 1, pp
511–512, 2018..
4. B.S.M. Ebna Hai and M. Bause. Finite Element Model-based Structural Health Monitoring (SHM)
Systems for Composite Material under Fluid-Structure Interaction (FSI) Effect. In proceedings of: the 7th
European Workshop on Structural Health Monitoring 2014. In: The e-Journal of Nondestructive Testing
& Ultrasonics, NDT.net issue, Vol. 20, Issue 2 (Feb 2015), pp 1380–1387, 2015..

Non-uniform Grid in Preisach Model for Soft Magnetic
Materials

Jakub Eichler, Miroslav Novák, Miloslav Košek


Technical University of Liberec
jakub.eichler@tul.cz, miroslav.novak@tul.cz, miloslav.kosek@tul.cz

Abstract
Very efficient hysteresis modeling by the Preisach model in its discrete presentation uses a two-dimensional
grid with ideal dipoles (hysterons) in its nodes [1]. The complete description of the material's
magnetic properties is given by the dipole moments that form the weighting function, defined
on the upper triangular part of the grid. The hysteresis loop is derived from the weighting function
in a different manner for excitation field increase and decrease: strictly, an increasing excitation
switches other dipoles, and in a different order, than a decreasing field. The experimental
derivation of the weighting function requires an extended experiment that usually uses first
order reversal curves (FORC): a measurement that starts at the negative saturation level and
measures individual hysteresis loops for increasing excitation amplitude up to symmetric positive
saturation. Their decreasing reversal parts are then processed by partial derivatives with respect to
both field strengths. Usually a uniform grid is used for the weighting function, and a linear
excitation amplitude increase for the measurement. However, the examined grain-oriented steel
has a weighting function with a very sharp maximum and long saturation regions. In the case of
a uniform grid these regions contribute to the results by a small but non-negligible amount. A
better solution should be the use of a non-uniform grid [2]. In order to analyze the errors due to
this simplification, the simplest non-regular grid with only two lattice constants was
used. A simple analytical weighting function of probability-density type was selected and
harmonic excitation applied. The weighting function maximum is close to the main diagonal
of the Preisach triangle. All the practical differences due to the grid reduction were very small.
The comparison of the time response near the saturation region shows that the only differences are the
coarser steps on the magnetic flux waveform; the shape and position of the curve are not
affected. The use of a non-uniform grid is important for experiments, since the measurement
time is shortened several times. The same speed-up is valid for calculations
when the simplest robust algorithm in MATLAB is used. If a very sophisticated algorithm is
invented, the calculation speed increases about a thousand times. An efficient non-uniform grid
is, for instance, a grid with the same flux density steps. Extended work in this field is
now in progress.
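The discrete Preisach machinery referred to above can be condensed into a few lines. This sketch uses a uniform grid and uniform unit hysteron weights, whereas a real model would use the FORC-measured weighting function (uniform grid or not):

```python
import numpy as np

def preisach_response(h_wave, levels):
    """Discrete scalar Preisach model: hysterons on the triangular grid
    alpha >= beta switch up when H >= alpha and down when H <= beta;
    the magnetization is the mean of the hysteron states (unit weights
    here instead of a measured weighting function)."""
    alpha, beta = np.meshgrid(levels, levels, indexing="ij")
    tri = alpha >= beta                    # Preisach triangle
    state = -np.ones_like(alpha)           # start in negative saturation
    m = []
    for H in h_wave:
        state[tri & (alpha <= H)] = +1.0   # field reached alpha: switch up
        state[tri & (beta >= H)] = -1.0    # field reached beta: switch down
        m.append(state[tri].mean())
    return np.array(m)

levels = np.linspace(-1.0, 1.0, 41)
H = np.concatenate([np.linspace(-1, 1, 100),    # ascending branch
                    np.linspace(1, -1, 100)])   # descending branch
M = preisach_response(H, levels)

# Hysteresis: at H ~ 0 the ascending and descending branches differ.
print(M[0])                   # -1.0 (negative saturation)
print(M[50] < 0.0 < M[150])   # True
```

A non-uniform grid amounts to replacing `levels` by non-equidistant switching thresholds (and weighting the mean accordingly), which is exactly the reduction studied in the abstract.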

References
1. G. Bertotti and I. Mayergoyz (Eds.). The Science of Hysteresis, Vols. 1-3. 1st ed., Elsevier,
2006. ISBN 978-0-12-369431-7, Chapter 3.
2. P. Postolache and M. Cerchez and L. Stoleriu and A. Stancu. ”Experimental evaluation of the
Preisach distribution for magnetic recording media”. IEEE Transactions on Magnetics, vol. 39, no. 5, pp.
2531-2533, Sept. 2003.

Performance Engineering for Tall & Skinny Matrix
Multiplication Kernels on GPUs

Dominik Ernst
Erlangen Regional Computing Center, Germany
dominik.ernst@fau.de

Abstract

Block Vector Algorithms, i.e. algorithms that are formulated to operate on a matrix consist-
ing of several vectors, have been shown to be useful in the context of Eigenvalue Solvers [1],
where they have both numerical and performance benefits. Block Vectors are so-called Tall
& Skinny Matrices (TSM), for which the standard GEMM implementations used in the
vendor libraries fail to deliver good performance. We therefore introduced
specialized TSM matrix-matrix multiplication operations into GHOST (the General Hetero-
geneous Sparse Matrix Toolkit [2], a sparse linear algebra kernel library) to enable efficient
Block Vector computations. In this work, we show several implementation approaches for
TSM matrix multiplication on GPUs, differing mostly by the way work is mapped to the
hardware. Extensive performance modeling is used to analyze and explain the approaches’
performances by identifying key factors like amount of parallelism, access patterns, limiting
data paths or reuse opportunities.
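A quick roofline-style estimate shows why tall & skinny multiplications are hard to run fast: for C = AᵀB with A of size N×m and B of size N×n (N ≫ m, n), the arithmetic intensity is independent of N and small for skinny matrices, so the kernel is bound by memory bandwidth rather than by compute. The sketch below is an illustration of this estimate, not code from GHOST.

```python
def tsm_intensity(N, m, n, bytes_per_word=8):
    """Arithmetic intensity (flop/byte) of C = A^T B, A: N x m, B: N x n.

    Assumes double precision and that A and B are each streamed from
    memory exactly once; the N-independent result C is neglected.
    """
    flops = 2.0 * N * m * n                   # one multiply-add per entry triple
    bytes_min = bytes_per_word * N * (m + n)  # minimum traffic for A and B
    return flops / bytes_min

# For m = n = 4 the kernel is strongly memory bound:
I_skinny = tsm_intensity(10**7, 4, 4)    # 2*4*4 / (8*8)  = 0.5 flop/byte
I_square = tsm_intensity(10**7, 64, 64)  # 2*64*64/(8*128) = 8 flop/byte
```

With modern GPUs delivering tens of flops per byte of bandwidth, an intensity of 0.5 flop/byte means the achievable performance is set entirely by the memory data path, which is why access pattern and reuse dominate the analysis.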

References
1. Melven Röhrig-Zöllner and Jonas Thies and Moritz Kreutzer and Andreas Alvermann and
Andreas Pieper and Achim Basermann and Georg Hager and Gerhard Wellein and Holger
Fehske. Increasing the Performance of the Jacobi–Davidson Method by Blocking. In: SIAM Journal on
Scientific Computing 37.6 (2015), pp. C697–C722. doi: 10.1137/140976017.
2. Moritz Kreutzer et al. GHOST: Building Blocks for High Performance Sparse Linear Algebra on
Heterogeneous Systems. In: International Journal of Parallel Programming (2016), pp. 1–27. issn: 1573-
7640. doi: 10.1007/s10766-016-0464-z.

67
Effects of Coupling on Firing Patterns in Thermally Sensitive
Neurons

Gerardo Escalera Santos, O Dı́az-Hernández, A Farrera, E Ramı́rez-Álvarez


Autonomous University of Chiapas
gescalera.santos@gmail.com, orlandodiaz 22@hotmail.com,
agustin96fm@gmail.com, elizethra@yahoo.com

Javier Morales Castillo, Mario Alberto Aguirre-López


Autonomous University of Nuevo León
tequilaydiamante@yahoo.com.mx, marioal1906@gmail.com

Pablo Padilla-Longoria
Autonomous University of Mexico
pabpad@gmail.com

Abstract

Mathematical models have been very useful in the study of nonlinear coupled oscillators. These
models and their interactions have revealed new phenomena and generated advances
in the understanding of biological systems. In this work we numerically study the effect of
coupling strength on the firing patterns of a model of six coupled thermosensitive neurons.
We select the membrane current I as the coupling variable and consider different strengths
and topologies of the network. The emerging dynamical behavior among the oscillators is analyzed
using standard measures such as the interspike interval and the order parameter R. Finally, we
wish to point out that our findings may contribute to enhancing our understanding of one of
the most fascinating problems in biology, namely the emergence of collective behaviors
induced by coupling in complex systems.
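The order parameter R mentioned above can be illustrated with a minimal set of six coupled phase oscillators — a deliberately simplified stand-in for the thermosensitive neuron model; the coupling constant, frequencies and integration settings are assumed values.

```python
import numpy as np

def order_parameter(theta):
    # Kuramoto order parameter R in [0, 1]; R -> 1 means full synchrony.
    return np.abs(np.mean(np.exp(1j * theta)))

def simulate(K, N=6, steps=2000, dt=0.01, seed=0):
    """Euler integration of N all-to-all coupled phase oscillators."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(1.0, 0.1, N)          # heterogeneous natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)
    for _ in range(steps):
        # pairwise coupling term (1/N) * sum_j sin(theta_j - theta_i)
        coupling = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta += dt * (omega + K * coupling)
    return order_parameter(theta)

R_weak = simulate(K=0.0)     # uncoupled: phases drift apart
R_strong = simulate(K=2.0)   # strong coupling: near-synchronous state
```

The same diagnostic applies to the neuron network: computing R from the spike phases of the six cells quantifies how collective behavior emerges as the coupling strength grows.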

References
1. T. De L. Prado and S.R. Lopes and C.A.S. Batista and J. Kurths and R.L. Viana. Synchronization
of bursting Hodgkin-Huxley-type neurons in clustered networks. Physical Review E 90, 032818 (2014).
2. U. Feudel and A. Newman and X. Pei and W. Wojtenek and H. Braun and M. Huber and F.
Moss. Homoclinic bifurcation in a Hodgkin-Huxley model of thermally sensitive neurons. Chaos 10, 231
(2000).

68
A Comparative Study Between D2Q5 and D2Q9 Lattice
Boltzmann Scheme for Mass Transport Phenomena in Porous
Media

Mayken Espinoza-Andaluz, Ayrton Moyon


Escuela Superior Politécnica del Litoral, ESPOL, Facultad de Ingenierı́a Mecánica y
Ciencias de la Producción, Centro de Energı́as Renovables y Alternativas, Campus Gustavo
Galindo Km. 30.5 Vı́a Perimetral, P.O. Box 09-01-5863, Guayaquil, Ecuador
masespin@espol.edu.ec, ajmoyon@espol.edu.ec

Martin Andersson
Lund University, Department of Energy Sciences, Lund, Sweden
martin.andersson@energy.lth.se

Abstract

Characterization of the different transport phenomena through porous media represents a key
factor in improving the properties of materials for several applications in fields such
as the geological sciences, energy sciences or biological applications. Considering the difficulty
of carrying out experimental studies in porous media, these phenomena are more feasibly
described when computational tools are applied to compute the parameters involved. In this
scenario, the Lattice Boltzmann Method (LBM) appears as a powerful tool to solve different
transport phenomena at the micro- and meso-scale [1].
The fluid flow behavior, analyzed with LBM, is commonly solved using the D2Q9 scheme.
This scheme has been shown to give reliable solutions for fluid flow problems [2]. On the other hand,
mass transport phenomena are usually recommended to be solved using the D2Q5 scheme.
However, there is no comparative, detailed and complete study of the impact of using these
schemes in porous media. The purpose of this study is to analyze the impact of using
the D2Q5 and D2Q9 LBM schemes in the computation of mass concentration through porous
media with different geometrical characteristics. Parameters such as porosity and tortuosity
are also considered in this study.
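The difference between the two stencils is easy to state in code: D2Q5 uses the rest velocity plus the four axis directions, while D2Q9 adds the four diagonals. The small sketch below (standard weight sets assumed) verifies that both satisfy the usual moment conditions with lattice sound speed c_s² = 1/3.

```python
import numpy as np

# D2Q5: rest particle plus the 4 axis directions (advection-diffusion stencil).
c5 = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])
w5 = np.array([1/3] + 4 * [1/6])

# D2Q9 adds the 4 diagonal directions (the standard fluid-flow stencil).
c9 = np.array(list(c5) + [[1, 1], [-1, 1], [1, -1], [-1, -1]])
w9 = np.array([4/9] + 4 * [1/9] + 4 * [1/36])

def moments(c, w):
    """Zeroth, first and second velocity moments of a weight set."""
    m0 = w.sum()                               # should be 1
    m1 = (w[:, None] * c).sum(axis=0)          # should be (0, 0)
    m2 = np.einsum("q,qi,qj->ij", w, c, c)     # should be (1/3) * I
    return m0, m1, m2
```

Both stencils reproduce the moments needed for isotropic diffusion, which is why D2Q5 suffices for mass transport; D2Q9 additionally recovers the higher moments required for the Navier-Stokes stress tensor.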

References
1. S. Succi. The Lattice Boltzmann Equation: For Fluid Dynamics and Beyond. Oxford University Press.
(2001).
2. S. Chen and G.D. Doolen. Lattice Boltzmann Method for Fluid Flows.. Annual Review of Fluid
Mechanics, 30(1), 329-364. (1998).

69
A Permeability Correlation for a Medium Generated With
Delaunay Tessellation and Voronoi Algorithm by Using
OpenPNM

Mayken Espinoza-Andaluz, Angel Encalada


Escuela Superior Politécnica del Litoral, ESPOL, Facultad de Ingenierı́a Mecánica y
Ciencias de la Producción, Centro de Energı́as Renovables y Alternativas, Campus Gustavo
Galindo Km. 30.5 Vı́a Perimetral, P.O. Box 09-01-5863, Guayaquil, Ecuador
masespin@espol.edu.ec, angaenca@espol.edu.ec

Abstract

Describing different transport phenomena through porous media at the micro- and mesoscale
represents a convenient pre-step to the experimental characterization of materials. Porous media
properties are required in several applications such as layers in fuel cells, heat exchangers and
the geological sciences. The purpose of the present work is to propose a permeability correlation
for a digitally created 3D porous medium generated by means of Delaunay tessellation (DT)
and the Voronoi algorithm for the pore positions and throat characteristics, respectively. OpenPNM,
an open-source pore network modeling package, has proven to be a powerful tool to compute
several transport phenomena in porous media applications [1]. The pore positions are kept
invariant while the throat diameter connecting the pores is varied over a selected range in
order to analyze the impact of the throat diameter on the permeability.
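The throat-diameter dependence studied here can be illustrated with a toy pore chain in plain numpy — not the OpenPNM API — in which Hagen-Poiseuille throat conductances are combined in series and converted to a Darcy permeability. All dimensions below are assumed for illustration.

```python
import numpy as np

def network_permeability(throat_d, throat_len, mu=1e-3, area=1e-8):
    """Darcy permeability of a chain of cylindrical throats in series.

    throat_d, throat_len: arrays of throat diameters and lengths [m];
    mu: fluid viscosity [Pa s]; area: sample cross-section [m^2].
    """
    # Hagen-Poiseuille hydraulic conductance of each throat: g = pi d^4 / (128 mu L)
    g = np.pi * throat_d**4 / (128.0 * mu * throat_len)
    g_tot = 1.0 / np.sum(1.0 / g)       # series combination (harmonic sum)
    L_tot = np.sum(throat_len)
    dP = 1.0                            # unit pressure drop
    Q = g_tot * dP
    # Invert Darcy's law Q = K * area * dP / (mu * L_tot) for K
    return Q * mu * L_tot / (area * dP)

d = np.full(10, 5e-6)                           # uniform 5-micron throats
K1 = network_permeability(d, np.full(10, 2e-5))
K2 = network_permeability(2 * d, np.full(10, 2e-5))  # double all diameters
```

Because the conductance scales with d⁴, doubling the throat diameters raises the permeability sixteen-fold, which is the kind of trend a fitted correlation over a DT/Voronoi network is expected to capture.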

References
1. J. Gostick and M. Aghighi and J. Hinebaugh and T. Tranter and M.A. Hoeh and H. Day and
B. Spellacy and M.H. Sharqawy and A. Bazylak and A. Burns and W. Lehnert . OpenPNM:
a pore network modeling package. . Computing in Science & Engineering, 18(4), pp.60-74. 2016.

70
Optimal Control for the MHD Flow and Heat Transfer With
Variable Viscosity in a Square Duct

Cansu Evcin, Ömür Uğur


Institute of Applied Mathematics, Middle East Technical University
cbilgir@metu.edu.tr, ougur@metu.edu.tr

Münevver Tezer-Sezgin
Department of Mathematics, Middle East Technical University
munt@metu.edu.tr

Abstract
The direct and optimal control solution of the laminar, fully developed, steady MHD flow
of an incompressible, electrically conducting fluid in a duct is considered together with the
heat transfer. The flow is driven by a constant pressure gradient and an external uniform
magnetic field. The fluid viscosity is either temperature dependent, varying exponentially, or
it depends on the flow in the case of a power-law fluid; the viscous and Joule dissipations
are taken into consideration. The coupled nonlinear set of momentum and energy equations
is solved by the Finite Element Method, with Newton's method treating the nonlinearity.
In this respect, direct FEM solutions are obtained for various values of the
problem parameters to ensure the sound structure of the underlying scheme. The FEM results
obtained in this study are not only in good agreement with the results in [1] but also extend
them. The aim of this study is to investigate the problem of controlling the steady flow by using
the physically significant parameters of the problem as control variables: the Hartmann number
(Ha), Brinkman number (Br), Hall parameter (m) and viscosity parameter (B) in the case
of temperature-dependent viscosity. For the power-law fluid the control parameters
are Ha, Br and the flow index (n). The control problem is solved by the discretize-then-
optimize approach [2] with a gradient-based algorithm. Starting from an initial estimate, the
optimization loop calculates new estimates for the optimal solution until the norm
of the gradient of the reduced cost function falls below a predefined tolerance. Control
variables are considered both singly and pairwise. It is observed that controls with
multiple control variables require more iterations than those with a
single control parameter. The most costly case is the pair (m, B),
since these parameters have contrary effects on the fluid in the temperature-dependent viscosity case.
Numerical results confirm that the proposed control approach is effective at driving the flow
to prescribed velocity profiles as well as isolines.
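The stopping rule described above — iterate until the gradient norm of the reduced cost falls below a tolerance — can be sketched generically. The snippet below is plain gradient descent on a toy quadratic cost, not the authors' solver; step size and tolerance are assumed.

```python
import numpy as np

def optimize(grad, x0, step=0.1, tol=1e-8, max_iter=10_000):
    """Gradient descent with a gradient-norm stopping rule,
    mirroring the discretize-then-optimize loop described above."""
    x = np.asarray(x0, dtype=float)
    for it in range(max_iter):
        g = grad(x)                       # gradient of the reduced cost
        if np.linalg.norm(g) < tol:       # predefined tolerance reached
            return x, it
        x = x - step * g                  # new estimate of the control
    return x, max_iter

# Toy reduced cost J(x) = 0.5 * ||x - x*||^2 with target controls x* = (1, 2);
# its gradient is simply x - x*.
target = np.array([1.0, 2.0])
x_opt, iters = optimize(lambda x: x - target, np.zeros(2))
```

In the actual problem the gradient of the reduced cost with respect to (Ha, Br, m, B, n) comes from the discretized FEM system, but the outer loop has exactly this structure.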

References
1. M. E. Sayed-Ahmed. Numerical solution of power law fluids flow and heat transfer with a magnetic field
in a rectangular duct. International Communications in Heat and Mass Transfer, 33 (2006), 1165-1176.
2. M. Hinze and R. Pinnau and M. Ulbrich and S. Ulbrich. Optimization with PDE Constraints.
Springer, 2009.

71
Computational Synthesis of Artificial Neural Networks Based
on Partial Magma Rings

Raul M. Falcón
University of Seville
rafalgan@us.es

Abstract

A simple way to model the electrical properties of neurons consists of considering an electric
circuit where batteries, capacitors and resistors represent, respectively, the difference in ion
concentration inside and outside the cell, the charge storage capacity of cell membranes, and
the ion channels within the cell membrane [4]. Further, the known multiplicative mechanisms
existing within single neurons [5] make the use of analog multipliers within such electric
circuits attractive. The structure constants of partial magma algebras have recently been shown [1]
to play an important role in modeling the flow of electric current within electric circuits
containing all the mentioned components: batteries, capacitors, resistors and analog multipliers.
This differs from the use of Boolean algebras by Shannon [6], which enables one to model the
working of switches and relays within a switching circuit, but not the mentioned flow of electric
current. Based on these facts, this work delves into the current literature concerning the
modeling of different biological and physical processes by means of certain types of algebras,
for instance the so-called evolution algebras [7], magnetic algebras [2] or electric algebras
[3]. As an illustrative example, we make use of Computational Algebraic Geometry to model
neural networks based on partial magma rings that have either a partial quasigroup or a
Hadamard matrix as their Cayley table.
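As a small side illustration of one ingredient mentioned above, the defining property of a Hadamard matrix — ±1 entries and mutually orthogonal rows, i.e. H Hᵀ = nI — is easy to check computationally. The Sylvester construction is assumed for the example.

```python
import numpy as np

def is_hadamard(H):
    """Check H has +-1 entries and orthogonal rows: H H^T = n I."""
    n = H.shape[0]
    return bool(np.all(np.abs(H) == 1)
                and np.array_equal(H @ H.T, n * np.eye(n, dtype=H.dtype)))

# Sylvester construction: H_{2n} = [[H, H], [H, -H]] via a Kronecker product.
H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)
```

Such a matrix can serve as the Cayley table of a partial magma ring of order 4 in the construction described in the abstract.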

References
1. R. M. Falcón and J. Núñez. Computational synthesis and analysis of LED circuits based on partial
magma algebras. Submitted, 2018.
2. M. Günaydin. Exceptionality, supersymmetry and non-associativity in Physics. Bruno Zumino Memorial
Meeting, CERN, Geneva, 2015.
3. M. Günaydin and D. Minic. Nonassociativity, Malcev algebras and string theory. Fortschr. Phys. 61
(2013) 873–892.
4. A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application
to conduction and excitation in nerve. J. Physiol. (London) 117 (1952) 500–544.
5. C. Koch and T. Poggio. Multiplying with synapses and neurons. In: Single Neuron Computation.
Neural Networks: Foundations to Applications, Academic Press, San Diego, 1992, 315–345.
6. C. E. Shannon. A Symbolic Analysis of Relay and Switching Circuits. AIEEE Trans. 57 (1938) 713–723.
7. J. P. Tian and P. Vojtechovsky. Mathematical concepts of evolution algebras in non-Mendelian
genetics. Quasigroups Related Systems 14 (2006) 111–122.

72
Numerical Simulation of Two-Phase Flow by the FE, DG and
Level Set Methods

Miloslav Feistauer
Charles University, Faculty of Mathematics and Physics
feist@karlin.mff.cuni.cz

Abstract

The subject of the paper is the numerical simulation of two-phase flow of immiscible flu-
ids. Their motion is described by the incompressible Navier-Stokes equations with piecewise
constant density and viscosity. The interface between the fluids is defined with the aid of
the level-set method using a transport first-order hyperbolic equation. The Navier-Stokes
system equipped with initial and boundary conditions and transmission conditions on the
interface between the fluids is discretized by the Taylor-Hood P2/P1 conforming finite ele-
ments in space and the second-order BDF method in time. The transport level-set problem is
solved with the aid of the space-time discontinuous Galerkin method. Numerical experiments
demonstrate that the developed method is accurate and robust.
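The transport step for the level-set function is a first-order hyperbolic equation, φ_t + a φ_x = 0 in 1d. The minimal upwind sketch below (far simpler than the space-time discontinuous Galerkin scheme used in the paper; grid, speed and time step are assumed) shows the zero level set, i.e. the interface, being advected at the prescribed speed.

```python
import numpy as np

def advect_level_set(phi, speed, dx, dt, steps):
    """First-order upwind transport of phi_t + a phi_x = 0 for a > 0."""
    phi = phi.copy()
    for _ in range(steps):
        # upwind difference uses the left neighbour since the speed is positive
        phi[1:] = phi[1:] - speed * dt / dx * (phi[1:] - phi[:-1])
    return phi

x = np.linspace(0.0, 1.0, 201)
phi0 = x - 0.3                      # signed distance: interface at x = 0.3
phi = advect_level_set(phi0, speed=1.0, dx=x[1] - x[0], dt=0.002, steps=100)
# After t = 0.2 the zero level set has moved to x = 0.5.
```

In the two-phase flow solver the same idea applies with the fluid velocity as the transport field, and the zero level set separates the regions of different density and viscosity.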

References
1. E. Bezchlebova and V. Dolejsi and M. Feistauer. Discontinuous Galerkin Method for the Solution
of a Transport Level-Set Problem. Computers and Mathematics with Applications 72 (2016) 455–480.

73
Numerical Simulation of Flows Through a Radial Turbine

Jiřı́ Fürst, Zdeněk Žák


Czech Technical University in Prague
Jiri.Furst@fs.cvut.cz, Zdenek.Zak@fs.cvut.cz

Abstract

The article deals with the application of the coupled finite-volume solver for the simulation
of turbulent compressible flows through a twin-scroll radial turbine. The solver is based on
the OpenFOAM framework [4] and uses so-called Riemann solvers for the approximation
of convective fluxes combined with an implicit matrix-free lower-upper symmetric Gauss-
Seidel method for discretization in time [1]. The performance of the solver is compared to
the performance of segregated pressure correction solvers from OpenFOAM package. The
flow through a twin scroll radial centripetal turbocharger turbine is then solved at several
regimes using basic turbulence models (e.g. k − ω SST model, [3]) as well as advanced explicit
algebraic Reynolds stress (EARSM) model [2], or a hybrid RANS-LES approach. Finally,
the mass flow and the turbine efficiency are evaluated and compared to the experimental data
for different operational regimes of a twin-scroll turbine. The measured data were
obtained from a special test rig for experimental evaluation of twin-scroll turbines.
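The flavor of a Riemann-solver-based finite-volume update can be conveyed with a 1d scalar sketch using the Rusanov (local Lax-Friedrichs) flux for Burgers' equation. This is a toy analogue of the convective-flux approximation, not the coupled OpenFOAM solver.

```python
import numpy as np

def rusanov_step(u, dx, dt):
    """One explicit finite-volume step for u_t + (u^2/2)_x = 0
    using the Rusanov (local Lax-Friedrichs) Riemann flux."""
    f = 0.5 * u**2
    uL, uR = u[:-1], u[1:]
    a = np.maximum(np.abs(uL), np.abs(uR))        # local wave-speed bound
    flux = 0.5 * (f[:-1] + f[1:]) - 0.5 * a * (uR - uL)
    un = u.copy()
    un[1:-1] -= dt / dx * (flux[1:] - flux[:-1])  # interior cells only
    return un

x = np.linspace(0.0, 1.0, 401)
u = np.where(x < 0.5, 1.0, 0.0)                    # Riemann initial data
for _ in range(100):                               # advance to t = 0.1
    u = rusanov_step(u, dx=x[1] - x[0], dt=0.001)
# The shock travels at the Rankine-Hugoniot speed 0.5, reaching x = 0.55.
```

The production solver replaces the scalar flux by the compressible Euler/Navier-Stokes fluxes and the explicit step by an implicit matrix-free LU-SGS iteration, but the cell-interface Riemann flux plays the same role.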

References
1. J. Blazek. Computational Fluid Dynamics: Principles and Applications. Butterworth-Heinemann (2015).
2. A. Hellsten. New Advanced K-W Turbulence Model for High-Lift Aerodynamics. AIAA Journal 43
(2005) 9:1857–69.
3. F. R. Menter and M. Kuntz and R. Langtry. Ten Years of Industrial Experience with the SST
Turbulence Model.. Turbulence Heat and Mass Transfer 4 (2003) 4:625–32.
4. H. G. Weller and G. Tabor and H. Jasak and C. Fureby. A Tensorial Approach to Computational
Continuum Mechanics Using Object-Oriented Techniques. Computers in Physics 12 (1998) 6:620.

74
Introducing Probabilistic Cellular Automata. A Versatile
Extension of Game of Life

Gabriel Aguilera-Venegas, Rocı́o Egea-Guerrero, José Luis Galán-Garcı́a


University of Málaga
gabri@ctima.uma.es, rociegea@hotmail.com, jlgalan@uma.es

Abstract

The “Game of Life” [1] model was created in 1970 by the mathematician John Horton Conway
using cellular automata. Since then, different extensions of these cellular automata have been
used in many applications, such as car traffic control [2] or baggage traffic in an airport [3].
These extensions introduce ideas not only from cellular automata models but also from neural
network theory.
In this work, we introduce probabilistic cellular automata which include non-deterministic
rules for the transitions between successive generations of the automaton, together with probabilis-
tic decisions about the life and death of cells in the next generation of the automaton. In this way,
more realistic situations can be modeled and the results obtained are also non-deterministic.
As an example of use, an implementation of this probabilistic cellular automaton has been
developed and used to simulate tissue evolution. The authors are especially interested in
simulations of cancerous tissues.
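A minimal version of the probabilistic rules described above can be sketched as follows. The parameter names p_birth and p_death are hypothetical, and with both probabilities set to 1 the automaton reduces to Conway's deterministic rules.

```python
import numpy as np

def step(grid, p_birth=1.0, p_death=1.0, rng=None):
    """One generation of a probabilistic Game of Life (periodic boundaries).

    Conway's rules decide which cells would be born or die; each such
    event then only fires with probability p_birth / p_death."""
    rng = rng or np.random.default_rng()
    # 8-neighbour count via periodic shifts of the grid
    neighbours = sum(np.roll(np.roll(grid, i, 0), j, 1)
                     for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    birth = (grid == 0) & (neighbours == 3)
    death = (grid == 1) & ~((neighbours == 2) | (neighbours == 3))
    new = grid.copy()
    new[birth & (rng.random(grid.shape) < p_birth)] = 1
    new[death & (rng.random(grid.shape) < p_death)] = 0
    return new

# Deterministic limit: a vertical "blinker" flips to a horizontal one.
blinker = np.zeros((5, 5), dtype=int)
blinker[1:4, 2] = 1
next_gen = step(blinker)
```

Lowering p_birth or p_death below 1, or making them depend on the cell state, yields the non-deterministic dynamics used for the tissue simulations.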

References
1. M. Gardner. The fantastic combinations of John Conway’s new solitaire game life. Scientific American
223 (1970), pp. 120–123.
2. José L. Galán-Garcı́a and Gabriel Aguilera-Venegas and Pedro Rodrı́guez-Cielos. An
Accelerated-Time Simulation for Traffic Flow in a Smart City. Journal of Computational and Applied
Mathematics 270 (2014), pp. 557–563.
3. G. Aguilera-Venegas and J. L. Galán-Garcı́a and E. Mérida-Casermeiro and P. Rodrı́guez-
Cielos. An accelerated-time simulation of baggage traffic in an airport terminal. Journal of Mathematics
and Computer in Simulation 104 (2014), pp. 58–66.

75
A Discontinuous Galerkin Method for Solving Elliptic
Eigenvalue Problems on Polygonal Meshes With
Hp-adaptivity

Stefano Giani
Department of Engineering, Durham University, South Road, Durham, DH1 3LE, United
Kingdom
stefano.giani@durham.ac.uk

Abstract

We present a discontinuous Galerkin method for solving elliptic eigenvalue problems on
polygonal meshes based on the discontinuous Galerkin composite finite element method
(DGCFEM). In this talk, the key idea of general shaped element domains in DGCFEM
is used to construct polygonal elements and applied to eigenvalue problems. Polygonal and
polyhedral meshes are advantageous for discretizing domains of complicated shape, reducing
the overall number of elements needed.
A priori convergence analysis is presented for the method and tested on several numerical
examples. Some of the numerical examples use non-convex elements that could be considered
pathological in the finite element context.
Further, adaptive techniques are presented for DGCFEM and applied to complicated do-
mains. The mesh-adaptivity is based on a residual error estimator specific for DGCFEM.
The robustness and accuracy of the adaptive techniques are supported by numerical exam-
ples. Interestingly, the convergence rate of the hp-adaptive technique is exponential also for
polygonal meshes.

References
1. S. Giani. Solving elliptic eigenvalue problems on polygonal meshes using discontinuous Galerkin composite
finite element methods. Applied Mathematics and Computation 267 (2015) 618-631.
2. P. Antonietti and S. Giani and P. Houston. hp-version composite discontinuous Galerkin methods
for elliptic problems on complicated domains. SISC 35(3) (2013) A1417-A1439.
3. S. Giani. hp-Adaptive Composite Discontinuous Galerkin Methods for Elliptic Eigenvalue Problems on
Complicated Domains. Applied Mathematics and Computation 267 (2015) 604-617.

76
Multiscale Hybrid-Mixed Method for the Simulation of
Nanoscale Light-Matter Interactions

Alexis Gobé, Stéphane Lanteri


Inria Sophia Antipolis - Mediteranée, France
alexis.gobe@inria.fr, stephane.lanteri@inria.fr

Claire Scheid
Inria Sophia Antipolis - Mediterranée, France, University of Nice Sophia Antipolis, France
claire.scheid@inria.fr

Frédéric Valentin
LNCC - National Laboratory for Scientific Computing, Petrópolis, Brazil
valentin@lncc.br

Abstract
In this work, we address time dependent electromagnetic wave propagation problems with
strong multiscale features with application to nanophotonics, where problems usually involve
complex multiscale geometries, heterogeneous materials, and intense, localized electromag-
netic fields. Nanophotonics simulations require very fine meshes to incorporate the influence
of geometries as well as high order polynomial interpolations to minimize dispersion. Our goal
is to design a family of innovative high performance numerical methods perfectly adapted
for the simulation of such multiscale problems. For that purpose we extend the Multiscale
Hybrid-Mixed (MHM) finite element method, originally proposed for the Laplace problem
in [2], to the solution of 2d and 3d transient Maxwell equations with heterogeneous media.
The MHM method arises from the decomposition of the exact electromagnetic fields in terms
of the solutions of locally independent Maxwell problems. Those problems are tied together
by a one-field formulation on top of a coarse mesh skeleton. The multiscale basis functions,
which are responsible for upscaling, are also driven by local Maxwell problems. A high-order
Discontinuous Galerkin method (see [1]) in space, combined with a second-order explicit
leap-frog scheme in time, discretizes the local problems. This makes the MHM method
effective and parallelizable, and yields a staggered algorithm within a “divide-and-conquer”
framework. In this study, the MHM-DGTD method has been implemented in 2d. Several
numerical tests assess the optimal convergence of the MHM method, as well as its accuracy
in simulating nanophotonic devices on coarse meshes.

References
1. S. Descombes and C. Durochat and S. Lanteri and L. Moya and C. Scheid and J. Viquerat.
Recent advances on a DGTD method for time-domain electromagnetics. Photonics and Nanostructures -
Fundamentals and Applications, 11(4), 2013.
2. C. Harder and D. Paredes and F. Valentin. A family of multiscale hybrid-mixed finite element
methods for the Darcy equation with rough coefficients. J. Comput. Phys., 245:107–130, 2013.

77
Industrial Particle Simulations Using the Discrete Element
Method on the GPU

Nicolin Govender
University of Surrey, RCPE GmbH
govender.nicolin@gmail.com
Daniel Wilke
University of Pretoria
nico.wilke@up.ac.za
Patrick Pizette
IMT-Lille-Douai
patrick.pizette@imt-lille-douai.fr
Hermann Kureck
RCPE GmbH
hermann.kureck@rcpe.at
Johannes Khinast
TU Graz
khinast@tugraz.at
Abstract
Accurately predicting the dynamics of particulate materials is of importance to numerous
scientific and industrial areas with applications ranging across particle scales from powder
flow to ore crushing. Computational simulation is a viable option to aid in the understanding
of particulate dynamics and the design of devices such as mixers, silos and ball mills, as
laboratory tests come at a significant cost. However, an industrial-scale simulation consisting
of tens of millions of particles can take months to complete on large CPU clusters, making
the Discrete Element Method (DEM) infeasible for industrial applications. Simulations are
therefore typically restricted to tens of thousands of particles with detailed particle shapes
or a few million particles with often simplified particle
shapes. However, numerous applications require accurate representation of the particle shape
to capture the macroscopic behavior of the particulate system of tens of millions of particles.
The advent of general-purpose computing on the Graphics Processing Unit (GPU) over the
last decade, and the development of dedicated GPU-based DEM codes such as the open-
source software BlazeDEM and the commercial code XPS, has made it possible to simulate
tens of millions of particles on a desktop computer. In this paper we discuss the computational
algorithms that enable this performance and explore a variety of industrial applications that
can now be simulated in sufficient detail using a realistic number of particles.
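One of the core algorithms behind such performance is the broad-phase neighbour search on a uniform grid (a cell list), which keeps contact detection close to O(N) for bounded particle density. Below is a CPU-side Python sketch of the idea; GPU codes implement the same concept with parallel binning and sorting.

```python
import numpy as np
from collections import defaultdict

def cell_list_pairs(pos, cutoff):
    """Return index pairs (i, j), i < j, of particles closer than cutoff.

    pos: (N, 3) array of particle centres. Each particle is binned into a
    cubic cell of edge `cutoff`, so candidates lie in the 27 adjacent cells.
    """
    cells = defaultdict(list)
    keys = np.floor(pos / cutoff).astype(int)
    for idx, key in enumerate(map(tuple, keys)):
        cells[key].append(idx)
    pairs = set()
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                        for i in members:
                            if i < j and np.linalg.norm(pos[i] - pos[j]) < cutoff:
                                pairs.add((i, j))
    return pairs
```

On the GPU the per-cell lists map naturally onto thread blocks, and the narrow-phase contact resolution for complex particle shapes then runs only on the pairs that survive this filter.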

References
1. Govender et al. BlazeDEM3D-GPU: A Large Scale DEM simulation code for GPUs. Powders & Grains
2017, EPJ Web of Conferences, Volume 140, 06025.

78
On the Numerical Solution of Non-Equilibrium Condensation
of Steam in Nozzles and Cascades

Jan Halama, Vladimı́r Hric


FME, CTU in Prague
jan.halama@fs.cvut.cz, vladimir.hric@fs.cvut.cz

Abstract

The recent “International Wet Steam Modelling Project” promoted by Joerg Starzmann showed
the necessity of further development of models, numerical methods and experiments for the flow
of condensing steam. Current simulations are based on condensation models which require
fine tuning of several parameters to fit the onset of condensation as well as the correct size of
the droplets. Such tuning can be sensitive and may depend on the properties of the numerical
method used. Further improvement of condensation models is impossible without new, more
detailed experimental data, which should be available in the near future, and also without
detailed knowledge of the behavior of numerical methods for the current models. The aim of
the present work is to study the sensitivity of the numerical solution to modifications of the
thermodynamic model, to the discretization of the computational domain (especially within
the nucleation zone), to the accuracy of the numerical method, and to the time-integration
algorithm, for several examples of non-equilibrium condensation in nozzles and turbine cascades.

References
1. J. Starzmann et al.. Results of the international wet steam modelling project. Proceedings of Wet
Steam Conference Prague 2016, 146-170.
2. J. Halama and F. Benkhaldoun and J. Fort. Flux Schemes Based Finite Volume Method for Internal
Transonic Flow with Condensation. Int. Journal for Numerical Methods in Fluids 65, (2011) 953-968.

79
Acceleration of Stochastic Boundary Inverse Problem

Jan Havelka, Jan Sýkora


Czech Technical University in Prague
jan.havelka.1@fsv.cvut.cz, jan.sykora.1@fsv.cvut.cz

Abstract

In this contribution we investigate the possibility of recovering material heterogeneity inside
a test sample merely from the boundary measurements. Such methods are used in medical
imaging (electrical impedance tomography), material science, geophysics and/or the preser-
vation of historical structures, etc. In particular, our intention is focused on civil engineer-
ing problems described by a non-stationary heat balance equation with two material pa-
rameters/fields, i.e. thermal conductivity and specific heat capacity. Here, we present novel
methodology employing the combination of introducing spatial variability, i.e. random fields,
and Bayesian inference as a method utilized in the identification process. The exhaustive nu-
merical calculations are accelerated by polynomial chaos expansion-based surrogate model.
The proposed approach is computationally verified for various loading scenarios, solver setups
and material field distributions.
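The identification loop can be sketched as random-walk Metropolis sampling in which the expensive forward solve would be replaced by the surrogate. The toy forward model, noise level and prior below are assumptions chosen only to illustrate the structure of the algorithm.

```python
import numpy as np

def metropolis(logpost, x0, n=5000, prop=0.2, seed=0):
    """Random-walk Metropolis sampling of a scalar posterior. In the
    approach above, the expensive forward model inside logpost is
    replaced by a polynomial chaos expansion surrogate."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        xc = x + prop * rng.standard_normal()     # propose a new parameter
        lpc = logpost(xc)
        if np.log(rng.random()) < lpc - lp:       # accept/reject
            x, lp = xc, lpc
        chain.append(x)
    return np.array(chain)

# Toy problem: infer a conductivity k from one noisy "boundary" datum.
k_true, sigma = 1.5, 0.05
y_obs = 2.0 * k_true
surrogate = lambda k: 2.0 * k                     # stand-in for the PCE surrogate
logpost = lambda k: (-0.5 * ((y_obs - surrogate(k)) / sigma) ** 2
                     - 0.5 * k ** 2 / 10.0)       # Gaussian likelihood + weak prior
chain = metropolis(logpost, x0=0.0)
```

In the actual problem the parameter is a discretized random field of conductivity and heat capacity, but each posterior evaluation has this form, which is why replacing the forward solve by a cheap surrogate accelerates the whole chain.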

References
1. D. S. Holder. Electrical Impedance Tomography: Methods, History and Applications. Taylor & Francis,
2004.
2. A. Kirsch. An Introduction to the Mathematical Theory of Inverse Problems. Springer-Verlag New York,
2011.
3. J. Sylvester and G. Uhlmann. A Global Uniqueness Theorem for an Inverse Boundary Value Problem.
Annals of Mathematics, 125(1) (1987), 153-169.
4. A. Kučerová and J. Sýkora and B. Rosić and H. G. Matthies. Acceleration of uncertainty updating
in the description of transport processes in heterogeneous materials. Journal of Computational and Applied
Mathematics, 236(18) (2012), 4862-4872.
5. A. Allers and F. Santosa. Stability and resolution analysis of a linearized problem in electrical
impedance tomography. Inverse problems, 7(4) (1991), 515.
6. G. Strang and G. J. Fix . An Analysis of the Finite Element Method. Prentice-hall Englewood Cliffs
(1973).
7. M. Cheney and D. Isaacson and J. C. Newell and S. Simske and J. Goble. NOSER: An algorithm
for solving the inverse conductivity problem. International Journal of Imaging Systems and Technology,
2(2) (1990), 66–75 .

80
Modelling of Chemical Ageing and Fatigue in Rubber and
Identification of Parameters

Jan Heczko, Radek Kottner


NTIS – New Technologies for the Information Society, Faculty of Applied Sciences,
University of West Bohemia
jheczko@ntis.zcu.cz, kottner@ntis.zcu.cz

Abstract

Rubber parts are often subjected to combined thermal and mechanical loading and to chem-
ically active environment. This exposure may lead to significant changes in their mechanical
properties due to effects such as formation of microcracks or changes in chemical structure.
Numerical modelling of such changes is of great importance when designing mechanical sys-
tems and their operation (e.g. maintenance schedule).
A model that captures both the chemical changes caused by ageing and the fatigue dam-
age, as well as the coupling between the two, was proposed in [1]. It is a combination of the
dynamic network model by Naumann and Ihlemann [2] and the model of fatigue damage
based on the approach by Ayoub et al. [3], which uses continuum damage mechanics. Es-
timation of parameters related to chemical ageing was described in detail by Naumann [4]
and it is based on precise measurements of oxygen consumption rates. In our work, however,
we investigate the possibility of obtaining similar results from indirect measurements of the
changes in mechanical properties, and the validity range of such results.
The discussed material model is well suited for the finite element implementation and
thus for simulations of rubber components within a broad range of operating conditions. The
results regarding parameter estimation are of crucial importance to calibration of the model
and, consequently, to practical usability of the model in real-world applications.
This publication was supported by the project LO1506 of the Czech Ministry of Education,
Youth and Sports under the program NPU I.

References
1. J. Heczko and R. Kottner. Modelling of ageing and fatigue under large strains. In Computational
mechanics - EXTENDED ABSTRACTS. Plzeň, 2017.
2. C. Naumann and J. Ihlemann. A dynamic network model to simulate chemical aging processes in
elastomers. In Constitutive Models for Rubber IX. Prague, 2015.
3. G. Ayoub and M. Naı̈t-abdelaziz and F. Zaı̈ri and J.M. Gloaguen. Multiaxial fatigue life prediction
of rubber-like materials using the continuum damage mechanics approach. Procedia Engineering 2 (2010)
985-993.
4. C. Naumann. Chemisch-mechanisch gekoppelte Modellierung und Simulation oxidativer Al-
terungsvorgänge in Gummibauteilen. Ph.D. Thesis, Technische Universität Chemnitz, 2016.

81
Coupling of Algebraic Model of Bypass Transition With
EARSM Model of Turbulence

Jiřı́ Holman, Jiřı́ Fürst


Dept. of Technical Mathematics, Faculty of Mechanical Engineering, Czech Technical
University in Prague
Jiri.Holman@fs.cvut.cz, Jiri.Furst@fs.cvut.cz

Abstract

The contribution deals with the numerical solution of laminar-turbulent transition. The mathe-
matical model consists of the Reynolds-averaged Navier-Stokes equations, which are com-
pleted by the explicit algebraic Reynolds stress model (EARSM) of turbulence. The algebraic
model of laminar-turbulent transition, which is integrated into the EARSM model, is based
on the work of Kubacki and Dick, where the turbulent kinetic energy is split into small-scale
and large-scale parts. The algebraic model is simple and does not require geometry data such
as the wall-normal distance; all formulas are calculated using local variables. The numerical so-
lution is obtained by the finite volume method based on the HLLC scheme with piecewise
linear MUSCL reconstruction and an explicit two-stage TVD Runge-Kutta method with point-
implicit treatment of the source terms. The proposed method is validated on the ERCOFTAC
T3 test cases: the T3A case has moderate inlet turbulence intensity, the T3A- case has low inlet
turbulence intensity, while the T3B case is characterized by a high inlet turbulence level.

References
1. J. Holman and J. Fürst. Numerical Simulation of Compressible Turbulent Flows Using Modified
EARSM Model. In: Abdulle A., Deparis S., Kressner D., Nobile F., Picasso M. (eds) Numerical Math-
ematics and Advanced Applications - ENUMATH 2013. Lecture Notes in Computational Science and
Engineering, vol 103. Springer, Cham.
2. S. Kubacki and E. Dick. An Algebraic Model for Bypass Transition in Turbomachinery Boundary Layer
Flows. International Journal of Heat and Fluid Flow 58 (2016), pp. 68-83.

82
HDG Method for the 3d Frequency-Domain Maxwell’s
Equations With Application to Nanophotonics

Mostafa Javadzadeh Moghtader, Stéphane Lanteri, Alexis Gobé


INRIA, 2004 Route des Lucioles, B.P. 93, 06902 Sophia Antipolis Cedex, France
mostafa.javadzadeh-moghtader@inria.fr, stephane.lanteri@inria.fr,
alexis.gobe@inria.fr

Liang Li
School of Mathematical Sciences, University of Electronic Science and Technology of China,
Chengdu, PR China
plum liliang@uestc.edu.cn

Abstract
The HDG method is a class of DG methods with significantly fewer globally coupled unknowns,
which can leverage a post-processing step to gain super-convergence. These features make HDG a
strong candidate for computational electromagnetics applications, especially in the frequency
domain. The HDG method introduces a hybrid variable, which represents an additional
unknown on each face of the mesh, and leads to a sparse linear system in terms of the degrees
of freedom of the hybrid variable only. In [1], we introduced such an HDG method for
the system of 3d time-harmonic Maxwell's equations, combined with an iterative Schwarz domain decom-
position (DD) algorithm to allow for an efficient parallel hybrid iterative-direct solver. The
resulting DD-HDG solver has been applied to classical applications of electromagnetics in
the microwave regime. Recently, this HDG method has been extended to the solution of the
2d frequency-domain Maxwell's equations coupled to different models of physical (local and
non-local) dispersion in metals, with application to nanoplasmonics [2]. In the present contri-
bution, we further focus on this particular physical context and propose an arbitrary high-order
HDG method for solving the system of 3d frequency-domain Maxwell equations coupled to
a generalized model of physical dispersion in metallic nanostructures at optical frequencies.
Such a generalized dispersion model unifies the most common dispersion models, such as the Drude and
Drude-Lorentz models, and permits fitting a wide range of experimental data. The resulting
DD-HDG solver is capable of using different element types and orders of approximation, hence
enabling p-adaptivity and non-conforming meshing, and shows interesting potential
for the modeling of complex nanophotonic and nanoplasmonic problems.

References
1. Liang Li and Stéphane Lanteri and Ronan Perrussel. A hybridizable discontinuous Galerkin
method combined to a Schwarz algorithm for the solution of 3d time-harmonic Maxwell’s equation. Journal
of Computational Physics, 256:563–581, 2014.
2. Liang Li and Stéphane Lanteri and Niels Asger Mortensen and Martijn Wubs. A hybridizable
discontinuous Galerkin method for solving nonlocal optical response models. Computer Physics Commu-
nications, 219:99–107, 2017.

83
A New Mixed Stochastic-Deterministic Simulation Approach
for Particle Populations in Fluid Flows

Volker John, Clemens Bartsch, Robert I.A. Patterson


Weierstrass Institute for Applied Analysis and Stochastics
john@wias-berlin.de, bartsch@wias-berlin.de, patterson@wias-berlin.de

Abstract
This talk presents a new coupled method for the solution of population balance systems
(PBSs). A PBS is a system of partial (integro-)differential equations containing a popula-
tion balance equation (PBE). The particular systems that will be studied come from the
area of computational fluid dynamics (CFD). When modeling such systems by a PBS, the
PBE describes a population of physical particles that are transported in a fluid flow and
interact with each other and the surrounding fluid. This setting determines the type of the
other differential equations present in the system: The Navier–Stokes equations and a set of
convection-diffusion equations that describe quantities that are transported by the flow, like
chemical substance concentrations or temperature.
The new approach consists of a two-way coupling of a stochastic simulation algorithm
for the particle population with a finite element simulation of the flow. For the solution of the
convection-diffusion equations, special stabilized finite element methods are used (Bordas et
al., 2013). The particle population is tracked by a stochastic simulation algorithm in the
kinetic Monte Carlo spirit, e.g., see Patterson and Wagner (2012). A notable feature of such
stochastic algorithms is their inherent capability to deal with higher-dimensional particle
descriptions. The parts of the simulation are coupled to each other via a splitting scheme,
exchanging coefficients, sources, and sinks. This splitting scheme enables the usage of tailored
approaches for each equation. Since the particles have neither a position nor a spatial extent,
the proposed approach is a so-called quasi-homogeneous method.
In this talk, we will show the successful coupling of advanced CFD techniques with a
stochastic algorithm for the population of particles. Besides presenting the new algorithm
and highlighting some parts of its formulation and implementation, we will present numerical
results from the simulation of crystallizer devices. Such devices are used in chemical engineer-
ing to grow crystals from dissolved material to very regular shapes, exploiting the mechanisms
of surface attachment growth and collision growth. We present an axisymmetric 2D simula-
tion of a tubular flow crystallizer and a full 3D simulation of a batch crystallizer vessel.
Both simulations are verified against experimental data.
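As a rough illustration of the stochastic side of such a splitting, the sketch below advances a particle population by a Gillespie-type kinetic Monte Carlo coagulation step, using a constant kernel as a placeholder. It shows the general idea only; the actual solver, kernels, and the coupling to the finite element flow simulation presented in this talk are far more elaborate:

```python
import random

def kmc_coagulation(masses, t_end, kernel=lambda a, b: 1.0, seed=0):
    """Advance a particle population by stochastic coagulation until t_end.
    Waiting times are exponential with the total coagulation rate; a pair is
    merged with probability proportional to its kernel value. Mass is conserved."""
    rng = random.Random(seed)
    masses = list(masses)
    t = 0.0
    while len(masses) > 1:
        pairs = [(i, j) for i in range(len(masses)) for j in range(i + 1, len(masses))]
        rates = [kernel(masses[i], masses[j]) for i, j in pairs]
        total = sum(rates)
        t += rng.expovariate(total)
        if t > t_end:
            break
        # pick a pair proportional to its rate and merge it
        r, acc = rng.random() * total, 0.0
        for (i, j), rate in zip(pairs, rates):
            acc += rate
            if acc >= r:
                merged = masses[i] + masses[j]
                masses = [m for k, m in enumerate(masses) if k not in (i, j)]
                masses.append(merged)
                break
    return masses
```

In a splitting scheme this stochastic substep would alternate with the deterministic flow and transport solves, exchanging growth rates, sources, and sinks between the two parts.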

References
1. R.I.A. Patterson and W. Wagner. A stochastic weighted particle method for coagulation-advection
problems. SIAM Journal on Scientific Computing, 34, pp. B290-B311, 2012.
2. R. Bordas and V. John and E. Schmeyer and D. Thevenin. Numerical methods for the simulation
of a coalescence-driven droplet size distribution. Theoretical and Computational Fluid Dynamics, 27, pp.
253-271, 2013.

84
Comparative Cancer Genomics via Multiresolution Network
Models

Rebecka Jörnsten, Jonatan Kallus, Oskar Allerbo


Mathematical Sciences, Chalmers University of Technology
jornsten@chalmers.se, kallus@chalmers.se, allerbo@chalmers.se

Sven Nelander
IGP, Uppsala University
sven.nelander@igp.uu.se

Abstract

Genome-wide network models of multi-'omics cancer data are popular tools for studying and
revealing both unique and shared mechanisms across malignancies. We have previously stud-
ied such "pan-can" (multi-cancer) models using sparse inverse covariance selection (SICS)
(Kling et al., 2015). While our SICS model is a highly useful tool for data integration, there
are important questions that warrant further study. First, the large number of co-linear vari-
ables creates instability in network estimation. When no clear "winning" model comes out
of estimation, these large networks are difficult to interpret or, worse still, simply specula-
tive. Secondly, while model estimation is based on highly optimized solvers, improvements
in scalability are needed to handle future data sets. We therefore propose multi-resolution
SICS (MR-SICS), designed to adaptively aggregate the data into interpretable components
as part of the network construction. MR-SICS builds on a nested latent model formulation of
network components. At each level of resolution of the model, the number of parameters for
comparative inference is relatively small, substantially improving estimation stability and
interpretability. In addition, the multiresolution representation simplifies interactive viewing
and analysis of the models. We demonstrate MR-SICS on simulated data as well as cancer 'omics data.

References
1. T. Kling and P. Johansson and J. Sanchez and V.M. Marinescu and R. Jörnsten and S.
Nelander. Efficient exploration of pan-cancer networks by generalized covariance selection and interac-
tive web content. Nucleic Acids Research 43(15), e98, 2015, gkv413.

85
Towards a Monolithic Discrete Element and Multi-physics
Solver Utilising the GPU

Johannes Joubert
University of Pretoria
u10688944@tuks.co.za

Abstract

The rapid growth of general purpose GPU (GP-GPU) computing has seen meshless point-wise
(often referred to as particle-based) partial differential equation solvers become attractive
due to the highly parallelisable nature of these solvers [1]. Specifically, these solvers are able to
exploit the single instruction, multiple thread (SIMT) parallelism of the GP-GPU. Smoothed
particle hydrodynamics (SPH) [2], proposed in 1977 by Gingold and Monaghan, is but one of
many particle-based solvers, and also the one we will be considering in this study. SPH is a
mesh-free Lagrangian method that tracks point-wise fluid "particles" which carry fluid
information at their current spatial positions. An interpolant
that represents the continuous fluid over the entire spatial domain is built using a kernel
function. While SPH has primarily been used for fluid simulations, it has also been shown to be
a promising method for simulating other physical phenomena such as heat transfer or elastic
deformation of structures. With this in mind, a natural extension that comes from SPH's
adeptness at solving a wide range of physics problems is to couple different problem types
inside a multi-physics environment. Following this idea, this paper looks at the specific
coupling between solid and fluid at various length scales, starting at fine-scale particle models
where interactions follow from a two-phase flow regime, up to sufficiently large discrete element
modelling (DEM) of particles where surface stresses are recovered from the fluid equations and
used to determine the fluid force acting on DEM particles. On the one hand, the two-phase flow
regime is a purely SPH-based model, while on the other hand, the large-scale discrete element
modelling (DEM) requires coupling between the SPH and DEM models. For the latter, this
study focuses on the monolithic coupling between the DEM and SPH models, which allows
us to further explore the parallelism benefits of the GPU across solvers.
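The kernel interpolation underlying SPH can be sketched in a few lines. The following 1D density summation with the standard cubic spline kernel is a generic illustration of the method, not the solver developed in this work:

```python
import numpy as np

def w_cubic(r, h):
    """Cubic spline SPH kernel in 1D (compact support 2h, normalization 2/(3h))."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                            np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

def density(x_eval, x_particles, m, h):
    """SPH density summation: rho(x) = sum_j m_j W(x - x_j, h)."""
    return sum(m * w_cubic(x_eval - xj, h) for xj in x_particles)
```

Any field carried by the particles is interpolated the same way, f(x) ≈ Σ_j (m_j/ρ_j) f_j W(x − x_j, h), which is what makes SPH equally applicable to heat transfer or elastic deformation problems.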

References
1. T. Weaver and Z. Xiao. Fluid simulation by the smoothed particle hydrodynamics method: A survey.
VISIGRAPP 2016 - Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer
Graphics Theory and Applications, pp. 215.
2. R. A. Gingold and J. J. Monaghan. Smoothed particle hydrodynamics: theory and application to
non-spherical stars. Monthly Notices of the Royal Astronomical Society, 181(3), pp. 375-389.

86
Application of an Intelligent Control on Economics Dynamic
System Described by Differential Algebraic Equation as a
New Management Strategy

Raymundo Juarez-Del-Toro, Patricia A. Valenzuela, Sandra Lopez


Facultad de Contadurı́a y Administración, unidad Torreón, Universidad Autónoma de
Coahuila
r.juarez@uadec.edu.mx, abigail.valenzuela@hotmail.com, salopezc@uadec.edu.mx

José Roberto Cantú-González


Escuela de Sistemas PMRV, unidad Acuña, Universidad Autónoma de Coahuila
roberto.cantu@uadec.edu.mx

Abstract
In this paper, we explore the application of the robust control approach of the Attractive Ellipsoid
Methodology to a class of dynamic systems in economics described by Differential Algebraic Equations
(DAE) under the effect of bounded external disturbances. To achieve a specific
economic goal, we design an integral management strategy which minimizes the
size of the invariant attractive ellipsoid associated with the dynamic system, with good
performance in the rejection of external disturbances. The right-hand side of the DAE belongs
to the given Quasi-Lipschitz class and is compatible with several widely used techniques of
linear approximation related to plant models. With respect to solvability and related questions,
the transformed problem can be considered instead of the original one.

References
1. R. Juarez and A.S. Poznyak and V. Azhmyakov. On applications of attractive ellipsoid method
to dynamic processes governed by implicit differential equations. In Electrical Engineering, Computing
Science and Automatic Control (CCE), 2011 8th International Conference on (p. 1-6). doi:
10.1109/ICEEE.2011.6106585.
2. R. Juarez and V. Azhmyakov and A. S. Poznyak. Practical stability of control processes governed
by semi-explicit DAEs. In Electrical Engineering, Computing Science and Automatic Control (CCE),
2012 9th International Conference on (p. 1-6). doi: 10.1109/ICEEE.2012.6421214.
3. H. K. Khalil. Nonlinear Systems. Prentice-Hall, New Jersey, 2 (5), 5–1. 1996.
4. M. V. Kunkel. Differential-Algebraic Equations. Analysis and Numerical Solution. EMS Publishing
House, Zurich. 2006.
5. A. S. Poznyak and A. Polyakov and V. Azhmyakov. Attractive Ellipsoids in Robust Control. (pp.
47–69). Cham: Springer International Publishing. doi: 10.1007/978-3-319-09210-2_3. 2014.
6. R. Juarez and V. Azhmyakov and A. Poznyak. Practical stability of control processes governed
by semi-explicit DAEs. Hindawi Publishing Corporation, Mathematical Problems in Engineering, volume
2013, article id 675408 (p. 1-7). doi: 10.1155/2013/675408. 2013.
7. V. Azhmyakov and R. Juarez and A. S. Poznyak. On the practical stability of control processes
governed by implicit differential equations: The invariant ellipsoid based approach. Journal
of the Franklin Institute, volume 350, 2013, pages 2229-2243. doi: 10.1016/j.jfranklin.2013.04.016.
2013.

87
Application of an Intelligent Control on Economics Dynamic
System Described by Ordinary Differential Equation as a New
Management Strategy

Raymundo Juarez-Del-Toro, Maria de Jesús De-Leon, Ivan De Luna


Facultad de Contadurı́a y Administración, unidad Torreón, Universidad Autónoma de
Coahuila
r.juarez@uadec.edu.mx, mariadejesus d@yahoo.com.mx, ivandeluna@gmail.com

José Roberto Cantú-González


Escuela de Sistemas PMRV, unidad Acuña, Universidad Autónoma de Coahuila
roberto.cantu@uadec.edu.mx

Abstract
In the last 30 years, the dynamical behaviour of a large number of constrained dynamical
systems in numerous applications in economics (see A. Poznyak, A. Polyakov, V. Azhmyakov,
2014) has usually been modeled via ordinary and differential-algebraic equations.
This kind of nonlinear control problem, described by Ordinary Differential Equations
(ODE), still represents a very active research area. The right-hand side of the ODE belongs to the
given Quasi-Lipschitz (Q-L) class and is compatible with several widely used techniques
of linear approximation related to plant models. Similar linearization-like ideas are common
in the theoretical and numerical practice of control engineering (H. K. Khalil, 1996). This
linearization-like approximation allows the original system to be rewritten as a linear control
problem. The Attractive Ellipsoid Methodology (AEM) allows a suitable solution to be reached
for a class of given economics models.

References
1. R. Juarez and A. S. Poznyak and V. Azhmyakov. On applications of attractive ellipsoid method
to dynamic processes governed by implicit differential equations. In Electrical Engineering, Computing
Science and Automatic Control (CCE), 2011 8th International Conference on (p. 1-6).
doi: 10.1109/ICEEE.2011.6106585.
2. R. Juarez and V. Azhmyakov and A. Poznyak. Practical stability of control processes governed by
semi-explicit DAE. In Electrical Engineering, Computing Science and Automatic Control (CCE), 2012 9th
International Conference on (p. 1-6). doi: 10.1109/ICEEE.2012.6421214.
3. H. K. Khalil. Nonlinear Systems. Prentice-Hall, New Jersey, 2 (5), 5–1. 1996.
4. A. S. Poznyak and A. Polyakov and V. Azhmyakov. Attractive Ellipsoids in Robust Control. (pp.
47–69). Cham: Springer International Publishing. doi: 10.1007/978-3-319-09210-2_3. 2014.
5. R. Juarez and V. Azhmyakov and A. Poznyak. Practical stability of control processes governed
by semi-explicit DAE. Hindawi Publishing Corporation, Mathematical Problems in Engineering, volume
2013, article id 675408 (p. 1-7). doi: 10.1155/2013/675408. 2013.
6. V. Azhmyakov and R. Juarez and A. Poznyak. On the practical stability of control processes governed
by implicit differential equations: The invariant ellipsoid based approach. Journal of the
Franklin Institute, volume 350, 2013, pages 2229-2243. doi: 10.1016/j.jfranklin.2013.04.016. 2013.

88
Restricted Boltzmann Machine for Binary Patterns
Aggregation for Image Object Recognition

Rafal Kapela, Szymon Sobczak, Dariusz Pazderski, Krzysztof Kozlowski, Aleksandra Swietlicka


Poznan University of Technology
rafal.kapela@put.poznan.pl, szymon.k.sobczak@doctorate.put.poznan.pl,
dariusz.pazderski@put.poznan.pl, krzysztof.kozlowski@put.poznan.pl,
aleksandra.swietlicka@put.poznan.pl

Kevin McGuinness, Noel O’Connor


Dublin City University
kevin.mcguinness@dcu.ie, noel.oconnor@dcu.ie

Abstract
The article presents a new approach to image object recognition and object classification
taking advantage of binary local image descriptors. Such descriptors are known to be a rel-
atively fast and effective solution for finding and describing image patches around previously
found key-points. The most popular examples are the Fast Retina Keypoint (FREAK), Binary
Robust Invariant Scalable Keypoints (BRISK), and Learned Arrangements of Three Patch Codes
(LATCH). All of them exploit a binary response as an image patch descriptor, which ensures
fast operation. At the same time, this technique suffers from the lack of efficient aggregation
methods.
The Restricted Boltzmann Machine (RBM) is a bi-directional neural network that can learn the probabil-
ity distribution of a binary input pattern. In our approach, RBMs are applied to transform
a binary string into a vector of continuous values which can encode a set of binary patterns,
each corresponding to a given previously calculated binary descriptor. For the extraction of
the object properties, a simple sliding-window approach can be employed. In order to find
object candidates in an image, we also apply the Selective Search algorithm, which com-
bines the strengths of both an exhaustive search and segmentation. For research purposes, we
take advantage of our own implementation of the Restricted Boltzmann Machine written in C++.
The novelty considered in this paper is related to the presentation of the recognition results of our
algorithm as well as a proposition of a new implementation of the RBM using CUDA for
GPU-accelerated computing.
Based on the obtained experimental results, we can state that our technique using local image
descriptors provides significantly better performance in comparison to simpler approaches
based on nearest neighbor and bag of visual words.
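As a rough illustration of the aggregation idea (not the authors' C++/CUDA implementation), a minimal Bernoulli RBM can map a binary descriptor to a continuous hidden vector as sketched below; the layer sizes and the CD-1 training step are placeholder choices:

```python
import numpy as np

class TinyRBM:
    """Minimal Bernoulli RBM: binary visible units (a binary descriptor string),
    continuous hidden activations used as the aggregated feature vector."""
    def __init__(self, n_visible, n_hidden, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_hidden, n_visible))
        self.b = np.zeros(n_hidden)    # hidden biases
        self.c = np.zeros(n_visible)   # visible biases

    def encode(self, v):
        # P(h = 1 | v): a continuous encoding of the binary input pattern
        return 1.0 / (1.0 + np.exp(-(self.W @ v + self.b)))

    def cd1_update(self, v, lr=0.1):
        # one contrastive-divergence (CD-1) step: up, sample, down, up again
        h0 = self.encode(v)
        h_s = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = 1.0 / (1.0 + np.exp(-(self.W.T @ h_s + self.c)))
        h1 = self.encode(v1)
        self.W += lr * (np.outer(h0, v) - np.outer(h1, v1))
        self.b += lr * (h0 - h1)
        self.c += lr * (v - v1)
```

A binary FREAK/BRISK/LATCH response would play the role of v; the continuous hidden activations then feed a conventional classifier.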

References
1. A. Fischer and C. Igel. An Introduction to Restricted Boltzmann Machines. Progress in Pattern Recog-
nition, Image Analysis, Computer Vision, and Applications, Vol. 7441 of Lecture Notes in Computer
Science, Springer Berlin Heidelberg, 2012, pp. 14-36.

89
Asphalt Pavement Surface Objects Detection - Denoising
Concept

Rafal Kapela, Andrzej Pozarycki, Adam Turkot


Poznan University of Technology
rafal.kapela@put.poznan.pl, andrzej.pozarycki@put.poznan.pl,
adam.turkot@put.poznan.pl

Abstract
In the field of noninvasive sensing techniques for civil infrastructure monitoring, image recog-
nition techniques are very handy. On the other hand, building an autonomous image
recognition system for accurate asphalt surface crack detection seems challenging due to a
range of difficulties. Usually, the first task in these kinds of systems is the
elimination of the noise and the surface artifacts that are present in the images. Given that
there is a whole range of different asphalt surfaces featuring diverse texture char-
acteristics, this task is not trivial. As a result, the image processing system has to adapt to
the asphalt texture parameters in order to extract valuable information from the surrounding
background. For this reason, a two-fold denoising system is presented in this paper.
The first stage of the denoising system is the removal of noise caused
by the vehicle's onboard system hardware, together with adaptive feature enhancement. This is a chal-
lenging task given the vast variety of asphalt surface texture homogeneities. For this reason,
adaptive contrast enhancement is required. Next, the image is passed to the feature en-
hancement procedure, which produces two outputs for feature calculation: enhanced luminance
features and enhanced edge features. The second stage of the system is the actual asphalt
texture extraction (i.e., asphalt image region denoising). For this purpose, we designed and
implemented an automatic procedure that utilizes a Convolutional Neural Network (CNN).
It starts with a manual image annotation system where the user picks the image regions
which belong to a certain class (e.g., a hatch or curb). Then, in a fully automatic way, the
training data for the CNN is prepared together with a definition of the network. The next
step involves training the network with the use of GPU acceleration.
The system is based on the OpenCV and Caffe libraries and is implemented in C++.
Computational experiments show that it achieves a higher denoising ratio than those
presented in previous works.

References
1. A. Cubero-Fernandez and Fco. J. Rodriguez-Lozano and Rafael Villatoro and Joaquin Oli-
vares and Jose M. Palomares. Efficient pavement crack detection and classification. EURASIP Journal
on Image and Video Processing 2017.
2. R. Kapela and P. Sniatala and A. Turkot and A. Rybarczyk and A. Pozarycki and P. Ry-
dzewski and M. Wyczalek and A. Bloch. Asphalt surfaced pavement cracks detection based on
histograms of oriented gradients. Mixed Design of Integrated Circuits and Systems (MIXDES), 2015 22nd
International Conference, pp. 579-584.

90
Induction Brazing Process Control Using Reduced Order
Model

David Pánek, Pavel Karban


University of West Bohemia
panek50@kte.zcu.cz, karban@kte.zcu.cz

Abstract

Induction brazing of pipes into sleeves of an evaporator is a process of specific character.
The melting temperature of the basic material should be close to the melting temperature
of the solder, and the process is required to proceed without defects of any kind (dissolu-
tion, gathering, insufficient penetration of the solder). This process cannot be controlled by
methods based on on-line measurement of the required quantities (they are either enormously
expensive or technologically impossible). As the complete physical model of the process is
often too complex and cannot be implemented in a micro-controller, the technique employs
model order reduction (MOR), providing a fast solution at still sufficient accuracy. The
paper presents a novel way of control based on a model working with prediction of system
behavior that flexibly allows controlling the process.

References
1. B. Suhr and J. Rubeda. Model order reduction via proper orthogonal decomposition for a lithium-ion
cell. COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic
Engineering, Vol. 32, No. 5, 1735–1748, 2013.
2. C. Bikcora and S. Weiland and W. M. Coene. Thermal deformation prediction in reticles for extreme
ultraviolet lithography based on a measurement-dependent low-order model. IEEE Transactions on
Semiconductor Manufacturing, Vol. 27, No. 1, 104–117, 2014.

91
A Look at the Challenges Of, and Some Solutions To,
Evaluating Next-generation Earth System Models

Joseph H. Kennedy, Katherine J. Evans, Salil Mahajan


Oak Ridge National Laboratory
kennedyjh@ornl.gov, evanskj@ornl.gov, mahajans@ornl.gov

Abstract

Earth system models (ESMs) sit at the nexus of some of today's most pressing computational
and societal challenges. The CMIP6 project, currently underway, provides the backbone of
the climate community's future projections and will require a massive investment of resources
by each participating modeling group. To minimally participate, modeling groups will be
expected to run 1000 simulation years, requiring ≈ 10^7 core-hours of computing time each,
and together are expected to submit over 50 PB of simulation data. At the same time these
simulations are being performed, new leadership-class computers are coming online and are ex-
pected to hit the exascale computing threshold in a matter of years. Porting ESMs to these
exascale machines will require significant investment of developers' time to verify and op-
timize the codebase on the new architectures available. Likewise, scientific development to
take advantage of these machines' new capabilities will require new initialization, calibration
and validation studies to be performed. The explosion of available Earth observing data (e.g.,
NASA's EOSDIS contains over 9 PB of observations) has made confronting models with obser-
vations a Big Data challenge, requiring new tools and methodologies to provide high-quality
evaluations of ESMs.
Because of the scale of both human and machine resources required to comprehensively
evaluate ESMs, it is critical to continuously evaluate these models as part of the development
cycle so that evaluation can keep pace with model development. We present an integrated
exascale testing strategy and some software packages being developed for exascale testing of
the US Department of Energy’s Energy Exascale Earth System Model (E3SM).

References
1. J. H. Kennedy and A. R. Bennett and K. J. Evans and S. Price and M. Hoffman and W. H.
Lipscomb and J. Fyke and L. Vargo and A. Boghozian and M. Norman and P. H. Worley.
LIVVkit: An Extensible, Python-based, Land Ice Verification and Validation Toolkit for Ice Sheet Models.
J. Adv. Model. Earth Syst., 9 (2017), 854–869, doi: 10.1002/2017MS000916.
2. S. Mahajan and A. L. Gaddis and K. J. Evans and M. R. Norman. Exploring an Ensemble-Based
Approach to Atmospheric Climate Modeling and Testing at Scale. Procedia Computer Science, 108 (2017),
735–744, doi: 10.1016/j.procs.2017.05.259.

92
Numerical Modelling of Newtonian Fluids in Bypass Tube

Radka Keslerova, Hynek Řeznı́ček


CTU in Prague
Radka.Keslerova@fs.cvut.cz, Hynek.Reznicek@fs.cvut.cz

Abstract

Previous work concentrated on the numerical solution of viscous and viscoelastic fluid flow in
a branching channel. The mathematical models used were the Newtonian and Oldroyd-B models.
Both models were generalized by the Cross model in the shear-thinning sense. The aim of
this work is to describe and discuss the results of a numerical study of Newtonian fluids in a
bypass tube. Different constrictions in the main tube were tested. In this work, a Newtonian
mathematical model was used for the investigation of bypass flow. The fundamental system
of equations is the system of generalized Navier-Stokes equations for incompressible laminar
flow. The numerical solution is based on a central finite volume method using explicit Runge-
Kutta time integration. Numerical simulations on three-dimensional geometries are performed
for our study. The considered geometry is based on a 3D hexahedral structured mesh of the bypass
tube with different diameters of the stenosis (a narrowing of a blood vessel). Future work will
extend this simulation to shear-thinning fluids for the numerical modelling of blood flow.

References
1. L. Benes and P. Louda and K. Kozel et al. Numerical simulations of flow through channels
with T-junction. Applied Mathematics and Computation 219 (2013) 7225-7235.
2. T. Bodnar and A. Sequeira and M. Prosi. On the shear-thinning and viscoelastic effects of blood
flow under various flow rates. J. Appl. Math. Comput. 217 (2010) 5055-5067.

93
A Pseudo Cell Approach for Hanging Nodes in Unstructured
Meshes

Margrit Klitz
German Aerospace Center (DLR)
margrit.klitz@dlr.de

Abstract
One of the ongoing goals of the German Aerospace Center (DLR - Deutsches Zentrum für
Luft- und Raumfahrt) is the virtual design of an aircraft. This means that the aircraft’s flight
characteristics are determined by numerical simulation before its first flight in the real world.
A key element in the aerodynamic design process is the numerical flow simulation, for which
the DLR develops its next-generation CFD software code Flucs (Flexible Unstructured CFD
Software) [1].
For virtual aircraft design, we have to consider the numerical simulation of complex three-
dimensional transient flows. This is highly time-consuming even on today’s computers. In
order to reduce time, we apply mesh adaptivity to increase the mesh resolution only where it
is strictly needed. However, during the mesh refinement process, hanging nodes are created
along the non-conforming interfaces of large to small elements and much effort would have
to be put into removing them.
Instead, in this talk we focus on how to keep the hanging nodes in the underlying unstruc-
tured mesh. Flucs uses a higher-order Discontinuous Galerkin method as well as a second-
order finite-volume discretization. Discontinuous Galerkin methods especially are able to deal
with very general non-matching grids containing hanging nodes. For the management of our
mesh we use the Flow Simulator Data Manager (FSDM) which is open source and provides a
broad library of classes for in-memory storage and parallel handling of data associated with
Computational Fluid Dynamics. FSDM is already able to handle the unstructured mixed-
element meshes on which Flucs is based but cannot yet store and provide information on
elements with hanging nodes. We extend FSDM by so-called pseudo cells which are mesh ele-
ments that have no volume and do not contribute to computations done on the unstructured
mesh in any way. We show how the pseudo cells help to provide the connectivity information
of neighboring non-conforming elements for the computations of fluxes in the flow solver. The
pseudo cells can even handle higher order elements with hanging nodes. To the knowledge
of this author, such an approach has not yet been considered in the literature; see [2] for a
related approach restricted to cubical elements.
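A conceptual sketch of such a pseudo cell is given below. The actual FSDM data structures are more involved; the class and method names here are illustrative only:

```python
class PseudoCell:
    """Zero-volume element storing the connectivity across a non-conforming
    interface: one coarse cell facing the refined cells whose small faces
    cover its large face (the hanging-node situation)."""
    def __init__(self, coarse_cell, fine_cells):
        self.coarse = coarse_cell
        self.fine = list(fine_cells)
        self.volume = 0.0  # contributes nothing to volume integrals

    def interface_pairs(self):
        # the flux computation loops over (coarse, fine) neighbor pairs
        return [(self.coarse, f) for f in self.fine]
```

Because the pseudo cell carries no volume, it can sit in the ordinary element list without affecting residual assembly, while still answering the neighborhood queries that the flux loop needs at the non-conforming interface.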

References
1. T. Leicht and D. Vollmer and J. Jägersküpper and A. Schwöppe and R. Hartmann and J.
Fiedler and T. Schlauch. DLR-Project DIGITAL-X : Next Generation CFD solver Flucs. Deutscher
Luft- und Raumfahrtkongress 2016, 13-15 Sep 2016, Braunschweig, Germany.
2. P. Diez and F. Verdugo. An Algorithm for Mesh Refinement and Un-refinement in Fast Transient
Dynamics. International Journal of Computational Methods, 2013, 10.

94
Sensor Failure Detection in Selftesting Navigation System

Krzysztof Kolanowski, Aleksandra Świetlicka


Poznan University of Technology
krzysztof.kolanowski@put.poznan.pl, aleksandra.swietlicka@put.poznan.pl

Abstract

The reliability and security become more important as the complexity of both computational
and procedural increases. Diagnostics of industrial processes deals with the recognition of
changes in the states of these processes, where industrial processes are understood as a series
of intentional actions carried out within a fixed period of time by a specific set of machines
and devices at the available resources. Diagnosis is treated as a process of detecting and
distinguishing damage to an object as a result of collection, processing, analysis and evaluation
of diagnostic signals. Diagnosis can be carried out with varying degrees of detail. Depending
on the type of object and the knowledge about it, the result of the diagnosis may be a detailed
identification of the damage or only a general description of the status class.
The work focuses on navigation systems, assesses the strengths and weaknesses of existing
solutions, and proposes a solution combining three systems: satellite, inertial and
barometric. Their functionality is combined using an extended Kalman filter and a neural
self-testing block. Using the neural self-test block allows for effective determination of
position in space and detection of irregularities.
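The fault-detection idea behind such a self-testing fusion block can be sketched in one dimension: each reading's innovation is compared against the filter's own uncertainty, and readings that fail the test are flagged. Everything below (the single-state model, the gate value, the numbers) is an invented illustration, not the authors' system:

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update."""
    y = z - x              # innovation (measurement residual)
    S = P + R              # innovation covariance
    K = P / S              # Kalman gain
    return x + K * y, (1.0 - K) * P, y, S

def fuse(measurements, x0=100.0, P0=10.0, q=0.01, gate=9.0):
    """Fuse (sensor, value, variance) readings; flag faults by innovation test.

    A reading whose squared normalized innovation y^2/S exceeds `gate`
    (a chi-square-like test) is rejected as a sensor failure."""
    x, P, faults = x0, P0, []
    for name, z, R in measurements:
        P += q                                   # simple process-noise predict
        x_new, P_new, y, S = kalman_update(x, P, z, R)
        if y * y / S > gate:
            faults.append(name)                  # self-test: reject, keep prior
        else:
            x, P = x_new, P_new
    return x, faults

# Altitude stream: GPS and barometer agree, one GPS reading fails grossly.
stream = [("gps", 100.2, 4.0), ("baro", 99.8, 1.0),
          ("gps", 250.0, 4.0),                   # simulated GPS fault
          ("baro", 100.1, 1.0), ("gps", 100.0, 4.0)]
est, faults = fuse(stream)
print(est, faults)
```

In the real system the state vector, the filter model and the neural self-test are of course far richer; the sketch only shows why redundant satellite/inertial/barometric readings make the fault detectable at all.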

References
1. Martin Liggins II and David Hall and James Llinas. Handbook of Multisensor Data Fusion: Theory
and Practice. CRC Press 2008.
2. Krzysztof Kolanowski and Aleksandra Świetlicka and Rafal Kapela and Janusz Pochmara
and Andrzej Rybarczyk. Multisensor data fusion using Elman neural networks. Applied Mathematics
and Computation 319 (2018) 236–244.
3. M. S. Grewal and L. R. Weill and A. P. Andrews. Global Positioning Systems, Inertial Navigation,
and Integration. Wiley 2007.

95
Distributed Implicit Discontinuous Galerkin MHD Solver

Lukas Korous, Pavel Karban


University of West Bohemia
lukas.korous@gmail.com, karban@kte.zcu.cz

Abstract

The discontinuous Galerkin method is a favorable alternative to the finite volume method,
which is often used in astrophysical codes dealing with MHD. DG methods offer higher order
accuracy and reduced diffusion compared to the finite volume method while keeping the
scheme highly parallelizable. The MHD equations are nonlinear, therefore, we need to solve a
nonlinear problem in each time step, which involves non-differentiable numerical fluxes (such
as HLLD), so care must be taken when applying a nonlinear solver. In this work we propose
using a damped nonlinear solver at each time step. Another complexity of solving the MHD
equations using DG is satisfying Gauss's law of zero divergence of the magnetic flux density,
often achieved by techniques such as divergence cleaning or constrained transport. In this
work, we chose another approach: using an exactly divergence-free space for the representation
of the magnetic field. Another problem that requires handling is the presence of undershoots
and overshoots in the DG solution; this is handled in this work by a vertex-based
limiter. The work is implemented using the FE libraries deal.II and Trilinos in 3D and in a
fully parallel/distributed manner.
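The role of damping in the nonlinear solve can be illustrated on a scalar toy problem (the equation and starting point are invented for this sketch; the real solver works on the DG/MHD residual): the full Newton step is scaled back until the residual norm actually decreases.

```python
import math

def damped_newton(f, df, x, tol=1e-12, max_iter=50):
    """Newton iteration with residual-based backtracking (damping)."""
    for _ in range(max_iter):
        r = f(x)
        if abs(r) < tol:
            break
        step = -r / df(x)
        lam = 1.0
        # Damping: halve the step until the residual actually decreases.
        while abs(f(x + lam * step)) >= abs(r) and lam > 1e-8:
            lam *= 0.5
        x += lam * step
    return x

# atan(x) = 0 from x0 = 3: undamped Newton diverges here, damped converges.
root = damped_newton(math.atan, lambda x: 1.0 / (1.0 + x * x), 3.0)
print(root)
```

The same safeguard, applied to the vector residual with a suitable norm, is what keeps Newton-type iterations robust in the presence of non-differentiable numerical fluxes.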

References
1. D. Arndt and W. Bangerth and D. Davydov and T. Heister and L. Heltai and M. Kronbichler
and M. Maier and J.-P. Pelteret and B. Turcksin and D. Wells. The deal.II Library, Version
8.5. Journal of Numerical Mathematics, vol. 25, pp. 137-146, 2017.
2. D. Kuzmin. Hierarchical slope limiting in explicit and implicit discontinuous Galerkin methods. Journal
of Computational Physics, Volume 257, 1140-1162, 2014.
3. T. Miyoshi and K. Kusano. A multi-state HLL approximate Riemann solver for ideal magnetohydro-
dynamics. Journal of Computational Physics, Volume 208, 2005.

96
Applicability and Comparison of Surrogate Techniques for
Modeling Selected Heating Problems

Vaclav Kotlan, Karel Pavlicek, Ivo Dolezel


University of West Bohemia
vkotlan@kte.zcu.cz, pavlk@kte.zcu.cz, doleze@fel.cvut.cz

Abstract

The possibility of using surrogate techniques for modeling selected strongly nonlinear heating
problems is investigated. The main purpose is to significantly reduce the computing time when
many different variants of a given task are calculated by the finite element method, on the
condition of still obtaining results of acceptable accuracy. Frequently used surrogate
techniques (based on kriging, neural networks, etc.) are tested on the problem of laser welding,
which represents a very complicated 3D problem. Here, the most important output quantities
are the structure of the weld and its depth, which depend on a number of input parameters (power
of the laser beam, velocity of shift of the welded parts, overall geometry, material properties,
etc.) and must be known before the welding itself. The paper presents both the full model of the
process and the considered surrogate algorithms, and compares the results obtained. It is shown
that careful selection of the surrogate technique, together with a suitable choice of its input
data, is very beneficial and may result in high savings in the design of the process. The
implementation performance and suitability of the particular techniques are also evaluated.
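The surrogate idea is to replace an expensive model evaluation with a cheap interpolant fitted to a few precomputed samples. A minimal sketch, using a Gaussian radial-basis-function interpolant (a kriging-like technique) on an invented 1-D stand-in for the expensive FEM run:

```python
import math

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_surrogate(xs, ys, eps=1.0):
    """Gaussian RBF interpolant fitted to samples (xs, ys)."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = solve(A, ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

expensive = lambda p: math.sin(p) + 0.5 * p     # stand-in for one FEM run
xs = [0.0, 1.0, 2.0, 3.0, 4.0]                  # five precomputed variants
model = rbf_surrogate(xs, [expensive(x) for x in xs])
print(abs(model(2.5) - expensive(2.5)))         # cheap prediction in between
```

In the application described above the inputs are multi-dimensional (laser power, shift velocity, geometry, material data) and the samples are full 3D FEM solutions, but the fit-then-predict structure is the same.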

References
1. A. Bollig and D. Abel and C. Kratzsch and S. Kaierle. Identification and predictive control of
laser beam welding using neural networks. 2003 European Control Conference (ECC), 2003, pp. 2457-2462.
2. G. Montemayor-Garcı́a and G. Toscano-Pulido. A study of surrogate models for their use in mul-
tiobjective evolutionary algorithms. 2011 8th International Conference on Electrical Engineering, Com-
puting Science and Automatic Control, 2011, pp. 1-6.
3. M. B. Yelten and T. Zhu and S. Koziel and P. D. Franzon and M. B. Steer. Demystifying
Surrogate Modeling for Circuits and Systems. IEEE Circuits and Systems Magazine, 2012, vol. 12, no. 1,
pp. 45-63.

97
Time Integration of Hydro-Mechanical Model for Bentonites

Tomáš Koudelka, Tomáš Krejčı́, Jaroslav Kruis


Faculty of Civil Engineering of Czech Technical University in Prague
tomas.koudelka@fsv.cvut.cz, krejci@fsv.cvut.cz, jaroslav.kruis@fsv.cvut.cz

Abstract

The behaviour of expansive compacted soils depends significantly on their microstructure:
two distinct pore systems have to be taken into account. Recent advances in modelling of the
hydro-mechanical behaviour of unsaturated soils reveal the crucial role of hydro-mechanical
coupling. Volumetric deformation of the soil skeleton influences the degree of saturation,
which in turn influences the soil effective stress. A special model has been developed by
Mašín [1] and Mašín & Khalili [2], who defined the model within the theory of hypoplasticity.
Our contribution deals with efficient numerical time integration of the constitutive model.
The integration is based on the Runge-Kutta method with adaptive step length, where some
modifications of the original model were proposed due to singularities revealed in it. The
material model was implemented in the open source code SIFEL [3]. The implementation was
used for benchmark tests, and Runge-Kutta methods of different orders applied to the
integration of the constitutive equations are also compared.
Financial support for this work was provided by project HORIZON 2020 No. 745942
(BEACON). The support is gratefully acknowledged.
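The adaptive step-length mechanism can be sketched with the simplest embedded pair, Euler inside Heun (the real integration uses higher-order Runge-Kutta pairs on the hypoplastic constitutive equations; the toy ODE and tolerances here are illustrative only):

```python
import math

def rk_adaptive(f, t, y, t_end, tol=1e-6, h=0.1):
    """Embedded Euler/Heun pair with a standard step-size controller."""
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_lo = y + h * k1                    # 1st-order (Euler) result
        y_hi = y + 0.5 * h * (k1 + k2)       # 2nd-order (Heun) result
        err = abs(y_hi - y_lo)               # embedded local error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, y_hi               # accept the higher-order result
        # Controller for a 2nd/1st-order pair: err scales like h^2.
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

# y' = -5y, y(0) = 1, so y(1) = exp(-5).
y1 = rk_adaptive(lambda t, y: -5.0 * y, 0.0, 1.0, 1.0)
print(abs(y1 - math.exp(-5.0)))
```

The point of the adaptivity is exactly the one made above: near singular or rapidly varying regimes of the constitutive response the controller shrinks the step automatically, and elsewhere it grows it back.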

References
1. D. Mašı́n. Development of a coupled thermo-hydro-mechanical double structure model for expansive soils.
The European Physical Journal Conference (2015).
2. D. Mašı́n and N. Khalili. A thermo-mechanical model for variably saturated soils based on hypoplas-
ticity. Int. J. Numer. Anal. Meth. Geomech. Vol. 36 No. 12 (2012) 1461-1485.
3. T. Koudelka and T. Krejčı́ and J. Kruis. SIFEL, http://mech.fsv.cvut.cz/~sifel/index.html (2017).

98
A Self-calibrating Method for Heavy Tailed Data Modeling.
Application in Neuroscience and Finance

Marie Kratz
ESSEC Business School, CREAR
kratz@essec.edu

Abstract

Modeling non-homogeneous and multi-component data is a problem that challenges scientific


researchers in several fields. In general, it is not possible to find a simple and closed form
probabilistic model to describe such data. That is why one often resorts to non-parametric
approaches. However, when the multiple components are separable, parametric modeling
becomes again tractable. In this study, we propose a self-calibrating method to model multi-
component data that exhibit heavy tails. We introduce a three-component hybrid distribution:
a Gaussian distribution is linked to a Generalized Pareto one via an exponential distribution
that bridges the gap between mean and tail behaviors. An unsupervised algorithm is then
developed for estimating the parameters of this model. We study analytically and numerically
its convergence. The effectiveness of the self-calibrating method is tested on simulated data,
before applying it to real data from neuroscience and finance, respectively. A comparison with
other standard Extreme Value Theory approaches confirms the relevance and the practical
advantage of this new method. This is joint work with N. Debbabi (ESPRIT School of
Engineering, Tunis, Tunisia) and M. Mboup (Université de Reims Champagne Ardenne,
France).
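The structure of the hybrid distribution can be made concrete: a Gaussian core, an exponential bridge and a Generalized Pareto tail, glued continuously at two junction points u1 < u2. The parameter values below are illustrative only (in the paper they are estimated by the unsupervised algorithm), and the density is left unnormalized for brevity:

```python
import math

def hybrid_pdf(mu=0.0, sigma=1.0, lam=1.5, u1=1.0, u2=2.5, xi=0.3, beta=1.0):
    """Unnormalized Gaussian-exponential-GPD hybrid, continuous at u1, u2."""
    gauss = lambda x: math.exp(-0.5 * ((x - mu) / sigma) ** 2)
    expo = lambda x: math.exp(-lam * x)
    gpd = lambda x: (1.0 + xi * (x - u2) / beta) ** (-1.0 / xi - 1.0)
    a = gauss(u1) / expo(u1)     # continuity at u1
    b = a * expo(u2) / gpd(u2)   # continuity at u2 (gpd(u2) == 1)
    def f(x):
        if x <= u1:
            return gauss(x)
        if x <= u2:
            return a * expo(x)
        return b * gpd(x)        # heavy power-law tail for x > u2
    return f

f = hybrid_pdf()
print(f(1.0), f(2.5), f(4.0))    # continuous across both junctions
```

The self-calibrating aspect of the method lies in estimating mu, sigma, lam, the junctions and the tail parameters jointly from data; the sketch only fixes the shape of the three-component family.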

References
1. N. Debbabi and M. Kratz and M. Mboup. A self-calibrating method for heavy tailed data modeling.
Application in neuroscience and finance. ArXiv 1612.03974v2 (Dec.2017).

99
Efficient Assembly of BEM Matrices Using ACA on
Distributed Systems

Michal Kravčenko, Michal Merta, Jan Zapletal


IT4Innovations, VŠB – Technical University of Ostrava, 17. listopadu 15/2172,
708 33 Ostrava-Poruba, Czech Republic
Dept. of Applied Mathematics, VŠB – Technical University of Ostrava, 17. listopadu
15/2172, 708 33 Ostrava-Poruba, Czech Republic
michal.kravcenko@vsb.cz, michal.merta@vsb.cz, jan.zapletal@vsb.cz

Abstract

The boundary element method (BEM) reduces the considered problem to the boundary of
the computational domain [3]. This makes it well suited for treatment of problems stated
on unbounded domains, such as sound or electromagnetic wave scattering. However, its high
computational intensity and large memory footprint require some form of low-rank
approximation scheme to reduce the computational time and the size of the system matrices.
In this work we present a distributed version of the adaptive cross approximation (ACA)
method for BEM [1] based on the cyclic graph decomposition described in [2]. Moreover,
we focus on our modification of the ACA algorithm based on geometric properties of the
problem mesh, which is used to treat usually problematic zero matrix blocks and to ensure
a robust assembly of the approximated matrices. We then briefly describe the Helmholtz
problem and showcase weak and strong scaling of the Helmholtz BEM operators assembly
using our scheme on distributed memory systems.
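The core of ACA can be sketched in a few lines: the low-rank factorization A ≈ Σ u_k v_kᵀ is built from a few matrix rows and columns only, chosen by partial pivoting. The smooth toy kernel below stands in for a far-field BEM block (the distributed version, the geometric pivot modification and the Helmholtz kernels of the talk are not shown):

```python
def aca(entry, m, n, tol=1e-8, max_rank=20):
    """Adaptive cross approximation with partial pivoting: A ~ sum u_k v_k^T."""
    us, vs, used = [], [], set()
    i = 0
    for _ in range(max_rank):
        # Residual row i: A[i,:] minus the current low-rank approximation.
        row = [entry(i, j) - sum(u[i] * v[j] for u, v in zip(us, vs))
               for j in range(n)]
        j = max(range(n), key=lambda jj: abs(row[jj]))
        if abs(row[j]) < tol:
            break
        # Residual column j, computed before appending the new pair.
        col = [entry(ii, j) - sum(u[ii] * v[j] for u, v in zip(us, vs))
               for ii in range(m)]
        vs.append([rj / row[j] for rj in row])
        us.append(col)
        used.add(i)
        # Next pivot row: largest new-column residual not yet used.
        i = max((ii for ii in range(m) if ii not in used),
                key=lambda ii: abs(col[ii]))
    return us, vs

# Smooth (asymptotically low-rank) kernel, mimicking a far-field block.
m = n = 40
entry = lambda i, j: 1.0 / (2.0 + i / 40 + j / 40)
us, vs = aca(entry, m, n)
err = max(abs(entry(i, j) - sum(u[i] * v[j] for u, v in zip(us, vs)))
          for i in range(m) for j in range(n))
print(len(us), err)   # rank far below 40
```

Only O(rank · (m + n)) kernel evaluations are needed, which is the source of the memory and time savings; the geometric modification mentioned above addresses blocks where the plain pivoting strategy can stall on (near-)zero entries.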

References
1. M. Bebendorf. Approximation of boundary element matrices. Numer. Math. 86 (2000).
2. D. Lukáš and P. Kovář and T. Kovářová and M. Merta. A parallel fast boundary element method
using cyclic graph decompositions. Numerical Algorithms 70(4), 807–824 (2015).
3. S. Rjasanow and O. Steinbach. The Fast Solution of Boundary Integral Equations. Springer (2007).

100
Quick Method of Creation of 3D Treatment Volume Margin

Zuzanna Krawczyk, Jacek Starzyński


Warsaw University of Technology
zuzanna.krawczyk@ee.pw.edu.pl, jstar@ee.pw.edu.pl

Ondřej Semmler, Petr Voleš


UJP Praha, a.s
osemmler@ujp.cz, osemmler@ujp.cz

Abstract

Accurate delineation of target volumes is crucial for proper radiotherapy planning that ensures
delivery of the radiation dose to the entire tumor while protecting critical tissue structures
(Organs at Risk – ORs). Safety margins are added to different target volumes for a number of
reasons [1], for instance, in order to protect ORs or to take into consideration motion of the
organs. Manual creation of a 3D margin is very time consuming. A few automated approaches
to the task, applicable in some special cases, have been discussed in the literature. The method
of margin creation presented in [2] is applicable only to the margins that are symmetrical
with respect to the directions of the margin growth and methods in [2,3] are suitable for
margin expansion only. We propose a method which makes it possible to quickly create a 3D
margin on the basis of six preset distances (provided by a program operator) in the positive
and negative directions of the Cartesian coordinate axes. In the first step, a surface mesh
is created out of the volume delineation and a normal vector is associated with each point of
the mesh. Next, each point of the mesh is moved along its normal by a distance that is a linear
combination of the preset distances. In some instances, after the mesh transformation, the
faces of the mesh intersect each other (this may happen in case of concave shapes or shapes
with holes inside), corrupting the margin volume. For this reason, in further steps of the
procedure the number of intersections between each point of the mesh and the mesh triangles
is computed, and on the basis of this information it is decided whether a triangle is located
inside the volume and should be removed, or belongs to the surface and should be preserved.
For triangles which intersect each other, the intersection line is calculated and the
corrected polygon is added to the mesh.
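The first step, offsetting each vertex along its normal by a blend of the six presets, can be sketched as follows. The blending rule used here (weighting each preset by the squared normal component) is one plausible choice for illustration; the exact linear combination in the program is determined by the operator's presets:

```python
def offset_point(p, normal, margins):
    """Offset point p along its unit normal.

    margins = (x+, x-, y+, y-, z+, z-) preset distances; the preset picked
    per axis depends on the sign of the normal component (illustrative rule)."""
    nx, ny, nz = normal
    d = (nx * nx * (margins[0] if nx >= 0 else margins[1])
         + ny * ny * (margins[2] if ny >= 0 else margins[3])
         + nz * nz * (margins[4] if nz >= 0 else margins[5]))
    return tuple(pi + d * ni for pi, ni in zip(p, normal))

# A vertex facing straight along +x receives exactly the +x preset (5 here):
moved = offset_point((1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5, 3, 4, 4, 2, 6))
print(moved)
```

For oblique normals the squared components interpolate smoothly between the axis presets; the self-intersection repair described above then cleans up whatever this offsetting step corrupts on concave shapes.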

References
1. N. G Burnet and S. J. Thomas and K. E. Burton and S. J. Jefferies. Defining the Tumour and
Target Volumes for Radiotherapy. Cancer Imaging 2004 4(2) 153–161.
2. J.C. Stroom and P.R.M. Storchi . Automatic Calculation of Three-dimensional Margins around Treat-
ment Volumes in Radiotherapy. Physics in Medicine & Biology vol 42, Nr 4, pp 745, 1997.
3. G. S. Shentall and L. Bedfordt. A Digital Method for Computing Target Margins in Radiother-
apy. Medical Physics, The International Journal of Medical Physics Research and Practice, 1998 25(2),
pp 224–231.

101
CT Data Segmentation Based on Reference Skeleton Model

Zuzanna Krawczyk, Jacek Starzyński, Robert Szmurlo


Warsaw University of Technology
zuzanna.krawczyk@ee.pw.edu.pl, jstar@ee.pw.edu.pl,
robert.szmurlo@ee.pw.edu.pl

Abstract

Reliable segmentation of CT data is an essential preprocessing stage for radiotherapy
planning. During the radiotherapy treatment, which usually consists of a series of medical
procedures spread over time, the patient's body and the tumor size can change. This may cause
the need for replanning the process [1]. Using an individualized, realistic, digital model of
the patient's body can simplify and speed up the task of replanning. Precise segmentation of
the patient's skeleton is a key point for such an approach, since the skeleton is the most
invariant structure of the body. The paper presents a novel method of CT data processing. To
extract the skeletal structure of the patient, a reference model of the human skeleton is
used. The method allows a surface or volume model of the patient's bones that corresponds to
the CT data to be obtained. The main advantage of the presented approach is the controllable
smoothness and complexity of the result. This smoothness is obtained by morphing [2, 3] the
reference model to match the CT. In the first phase an affine transformation is applied to
the reference model to bring it as close as possible to the CT data. Then consecutive
single-bone models are matched against the CT to find the input data for morphing. The
morphing algorithm is applied to each bone, transforming the reference skeleton into the
patient's skeleton. The presented method is implemented in Python with extensive use of the
VTK library.

References
1. N. Li and M. Zarepisheh and A. Uribe-Sanchez and K. Moore and Z. Tian and X. Zhen and Y.
J. Graves and Q. Gautier and L. Mell and L. Zhou. Automatic Treatment Plan Re-optimization for
Adaptive Radiotherapy Guided with the Initial Plan DVHs. Phys Med Biol. 2013 Dec 21; 58(24) :8725-38.
2. J.Starzyński and Z.Krawczyk and B. Chaber and R. Szmurlo. Morphing Algorithm for Building
Individualized 3D Skeleton Model from CT Data. Applied Mathematics and Computation 267 (2015)
655–663.
3. Z. Krawczyk and J. Starzyński. Generation of Skeletal Structures for the Pelvis Area with the Use
of 3-dimensional Binary Thinning Algorithm. 17th International Workshop ”Computational Problems of
Electrical Engineering”, DOI: 10.1109/CPEE.2016.7738769.

102
Homogenization of Masonry and Heterogeneous Materials on
PC Clusters

Tomáš Krejčı́, Tomáš Koudelka, Jaroslav Kruis, Aleš Jı́ra, Michal Šejnoha
Faculty of Civil Engineering, Czech Technical University in Prague
krejci@fsv.cvut.cz, tomas.koudelka@fsv.cvut.cz, jaroslav.kruis@fsv.cvut.cz,
jira@fsv.cvut.cz, sejnom@fsv.cvut.cz

Abstract

There are two major approaches to numerical modeling of masonry with the help of FEM.
The first possibility is to use a very fine mesh which takes the composition of the material
into account; the second one is based on a homogenization method. The decision which
approach should be used is difficult because both possibilities lead to a very large number of
arithmetic operations and are very time-consuming; therefore execution on parallel
computers may be a suitable option. This contribution presents a processor farming method
in connection with a multi-scale analysis. In this method, each macroscopic integration point
or each finite element is connected with a certain mesoscopic problem represented by an ap-
propriate representative volume element (RVE). The solution of a mesoscale problem then
provides the effective parameters needed on the macro-scale. Such an analysis is suitable for
parallel computing because the mesoscale problems can be distributed among many processors.
The method differs from classical parallel computing methods which come out from the domain
decomposition. The macro-problem is assigned to the master processor while the solution of
the homogenization procedure ([1] and [2]) at the meso-level is carried out on slave processors.
At each time step the current temperature, moisture and mechanical fields together with the
increments of their gradients at a given macroscopic integration point are passed to the slave
processor (imposed onto the associated periodic cell), which, upon completing the small-scale
analysis, sends the homogenized data on the mesoscale (effective conductivities, averaged
storage terms, fluxes, forces etc.) back to the master processor. The application of the
processor farming method to a real-world masonry structure is illustrated by a numerical example.
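The processor-farming pattern itself is simple: the master owns the macro problem and farms one independent RVE solve per macroscopic integration point out to workers. A minimal sketch with a thread pool standing in for the slave processors and an invented closure standing in for the meso-scale solve:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_rve(point_state):
    """Stand-in for one meso-scale (RVE) solve on a slave processor."""
    temperature, moisture = point_state
    # Invented closure: returns an "effective conductivity" for the point.
    return 1.0 + 0.01 * temperature + 0.5 * moisture

# Macro-scale state (temperature, moisture) at each integration point.
macro_points = [(20.0 + i, 0.1 * (i % 3)) for i in range(8)]

# The master farms the independent RVE solves out to the worker pool and
# gathers the homogenized data for the next macro step.
with ThreadPoolExecutor(max_workers=4) as pool:
    effective = list(pool.map(solve_rve, macro_points))
print(effective[:2])
```

Because the per-point solves are independent, the scheme scales with the number of integration points rather than requiring a domain-decomposition split of the macro mesh, which is exactly the contrast drawn in the abstract.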

References
1. J. Sýkora and J. Vorel and T. Krejčı́ and M. Šejnoha and J. Šejnoha. Analysis of coupled heat
and moisture transfer in masonry structures. Materials and Structures (2009) 1153-1167.
2. J. Sýkora and M. Šejnoha and J. Šejnoha. Homogenization of coupled heat and moisture transport
in masonry structures including interfaces. Applied Mathematics and Computation 2199(3) (2010)
7275-7285.

103
Electro-Mechanical FEM Simulations With “General Motion”

Fritz Kretzschmar, Peter Binde


Dr. Binde Ingenieure
fritz.kretzschmar@drbinde.de, peter.binde@drbinde.de

Abstract
In this work we present the coupling of MAGNETICS for SC, a highly sophisticated
FEM solver for electromagnetic simulations on the basis of GetDP [1], with a precise and
fully integrated mechanical solver, resulting in the multiphysics solver General Motion (GM).
The GM solver is smoothly integrated within the Siemens NX/SC system, thus making it
possible to use the full capabilities of the NX/SC CAD system for creating and meshing the
desired geometries. Due to this tight integration and the capability of the GM solver to use
unstructured grids, h-refinement on unstructured grids is inherent to the solver in an easy-to-use
fashion. Moreover, the electromagnetic part of the GM solver is capable of employing higher
order basis functions, thus also allowing p-enrichment.

As an example, we will demonstrate the simulation of an actuator that consists of a coil, a
magnet, a stopper and an anchor. The coil has 185 turns and a current of 3 A is employed to
induce a sufficient magnetic field in order to attract the anchor. Here, magnet, stopper and
anchor consist of a non-linear material, i.e. a glowed steel with a non-linear B-H curve. For
the EM part of the GM solver we use a magnetoquasistatic setting, being

∇ × E = −dB/dt and ∇ × H = J, (1)

in combination with Ohm's law J = σE and the material law B = µH. Here, E and H
are the electric and magnetic fields, respectively; B is the magnetic flux density; J is the
current density; σ is the conductivity; and µ is the permeability. For the mechanical part of
the solver Newton's law of motion is used,

F = ma, (2)

where F is the force acting on the moving part, and m and a are the mass and acceleration
of the moving part, respectively. The coupling of the two solvers is then done via the EM
forces, i.e. the Lorentz and reluctance forces

F_L = J × B and F_R = dW/dx. (3)
The obtained simulation results are in agreement with the measured data. In addition, other
electro-mechanical systems, e.g. motors and generators can be simulated with the GM solver.
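The field-motion coupling loop of equations (1)-(3) can be sketched with a toy co-energy curve replacing the FEM field solve: at each step the reluctance force dW/dx is evaluated and Newton's law is integrated explicitly. All numbers and the W(x) curve below are invented for illustration:

```python
def W(x):
    """Toy magnetic co-energy vs. anchor position x (invented curve);
    in the real solver this information comes from the FEM field solution."""
    return 0.5 / (0.01 + x)

def simulate(x=5e-3, v=0.0, m=0.05, dt=1e-4, steps=200):
    """Explicit coupling loop: force evaluation, then one motion step."""
    for _ in range(steps):
        h = 1e-6
        F = (W(x + h) - W(x - h)) / (2.0 * h)   # reluctance force, dW/dx
        v += (F / m) * dt                        # Newton's law (2)
        x = max(x + v * dt, 0.0)                 # anchor stops at the stopper
        if x == 0.0:
            break
    return x, v

x_final, v_final = simulate()
print(x_final, v_final)
```

Since W grows as the air gap closes, dW/dx is negative and the anchor is pulled in until it reaches the stopper, which is qualitatively the actuator behaviour described above.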

References
1. C. Geuzaine. High Order FEM schemes for Maxwell's equations taking thin structures and global
quantities into account. PhD Thesis, 2001.

104
AR/VR at Google

Vladimir Krneta
Google Inc
vkrneta@google.com

Abstract

Augmented Reality is a technology that superimposes video or computer-generated imagery on
a user's view of the real world, thus providing a composite view. Virtual Reality is a
realistic and immersive simulation of a three-dimensional environment, created using
interactive software and hardware and experienced or controlled by movement of the body.
Some things VR could be used for: virtual tourism, product design and engineering, real
estate and architecture, training and simulation, medicine and psychiatry, business travel
and remote inspections, gaming, teleportation, a pocketable movie theater, memory. Some
things AR can be used for: portable "desktop computing", medicine, a big-screen TV
everywhere, navigation, telepresence, annotated everything. Current trends: Google Lens,
the Daydream standalone VR headset, ARCore, Tilt Brush, WebVR.

References
1. Google. Official Blog: Google AR and VR. tiltbrush.com.

105
Loss Approximation in Induction Machines

Miklós Kuczmann
Department of Automation, Széchenyi István University, Győr, Hungary
kuczmann@sze.hu

Abstract

Electrical machines are widely used in e-mobility, especially to drive electric cars or au-
tonomous vehicles. The accurate knowledge of the loss components is essential already in the
design stage.
The full paper shows an iron loss estimation in the ferromagnetic steel laminations of
an asynchronous machine by developing a two-step method to deal with eddy currents and
hysteresis effects.
In the first step, the approximate magnetic field distribution inside the motor is calculated.
It can be performed by a two-dimensional simulation or by a three-dimensional calculation;
the latter assumes a bulk material with anisotropic conductivity, and the individual laminates
are not taken into account. In the second step, the eddy current field inside the individual
laminates is modeled. The boundary conditions of the individual sheets are obtained from the
bulk model. The eddy current model employs the finite element method to consider the three-
dimensional current distribution in the steel sheets. The hysteresis losses are obtained from
a vector Jiles-Atherton hysteresis model.
Results obtained from different kinds of software tools are compared.
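For orientation, the classical thin-sheet formula P = π²σd²f²B² / 6 gives the per-volume eddy loss that such two-step models refine; it assumes sinusoidal flux and a lamination much thinner than the skin depth. The material values below are typical illustrative numbers, not taken from the paper:

```python
import math

def eddy_loss_density(sigma, d, f, B_peak):
    """Classical per-volume eddy-current loss [W/m^3] for a thin lamination
    of thickness d [m] and conductivity sigma [S/m], sinusoidal flux B_peak [T]."""
    return math.pi ** 2 * sigma * d ** 2 * f ** 2 * B_peak ** 2 / 6.0

# Illustrative values: 0.5 mm electrical steel at 50 Hz and 1.5 T peak.
p = eddy_loss_density(sigma=2e6, d=0.5e-3, f=50.0, B_peak=1.5)
print(round(p, 1), "W/m^3")
```

The resolved 3D eddy-current model of the second step captures edge effects and non-sinusoidal, rotational flux that this closed-form estimate ignores.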

References
1. P. Handgruber. Advanced Eddy Current and Hysteresis Loss Models for Steel Laminations of Rotating
Electrical Machines. PhD Dissertation, University of Graz, 2015.
2. B. Silwal. Computation of Eddy Currents in a Solid Rotor Induction Machine with 2-D and 3-D FEM .
Aalto University, 2012.

106
Temperature and Frequency Dependence of Hysteresis
Characteristics

Miklós Kuczmann
Department of Automation, Széchenyi István University, Győr, Hungary
kuczmann@sze.hu

Abstract

A comprehensive analysis of the dependence of hysteresis characteristics on temperature and
frequency has been performed with a computer-controlled measurement system. Sinusoidal
magnetic flux density with pre-defined amplitude and frequency has been generated and the
concentric hysteresis loops are measured. The toroidal shaped specimen is situated inside a
furnace whose temperature can be set.
The full paper presents the Preisach model and the Jiles-Atherton model used to approximate
the measured data.
The static Preisach model is built up from the Everett function. The frequency dependence
is modelled by an extra magnetic field intensity term identified from the measured data. The
effect of temperature on the Everett function is analyzed and approximated.
The Jiles-Atherton model is based on ordinary differential equations whose parameters
are set by the measured curves.
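The mechanism behind the Preisach model can be sketched with a handful of elementary relay hysterons; in the full model their weights come from the measured Everett function, whereas the tiny grid and uniform weights below are purely illustrative:

```python
class Relay:
    """Elementary hysteron: switches up at alpha, down at beta (beta < alpha)."""
    def __init__(self, alpha, beta, state=-1):
        self.alpha, self.beta, self.state = alpha, beta, state
    def update(self, h):
        if h >= self.alpha:
            self.state = 1
        elif h <= self.beta:
            self.state = -1
        return self.state

def magnetization(relays, h):
    """Relays keep their state between calls: the model has memory."""
    return sum(r.update(h) for r in relays) / len(relays)

relays = [Relay(a, b) for a in (0.2, 0.5, 0.8) for b in (-0.8, -0.5, -0.2)]
m0 = magnetization(relays, 0.0)    # virgin state at h = 0
m_up = magnetization(relays, 1.0)  # drive the field up: saturation
m_rem = magnetization(relays, 0.0) # back to h = 0: remanence remains
print(m0, m_up, m_rem)
```

The output differs at h = 0 before and after saturation, which is remanence, the defining signature of hysteresis; temperature and frequency dependence enter through the weights and an extra field term, as described above.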

References
1. A. Iványi. Hysteresis Models in Electromagnetic Computation. Academic Press, Budapest, 2017.
2. S. Steentjes and S. Zirka and Y. I. Moroz and K. Hameyer. Dynamic Magnetization Model of
Nonoriented Steel Sheets. IEEE Trans. on Magn. 50(4) (2014) 1-4.

107
Multiplicative Schwarz Method for Asynchronous Temporal
Integration of Governing Equations for Transport Processes
in Porous Media

Michal Kuraz
Czech University of Life Sciences Prague, Faculty of Environmental Sciences, Department of
Water Resources and Environmental Modeling
kuraz@fzp.czu.cz

Abstract

Transport processes in porous media are in general governed by a quasilinear convection-
diffusion-reaction equation, a non-symmetric operator. The moving front is a very typical
problem solved in this modeling practice. It turns out that the numerical difficulties
related to abrupt changes of the solution have time-dependent locations in the spatial
domain, depending on the position of the moving wetting front.
Subcycling algorithms enabling different temporal discretization on either elements or
subdomains have been extensively studied since the 1970s. In our previous works, problems
with a moving wetting front were efficiently resolved with time-adaptive domain
decomposition. In this contribution the subcycling algorithm is combined with Schwarz
overlapping domain decomposition on a time-dependent subdomain split. Implementation details
will be presented together with a simple benchmark problem where we tested our newly
implemented algorithm of adaptive domain decomposition together with
multi-time-step temporal integration.
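The overlapping Schwarz building block can be sketched on a 1-D stationary model problem, -u'' = 1 with homogeneous Dirichlet data, split into two overlapping subdomains; each sweep solves one subdomain exactly using the latest iterate as artificial boundary data. The subcycling and time adaptivity of the talk are omitted from this sketch:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_subdomain(u, lo, hi, h):
    """Exact FD solve of -u'' = 1 on nodes lo..hi, Dirichlet data u[lo], u[hi]."""
    n = hi - lo - 1
    a, b, c = [-1.0] * n, [2.0] * n, [-1.0] * n
    d = [h * h] * n
    d[0] += u[lo]
    d[-1] += u[hi]
    u[lo + 1:hi] = thomas(a, b, c, d)

N, h = 20, 1.0 / 20
u = [0.0] * (N + 1)
for _ in range(30):                      # multiplicative Schwarz sweeps
    solve_subdomain(u, 0, 12, h)         # subdomain [0, 0.6]
    solve_subdomain(u, 8, N, h)          # subdomain [0.4, 1]; overlap [0.4, 0.6]
err = max(abs(u[i] - 0.5 * (i * h) * (1 - i * h)) for i in range(N + 1))
print(err)   # converges to the exact solution x(1-x)/2
```

The overlap is what drives the contraction; in the time-adaptive setting described above, the subdomain split follows the wetting front, so the expensive fine time stepping is confined to where it is needed.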

References
1. Michal Kuraz and Petr Mayer and Pavel Pech. Solving the nonlinear and nonstationary Richards
equation with two-level adaptive domain decomposition (dd-adaptivity). Applied Mathematics and Com-
putation Volume 267, 15 September 2015, Pages 207-222.

108
Improving Algorithms for Particle Simulation on Modern
GPUs

Hermann Kureck, Nicolin Govender, Stefan Enzinger, Srdan Letina, Alexander


Korsunsky
Research Center Pharmaceutical Engineering GmbH, Graz, Austria
hermann.kureck@rcpe.at, nicolin.govender@rcpe.at, stefan.enzinger@rcpe.at,
srdan.letina@rcpe.at, alexander.korsunsky@rcpe.at

Johannes Khinast
Research Center Pharmaceutical Engineering GmbH & Institute for Process and Particle
Engineering, Graz University of Technology, Graz, Austria
khinast@tugraz.at

Abstract

Understanding of granular flow is important in many fields. In the pharmaceutical industry
especially, simulation is a crucial tool for gaining process understanding. We use the
Discrete Element Method (DEM) to compute granular flows based on particle-particle pair
interactions.
Typically, huge numbers of particles are needed to accurately model real-world problems,
in conjunction with small time steps. Therefore, massively parallel algorithms designed
for modern Graphics Processing Units (GPUs) were developed to make computation times
acceptable.
This work focuses on important aspects to be considered when designing GPU algorithms
for particle simulation. The main objective is to improve overall performance, especially
for dynamic systems, when the number of interactions per particle varies greatly or the
calculation complexity of interaction pairs (contacts) is high. This is often the case when
the particles are being mixed during the simulated process, vary heavily in size, or the
simulation contains particles of non-spherical shape (e.g. bi-convex tablets, polyhedral
shapes).
Examples will be presented showing the huge improvement in performance, mainly due
to increased data locality and execution convergence, while the register usage is lowered.
As a side effect, contact tracking is made possible, which enables contact history, usually
needed by tangential force models. Applications like analysis of stress or force chains are
also feasible, to name just a few.
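A standard device behind the data locality mentioned above is the cell list: particles are binned by spatial cell so that candidate interaction pairs come only from adjacent cells (and, on the GPU, so that neighbors in space end up near each other in memory). A serial 2-D toy version, not the authors' code:

```python
def build_cells(positions, cell):
    """Hash each particle index into a square cell of side `cell`."""
    cells = {}
    for idx, (x, y) in enumerate(positions):
        cells.setdefault((int(x // cell), int(y // cell)), []).append(idx)
    return cells

def neighbor_pairs(positions, radius):
    """All pairs closer than `radius`, testing only adjacent cells."""
    cells = build_cells(positions, radius)
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), []):
                    for i in members:
                        if i < j:
                            (xi, yi), (xj, yj) = positions[i], positions[j]
                            if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                                pairs.add((i, j))
    return pairs

pts = [(0.1, 0.1), (0.15, 0.12), (0.9, 0.9), (0.92, 0.88)]
found = neighbor_pairs(pts, 0.1)
print(found)   # distant pairs are never even distance-tested
```

The GPU versions additionally sort particle data by cell and assign work per particle or per cell, which is where the gains in locality and execution convergence described in the abstract come from.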

References
1. C. Radeke and B. Glasse and J. G. Khinast. Large-scale mixer simulations using massively parallel
GPU architectures. Chemical Engineering Science, Volume 65 (2010), Issue 24, pp. 6435-6442.
2. M. Boerner and M. Michaelis and E. Siegmann and C. Radeke and U. Schmidt. Impact of
impeller design on high-shear wet granulation. Powder Technology, Volume 295 (2016), pp. 261-271.
3. N. Govender and D. Wilke and P. Pizette and J.G. Khinast. BlazeDEM3D-GPU A Large Scale
DEM simulation code for GPUs. Powders & Grains 2017, EPJ Web of Conferences, Volume 140, 06025.

109
eXtended Particle System (XPS) - High-Performance Particle
Simulation

Hermann Kureck, Stefan Enzinger, Srdan Letina, Alexander Korsunsky


Research Center Pharmaceutical Engineering GmbH, Graz, Austria
hermann.kureck@rcpe.at, stefan.enzinger@rcpe.at, srdan.letina@rcpe.at,
alexander.korsunsky@rcpe.at

Johannes Khinast
Research Center Pharmaceutical Engineering GmbH & Institute for Process and Particle
Engineering, Graz University of Technology, Graz, Austria
khinast@tugraz.at

Abstract

Understanding of granular flow is important in many fields. In the pharmaceutical industry
especially, simulation is a crucial tool for gaining process understanding. XPS uses the
Discrete Element Method (DEM) to compute granular flows based on particle-particle pair
interactions.
Typically, huge numbers of particles are needed to accurately model real-world problems,
in conjunction with small time steps. Therefore, massively parallel algorithms designed
for modern Graphics Processing Units (GPUs) were developed to make computation times
acceptable.
To deal with fluidized processes, like wet coaters or fluidized bed applications, we use a
coupling interface with the industrial simulation software AVL FIRE® to add support for
Computational Fluid Dynamics (CFD). The biggest challenge is to keep the computation
time per time step and the memory consumption as low as possible.
XPS supports complex real-world boundaries via STL input files, can handle various
non-spherical particles (bi-convex tablets, glued-spheres, polyhedral shapes), and optionally
includes heat flow and particle coating. Currently Smoothed Particle Hydrodynamics (SPH)
is being integrated and a multi GPU implementation is planned.

References
1. C. Radeke and B. Glasse and J. G. Khinast. Large-scale mixer simulations using massively parallel
GPU architectures. Chemical Engineering Science, Volume 65 (2010), Issue 24, pp. 6435-6442.
2. D. Jajcevic and E. Siegmann and C. Radeke and J. G. Khinast. Large-scale CFD-DEM simulations
of fluidized granular systems. Chemical Engineering Science, Volume 98 (2013), pp. 298-310.
3. M. Boerner and M. Michaelis and E. Siegmann and C. Radeke and U. Schmidt. Impact of
impeller design on high-shear wet granulation. Powder Technology, Volume 295 (2016), pp. 261-271.
4. N. Govender and D. Wilke and P. Pizette and J.G. Khinast. BlazeDEM3D-GPU A Large Scale
DEM simulation code for GPUs. Powders & Grains 2017, EPJ Web of Conferences, Volume 140, 06025.

110
Cuts for 3D Magnetic Scalar Potentials: Visualizing
Unintuitive Surfaces Arising From Trivial Knots

Alex Stockrahm, P. Robert Kotiuga


Boston University
adstockrahm@gmail.com, prk@bu.edu

Valtteri Lahtinen, Jari Kangas


Tampere University of Technology
valtteri.lahtinen@tut.fi, jari.kangas@tut.fi

Abstract
A wealth of literature exists on computing and visualizing cuts for the magnetic scalar poten-
tial of a current carrying conductor via Finite Element Methods (FEM) and harmonic maps
to the circle. By “cut” we refer to an orientable surface bounded by a given current carrying
path (such that the flux through it may be computed) that restricts contour integrals on a
divergence-zero vector field to those that do not link the current-carrying path, analogous to
branch cuts of complex analysis. In our previous paper, we aimed to extend cuts for knot-
ted geometries into undergraduate curricula via open-source software including GMSH and
Python in order to allow students to compute and 3D print surfaces [Stockrahm]. The exer-
cises therein were intended to be a gateway to the intuitive study of near force-free magnetic
fields and plasma physics. Here we extend these methods to broaden the ability of students
to utilize free, readily available tools to communicate technically through visualization.
This work is concerned with a study of a peculiar contour that illustrates topologically
unintuitive aspects of cuts obtained from a trivial loop and raises questions about the notion of
an optimal cut, as motivated in [Kotiuga]. There are at least three perspectives to emphasize:
computation, intuition and physical appearance. Specifically, an unknotted curve that bounds
only high genus surfaces in its convex hull is analyzed [Almgren]. The current work considers
the geometric realization as a current-carrying wire in order to construct a magnetic scalar
potential. We first realize cuts as level sets of harmonic maps associated with the trivial
knot, noting they cannot be genus minimizing while having support lying in the convex hull
of the (topologically trivial) conductor. Second, we produce a geometric configuration of
currents that cannot possibly be a force-free magnetic field, observing there is no topological
obstruction to having a near force-free magnetic field as a result of an ambient isotopy.

References
1. F. J. Almgren and W. P. Thurston. Examples of Unknotted Curves Which Bound Only Surfaces of
High Genus Within Their Convex Hulls. Ann. Math., Vol. 105, No. 3 (May 1977), 527-538.
2. P. R. Kotiuga. On the Topological Characterization of Near Force-Free Magnetic Fields and the Work
of Late-onset Visually-impaired Topologists. Discrete and Continuous Dynamical Systems, Series S, Vol.
9, No. 1 (Feb 2016), 215-234.
3. A. Stockrahm and P. R. Kotiuga and J. Kangas. Tools for Visualizing Cuts in Electrical Engineering
Education. IEEE Trans. Magn., Vol. 52, No. 3 (2016), 9401104.

111
A Novel Weighted Likelihood Estimation With Empirical
Bayes Flavor

Md Mobarak Hossain, Tomasz Kozubowski


University of Nevada
mhossain@unr.edu, tkozubow@unr.edu

Krzysztof Podgorski
Lund University
krzysztof.podgorski@stat.lu.se

Abstract

We propose a novel approach to estimation, where a set of estimators of a parameter is com-


bined into a weighted average to produce the final estimator. With the weights proportional
to the likelihood evaluated at the estimators, the method can be viewed as a Bayesian ap-
proach with a data-driven prior distribution. Several illustrative examples and simulations
show that this straightforward methodology produces consistent estimators comparable with
those obtained by the maximum likelihood method.
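
The combination rule is simple enough to sketch in a few lines. The following minimal illustration uses a Gaussian location model with three hypothetical candidate estimators (mean, median, midrange); the simulated data and estimator choices are ours, not the paper's:

```python
import math
import random

def log_likelihood(theta, data, sigma=1.0):
    """Gaussian log-likelihood of location `theta` with known sigma."""
    return sum(-0.5 * ((x - theta) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)) for x in data)

def weighted_likelihood_estimate(estimators, data):
    """Combine candidate estimates with weights proportional to the
    likelihood evaluated at each estimate (the data-driven 'prior')."""
    ests = [f(data) for f in estimators]
    logw = [log_likelihood(t, data) for t in ests]
    m = max(logw)                         # stabilise the exponentials
    w = [math.exp(l - m) for l in logw]
    return sum(wi * ti for wi, ti in zip(w, ests)) / sum(w)

random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(200)]

mean = lambda d: sum(d) / len(d)
median = lambda d: sorted(d)[len(d) // 2]
midrange = lambda d: (min(d) + max(d)) / 2

est = weighted_likelihood_estimate([mean, median, midrange], data)
```

A poor candidate such as the midrange receives an exponentially small weight, so the combined estimate stays close to the maximum likelihood answer.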

References
1. Md.M. Hossain and T.J. Kozubowski and K. Podgorski. A novel weighted likelihood estimation
with empirical Bayes flavor. Comm. Statist. Sim. Comput. 47 (2018) 392-412.

112
Fractional Modeling of Anomalous Diffusion in Plant Cells

Raúl Lamadrid Chico, Jorge Oziel Rivas Puente, F-Javier Almaguer, Francisco
Hernádez-Cabrera
FCFM-UANL
bourbakista@gmail.com, oziel.rivaspnt@gmail.com,
francisco.almaguermrt@uanl.edu.mx, fcabrera007@yahoo.com.mx

Abstract

Changes of turgor pressure in plant cells can produce leaf or branch movements as a con-
sequence of ion diffusion. The phenomenon occurs when the solutes actively accumulated in
cells are released into the interstitial space. The lower pressure in the cells reduces the mechan-
ical properties of intracellular walls and weakens tissues. In the inverse process, ions return to
the cells and the turgor pressure is maintained in the plant structures. Moreover, the exchange
of ions in the plant produces an electrical response that can be measured in the frequency
domain by electrical impedance spectroscopy. The experiments show an anomalous diffusive
relaxation phenomenon that requires a fractional analysis to determine the parameters of the
equivalent electric circuit. The effects of memory on the response of the plant indicate a pro-
cess dependent on previous excitable stimuli, which is explained by fractional differential
equations. The proposed model was fitted to experimental data and explains elements related
to the processes of semi-infinite diffusion. These elements correspond to several kinds of ions
responsible for the propagation of the electrical impulse through the plant in the transition
between capacitive and diffusive states.
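
The fractional elements mentioned above are commonly represented in equivalent circuits by a constant-phase element (CPE), whose exponent α = 0.5 reproduces the semi-infinite (Warburg) diffusion impedance. The sketch below evaluates a generic Randles-type cell with a CPE; the topology and component values are illustrative assumptions, not the authors' fitted model:

```python
import cmath

def cpe_impedance(omega, Q, alpha):
    """Constant-phase element: Z = 1 / (Q * (j*omega)**alpha).
    alpha = 1 is an ideal capacitor; alpha = 0.5 reproduces the
    Warburg (semi-infinite diffusion) impedance."""
    return 1.0 / (Q * (1j * omega) ** alpha)

def circuit_impedance(omega, R_s, R_ct, Q, alpha):
    """Series resistance R_s plus a charge-transfer resistance R_ct
    in parallel with a CPE (a generic fractional Randles-type cell)."""
    z_cpe = cpe_impedance(omega, Q, alpha)
    return R_s + (R_ct * z_cpe) / (R_ct + z_cpe)

# The phase of a pure CPE is constant (-alpha*pi/2) at every frequency,
# which is the frequency-domain signature of fractional dynamics.
z_low = cpe_impedance(10.0, Q=1e-5, alpha=0.5)
z_high = cpe_impedance(1000.0, Q=1e-5, alpha=0.5)
```

Fitting R_s, R_ct, Q and α against spectroscopy data then yields the fractional order of the underlying diffusion process.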

References
1. X. Liu. Electrical Impedance Spectroscopy Applied in Plant Physiology Studies. RMIT University, 2006.
2. A. G. Volkov et al. Mimosa pudica: Electrical and mechanical stimulation of plant movements. Plant,
Cell & Environment, Volume 33, Issue 2, pp. 163-173, 2010.

113
On Multi-channel Stochastic Networks With Markov Control

Hanna Livinska, Eugene Lebedev


Taras Shevchenko National University of Kyiv
livinskaav@gmail.com, stat@unicyb.kiev.ua

Abstract
Development and research of analysis techniques for queueing networks with controlled input
flows are topical directions in the development of queueing network theory.
The main model we consider is a queueing network consisting of r service nodes. Each
node is a queueing system with an infinite number of servers. Therefore, if a customer
arrives at such a system, then it begins processing immediately. Input flow arriving at the
network is controlled by a Markov process. We define the service process in the network as an
r-dimensional stochastic process Q(t) = (Q1(t), . . . , Qr(t))′, t ≥ 0, where Qi(t), i = 1, 2, . . . , r,
is the number of customers at the i-th node at instant t.
We study such networks in two cases. Firstly, we consider one-dimensional case, where
the network has the only service node. It is assumed that the instants of customers’ arrivals
to the system are the same as jump instants of a homogeneous continuous-time Markov
chain with a finite set of states. A customer arriving at the system immediately begins
service on a free server. The service time is distributed exponentially. In this case a
generating function of the stationary distribution for the process Q(t) is obtained. The form
of the generating function is a matrix version of the Takacs formula.
Further, the network with r > 1 service nodes is studied. A common input flow of cus-
tomers arrives at servicing nodes. This flow is controlled by a Markov chain η(t) according to
the following algorithm. As before, the instants of customers' arrivals are the same as the jump
instants tn , n = 1, 2, ..., of the chain η(t). If the chain η(t) jumps into state i at the instant tn ,
the customer numbered n arrives for service at the ith node. Note that the number of states
for controlling Markov chain η(t) coincides with the number of network nodes. At the node
the customer occupies a free server for the time distributed exponentially with parameter µi .
After service in the ith node the customer is transferred to the jth node with probability pij ,
j = 1, 2, . . . , r, or leaves the network with probability pi,r+1 = 1 − Σ_{j=1}^{r} pij . For a multivariate
service process the condition of a stationary regime existence and a correlation matrix are
found.
Finally, the stochastic network with controlled input flow is considered in heavy traffic. It
is proved that, under certain heavy traffic conditions on the network parameters, the service
process converges in the uniform topology to a Gaussian process. Correlation characteristics
of the limit process are written via the network parameters.
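
As a baseline for the one-node case, the classical infinite-server queue with plain Poisson arrivals (rather than the Markov-modulated input considered in the talk) has stationary mean λ/μ, which a short discrete-event simulation reproduces:

```python
import random

def simulate_infinite_server(lam, mu, horizon, seed=1):
    """Discrete-event simulation of an M/M/infinity node: Poisson(lam)
    arrivals, each customer holding its own server for an Exp(mu)
    service time.  Returns the time-average number in system."""
    random.seed(seed)
    t, departures = 0.0, []
    area, n, last = 0.0, 0, 0.0
    while t < horizon:
        t += random.expovariate(lam)            # next arrival instant
        # process departures occurring before this arrival (or horizon)
        while departures and min(departures) <= min(t, horizon):
            d = min(departures)
            departures.remove(d)
            area += n * (d - last)
            last, n = d, n - 1
        if t >= horizon:
            break
        area += n * (t - last)
        last, n = t, n + 1
        departures.append(t + random.expovariate(mu))
    area += n * (horizon - last)
    return area / horizon

avg = simulate_infinite_server(lam=5.0, mu=1.0, horizon=2000.0)
```

For λ = 5 and μ = 1 the time-average occupancy settles near λ/μ = 5, the mean of the stationary Poisson distribution.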

References
1. V.V. Anisimov and E.A. Lebedev. Stochastic Queueing Networks. Markov Models. Kiev Univ., Kiev,
1992.

114
Air Pollution Estimation Based on the Intensity of Received
Signal in 3G/4G Mobile Terminal

Grazia Lo Sciuto, Giacomo Capizzi, Francesco Beritelli


Department of Electrical, Electronics and Informatics Engineering, University of Catania
glosciuto@dii.unict.it, gcapizzi@diees.unict.it,
francesco.beritelli@dieei.unict.it

Fabio Famoso, Rosario Lanzafame


Department of Industrial Engineering, University of Catania
ffamoso@unict.it, rlanzafa@dii.unict.it

Abstract

In order to estimate the attenuation in a communication system, it is necessary to take into


account not only the attenuation due to the medium in which our wave propagates, but
also all the components of which the system is composed. The electromagnetic wave signal
may suffer attenuation by suspended particles. Particulate Matter (PM) is defined as the mix of all
solid and liquid particles suspended in air. PM can originate from natural processes
(soil erosion, forest fires and pollen dispersion) and human activities, typically combustion
processes, road transport and vehicular traffic. Moreover, secondary air pollutants through
chemical reactions in the atmosphere such as nitrogen oxides, sulfur dioxide, ammonia and
Volatile Organic Compounds form sulfates, nitrates and ammonium salts. In our experimental
campaigns carried out in the territory of Catania (Italy), the main source contributions to high
levels of particulate PM10 are traffic congestion, vehicular traffic, natural sources such as desert
sand transported by air masses from North Africa, and volcanic eruption injection or volcanic
passive degassing, since Catania is situated on the east coast of Sicily under the active volcano
Mount Etna. In this paper we propose a new Air Pollution Estimation method based on a
probabilistic neural network on the Intensity of Received Signal in 3G/4G Mobile Terminal.
Estimating the effective signal attenuation and particulate matter PM10 required measurements
of signal strength using a Ubiquiti NanoStation at a frequency of 2.4 GHz in a controlled
environment. The experimental set-up consists of a tube closed at each side, with many
perforations through which different sources of PM10 are injected. To monitor the emission of
PM2.5 and PM10, an Aerocet 531S Handheld Particle Counter is placed inside the tube.
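
A probabilistic/general regression neural network of this kind is, in essence, Gaussian-kernel regression on stored training pairs. The sketch below maps received signal strength to PM10 on synthetic data with an assumed linear attenuation trend; all numbers are illustrative, not the measured Catania data:

```python
import math
import random

def grnn_predict(x, train_x, train_y, sigma=1.0):
    """General regression neural network (Nadaraya-Watson kernel
    regression): the prediction is the kernel-weighted mean of the
    training targets, here mapping signal strength (dBm) to PM10."""
    w = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(wi * yi for wi, yi in zip(w, train_y)) / sum(w)

# Synthetic training set: stronger attenuation (lower RSSI) at higher PM10.
random.seed(3)
pm10 = list(range(0, 200, 5))
rssi = [-60.0 - 0.2 * pm + random.gauss(0, 1) for pm in pm10]

pred_clean = grnn_predict(-65.0, rssi, pm10, sigma=2.0)   # mild attenuation
pred_dusty = grnn_predict(-95.0, rssi, pm10, sigma=2.0)   # strong attenuation
```

The estimator needs no training beyond storing the pairs, which suits field campaigns where data arrive incrementally.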

References
1. A. Musa and S. O. Bashir and A. H. Abdalla. Review and Assessment of Electromagnetic Wave Propagation
in Sand and Dust Storms at Microwave and Millimeter Wave Bands. Part I. Progress In Electromagnetics
Research M (2014), 40, 91-100.
2. G. Capizzi and G. Lo Sciuto and P. Monforte and C. Napoli. Cascade Feed Forward Neural Network-based
Model for Air Pollutants Evaluation of Single Monitoring Stations in Urban Areas. International Journal of
Electronics and Telecommunications, 61(4), 327-332.

115
Optimization of Error Indicators for SOLD Methods

Petr Lukas
Charles University in Prague
lukas@karlin.mff.cuni.cz

Pavel Solin
University of Nevada, Reno
solin@unr.edu

Abstract

In the talk we consider the numerical solution of the scalar convection–diffusion–reaction


equation
−ε∆u + b · ∇u + c u = f in Ω,   u = u_b on Γ_D,   ε ∂u/∂n = g on Γ_N.   (1)
We present new results of an adaptive technique in finite element method based on minimizing
a functional called error indicator Ih : Wh → R, where Wh is a FE space for the discrete
solution wh of (1). The simplest form of such an indicator is

Ih(wh) = Σ_{K∈Th, K∩∂Ω=∅} h_K² ‖−ε∆wh + b · ∇wh + c wh − f‖²_{0,K}   ∀wh ∈ Wh,   (2)

where we have used the usual notation from the article of V. John, P. Knobloch, S. B. Savescu
[1]. It is possible to enrich this indicator with other terms, which favour a less smeared solution
over a diffuse one. One example of such an added term is ‖φ(|b⊥ · ∇wh|)‖_{0,1,K}, where φ is a
function like square root. We can also bound the residue term in (2) from above to obtain a
physical solution for examples with non-steep Dirichlet boundary conditions. The suitability
of added terms depends on the problem we solve.
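
In one dimension with piecewise-linear elements, the residual indicator (2) reduces to an elementwise sum that is easy to evaluate; the sketch below is our simplification (the diffusion term drops out inside each element since wh'' = 0, and it sums over all elements rather than only interior ones):

```python
def error_indicator(w, h, eps, b, c, f):
    """1D analogue of I_h(w_h) = sum_K h_K^2 ||r(w_h)||^2_{0,K} with
    residual r = -eps*w'' + b*w' + c*w - f, for piecewise-linear w on
    a uniform mesh (eps drops out because w'' = 0 on each element);
    midpoint quadrature per element."""
    total = 0.0
    for k in range(len(w) - 1):
        dw = (w[k + 1] - w[k]) / h              # element derivative
        wm = 0.5 * (w[k] + w[k + 1])            # midpoint value
        xm = (k + 0.5) * h
        r = b * dw + c * wm - f(xm)
        total += h ** 2 * (r * r * h)           # h^2 * ||r||^2 on element
    return total

# Sanity check: for u(x) = x, b = 1, c = 0, f = 1 the residual vanishes.
n, h = 10, 0.1
w_exact = [i * h for i in range(n + 1)]
ind_exact = error_indicator(w_exact, h, eps=0.01, b=1.0, c=0.0,
                            f=lambda x: 1.0)
w_bad = [0.0] * (n + 1)                         # w = 0 leaves residual -f
ind_bad = error_indicator(w_bad, h, eps=0.01, b=1.0, c=0.0,
                          f=lambda x: 1.0)
```

The optimization loop of the talk would then adjust stabilization parameters so as to drive such an indicator (plus the extra terms) down.
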
The parameter we are changing in the optimization procedure is currently the parameter
τ from the SUPG (SDFEM) method and the parameter denoted ε̃ in [2] for the SOLD method
which adds diffusion in the crosswind direction (page 2205). We use several different FE
spaces for both parameters, see [3].

References
1. V. John and P. Knobloch and S. B. Savescu. A posteriori optimization of parameters in stabilized
methods for convection-diffusion problems - Part I. Comput. Methods Appl. Mech. Engrg. 200 (2011),
2916-2929.
2. V. John and P. Knobloch. On spurious oscillations at layers diminishing (SOLD) methods for
convection-diffusion equations: Part I - A review. Comput. Methods Appl. Mech. Engrg. 196 (2007),
2197-2215.
3. P. Lukas and P. Knobloch. Adaptive techniques in SOLD methods. Applied Mathematics and Com-
putation 319 (2018), 24-30.

116
Modelling of Large Deforming Fluid Saturated Porous Media
Using Homogenization Approach

Vladimı́r Lukeš, Eduard Rohan


University of West Bohemia
vlukes@kme.zcu.cz, rohan@kme.zcu.cz

Abstract

We present a model of fluid saturated porous media undergoing large deformation. We as-
sume the porous structure consists of a hyperelastic skeleton and a compressible viscous fluid.
The medium is described by the Biot model constituted by poroelastic coefficients and the
permeability governing the Darcy flow. The numerical solution is based on a consistent in-
cremental formulation in the Eulerian framework [1] and on the updated Lagrangian formu-
lation. The spatial deformed configuration is used to express the equilibrium equation and
the mass conservation equation.
The homogenization method is applied to the linearized equations which result from the
differentiation of model equations expressed in the residual form. The proposed upscaling
approach allows us to introduce the effective medium properties involved in the incremental
formulation using the homogenization of the microstructure with locally periodic structure.
Modified poroelastic coefficients can be computed for given updated configurations, see [1],
where the linear continua with hierarchical structures are treated. The fluid flow upscaling
yields a Darcy-type flow as a consequence of the homogenization of the linearized model. Us-
ing the sensitivity analysis with respect to the microstructure deformation and the pressure
perturbation, cf. [2], the sensitivity of the homogenized coefficients is computed. The resulting
model is consistent with the updated-Lagrangian computational incremental scheme based
on the Eulerian formulation, [3].

References
1. E. Rohan and S. Naili and T. Lemaire. Double porosity in fluid-saturated elastic media: deriving
effective parameters by hierarchical homogenization of static problem. Continuum Mech. Thermodyn. 28
(5) (2016) 1263-1293, DOI:10.1007/s00161-015-0475-9 .
2. E. Rohan and V. Lukeš. On modelling nonlinear phenomena in deforming heterogeneous media
using homogenization and sensitivity analysis concepts. Appl. Math. Comput. 267 (2015) 583-595,
DOI:10.4203/ccp.106.85.
3. E. Rohan and V. Lukeš. Modelling large-deforming fluid-saturated porous media using an Eulerian
incremental formulation. Adv. Eng. Softw. 113 (2017) 84-95, DOI: 10.1016/j.advengsoft.2016.11.003.

117
On the Robustness of Hierarchical Bayesian Models for
Uncertainty Quantification in Inverse Problems

Aaron Luttman
Nevada National Security Site
luttmaab@nv.doe.gov

Abstract

The two primary challenges with classical variational approaches for linear inverse problems
in modern scientific modeling and computing are the manual selection of regularization pa-
rameters and the inability to directly model and quantify uncertainties. In order to address
the issue of uncertainty quantification, the last decade has seen a transformational shift to
Bayesian models for inverse problems and the development of Markov Chain Monte Carlo
(MCMC) or other sampling approaches to computing posterior distributions of “solutions,”
in place of single, approximately optimal answers. Straightforward Bayesian models still re-
quire scaling parameters, as a measure of how much one trusts their data vs. their prior,
retaining an analogy to the classical problem of regularization parameter selection. To deal
with both problems of parameter selection and uncertainty quantification, there has been a
move toward hierarchical Bayesian models, in which one puts a prior distribution on the scale
parameters then selects the so-called hyperprior parameters for the prior distributions [1,2,3].
In this setting, one replaces a few, presumed highly sensitive, scale parameters with a larger
number of, presumed highly insensitive, hyperprior parameters. In this work we show that for
several linear inverse problems, the computation of the mean of the posterior for hierarchi-
cal Bayesian models, as computed with Gibbs samplers or other similar MCMC methods, is
highly robust to the manual selection of hyperprior parameters, but that the computation of
the variances of the model parameters with hierarchical models is very sensitive to the hyper-
prior parameter selection. This suggests that hierarchical models are an excellent approach to
ameliorating the classical problem of regularization parameter selection, but that uncertainty
quantification with such models is, at least for some common problems, still tenuous.
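
The robustness claim can be probed on a toy scalar model y_i ~ N(θ, 1/λ_n) with θ ~ N(0, 1/λ_p) and Gamma hyperpriors on both precisions; the conjugate Gibbs updates below are standard, while the hyperprior values are deliberately arbitrary (this is our illustration, not the problems studied in the talk):

```python
import random

def gibbs_hierarchical_mean(y, hyper=1e-2, iters=4000, burn=1000, seed=7):
    """Gibbs sampler for y_i ~ N(theta, 1/lam_n), theta ~ N(0, 1/lam_p),
    lam_n ~ Gamma(a, b), lam_p ~ Gamma(c, d) with a = b = c = d = hyper.
    Returns the posterior mean of theta."""
    random.seed(seed)
    a = b = c = d = hyper
    n, s = len(y), sum(y)
    theta, lam_n, lam_p = 0.0, 1.0, 1.0
    total = 0.0
    for it in range(iters):
        prec = n * lam_n + lam_p             # theta | rest is Gaussian
        theta = random.gauss(lam_n * s / prec, prec ** -0.5)
        sse = sum((yi - theta) ** 2 for yi in y)
        lam_n = random.gammavariate(a + 0.5 * n, 1.0 / (b + 0.5 * sse))
        lam_p = random.gammavariate(c + 0.5, 1.0 / (d + 0.5 * theta ** 2))
        if it >= burn:
            total += theta
    return total / (iters - burn)

random.seed(11)
data = [random.gauss(3.0, 0.5) for _ in range(100)]
m_vague = gibbs_hierarchical_mean(data, hyper=1e-2)  # vague hyperpriors
m_unit = gibbs_hierarchical_mean(data, hyper=1.0)    # much stronger ones
```

Changing the hyperprior parameters by two orders of magnitude leaves the posterior mean essentially unchanged, mirroring the robustness of the mean; in the hierarchical setting it is the posterior variances where the sensitivity shows up.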

References
1. J. Wang and N. Zabaras. Hierarchical Bayesian Models for Inverse Problems in Heat Conduction.
Inverse Problems 21 (2004) 183.
2. D. Calvetti and E. Somersalo. Hypermodels in the Bayesian Imaging Framework. Inverse Problems
24 (2008) 034013.
3. M. Howard and M. Fowler and A. Luttman and S. Mitchell and M. Hock. Bayesian Abel
Inversion in Quantitative X-Ray Radiography. SIAM J. Sci. Comput. 38 (2016) B396-B413.

118
Analysis and Optimisation of Inductive-Based Wireless Power
Charger

Dániel Marcsa
Department of Automation, Széchenyi István University
marcsa@maxwell.sze.hu

Abstract

With the widespread use of wireless communication, cables are being removed step by step from
our electrical equipment. Nowadays, the power cable is the last wire-based connection to
the equipment. Therefore, wireless power transfer (WPT) is a desired technology. These
advances make WPT very desirable for electric vehicle (EV) charging applications in
both stationary and dynamic charging scenarios.
The inductive type WPT can be considered as a large gap transformer, where the primary
winding is the transmitter coil, and the secondary winding is the receiver coil. The application
of analytic methods for mutual inductance calculation to real life cases is almost impossible,
so computational electromagnetics can help to analyse this complex system. In addition,
knowing the electromagnetic field and the parameters of the transmitter and receiver coils is not
enough; a system-level approach to wireless power transfer is required. Further, the oper-
ating frequency of a resonance-based charger is high, so the transmitter and receiver coils act
as antennas. Therefore, the analysis of the transmitted, reflected and total electromagnetic
field is also important in this application.
Our presentation focuses on the numerical analysis of a wireless power charger using
finite element method and co-simulation of electric circuit and magnetic system of wireless
power transfer for resonance in ANSYS Simplorer system simulation environment. The aim
of optimization is to reduce material costs at a certain distance. The presentation covers the
importance of eddy current shielding and losses in this application.
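
Before any finite element work, the two-coil resonant link can be checked against a lumped mesh-analysis model of the kind described in [2]. The component values below are made-up assumptions, chosen only so that both loops resonate near 85 kHz:

```python
import math

def wpt_efficiency(omega, V, R1, L1, C1, R2, L2, C2, RL, k):
    """Two magnetically coupled series-resonant loops (mesh analysis):
    Z1*I1 + j*w*M*I2 = V,  j*w*M*I1 + Z2*I2 = 0, with M = k*sqrt(L1*L2).
    Returns power delivered to RL divided by power drawn from the source."""
    M = k * math.sqrt(L1 * L2)
    Z1 = R1 + 1j * omega * L1 + 1.0 / (1j * omega * C1)
    Z2 = R2 + RL + 1j * omega * L2 + 1.0 / (1j * omega * C2)
    I1 = V / (Z1 + (omega * M) ** 2 / Z2)       # reflected impedance
    I2 = -1j * omega * M * I1 / Z2
    p_in = (V * I1.conjugate()).real
    p_load = abs(I2) ** 2 * RL
    return p_load / p_in

# Illustrative component values; both loops tuned to ~85 kHz.
L, C = 24e-6, 146e-9
w0 = 1.0 / math.sqrt(L * C)
eta_res = wpt_efficiency(w0, 10.0, 0.1, L, C, 0.1, L, C, 10.0, k=0.2)
eta_off = wpt_efficiency(0.5 * w0, 10.0, 0.1, L, C, 0.1, L, C, 10.0, k=0.2)
```

The sharp drop in efficiency away from resonance is exactly why the co-simulation of the electric circuit with the magnetic system matters.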

References
1. M. Kuczmann and A. Iványi. The Finite Element Method in Magnetics. Akadémiai Kiadó, Budapest,
2008.
2. K. Mehdi and G. Maysam. The Circuit Theory Behind Coupled-Mode Magnetic Resonance-Based
Wireless Power Transmission. IEEE Transactions on Circuit and Systems – I: Regular Paper 59 (2012)
2065-2074.
3. L. Reuben and W. Leo and E.C. Charles . Electronic Transformers and Circuits. 3rd Ed., Wiley-
Interscience, New York, 1988.
4. V. Rikard. Wireless Energy Transfer by Resonant Inductive Coupling. Master thesis, Department of
Signals and Systems, Chalmers University of Technology, 2015.

119
Quantifying the Skill of Sea Ice Simulations in Earth System
Models Using a Variational ICESat-2 Emulator

Andrew Roberts, Wieslaw Maslowski


Naval Postgraduate School
afrobert@nps.edu, maslowsk@nps.edu

Sinéad Farrell
University of Maryland
sinead.farrell@noaa.gov

Elizabeth Hunke
Los Alamos National Laboratory
eclare@lanl.gov

Abstract

ICESat-2 is scheduled for launch in late 2018 and offers the potential of basin-scale sea ice
freeboard measured with unprecedented accuracy. A challenge awaiting Earth System Mod-
elers using data from the Advanced Topographic Laser Altimeter System (ATLAS) aboard
ICESat-2 is in quantifying model skill and bias to accurately account for the time, place,
snow cover and scale-dependent density of sea ice in models. To achieve this, we have devel-
oped a satellite emulator in the Regional Arctic System Model (RASM; Roberts et al., 2015,
Hamman et al. 2017, Cassano et al. 2017) to generate run-time skill metrics within CICE
Consortium sea ice code. Rather than comparing modeled ice thickness with approximations
generated from ATLAS retrievals, the emulator directly compares modeled and measured free-
board, avoiding uncertainties associated with ice density and snow loading in observational
freeboard-to-thickness conversions. A virtual version of ICESat-2 flies through the model and
samples freeboard at the same place and time as ICESat-2. The most important innovation
associated with the emulator lies in determining modeled freeboard with variational ridging.
This permits the scale-dependent density of sea ice to be matched to the Gaussian footprint
of ICESat-2 laser tracks via a ‘Dilation Field’ using the principle of virtual work. We fo-
cus on this aspect of the ICESat-2 emulator and highlight its computational efficiency and
portability, and present RASM results of model bias and skill-scores.

References
1. Roberts et al.. Simulating transient ice – ocean Ekman transport in the Regional Arctic System Model
and Community Earth System Model. Ann. Glaciol., 56(69) (2015) 211-228, doi:10.3189/2015AoG69A760.
2. Hamman et al.. The Coastal Streamflow Flux in the Regional Arctic System Model. J. Geophys. Res.
Ocean., 122 (2017) doi:10.1002/2016JC012323.
3. Cassano et al. . Development of the Regional Arctic System Model (RASM): Near Surface Atmospheric
Climate Sensitivity. J. Clim., JCLI-D-15-0775.1 (2017) doi:10.1175/JCLI-D-15-0775.1.

120
Autoscaling Localized Reduced Basis Methods With pyMOR
and EXA-DUNE

René Milk, Mario Ohlberger, Felix Schindler


Applied Mathematics Münster
rene.milk@wwu.de, mario.ohlberger@wwu.de, felix.schindler@wwu.de

Abstract

The mathematical models of complex flows, arising for example in reservoir engineering or
water pollution dispersion prediction, are naturally of multi-scale and parametric character.
They combine effects on multiple scales of space, i.e. microscopic material features like porosity
with macroscopic influences like external pressures or well placement. It is often impossible
to know the exact material properties at every given point in the computational domain, but
rather a statistical distribution needs to be assumed.
Localized RB methods are well suited for the computational efficiency requirements arising
from multi-query scenarios like uncertainty quantification or optimization. Both scenarios
typically incur a huge number of forward solves, making model reduction with RB methods
very attractive to reduce time-to-solution.
The blending of accuracy enhancing multi-scale and parametric forward-solve acceler-
ating reduced basis methods achieved in our Localized Reduced Basis Multi-Scale Method
(LRBMS) makes it particularly useful in these problem domains. Adaptive online enrichment
enables us to start with relatively coarse approximation spaces and only expend more com-
putational effort where either problem specific knowledge or error indicators dictate.
As a part of the German Science Foundation’s Strategic Priority Programme 1648 “Soft-
ware for Exascale Computing (SPPEXA)” project EXA-DUNE we are developing a dual
software stack in Python and C++. In this contribution we will show how using pyMOR as
the driver for the model reduction, we will be able to dynamically and efficiently allocate
and free available computing resources as needed, spanning high-dimensional solves across
changing pools of MPI-Parallel discretizations powered by EXA-DUNE, with very little input
from the user.

References
1. M. Ohlberger and F. Schindler. Error control for the localized reduced basis multiscale method with
adaptive online enrichment. SIAM Journal on Scientific Computing 37.6 (2015): A2865-A2895.
2. P. Bastian et al. Advances concerning multiscale methods and uncertainty quantification in
EXA-DUNE. Software for Exascale Computing - SPPEXA 2013-2015. Springer International Publishing,
2016. 25-43.
3. R. Milk and S. Rave and F. Schindler. pyMOR–Generic Algorithms and Interfaces for Model Order
Reduction. SIAM Journal on Scientific Computing 38.5 (2016): S194-S216.

121
Analysis of Carbon Combustion and Energy Loss by
Chemical Reactions in a Rotary Cement Kiln by CFD: A
Case Study

Marco Antonio Merino-Rodarte


Centro de Investigación en Materiales Avanzados
marco.merino@cimav.edu.mx

Héctor Alfredo López-Aguilar, Antonino Pérez-Hernández


Centro de Investigación en Materiales Avanzados
hector.lopez@cimav.edu.mx, antonino.perez@cimav.edu.mx

Jorge Alberto Gómez


Universidad Autónoma de Ciudad Juárez
joralgomez74@gmail.com

Javier Morales-Castillo
Universidad Autónoma de Nuevo León
tequilaydiamante@yahoo.com.mx

Abstract

Rotary cement kilns are widely used in the cement industry. Inside this kind of kiln, chemical
reactions occur while the raw material moves through the kiln; temperature and residence
time are two important parameters to control, as are the type and quality of the materials
used to formulate the cement. The energy source is the combustion gas obtained from the
combustion of mineral carbon travelling in counter-flow. In this study the CFD software
ANSYS Fluent is used and a uniform and homogeneous material bed is considered; this bed
is divided into small "control volumes", which are analyzed as plug-flow reactors connected
in series. For each of the plug-flow reactors the conservation equations for mass and energy
and the chemical reaction kinetics are applied to estimate the energy consumed by the first
five important chemical reactions in a rotary cement kiln. In this work the temperature
profile and gas velocities are presented, taking into account the energy loss through the kiln
walls; the efficiency and the combustion products are estimated as well.
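
The control-volume idea can be sketched as a chain of plug-flow reactors marching along the kiln. The single lumped endothermic reaction and every coefficient below are illustrative placeholders, not the five kiln reactions or calibrated plant data:

```python
import math

def kiln_control_volumes(n_cv, T_gas, T_in):
    """March the bed through n_cv control volumes treated as plug-flow
    reactors in series.  Each volume receives heat from the counter-
    flowing gas and spends part of it on one lumped endothermic
    first-order reaction (Arrhenius kinetics)."""
    T, X = T_in, 0.0                  # bed temperature (K), conversion (-)
    UA, m_cp = 5.0, 2000.0            # heat transfer (W/K) and m*cp (J/K)
    dH, k0, Ea = 1.8e6, 1.0e5, 1.2e5  # J/unit conversion, 1/s, J/mol
    dt = 30.0                         # residence time per volume (s)
    profile = []
    for _ in range(n_cv):
        k = k0 * math.exp(-Ea / (8.314 * T))    # rate constant
        dX = min(k * (1.0 - X) * dt, 1.0 - X)   # conversion increment
        q_in = UA * (T_gas - T) * dt            # heat from the gas
        T += (q_in - dH * dX) / m_cp            # energy balance
        X += dX
        profile.append((T, X))
    return profile

profile = kiln_control_volumes(40, T_gas=1800.0, T_in=300.0)
T_end, X_end = profile[-1]
```

Even this toy chain reproduces the qualitative kiln behaviour: the bed heats up, the reaction temporarily holds the temperature down while it consumes heat, and the bed approaches the gas temperature once conversion is complete.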

References
1. G. Locher and M. Schneider. Modeling in Cement Kiln Operations. Innovations in Portland Cement
Manufacturing, 2004, p. 1275-1310.

122
Comparing Hysteresis Characteristics Using Finite Element
Method (FEM) With Different Techniques

Zoltan Nemeth, Miklós Kuczmann


Department of Automation, Széchenyi István University, Győr, Hungary
nemeth.zoltan@sze.hu, kuczmann@sze.hu

Abstract

An Epstein frame has been built, which is used to measure the magnetic properties of different
kinds of cores. A sinusoidal current excitation has been used in the frequency range of 1-400 Hz.
The measurements have been performed by a computer-controlled measurement system. The
construction and the measurement process have been published before.
The objective of this work is to compare the results of different kinds of software, such as
COMSOL Multiphysics or Ansys, by performing Finite Element Method (FEM) simulations.
For this work the Jiles-Atherton model has been chosen as the hysteresis identification method. The
model parameters can be obtained by using the measurement results.
The frame has been modelled in 2D and 3D as well. Simulations have been performed
with the built-in modules as well as with some implemented potential formulation [1].
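
For reference, a commonly used simplified explicit form of the Jiles-Atherton equations can be integrated with a plain Euler sweep. The parameter values below are textbook-style defaults, not the ones identified from the Epstein frame measurements:

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with series near zero."""
    if abs(x) < 1e-4:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def dlangevin(x):
    if abs(x) < 1e-4:
        return 1.0 / 3.0
    return 1.0 / x ** 2 - 1.0 / math.sinh(x) ** 2

def jiles_atherton_loop(H_path, Ms=1.6e6, a=1100.0, k=400.0,
                        alpha=1.6e-3, c=0.2):
    """Explicit Euler sweep of a simplified Jiles-Atherton model:
    dM/dH = (1-c)(Man-M)/(k*delta - alpha*(Man-M)) + c*dMan/dH."""
    M, H, out = 0.0, 0.0, []
    for H_new in H_path:
        dH = H_new - H
        delta = 1.0 if dH > 0 else -1.0
        He = H + alpha * M                     # effective field
        Man = Ms * langevin(He / a)            # anhysteretic magnetization
        num = Man - M
        den = k * delta - alpha * num
        if abs(den) < 0.01 * k:                # guard against blow-up
            den = math.copysign(0.01 * k, den if den != 0 else delta)
        dMirr = num / den
        if delta * num < 0:                    # suppress negative slope
            dMirr = 0.0
        dMan = (Ms / a) * dlangevin(He / a)
        M += ((1.0 - c) * dMirr + c * dMan) * dH
        H = H_new
        out.append((H, M))
    return out

# Sweep 0 -> +5 kA/m -> -5 kA/m -> +5 kA/m in 1 A/m steps.
path = ([float(h) for h in range(1, 5001)]
        + [float(h) for h in range(4999, -5001, -1)]
        + [float(h) for h in range(-4999, 5001)])
loop = jiles_atherton_loop(path)
```

The sweep traces a closed major loop with a positive remanence and a coercive field on the order of the pinning parameter k, which is the behaviour the FEM implementations are compared against.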

References
1. A. Ivanyi and M. Kuczmann. The Finite Element Method in Magnetics. Academic Press, Budapest,
2008.

123
Performance Predictions for Storm-Resolving Simulations of
the Climate System

Philipp Neumann, Niklas Röber, Joachim Biercamp


German Climate Computing Center
neumann@dkrz.de, roeber@dkrz.de, biercamp@dkrz.de

Luis Kornblueh, Matthias Brück


Max Planck Institute for Meteorology
luis.kornblueh@mpimet.mpg.de, matthias.brueck@mpimet.mpg.de

Daniel Klocke
Deutscher Wetterdienst
Daniel.Klocke@dwd.de

Abstract

With exascale computing becoming available in the next decade, global weather prediction
at the kilometer scale will be enabled. Moreover, the climate community has already begun
to contemplate a new generation of high-resolution climate models.
High-resolution model development is confronted with several challenges. Scalability of
the models needs to be optimal, including all relevant components such as I/O which easily
becomes a bottleneck; both runtime and I/O will dictate how fine a resolution can be chosen
while still being able to run the model at production level, e.g. at 1-30 years/day depending
on the questions to be addressed. Moreover, given various scalability experiments from pro-
totypical runs and additional model data, estimating performance for new simulations can
become challenging. Finally, the actual forecast quality of the models at that scale is not fully
understood yet.
I present results achieved in the scope of the Centre of Excellence in Simulation of Weather
and Climate in Europe (ESiWACE) [1] using the ICON model for global high-resolution sim-
ulations. I show results from multi-week global 5km simulations, I discuss current features
and limits of the simulations, and I link the findings to a new intercomparison initiative DYA-
MOND for high-resolution predictions. Finally, I finish with work on performance prediction
approaches for existing performance data.
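
One elementary ingredient of such performance prediction (not necessarily the approach used in ESiWACE) is fitting a serial/parallel decomposition t(p) = t_s + t_p/p to measured scalability data and extrapolating to larger node counts:

```python
def fit_amdahl(procs, times):
    """Least-squares fit of t(p) = t_serial + t_parallel / p to measured
    runtimes: ordinary linear regression of t against x = 1/p."""
    xs = [1.0 / p for p in procs]
    n = len(xs)
    mx = sum(xs) / n
    mt = sum(times) / n
    beta = (sum((x - mx) * (t - mt) for x, t in zip(xs, times))
            / sum((x - mx) ** 2 for x in xs))
    return mt - beta * mx, beta        # (t_serial, t_parallel)

# Synthetic strong-scaling measurements: 10 s serial part, 10000 s work.
procs = [64, 128, 256, 512, 1024]
times = [10.0 + 10000.0 / p for p in procs]
t_s, t_p = fit_amdahl(procs, times)
pred_4096 = t_s + t_p / 4096
```

The fitted serial fraction also exposes where I/O or other non-scaling components will cap the achievable simulated years per day.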

References
1. J. Biercamp et al. ESiWACE website. www.esiwace.eu.

124
High-order Wave-based Laser Absorption Algorithm for
Hydrodynamic Codes

Jan Nikl1,2 , Milan Kuchařı́k2 , Jiřı́ Limpouch2 , Milan Holec3 , Richard Liska2 , Stefan
Weber1
1 ELI-Beamlines, Institute of Physics, Academy of Sciences of the Czech Republic, 18221
Prague, Czech Republic
2 Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University, 11519

Prague, Czech Republic


3 Centre Lasers Intenses et Applications, Université de Bordeaux-CNRS-CEA, UMR 5107,

F-33405 Talence, France


nikljan@fjfi.cvut.cz, kucharik@newton.fjfi.cvut.cz,
jiri.limpouch@fjfi.cvut.cz, milan.holec@u-bordeaux.fr,
liska@siduri.fjfi.cvut.cz, stefan.weber@eli-beams.eu

Abstract

Collisional laser absorption is typically the main driving mechanism in hydrodynamic laser
plasma simulations. The relevance of its modelling increases many-fold when coupled energy
transport mechanisms are involved in realistic simulations [1,2]. Even so, less attention
is paid to the accuracy and self-consistency of the absorption models. The vicinity of the critical
plane, where most of the laser energy is absorbed, cannot be accurately described by any
approximation of geometric optics, since the plasma profile varies strongly at the spatial scale of a
single laser wavelength. Only wave-based approaches are correctly applicable in this region
in principle [3]. Despite the fact that the Helmholtz equation for stationary harmonic waves is
well-known from the mathematical and numerical point of view, its direct solution is not
feasible in hydrodynamic simulations in many cases. Here, a method directly derived from
stationary Maxwell’s equations is presented, which encompasses arbitrary polynomial order
approximation of the refractive indices by finite elements. The proposed method is completely
self-consistent, i.e. relying only on the hydrodynamic quantities, and achieves high order of
convergence. However, it still remains computationally inexpensive and sufficiently robust to
be usable in numerical codes.

References
1. J. Nikl and M. Holec and M. Zeman and M. Kuchařı́k and J. Limpouch and S. Weber. Macro-
scopic laser-plasma interaction under strong non-local transport conditions for coupled matter and radi-
ation. submitted (2017) .
2. T. Kapin and M. Kuchařı́k and J. Limpouch and R. Liska. Hydrodynamic simulations of laser
interactions with low-density foams. Czechoslovak J. Phys. 56 (2006) B493-B499.
3. M. M. Basko and I. P. Tsygvintsev. A hybrid model of laser energy deposition for multi-dimensional
simulations of plasmas and metals. Comp. Phys. Commun. 214 (2017) 59-70.

Tuning the Electronic and Magnetic Properties of ReS2 by
Lanthanide Dopant Ions: A First Principles Study

Kingsley Onyebuchi Obodo


University of South Africa
obodokingsley@gmail.com

Abstract

Using quantum mechanical calculations, we investigate the structural, electronic, magnetic
and optical properties of lanthanide metal doped triclinic mono-layered rhenium disulfide
(ReS2 ). The calculated electronic band gap for the pristine ReS2 mono-layer is 1.43 eV with a
non-magnetic ground state. The calculated dopant substitutional energies under both Re-rich
and S-rich conditions show that it is possible to experimentally synthesize lanthanide metal
doped ReS2 mono-layer structures. We found that the presence of dopant ions (such as Ce,
Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb and Lu) in the ReS2 mono-layer lattice
significantly modifies the electronic ground states. Consequently, defect levels are introduced
and the density of states profile is modified. Some of these dopant ions introduce magnetization
in the ReS2 lattice, which implies that this group of materials could find application as
spintronic materials. The calculated absorption and reflectivity spectra show that this class of
dopants leads to a general increase in the absorption spectral peaks but has only a minute
influence on the reflectivity. These ordered doped ReS2 monolayers could lead to the design of
effective ultra-thin spintronic materials with improved performance.

References
1. Kingsley Onyebuchi Obodo and Cecil Napthaly Moro Ouma and Joshua Tobechukwu Obodo
and Moritz Braun. Influence of transition metal doping on the electronic and optical properties of ReS2
and ReSe2 monolayers. PCCP. 19 (29) 2017 19050-19057.

A Computational Approach to Confidence Intervals and
Testing for Generalized Pareto Tail Index Based on
Greenwood Statistic

Anna Panorska
University of Nevada at Reno
ania@unr.edu

Abstract

We provide a general result describing stochastic behavior of the Greenwood statistic within
certain stochastically ordered parametric families of distributions. Our example is the gen-
eralized Pareto family, for which we develop a computational approach leading to an exact
test for the tail parameter based on the Greenwood test statistic. In turn, an inversion of the
test leads to an exact confidence set for the tail index, which is shown to be an interval.
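As a hedged illustration of the test idea (not the paper's analytic construction), the sketch below computes the Greenwood statistic and a Monte Carlo p-value under a generalized Pareto null; the unit-scale parametrization and the right-tailed rejection direction are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def greenwood(x):
    """Greenwood statistic T = sum(x_i^2) / (sum x_i)^2."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2) / np.sum(x)**2)

def gpd_sample(xi, n, rng):
    """Generalized Pareto sample (unit scale) via the inverse CDF;
    xi = 0 reduces to the exponential distribution."""
    u = rng.uniform(size=n)
    if xi == 0.0:
        return -np.log1p(-u)
    return ((1.0 - u)**(-xi) - 1.0) / xi

def mc_pvalue(x, xi0, n_rep=2000, rng=rng):
    """Right-tailed Monte Carlo p-value for H0: tail index = xi0.
    A simulation stand-in for the paper's exact test."""
    t_obs = greenwood(x)
    t_sim = np.array([greenwood(gpd_sample(xi0, len(x), rng))
                      for _ in range(n_rep)])
    return float(np.mean(t_sim >= t_obs))

# A heavy-tailed sample (xi = 0.5) should be rejected under H0: xi = 0.
p = mc_pvalue(gpd_sample(0.5, 200, rng), xi0=0.0)
```

The stochastic ordering of the Greenwood statistic in the tail index is what makes a one-sided test of this form sensible.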

References
1. M. Arendarczyk and T.J. Kozubowski and A.K. Panorska . A computational approach to confi-
dence intervals and testing for generalized Pareto tail index based on Greenwood statistic. working paper.

Sensitivity Analysis of Droplets Distribution to Test
Conditions in Wind-tunnel Icing Experiments

Giulio Gori, Pietro Marco Congedo


Inria Bordeaux - Sud-Ouest, Team Cardamom
giulio.gori@inria.fr, pietro.congedo@inria.fr

Gianluca Parma, Marta Zocca, Alberto Guardone


Politecnico di Milano
gianluca.parma@polimi.it, martamaria.zocca@polimi.it,
alberto.guardone@polimi.it

Olivier Le Maitre
LIMSI-CNRS
olm@limsi.fr

Abstract

This paper investigates the role of uncertainties affecting the ice accretion phenomenon in
ground experimental facilities. In particular, this work focuses on the uncertainties affecting
the distribution of water over an airfoil, i.e. the collection efficiency, within an ice wind tunnel.
Following [1] and [2], the uncertainties over test conditions are first characterized. Afterwards,
a Non-Intrusive Spectral Projection (NISP) library is used to propagate these uncertainties
through an in-house particle tracking code coupled with the open-source Computational Fluid
Dynamics (CFD) solver SU2. A sensitivity analysis highlights the dependency of the water
distribution over the airfoil with respect to the test conditions. The investigation of Sobol
indices allows assessing the relevance of every single source of uncertainty to the variance of
the resulting collection efficiency.
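As a hedged illustration of the sensitivity measure involved, the sketch below estimates first-order Sobol indices by pick-freeze Monte Carlo on a hypothetical additive model with known indices; the abstract's NISP approach computes them from a spectral expansion instead of sampling.

```python
import numpy as np

rng = np.random.default_rng(1)

def sobol_first_order(f, d, n, rng):
    """Pick-freeze (Saltelli-type) Monte Carlo estimate of first-order
    Sobol indices for f acting on d independent U(0,1) inputs."""
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # "freeze" input i from B
        S[i] = np.mean(yB * (f(ABi) - yA)) / var  # Saltelli 2010 estimator
    return S

# Hypothetical additive model: analytic indices are a_i^2 / sum(a_j^2).
a = np.array([4.0, 2.0, 1.0])
S = sobol_first_order(lambda X: X @ a, d=3, n=20000, rng=rng)
```

Each index apportions a fraction of the output variance to one input, which is exactly how the relevance of each uncertainty source to the collection efficiency is ranked.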

References
1. Dean R. Miller and Mark G. Potapczuk and Tammy J. Langhals. Preliminary Investigation of
Ice Shape Sensitivity to Parameter Variations. 43rd Aerospace Sciences Meeting and Exhibit, January
10-13 2005, Reno NV USA..
2. Harold E. Addy. Ice Accretion and Icing Effects for Modern Aircraft. Glenn Research Center, Cleveland,
Ohio, April 2000..

Ice Sheet Initialization and Uncertainty Quantification of
Sea-level Rise

John Jakeman, Mauro Perego, Irina Tezaur


Sandia National Laboratories
jdjakem@sandia.gov, mperego@sandia.gov, ikalash@sandia.gov

Stephen Price
Los Alamos National Laboratory
sprice@lanl.gov

Georg Stadler
Courant Institute of Mathematical Sciences
stadler@cims.nyu.edu

Abstract
In order to produce reliable estimates of the sea level rise in the next decades to centuries, it
is of paramount importance to appropriately initialize the Greenland and Antarctic ice sheets
by estimating unknown or poorly known fields such as the bedrock topography and the basal
friction coefficient. It is also crucial to be able to quantify the uncertainty associated with
such estimates.
In this talk, we present recent work towards developing a methodology for quantifying
uncertainty in Greenland and Antarctica ice sheets’ contribution to sea-level rise. While we
focus on uncertainties associated with the optimization and calibration of the basal sliding
parameter field, the methodology is largely generic and could be applied to other (or multiple)
sets of uncertain model parameter fields. The first step in the workflow is the solution of a
large-scale, deterministic inverse problem, which minimizes the mismatch between observed
and computed surface velocities by optimizing the two-dimensional coefficient field in a fric-
tion sliding law. This step is performed using FELIX, a parallel finite element C++ code with
embedded analysis tools. The inversion is performed through a PDE-constrained optimization
approach. Under the Gaussian approximation, we then determine the probability distribu-
tion of the basal friction coefficient using the Hessian of the deterministic inversion. The
uncertainty in the modeled sea-level rise is obtained by performing an ensemble of forward
propagation runs.
The extent and complexity of the geometries, the nonlinearity of the flow equation, and
the high-dimensionality of the parameter space present numerous challenges that will be
addressed in the talk. We will present and discuss results obtained using different resolutions
of the Greenland and Antarctic ice sheets.
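The Hessian-based uncertainty step can be illustrated on a toy linear-Gaussian inverse problem (an assumption made for the sketch; the actual inversion is nonlinear and PDE-constrained, with a far larger parameter space):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear inverse problem standing in for the basal-friction
# inversion: observations y = G m + noise, Gaussian prior on m.
n_obs, n_par = 50, 10
G = rng.normal(size=(n_obs, n_par))
sigma_noise, sigma_prior = 0.1, 1.0
m_true = rng.normal(size=n_par)
y = G @ m_true + sigma_noise * rng.normal(size=n_obs)

# Hessian of the negative log-posterior = data-misfit term + prior term.
H = G.T @ G / sigma_noise**2 + np.eye(n_par) / sigma_prior**2
# Under the Gaussian (Laplace) approximation the posterior covariance
# is the inverse Hessian at the MAP point.
cov_post = np.linalg.inv(H)
m_map = cov_post @ (G.T @ y) / sigma_noise**2   # MAP (zero prior mean)
```

Samples drawn from this Gaussian can then seed an ensemble of forward runs, mirroring the forward-propagation stage of the workflow.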

References
1. M. Perego and S. Price and G. Stadler. Optimal initial conditions for coupling ice sheet models to
Earth system models. J. Geophys. Res. Earth Surf., 119 (2014).

DEM GPU Simulations With Convex and Non-convex
Particles for Railway Ballasts

Patrick Pizette, Nor-Edine Abriak


IMT Lille Douai, Univ. Lille, EA 4515 - LGCgE - F-59000 Lille, France
patrick.pizette@imt-lille-douai.fr, nor-edine.abriak@imt-lille-douai.fr

Nicolin Govender
Department of Chemical and Process Engineering, University of Surrey, Guildford GU2
7XH, UK
govender.nicolin@gmail.com

Daniel N. Wilke
Centre of Asset and Integrity Management, University of Pretoria, South Africa
wilkedn@gmail.com

Liu Gastbye, Wen-Jie Xu


State Key Laboratory of Hydroscience and Hydraulic Engineering, Department of
Hydraulic Engineering, Tsinghua University, Beijing, China 100084
liuguangyusdu@outlook.com , wenjiexu@tsinghua.edu.cn

Abstract
The stability of the ballast depends on characteristics, including particle shape and angularity,
which are critical to provide sufficient load distribution and strength to the railroad structures.
The Discrete Element Method (DEM) is classically used to model the ballast layer with
spherical particles using complex contact models, or with clustered particles that lack
resolution of particle angularity. Although both approaches are able to reproduce the behavior
of the ballast they were characterized for, the geometrical simplifications limit the application
of these models away from the point where they were characterized. In addition, any
investigation that relies on geometrical changes is limited when spherical particle models are
used. In contrast, a polyhedral particle representation would be ideal to represent the faceted
nature of ballasts, but this used to significantly limit the number of particles that can be
modeled. Recent advances in using the graphics processing unit (GPU) to model polyhedral
discrete element particle systems, as demonstrated by BlazeDEM-3DGPU, allow for the first
time convex and non-convex polyhedral ballast representations to be studied for up to
100 000 particles. This study investigates modeling the ballast as convex or non-convex
particle systems. The particle geometries used were directly extracted from real ballasts using
3D laser scanning at different resolutions, allowing for representative ballast shapes.

References
1. D.N. Wilke and N. Govender and P. Pizette and N-E. Abriak. Computing with non-convex
Polyhedra on the GPU. Springer Proceedings Phys. (2017) 188:1371–1377.

Using Deep Learning for Detection and Correction of Speech
Based on Human Emotions

Mateusz Póltorak, Janusz Pochmara


Poznan University of Technology
mateusz.poltorak@student.put.poznan.pl, janusz.pochmara@put.poznan.pl

Abstract

The main problem in speech data recognition is acoustic variability, which depends on [1]:
first, variation in what is said by the speaker; second, variation due to differences between
speakers; third, the influence of noise conditions; and fourth, emotions that affect the form of
expression and its quality. Many projects address the first through third factors. We propose
using the fourth as the main goal of our investigation. Before the appearance of convolutional
neural networks [2], most emotion speech recognition models were based on the extraction of
features (for example, energy-based distributions of speech [3]). This process was followed by
a simple machine learning classifier such as an SVM or a dense neural network. Our work
does not include feature extraction; it is based only on an image decoder. We propose a model
that is independent of the first three problems. A speaker's emotions are encoded in the words
he uses, but the tone and intonation of his voice are even more important. Shapes created by
different words spoken with the same feelings are quite similar. Hence, our model is resistant
to the occurrence of untrained words. We want to normalize the emotions stored in the
speaker's voice to improve the quality of algorithms for the detection and correction of signals
in terms of correct pronunciation. The main idea is to use the graph of the spectrum of
frequencies and amplitudes of speech as the main information carrier. A deep convolutional
neural network is responsible for decoding the information stored in the spectrogram [4]. In
our research we will focus on the conversion accuracy. We will also focus on the problem of
reducing system complexity. In this paper, we propose an evaluation system for the
classification and correction of speech.
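As a hedged sketch of the spectrogram front end (the authors' exact preprocessing is not specified in the abstract), the following computes a log-magnitude spectrogram with a short-time FFT; the window length, hop size and test tone are illustrative assumptions.

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Log-magnitude spectrogram via a Hann-windowed short-time FFT.
    Returns an array of shape (n_frames, n_fft // 2 + 1), i.e. the
    image that a convolutional network would decode."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))   # magnitude per frame/bin
    return 20.0 * np.log10(spec + 1e-10)         # dB scale, floored

# Hypothetical input: one second of a 440 Hz tone sampled at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
S = spectrogram(np.sin(2.0 * np.pi * 440.0 * t))
```

With a bin width of fs / n_fft = 62.5 Hz, the tone's energy concentrates in bin 7 (437.5 Hz), the bin closest to 440 Hz.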

References
1. National Institute on Deafness and Other Communication Disorders. Statistics on
Voice, Speech, and Language. https://www.nidcd.nih.gov/health/statistics/statistics-voice-speech-and-
language.
2. Krishna Asawa and Priyanka Manchanda. Recognition of Emotions using Energy Based Bimodal
Information Fusion and Correlation. International Journal of Artificial Intelligence and Interactive Mul-
timedia, Vol. 2, No 7.
3. Yann LeCun and Yoshua Bengio. Convolutional Networks for Images, Speech and Time Series. The
Handbook of Brain Theory and Neural Networks, MIT Press, 1995, .

Spatial Wave Size for Gaussian Random Fields

Krzysztof Podgorski
Lund University, Sweden
krys.podgorski@gmail.com

Abstract

A method of measuring three-dimensional spatial wave size is proposed, and statistical
distributions of the size characteristics are derived in explicit integral forms for Gaussian sea
surfaces. New definitions of wave characteristics such as the crest height, the length, the size
and the wave front location are provided in a fully dimensional context. The joint statistical
distributions of these wave characteristics are derived using Rice's formulas for the expected
number of local maxima and the distance from a local maximum to a level crossing contour.
A review of Rice's method for studying crossing distributions is given.
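As a hedged numerical illustration of Rice's formula (not the paper's integral derivations, which concern local maxima in a spatial setting), the sketch below simulates an approximately Gaussian stationary process from a hypothetical flat spectrum and compares the observed upcrossing rate with the Rice prediction sqrt(lam2/lam0)/(2*pi) * exp(-u^2/(2*lam0)).

```python
import numpy as np

rng = np.random.default_rng(3)

# Spectral simulation: a sum of random-phase cosines converges to a
# Gaussian process; the flat spectrum on [0.5, 1.5] is illustrative.
K = 200
omega = rng.uniform(0.5, 1.5, size=K)           # angular frequencies
phase = rng.uniform(0.0, 2.0 * np.pi, size=K)   # independent phases
amp = np.sqrt(2.0 / K)                          # makes the variance lam0 = 1
lam0 = K * amp**2 / 2.0                         # spectral moment of order 0
lam2 = np.sum(omega**2) * amp**2 / 2.0          # spectral moment of order 2

T, dt = 5000.0, 0.1
t = np.arange(0.0, T, dt)
X = np.zeros_like(t)
for k in range(K):                              # accumulate to save memory
    X += amp * np.cos(omega[k] * t + phase[k])

u = 1.0                                         # crossing level
upcrossings = int(np.sum((X[:-1] < u) & (X[1:] >= u)))
rice_rate = np.sqrt(lam2 / lam0) / (2.0 * np.pi) * np.exp(-u**2 / (2.0 * lam0))
```

The empirical rate upcrossings / T should agree with rice_rate up to sampling error, which is the one-dimensional prototype of the crossing-intensity arguments used for the spatial distributions.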

References
1. K. Podgorski and I. Rychlik. Spatial wave size for Gaussian random fields. Working paper.

Geometric Multigrid Methods for Darcy–Forchheimer Flow in
Fractured Porous Media

Andrés Arrarás, Laura Portero


Department of Engineering Mathematics and Computer Science, Public University of
Navarre, 31006 Pamplona (Spain)
andres.arraras@unavarra.es, laura.portero@unavarra.es

Francisco J. Gaspar, Carmen Rodrigo


IUMA and Department of Applied Mathematics, University of Zaragoza, 50018 Zaragoza
(Spain)
fjgaspar@unizar.es, carmenr@unizar.es

Abstract
In this work, we consider single-phase flow models in fractured porous media. Let us define
an n-dimensional convex domain Ω ⊂ Rⁿ with boundary ∂Ω, for n = 2 or 3, and an (n − 1)-
dimensional subset γ ⊂ Ω with boundary ∂γ (i.e., the fracture) that divides Ω into two
subdomains Ω1 and Ω2 with boundaries ∂Ω1 and ∂Ω2 , respectively (i.e., the porous medium
matrix). Following [1], we consider linear Darcy flow in the subdomains,

ui = −Ki ∇pi ,    div ui = qi ,    in Ωi , for i = 1, 2,

with pi = gi on ∂Ωi , for i = 1, 2, and nonlinear Darcy–Forchheimer flow in the fracture,

(1 + b |uf |) uf,τ = −Kf,τ d ∇τ pf ,    divτ uf,τ = qf + (u1 · n − u2 · n),    in γ,

with pf = gf on ∂γ, together with the interface condition

αf pi = αf pf + (−1)^(i+1) (ξ ui · n + (1 − ξ) ui+1 · n),    in γ, for i = 1, 2,

where αf = 2Kf,n /d, ξ ∈ (1/2, 1], and i + 1 = 1 if i = 2. Here, pj is the fluid pressure, uj is the
Darcy velocity, Kj is a diagonal permeability tensor, and qj is a source term, with j = 1, 2, f
corresponding to Ω1 , Ω2 and γ, respectively. Further, uf,τ denotes the tangential component
of uf , and Kf,τ and Kf,n are the tangential and normal components of Kf . Finally, b is the
Forchheimer coefficient, d is the fracture width, and n is the unit normal vector on γ directed
outward from Ω1 . Note that ∇τ and divτ represent the tangential gradient and divergence
operators, respectively.
We discretize the preceding equations using a two-point flux approximation scheme that
takes into account the mixed-dimensional nature of the problem at hand. Monolithic geo-
metric multigrid methods are then proposed for solving the resulting system of algebraic
equations. Numerical experiments illustrating the behaviour of the algorithms are shown.
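The two-point flux idea can be sketched in a single-medium 1D setting (a simplification made for illustration; the abstract's scheme is mixed-dimensional and includes the fracture coupling). Harmonic averaging of the permeability across cell faces makes the scheme exact for piecewise-constant K with no source:

```python
import numpy as np

def tpfa_1d(K, q, h, p_left, p_right):
    """Cell-centered two-point flux approximation for -(K p')' = q
    on a uniform 1D grid of cell size h with Dirichlet boundaries."""
    n = len(K)
    T = np.zeros(n + 1)                     # face transmissibilities
    T[1:-1] = 2.0 * K[:-1] * K[1:] / (K[:-1] + K[1:]) / h   # harmonic mean
    T[0] = 2.0 * K[0] / h                   # half-cell to the left boundary
    T[-1] = 2.0 * K[-1] / h
    A = np.zeros((n, n))
    b = h * np.asarray(q, dtype=float)      # integrated source per cell
    for i in range(n):
        A[i, i] = T[i] + T[i + 1]
        if i > 0:
            A[i, i - 1] = -T[i]
        if i < n - 1:
            A[i, i + 1] = -T[i + 1]
    b[0] += T[0] * p_left
    b[-1] += T[-1] * p_right
    return np.linalg.solve(A, b)

# Two-layer medium on [0, 1]: K = 1 on the left half, K = 10 on the right.
n = 40
K = np.where(np.arange(n) < n // 2, 1.0, 10.0)
p = tpfa_1d(K, np.zeros(n), h=1.0 / n, p_left=1.0, p_right=0.0)
```

For this configuration the exact flux is F = 1/0.55, and the cell-center pressures reproduce the exact piecewise-linear solution to machine precision.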

References
1. N. Frih; J.E. Roberts; A. Saada. Modeling fractures as interfaces: a model for Forchheimer fractures.
Comput. Geosci. 12 (2008) 91–104.

An Anisotropic Adaptive, Particle Level Set Method for
Moving Interfaces

Juan Luis Prieto


E.T.S. Ingenieros Industriales - Universidad Politécnica de Madrid - ORCID iD:
orcid.org/0000-0001-5085-0482
juanluis.prieto@upm.es

Jaime Carpio
E.T.S. Ingenieros Industriales - Universidad Politécnica de Madrid
jaime.carpio@upm.es

Abstract

In this study, we outline the features of a novel, anisotropic adaptive, particle level set method
for the simulation of moving interfaces. The method takes advantage of a semi-Lagrangian
formulation to handle the convective terms, while the moving interface is captured as the zero
isocontour of a certain level set function, with additional “marker particles” being added
to improve mass conservation and interface resolution. The (an)isotropic adaptive mesh
refinement algorithm makes use of the concept of a “metric tensor” to derive the size, shape
and orientation of the optimal, anisotropic triangulation. We highlight the capabilities of this
new framework for moving interfaces with a number of pure-advection tests, observing the
accuracy, flexibility and computational economy of the technique, which can be extended to
the simulation of multiphase flows for Newtonian and non-Newtonian fluids.
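As a hedged illustration of the transport building block, here is a 1D semi-Lagrangian advection step for a level set function; the paper's method is 2D, anisotropically adaptive, and corrected by marker particles, none of which is shown here.

```python
import numpy as np

def semi_lagrangian_step(phi, x, a, dt):
    """One semi-Lagrangian step for phi_t + a phi_x = 0: trace the
    characteristic back from each node and interpolate linearly."""
    x_dep = x - a * dt                 # departure points
    return np.interp(x_dep, x, phi)    # values clamped at the boundaries

x = np.linspace(0.0, 1.0, 401)
phi = x - 0.2                          # signed distance, interface at x = 0.2
a, dt = 0.5, 0.02
for _ in range(30):                    # advect up to t = 0.6
    phi = semi_lagrangian_step(phi, x, a, dt)
```

The zero isocontour of phi is the captured interface; with constant velocity it should end up at x = 0.2 + 0.5 * 0.6 = 0.5, and the semi-Lagrangian step is unconditionally stable with respect to the CFL number.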

References
1. J.L. Prieto. SLEIPNNIR: A multiscale, particle level set method for Newtonian and non-Newtonian
interface flows. Comput. Methods Appl. Mech. Engr. 307 (2016) 164-192.
2. J. Carpio and J.L. Prieto. An anisotropic, fully adaptive algorithm for the solution of convection
dominated equations with semi-Lagrangian schemes. Comput. Methods Appl. Mech. Engr. 273 (2014)
77-99.
3. J. Carpio and J.L. Prieto and R. Bermejo. Anisotropic “goal-oriented” mesh adaptivity for elliptic
problems. SIAM J. Sci. Comput. 35 (2) (2013), A861-A885.

Improving Data Imputation for High Dimensional Datasets

Neta Rabin
Afeka - Tel-Aviv Academic College of Engineering
neta.rabin@gmail.com

Dalia Fishelov
Afeka - Tel-Aviv Academic College of Engineering and Tel-Aviv University
fishelov@gmail.com

Abstract

A common pre-processing task in machine learning is to complete missing data entries in
order to form a full dataset. Known techniques provide simple solutions to this problem
by replacing the missing data entries with the mean or median value calculated for the
known data of the same measurement or type. Alternatively, missing entries may be replaced
by random values that are drawn from a distribution that fits the known data values.
More sophisticated methods use regression to complete missing data. For a given column
with missing values, the column is regressed against other columns for which the values are
known. When the dimension of the input data is high, it is often the case that the data
columns lie in a low-dimensional space. Construction of a low-dimensional embedding of the
subset of the complete data produces a faithful representation of it. This new representation
can then be used to construct regression or regression-type models for imputing the missing
values. In previous work [1], we proposed a two-step algorithm for data completion. The first
step utilizes a non-linear manifold learning technique, named diffusion maps [2], for reducing
the dimension of the data. This method faithfully embeds complex data while preserving
its geometric structure. The second step is the Laplacian pyramids [3] multi-scale method,
which is applied for regression. Laplacian pyramids construct kernels of decreasing scales to
capture finer modes of the data, and the scale is automatically fit to the data density and
noise. In this work, we improve the previous method by considering the dual geometry of the
dataset. We construct a model that learns the geometry of the rows and of the columns of
the full subset alternately. Experimental results demonstrate the efficiency of our approach
on a publicly available dataset.
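The regression-imputation baseline described above can be sketched on hypothetical low-rank data as follows; this is plain least squares, not the diffusion-maps and Laplacian-pyramids method of the abstract, and the synthetic data layout is an assumption made for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data whose columns lie near a 2-dimensional subspace,
# mimicking the high-dimensional / low intrinsic dimension setting.
n, d, r = 300, 8, 2
data = rng.normal(size=(n, r)) @ rng.normal(size=(r, d))
data += 0.01 * rng.normal(size=(n, d))       # small observation noise

miss = rng.uniform(size=n) < 0.2             # pretend 20% of column 0 is missing
train = ~miss

# Regress the incomplete column on the complete ones (with intercept).
Xtr = np.column_stack([np.ones(train.sum()), data[train, 1:]])
coef, *_ = np.linalg.lstsq(Xtr, data[train, 0], rcond=None)
Xmiss = np.column_stack([np.ones(miss.sum()), data[miss, 1:]])
imputed = Xmiss @ coef

# Compare against the held-out true values and against mean imputation.
err_reg = float(np.mean((imputed - data[miss, 0])**2))
err_mean = float(np.mean((data[train, 0].mean() - data[miss, 0])**2))
```

Because the columns share a low-dimensional structure, the regression recovers the missing column almost exactly, while mean imputation ignores that structure entirely.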

References
1. N. Rabin and D. Fishelov. Missing Data Completion Using Diffusion Maps and Laplacian Pyramids.
International Conference on Computational Science and Its Applications ICCSA. (2017) 284-297.
2. R. Coifman and S. Lafon. Diffusion Maps. Appl. Comput. Harmon. Anal. 21 (2006) 5-30.
3. N. Rabin and R. Coifman. Heterogeneous Datasets Representation and Learning Using Diffusion Maps
And Laplacian Pyramids. 12th SIAM International Conference on Data Mining. (2012) 189-199.

Anisotropic Goal-oriented Error Estimates for HDG Schemes

Ajay Mandyam Rangarajan, Georg May


AICES, RWTH Aachen University
ajay.rangarajan@rwth-aachen.de, may@aices.rwth-aachen.de

Vit Dolejsi, Filip Roskovec


Charles University, Prague
dolejsi@karlin.mff.cuni.cz, roskovec@gmail.com

Abstract

Many physical phenomena, such as convection-diffusion problems, are characterized by
strongly anisotropic features. Numerical simulation of such problems is greatly enhanced by
anisotropic mesh adaptation. Moreover, numerical schemes using piecewise polynomial
approximation offer increased flexibility by adjusting the local polynomial degree as well
(hp-adaptation).
Previously, we have proposed a continuous-mesh (i.e. metric-based) optimization method
for higher order discontinuous Galerkin methods on triangular meshes [1]. A metric is obtained
by a two-step formal optimization procedure with respect to a suitable continuous-mesh error
model. The advantage of such metric-based approaches is that generating the optimal metric
can be done by analytical optimization methods, followed by mesh re-generation using a
standard metric-based mesh generator. This approach has been extended to goal-oriented
adaptation (e.g. [2], or the more recent paper with more general error estimates [3]), and/or
hp-adaptation [4].
The main focus of this talk is the formulation of the continuous mesh approach specifically
for HDG schemes. Challenges include the derivation of suitable continuous-mesh error models.
In particular, the estimates in [3] are so far available only for SIPG Discontinuous-Galerkin
schemes. In addition to the theory, we present several numerical examples for our mesh
optimization approach, chosen from linear and nonlinear convection-diffusion models.

References
1. Ajay Mandyam Rangarajan and Aravind Balan and Georg May. Mesh Adaptation and Opti-
mization for Discontinuous Galerkin Methods Using a Continuous Mesh Model. AIAA Modeling and
Simulation Technologies Conference, AIAA SciTech Forum, (AIAA 2016-2142).
2. Ajay Mandyam Rangarajan and Georg May and Vit Dolejsi. Adjoint-based anisotropic mesh
adaptation for Discontinuous Galerkin Methods Using a Continuous Mesh Model. 23rd AIAA Computa-
tional Fluid Dynamics Conference, AIAA AVIATION Forum, (AIAA 2017-3100).
3. Vit Dolejsi and Georg May and Ajay Rangarajan and Filip Roskovec. Goal-oriented high-order
anisotropic mesh adaptation method for linear convection-diffusion-reaction problems. SIAM Journal on
Scientific Computing(submitted).
4. Vit Dolejsi and Georg May and Ajay Rangarajan. A continuous hp-mesh model for adaptive
discontinuous Galerkin schemes. Applied Numerical Mathematics, Volume 124, 2018, Pages 1-21.

Two Approaches for the Potential/field Problem With High
Order Whitney Forms and New Degrees of Freedom

Francesca Rapetti
Math. Dept., Universite’ Cote Azur, Nice, France
frapetti@unice.fr

Ana Alonso Rodriguez


Math. Dept., Universita’ degli Studi di Trento, Italy
ana.alonso@unitn.it

Abstract

Given the degrees of freedom (dofs) of a Whitney (k + 1)-form w, it is not difficult to compute
those of the Whitney k-form u such that du = w, where d is the exterior derivative operator [3].
The matrix describing the operator d is particularly simple if both the potential u and the
field w are defined through their weights on the small simplices, a new set of possible dofs for
high order Whitney forms which was first introduced in [2] and later analyzed in [1].
Once the dofs are given, the identification of the form is straightforward if a cardinal basis
is known, but this is not always the case in the high order framework. We thus recall how
this cardinal basis with respect to the weights on the small simplices can be easily obtained
starting from a set of vector functions generating the H(curl) or the H(div) Nédélec first
type spaces of degree r ≥ 1. We then present two different approaches to solve this
potential/field problem numerically.

References
1. S.H. Christiansen and F. Rapetti. On high order finite element spaces of differential forms. Math.
Comp. 85 (2016) 517-548.
2. F. Rapetti and A. Bossavit. Whitney forms of higher degree. SIAM J. Numer. Anal. 47 (2009) 2369-
2386.
3. F. Rapetti and A. A. Rodriguez. The discrete relations between fields and potentials with high order
Whitney forms. Enumath 2017 conference proceedings (2017) preprint .

Mechanical Modeling of Edema Formation Applied to
Bacterial Myocarditis

Ruy Freitas Reis, Rodrigo Weber dos Santos, Bernardo Martins Rocha, Marcelo
Lobosco
Universidade Federal de Juiz de Fora
ruyfreis@gmail.com, rodrigo.weber@ufjf.edu.br,
bernardomartinsrocha@ice.ufjf.br, marcelo.lobosco@ufjf.edu.br

Abstract

Heart diseases are the number one cause of death in the world, i.e. more people die annually
from heart-related problems than from any other cause, according to the World Health
Organization (WHO). This study investigates myocarditis, a heart disease caused by
inflammation occurring in the myocardium. Several causes can lead to myocarditis, including
infections caused by viruses, bacteria, protozoa, fungi, and others. This inflammatory response
is one of the consequences of an infection triggered by an immunological reaction. The most
common symptoms of infection are edema, redness, fever, and pain, known as the four
cardinal signs of inflammation. Edema is a consequence of the inflammatory response, which
increases capillary permeability and leads to excessive filtration; the resulting exudate
accumulates in the interstitium, establishing the edema. One of the techniques used to
visualize myocardial edema is cardiovascular magnetic resonance imaging of the myocardial
tissue. Using this imaging exam it is possible to visualize hypointense cores within the
edematous zone. This research therefore aims to solve a plasma flow model due to a bacterial
infection in a two-dimensional short-axis domain of the heart and to compare the results with
data found in the literature. The mathematical model consists of partial differential equations
(PDEs) based on the theory of poroelasticity for a fluid-saturated porous medium, applied to
living tissue and coupled with an immune system model. Thus, the mathematical model used
in this study is basically divided into two parts: one modeling the immune system response
due to a bacterial infection, represented by neutrophils; and another representing the
hydro-mechanical problem.

References
1. Richard A. Goldsby and Thomas J. Kindt and Janis Kuby and Barbara A. Osborne. Immunol-
ogy. W. H. Freeman, 5th, 2002.
2. A. B. Pigozzo and G. C. Macedo and R. W. dos Santos and M. Lobosco. On the computational
modeling of the innate immune system. BMC bioinformatics. 2013;14(Suppl 6):S7.
3. S Giri and Y-C Chung and A Merchant and G Mihai and S Rajagopalan and S V Raman
and O P Simonetti. T2 quantification for improved detection of myocardial edema. J Cardiovasc Magn
Reson 11 (1), 56.
4. J. Scallan and V.H. Huxley and R.J. Korthuis. Capillary fluid exchange: regulation, functions, and
pathology. Vol. 2: Morgan & Claypool Life Sciences; 2010.
5. A H-D Cheng. Poroelasticity. Vol. 27: Springer; 2016.

Impact of Vegetation on Dustiness Produced by a Surface Coal
Mine in North Bohemia

Hynek Řeznı́ček, Luděk Beneš


Czech Technical University in Prague (Faculty of mechanical engineering)
hynek.reznicek@fs.cvut.cz, ludek.benes@fs.cvut.cz

Abstract

The contribution deals with a practical application of a CFD computation with real geometry.
An assignment has come from the Czech mining company Severočeské doly, which wants to
study the impact of a mine extension on the environment. In the neighbourhood of the mine
lies a village highly affected by dust transport from the mine, and the question is how much a
vegetative barrier can lessen the dust concentration.
The flow field and the concentrations are computed on 2D cuts with the real geometry of
the Bı́lina coal mine. An in-house CFD solver, based on the finite volume method with the
AUSM+ -up scheme, is used to compute the flow field. A system of RANS equations for
viscous incompressible flow with variable density is used to describe the flows. A two-equation
turbulence model is used for the closure of this set of equations. Three effects of the vegetation
are considered: the effect on the air flow, i.e. slowdown or deflection of the flow, the influence
on the turbulence levels inside and near the vegetation, and the filtering of the particles
present in the flow.
The transport equation for the concentration of a passive contaminant is solved. Petroff's
model of dust deposition on vegetation is employed. It reflects four main processes leading to
particle deposition on the leaves: Brownian diffusion, interception, impaction and gravitational
settling.

References
1. V. Šı́p and L. Beneš. Dry deposition model for a microscale aerosol dispersion solver based on the
moment method. J. Aerosol Science 107 (2017) 107-122.
2. A. Petroff and A. Mailliat and M. Amielh and F. Anselmet. Aerosol dry deposition on vegetative
canopies. Atmos. Environ. 42 (2008) 3625–3653.

A Knowledge-Based System for DC Railway Electrification
Verification

Eugenio Roanes-Lozano
Instituto de Matemática Interdisciplinar & Depto. de Álgebra, Geometrı́a y Topologı́a,
Universidad Complutense de Madrid, Spain
eroanes@mat.ucm.es

Rubén González-Martı́n
Ineco, Madrid, Spain & Universidad Politécnica de Madrid, Madrid, Spain
rgonzalez.martin@ineco.com

Javier Montero
Instituto de Matemática Interdisciplinar & Departamento de Estadı́stica e Investigación
Operativa, Universidad Complutense de Madrid, Spain
monty@mat.ucm.es

Abstract
Gröbner bases can be used to verify knowledge-based systems (KBS) [1]. An algebraic
approach can also be used to decide whether a given undirected graph can be 3-colored or
not. We applied a related approach to decide whether a situation proposed to a railway
interlocking system is safe or not [2]. The code of these algebraic approaches is really brief.
We also implemented a matrix-based computer tool that allows an expert to automatically
check whether a proposed scenario, given through the topology of the railway station and
the positioning of the section insulators, air-gap insulators, earthing disconnectors, load
disconnectors and remote load disconnectors, fulfills the requirements of the Spanish railway
infrastructure administrator (ADIF) for 3,000 V railway electrifications or not [3] (testing
whether certain different states of these elements result in certain sections being under electric
tension or not). Now we have designed and implemented a new computer tool for this latter
goal based on an algebraic translation of the problem. Unlike in [2], the sections under electric
tension are determined by solving linear systems (the graph is undirected), and therefore far
bigger station layouts can be addressed.
The second author works in a railway electrification company, and this work addresses a
real-world need, nowadays manually checked by experts, just as KBS were verified in the past.
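As a hedged toy model of the underlying reachability question (the hypothetical data layout below is not ADIF's actual specification, and the real tool works algebraically on far richer element states), catenary sections can be treated as nodes of an undirected graph, closed disconnectors and insulator bridges as edges, and a section is under electric tension precisely when it is connected to a feeder:

```python
def energized_sections(n_sections, closed_edges, feeders):
    """Return the set of sections connected to at least one feeder,
    using union-find on the undirected connection graph."""
    parent = list(range(n_sections))

    def find(i):                       # path-halving union-find
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in closed_edges:          # each closed connection merges classes
        parent[find(a)] = find(b)

    live_roots = {find(f) for f in feeders}
    return {s for s in range(n_sections) if find(s) in live_roots}

# Sections 0-4; the 2-3 disconnector is open, so 3 and 4 stay dead.
live = energized_sections(5, [(0, 1), (1, 2), (3, 4)], feeders=[0])
```

Because the graph is undirected, connectivity (here via union-find, equivalently via a linear-system formulation) suffices, which is what lets much larger station layouts be handled.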

References
1. E. Roanes-Lozano and L. M. Laita and A. Hernando. An Algebraic Approach to Rule Based Expert
System. RACSAM Rev. R. Acad. Cien. Serie A. Mat. 104/1 (2010) 19-40. doi: 10.5052/RACSAM.2010.04.
2. E. Roanes-Lozano and E. Roanes-Macı́as and L. M. Laita. Railway Interlocking Systems and
Groebner Bases. Math. Comp. Simul. 51/5 (2000) 473-481. doi: 10.1016/S0378-4754(99)00137-8.
3. E. Roanes-Lozano and R. González-Martı́n. Matrix Approach to DC Railway Electrification Verifi-
cation. Proc. Comp. Sci. 108 (2017) 1424-1433. doi: 10.1016/j.procs.2017.05.226.

A Brief Reflection About Iterative Sentences and Arithmetic

Eugenio Roanes-Lozano
Instituto de Matemática Interdisciplinar & Depto. de Álgebra, Geometrı́a y Topologı́a,
Facultad de Educación, Universidad Complutense de Madrid, Spain
eroanes@mat.ucm.es

Abstract
I have taught mathematics with computers to students of the School of Education for 32 years
within the frame of different subjects and using different languages and hardware. Now I use
Scratch, Maple and GeoGebra. The goals of these subjects are to show how the computer
can help in the learning process in Primary and/or Secondary Education and to clarify and
strengthen the concepts of our students. Mastering a computational language or package is
not a goal. And I believe it is advisable to always introduce, when possible, mathematical
or computational ideas using examples well known by the students (in another context). For
instance, a procedure can be compared with the step by step explanation of a tennis shot.
Let us focus on a very specific problem: implementing arithmetic operators as nontrivial
examples of iterative sentences (depending on the level of the audience, I later introduce
them in a recursive way or not). This is not Peano’s arithmetic, but it is also a constructive
approach. This year we were working with Scratch 2 [1], and I proceeded as usual:
i) I explained how we calculate by hand a power whose exponent is a positive integer and
pushed the students to rediscover the underlying process (in order to implement it).
ii) Exercise: do the same for multiplication (from addition).
iii) Exercise: do the same for addition (from the elementary operation “add 1”, i.e., the successor).
iv) Exercise: implement the factorial function.
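In Python (standing in for Scratch, which the course actually uses), the constructions aimed at in i)-iv) might be sketched as follows, shown bottom-up so that the code runs; the classroom order described below is the reverse:

```python
def successor(n):
    return n + 1  # the elementary "add 1" operation of exercise iii)

def add(a, b):
    # iii) addition as b repeated applications of "successor"
    result = a
    for _ in range(b):
        result = successor(result)
    return result

def multiply(a, b):
    # ii) multiplication as repeated addition
    result = 0
    for _ in range(b):
        result = add(result, a)
    return result

def power(a, b):
    # i) a power whose exponent is a positive integer, as repeated multiplication
    result = 1
    for _ in range(b):
        result = multiply(result, a)
    return result

def factorial(n):
    # iv) here, unlike in i)-iii), the multiplicand changes at every step
    result = 1
    for k in range(2, n + 1):
        result = multiply(result, k)
    return result

print(power(3, 4), factorial(5))  # 81 120
```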
My experience is that this top-down order is the best possible one. It is clear that iv) goes
after i), ii) and iii), because you have to multiply by a number that changes. But why are i), ii)
and iii) best introduced in that order, although they seem to be of increasing “complexity”?
(Really, the algorithms are almost identical.) In my opinion, the reasons are:
i) We don’t memorize “power tables”, and what we do by hand to calculate a power whose
exponent is a positive integer is exactly the algorithm proposed.
ii) Although we memorize multiplication tables, we sometimes use this procedure to mentally
calculate the result in some cases like, e.g., 3 · 1250, so this process is somehow “fresh”.
iii) This is the process a kid who can count, but still doesn’t know how to sum, carries out
(“counting with the fingers”). For instance, 3 + 4 would correspond to counting 3 fingers
and then four times one more finger. As we haven’t done this for a long time, it seems more
difficult. Another reason is that “successor” is not recognized as an operation.
This is just a reflection after a long teaching experience, but I think it is a curious hypothesis.

References
1. Scratch 2 web page. https://scratch.mit.edu/.

141
Using Extensions of the Residue Theorem for Improper
Integrals Computations With CAS

José L. Galán-García, Gabriel Aguilera-Venegas, Pedro Rodríguez-Cielos, María Á.
Galán-García, Yolanda Padilla-Domínguez, Iván Atencia-Mc.Killop
University of Málaga
jlgalan@uma.es, gabri@ctima.uma.es, prodriguez@uma.es, magalan@ctima.uma.es,
ypadilla@ctima.uma.es, iatencia@ctima.uma.es

Ricardo Rodrı́guez-Cielos
Technical University of Madrid
ricardo.rodriguez@upm.es

Abstract
The computation of improper integrals of the first kind (integrals on unbounded domains) is
used in different applications in Engineering (for example in kinetic energy, electric poten-
tial, probability density functions, the Gamma (Γ) and Beta (β) functions, Laplace and Fourier
transforms, differential equations, . . . ). Nowadays, Computer Algebra Systems (CAS) are
used for developing such computations. But in many cases, some CAS lack the
appropriate rules for computing some of these improper integrals.
In a previous talk at ESCO 2016 and a later extension in [1], we introduced new rules for
computing improper integrals of the first kind using some results from advanced calculus
(the Residue Theorem, Laplace and Fourier transforms) aimed at improving CAS capa-
bilities on this topic. In this talk, we develop new rules for computing other types of improper
integrals using different applications of extended versions of the Residue Theorem.
The types of improper integrals we will compute are:
1. ∫₀^∞ f(x) g(x) dx ; ∫_{−∞}^0 f(x) g(x) dx and ∫_{−∞}^∞ f(x) g(x) dx,
   where g(x) = 1 or g(x) = sin(ax) or g(x) = cos(ax), and f(x) = p(x)/q(x) with the
   degree of p(x) smaller than the degree of q(x) and q(x) with no real roots of order
   greater than 1.
2. ∫₀^∞ x^α f(x) dx, where α ∈ ℝ ∖ ℤ or −1 < α < 0.
We will show some examples of such improper integrals that current CAS cannot compute.
Using extensions of the Residue Theorem in complex analysis, we will be able to develop
new rule schemes for these improper integrals. These new rules will improve the capabilities
of CAS, making them able to compute more improper integrals.
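The mechanism behind such rules can be illustrated numerically for the simplest case of type 1 (g(x) = 1, f(x) = 1/(1+x²)); this pure-Python check of the Residue Theorem is our own toy illustration, not the authors' CAS rules:

```python
import cmath
import math

def contour_integral(f, center, radius, n=8000):
    # approximate the closed contour integral of f along the
    # circle |z - center| = radius by summing f(midpoint) * chord
    total = 0j
    for k in range(n):
        t0 = 2 * math.pi * k / n
        t1 = 2 * math.pi * (k + 1) / n
        z0 = center + radius * cmath.exp(1j * t0)
        z1 = center + radius * cmath.exp(1j * t1)
        zm = center + radius * cmath.exp(1j * (t0 + t1) / 2)
        total += f(zm) * (z1 - z0)
    return total

f = lambda z: 1 / (1 + z * z)  # poles at z = i and z = -i

# Res_{z=i} f = (1/2πi) ∮ f dz over a small circle around the pole z = i
residue = contour_integral(f, 1j, 0.5) / (2j * math.pi)

# Residue Theorem (closing the contour in the upper half-plane):
#   ∫_{-∞}^{∞} dx/(1+x²) = 2πi · Res_{z=i} f = π
integral = 2j * math.pi * residue
print(abs(integral - math.pi))  # close to 0
```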

References
1. J. L. Galán-García and G. Aguilera-Venegas and M. Á. Galán-García and P. Rodríguez-Cielos
and I. Atencia-Mc.Killop. Improving CAS capabilities: New rules for computing improper integrals.
Applied Mathematics and Computation 316 (2018) 525-540.

142
Reiterated Homogenization of Flows in Deforming Double
Porosity Media

Eduard Rohan, Vladimir Lukeš, Jana Turjanicová
University of West Bohemia
rohan@kme.zcu.cz, vlukes@kme.zcu.cz, turjani@students.zcu.cz

Abstract

Double porosity materials consist of two very distinct porous systems whose inter-
action has a strong influence on fluid transfer and other mechanical properties. In general,
a “primary” and a “dual” porosity can be distinguished. These two systems, character-
ized by very different pore sizes, are arranged hierarchically: one is embedded in the other. In
the present study, we consider the fluid-structure interaction problem in the double porosity
medium. To respect the skeleton poroelasticity, we extend the model of the hierarchical flow
in a rigid double porosity medium described in [3]. The two-level homogenization by the
periodic unfolding method was applied to upscale the Stokes flow in a rigid micro-porosity
and, consequently, to upscale the Darcy-Stokes system relevant to the mesoscopic medium.
The macroscopic flow was described by a Darcy-Brinkman system of equations governing
macroscopic fields of pressure and flow velocity. The model derived in this conference paper
follows also the hierarchic upscaling procedure described in [1], where a static problem was
considered. In [2] we modified the model of a hierarchical poroelastic material by including the
Darcy flow in the microporosity. Here we consider the reiterated homogenization of the Stokes
flow problem with a strong contrast in the fluid viscosity between the micro- and meso-pores.
The 1st level homogenization leads to the Biot model associated with the microporosity. The
2nd level upscaling of the mesoscopic fractured medium then follows: the mesoscopic model
is constituted by the Biot model governing the microporosity and by the Stokes flow model of
the fractures. Modified Saffman interface conditions are obtained after the 1st level upscaling.
In the paper we present the local problems for characteristic responses; using their solutions
the homogenized coefficients of the meso- and macro-models are computed. Moreover, we
explain the reconstruction of the three-scale fields at the two levels of the heterogeneous porous
structure. We discuss the influence of the relative size between the micropores and the fractures
and also the influence of the mesoscopic interface condition, possibly modified to control the
slip velocity on the mesoscopic interfaces.
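For orientation, a Darcy-Brinkman system of the type mentioned above is conventionally written as (generic textbook form; the actual homogenized coefficients in this work are computed from the local cell problems):

```latex
-\nabla \cdot \left( \mu_e \nabla \mathbf{v} \right)
  + \mu K^{-1} \mathbf{v} + \nabla p = \mathbf{f},
\qquad
\nabla \cdot \mathbf{v} = 0,
```

where v and p are the macroscopic velocity and pressure, μ and μ_e the fluid and effective (Brinkman) viscosities, and K the permeability tensor.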
This research is supported by project GACR 16-03823S and in part by the project LO
1506 of the Czech Ministry of Education, Youth and Sports.

References

1. E. Rohan and R. Cimrman. Hierarchical numerical modelling of nested poroelastic media. Proc. of the
11th Int. Conf. on Comput. Structures Technology, B.H.V. Topping (ed.), Civil-Comp Press, Scotland,
2012.

143
2. E. Rohan and S. Naili and R. Cimrman and T. Lemaire. Double porosity in fluid-saturated elastic
media: deriving effective parameters by hierarchical homogenization of static problem. Continuum Mech.
Thermodyn., 28 (2016) 1263-1293.
3. E. Rohan and V. Lukeš and J. Turjanicová. A Darcy-Brinkman model of flow in double
porous media - two-level homogenization and computational modelling. Computers and Structures (2017)
In Press.

144
Multi-model Approach for Rotor Dynamics in Helicopter and
Wind Turbine Simulation

Melven Röhrig-Zöllner
German Aerospace Center (DLR)
Melven.Roehrig-Zoellner@DLR.de

Abstract

We present a multi-model approach for simulating complex systems with an example from he-
licopter dynamics. Our focus lies on the co-design of numerical methods, software engineering,
and an engineering modelling process.
Our goal is to simulate the dynamic behaviour (from vibrations to flight dynamics) of
rotor systems. This requires coupling of rigid structures, flexible elements, simple aerodynam-
ics and other sub-systems such as controllers. We want to allow general rotor configurations
avoiding “artificial” constraints in the coupling of these sub-systems. Therefore we use a
generic formulation for coupling multiple ODE models which leads to an index-1 DAE sys-
tem. The individual sub-models can be developed and tested independently. This means we
obtain a loose coupling of the software components. In addition we can start with simple
sub-models and add more complex behaviour when needed. We discuss the robust and fast
implementation of the employed numerical methods for the time-integration of the index-1
DAE system. Here we focus on half-explicit Runge-Kutta methods combined with exponential
integrators to tackle stiff systems.
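A generic semi-explicit index-1 DAE of the kind obtained from such a coupling has the form (standard notation; the specific coupling structure of the rotor code is not reproduced here):

```latex
\dot{y} = f(y, z), \qquad 0 = g(y, z),
```

where y collects the differential states of the sub-models and z the algebraic coupling variables; the index-1 property means that ∂g/∂z is invertible along the solution, so z can locally be resolved in terms of y.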

References
1. E. Kremser. Towards Helicopter Simulation with an Index-1 Differential-Algebraic Equations System.
Master’s Thesis. University of Cologne. 2017.

145
Dimensionality Reduction of Hybrid Rocket Fuel Combustion
Data

Alexander Rüttgers
German Aerospace Center (DLR), Simulation and Software Technology
Alexander.Ruettgers@dlr.de

Anna Petrarolo
German Aerospace Center (DLR), Institute of Space Propulsion
Anna.Petrarolo@dlr.de

Abstract

The project ATEK at the German Aerospace Center (DLR) focuses on novel techniques for
space transport vehicles to allow for cost reductions. One promising approach in this context
is the use of hybrid rockets that combine solid propellants and liquid propellants. However,
the process of hybrid rocket fuel combustion is still a matter of ongoing research and not fully
understood yet.
Recently, various experimental combustion tests on hybrid rocket fuels were performed at
DLR. For a better understanding of the experiments, the combustion process was captured
with a high-speed video camera. The decomposition of these videos leads to a large number
of about 30,000 images per measurement. Furthermore, each single image already contains
a complex data matrix since hybrid combustion is dominated by transient flow dynamics like
Kelvin-Helmholtz instability and vortex shedding. In the end, a high-dimensional dataset has
to be evaluated.
In this talk, we present the results of different dimensionality reduction techniques such
as a principal component analysis to reduce the complexity of the experimental dataset. Even
then, it is still essential to parallelize the statistical routines to obtain the results within a
reasonable amount of time. In a second step, we investigate the statistically independent
structures of the flow field that has been reduced in complexity by using an Independent
Component Analysis (ICA). More precisely, the data is either separated into spatially (sICA)
or temporally (tICA) independent components. Finally, both ICA approaches are compared
in order to better understand and interpret them.
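As a pure-Python illustration of the first reduction step, principal component analysis can be carried out by power iteration on the covariance matrix; this toy 2-D example is our own sketch, whereas the actual analysis operates on high-dimensional image data with parallelized routines:

```python
import math
import random

# synthetic 2-D "measurements": strongly stretched along one direction,
# then rotated by 45 degrees so the direction is not axis-aligned
random.seed(0)
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
data = []
for _ in range(500):
    x, y = random.gauss(0, 3), random.gauss(0, 1)
    data.append((c * x - s * y, s * x + c * y))

# sample covariance matrix (the means are ~0 by construction)
n = len(data)
cxx = sum(x * x for x, _ in data) / n
cyy = sum(y * y for _, y in data) / n
cxy = sum(x * y for x, y in data) / n

# power iteration for the leading eigenvector = first principal component
vx, vy = 1.0, 0.0
for _ in range(200):
    wx, wy = cxx * vx + cxy * vy, cxy * vx + cyy * vy
    norm = math.hypot(wx, wy)
    vx, vy = wx / norm, wy / norm

print(vx, vy)  # roughly (0.707, 0.707): the hidden 45-degree direction
```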

References
1. A. Petrarolo and M. Kobald. Evaluation techniques for optical analysis of hybrid rocket propulsion.
J. Fluid Sci. Technol. 11 (2016) 1-20.

146
Machine Learning Techniques for Global Sensitivity Analysis
in Earth System Models

Cosmin Safta, Khachik Sargsyan
Sandia National Labs
csafta@sandia.gov, ksargsy@sandia.gov

Daniel Ricciuto
Oak Ridge National Labs
ricciutodm@ornl.gov

Abstract

Studies dealing with Earth system models are challenged not only by the compute-intensive
nature of these models but also by the high dimensionality of the input parameter space. In
our previous work with the land model components (Sargsyan et al., 2014; Ricciuto et al.,
2018) we identified subsets of 10 to 20 parameters relevant for each output quantity of inter-
est (QoI) via Bayesian compressive sensing and variance-based decomposition. Nevertheless,
the algorithms were challenged by the nonlinear input-output dependencies of some of the
relevant QoIs.
In this work we explore a set of machine learning techniques to build computationally
inexpensive surrogate models tailored to land model predictions at several Fluxnet sites
(www.fluxdata.org). We compare the skill of machine-learning models, e.g. neural networks,
to identify the optimal number of classes in selected QoIs and construct robust multi-class
classifiers that partition the parameter space into regions with smooth input-output depen-
dencies. These classifiers are then coupled with sparse learning techniques to build sparse or
low-rank surrogate models tailored to each class. The multiclass surrogates are then used for
global sensitivity analysis (GSA) via variance-based decomposition, identifying parameters
that are important for each QoI. Besides confidence in GSA results, the error associated with
the surrogate can subsequently be used for likelihood construction in surrogate-enhanced
calibration studies.
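For reference, the first-order Sobol index obtained from variance-based decomposition for a parameter x_i and output Y is conventionally defined as (standard definition, not specific to this work):

```latex
S_i = \frac{\operatorname{Var}\bigl( \mathbb{E}[\, Y \mid x_i \,] \bigr)}{\operatorname{Var}(Y)},
```

and parameters with large indices are retained as important for the given QoI.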

References
1. K. Sargsyan and C. Safta and H.N. Najm and B.J. Debusschere and D. Ricciuto and P.
Thornton. Dimensionality reduction for complex models via Bayesian compressive sensing. International
Journal for Uncertainty Quantification, vol 4(1), 2014 .
2. D. Ricciuto and K. Sargsyan and P. Thornton. The impact of parametric uncertainties on biogeo-
chemistry in the ACME land model. Journal of Advances in Modeling Earth Systems, in press, 2018.

147
On Initial Coefficient Inequalities for New Subclasses of
Meromorphic Bi-univalent Functions

F. Müge Sakar
Batman University, Faculty of Management and Economics, Department of Business
Administration, Batman-TURKEY
mugesakar@hotmail.com

Abstract

In the present study, we investigate two new subclasses of meromorphic and bi-univalent
functions defined on ∆ = {z : z ∈ C, 1 < |z| < ∞}. We obtain improved estimates on the
initial Taylor-Maclaurin coefficients for the functions in these subclasses. Some other closely
related results are also indicated.
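For context, meromorphic univalent functions on Δ are usually normalized by a Laurent expansion of the form (standard notation for such classes; the new subclasses impose additional conditions not reproduced here):

```latex
f(z) = z + b_0 + \sum_{n=1}^{\infty} \frac{b_n}{z^n}, \qquad z \in \Delta,
```

and f is bi-univalent in this sense when both f and its inverse are univalent on Δ; the coefficient estimates concern the initial b_n.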

References
1. P.L. Duren. Coefficient of Meromorphic Schlicht Functions. Proc. Amer. Math. Soc. 28 (1971) 169-172.
2. D. A. Brannan and J. Clunie. Aspects of Contemporary Complex Analysis. in: Proceedings of the
NATO Advanced Study Institute held in Durham, England, August 26 - September 6, 1974, Academic
Press, New York, NY, USA, 1979.
3. J. Clunie and F. R. Keogh. On Starlike and Convex Schlicht Functions. MR 22, 1682, J. Lond. math.
Soc. 35 (1960) 229–233.
4. H.M. Srivastava and A.K. Mishra and P. Gochhayat. Certain Subclasses of Analytic and Bi-
univalent Functions. Appl. Math. Lett. 23 (10) (2010) 1188-1192.
5. G. Springer. The Coefficient Problem for Schlicht Mappings of the Exterior of the Unit Circle. Amer.
Math. Soc. 70 (1951) 421-450.

148
Concatenation Operator for Piecewise-defined Functions

José Alfredo Sánchez de León, Francisco Javier Almaguer Martı́nez, Marı́a Esther
Grimaldo Reyna
Facultad de Ciencias Fı́sico Matemáticas - Universidad Autónoma de Nuevo León
jose.sanchez@villacero.com, FRANCISCO.ALMAGUERMRT@uanl.edu.mx,
maria.grimaldory@uanl.edu.mx

Abstract

Piecewise-defined functions are a certain sort of mathematical object whose constituent
elements are single functions defined over a set of fixed domains; they appear very frequently
in the fields of mathematics, physics and engineering, where they play an important role.
The elements of this kind of function can be properly unified and merged into a single
plain equation, that is, concatenated. This can be done by means of some kind of
sigmoidal function, which builds a gate between each pair of elements in order to
concatenate them; an example of this is the Heaviside function [1]. However, sigmoidal
functions require an extra parameter of arbitrary precision to control how close those
elements can be with respect to some specific point in the new merged function. This paper
concerns the mathematical formulation of an operator aimed at the concatenation of the
elements of a piecewise-defined function, unifying them solely in terms of the constituent
elements themselves and their fixed domains; no extra parameter is needed for that
aim. The only requirements for the application of this operator are the continuity of the
elements of the function at the concatenation points and the existence of their limits
at those points. Throughout this document we expose the mathematical framework from
which the operator was built from the ground up, its formulation, as well as some scenarios
where it can be deployed.
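The Heaviside-gate approach that the proposed operator dispenses with can be sketched as follows (a toy continuous piecewise function of our own choosing; the operator itself is not reproduced here):

```python
def heaviside(x):
    return 0.0 if x < 0 else 1.0

# a piecewise-defined function, continuous at the junction point x = 1:
#   f(x) = x^2      for x < 1
#   f(x) = 2x - 1   for x >= 1
f1 = lambda x: x * x
f2 = lambda x: 2.0 * x - 1.0

def merged(x, c=1.0):
    # gate each element with Heaviside steps at the junction point c,
    # producing a single plain equation for the whole function
    return f1(x) * (1.0 - heaviside(x - c)) + f2(x) * heaviside(x - c)

print(merged(0.5), merged(1.0), merged(2.0))  # 0.25 1.0 3.0
```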

References
1. S. Schecter. Step functions, delta functions, and the variation of parameters formula. Mathematics
Department, North Carolina State University, http://www4.ncsu.edu/~schecter/ma341sp06/varpar.pdf.

149
Accelerating Multivariate Simulation Using Graphical
Processing Units With Applications to RNA-seq Data

A. Grant Schissler
Department of Mathematics and Statistics, University of Nevada, Reno
aschissler@unr.edu

Abstract

This talk will introduce a parallelized algorithm to facilitate massive, multivariate simula-
tions. A growing number of applications involve multivariate data with dependent marginals.
For example, gene expression measurements from biological samples do not behave indepen-
dently. At the same time, these data are often high dimensional (many measurements per
sample) and heterogeneous (differing distributions). These characteristics create a compu-
tational challenge when conducting Monte Carlo simulations needed to study the empirical
operating characteristics of statistical methodology. Often researchers resort to far-reaching
simplifying assumptions that greatly diminish the simulation’s usefulness and credibility. To
overcome this, we introduce a graphical processing unit (GPU) accelerated version of the
well-known Normal to Anything (NORTA) algorithm. In the course of algorithm develop-
ment, we’ll study the role that high-dimensional covariance estimation plays in computa-
tional efficiency and statistical properties. Moving to purely computational matters, we’ll
conduct benchmark studies to elucidate the scenarios that GPU-acceleration produces sub-
stantive speed-ups. We’ll conclude by deploying the GPU-NORTA algorithm to simulate
RNA-sequencing data sets in the context of breast cancer research.
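The core NORTA transformation can be sketched in a few lines of pure Python (bivariate case with Exp(1) marginals of our own choosing; the talk concerns a GPU-accelerated, high-dimensional version, so this only conveys the serial idea):

```python
import math
import random

random.seed(42)
rho = 0.8  # target correlation of the underlying normals

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norta_pair():
    # 1) correlated standard normals via a 2x2 Cholesky factor
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0, 1)
    # 2) map to uniforms through the normal CDF
    u1, u2 = std_normal_cdf(z1), std_normal_cdf(z2)
    # 3) apply the inverse CDFs of the target marginals (here: Exp(1))
    return -math.log(1.0 - u1), -math.log(1.0 - u2)

pairs = [norta_pair() for _ in range(20000)]
xs, ys = zip(*pairs)
m = len(pairs)
mx, my = sum(xs) / m, sum(ys) / m
sx = math.sqrt(sum(x * x for x in xs) / m - mx * mx)
sy = math.sqrt(sum(y * y for y in ys) / m - my * my)
corr = (sum(x * y for x, y in pairs) / m - mx * my) / (sx * sy)
print(mx, corr)  # mean near 1 (Exp(1) marginal), strong positive correlation
```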

References
1. M.A. Suchard and Q. Wang and C. Chan and J. Frelinger and A. Cron and M. West. Understanding
GPU Programming for Statistical Computation: Studies in Massively Parallel Massive Mixtures. Journal
of Computational and Graphical Statistics, 19(2):419-438, 2010.
2. M.C. Cario and B.L. Nelson. Modeling and generating random vectors with arbitrary marginal
distributions and correlation matrix. Technical report, 1997. Available at
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.48.281&rep=rep1&type=pdf.
3. C. Genest and J. Mackay. The Joy of Copulas: Bivariate Distributions with Uniform Marginals. The
American Statistician, 40(4):280-283, 1986.
4. J. Won and J. Lim and S. Kim and B. Rajaratnam. Condition-number-regularized covariance
estimation. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75(3):427-450,
2013.

150
Phase-field Formulation of Brittle Damage With Application
on Laminated Glass Beams

Jaroslav Schmidt
Czech Technical University in Prague, Faculty of Civil Engineering
jarasit@gmail.com

Abstract

Brittle fracture of laminated glass, seen as a layered composite material consisting of glass
panes and polymer interlayers, is investigated in this paper. In recent years this sandwich
material became popular in civil engineering and is increasingly used in load-bearing elements
such as staircases, columns, and floor systems. The phase-field formulation of brittle fracture
is employed because of its ability to predict crack initiation and propagation. As conventional
in continuum damage mechanics, a damage field s is defined along the centerline
in each brittle layer. For common cases it is sufficient to assume that the polymer inter-
layers do not suffer from damage and that the damage field s takes place only in the glass layers.
Due to the contact stress between glass fragments in the post-breakage stage, we implement the
anisotropic version of the phase-field formulation. It assumes that the damage field s affects only
tension, while in compression the material remains intact. Several numerical simulations and a
comparison between the small-strain and the large-deflection von Kármán formulations are presented
to show the usability of the phase-field formulation of damage for laminated glass beams.
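One common anisotropic (tension-compression split) phase-field energy of this type reads, cf. Miehe et al. [2] (written here with a damage variable d; the field s of this abstract may follow a different convention):

```latex
E(u, d) = \int_{\Omega} \left[ (1-d)^2 \, \psi^{+}(\varepsilon(u)) + \psi^{-}(\varepsilon(u)) \right] \mathrm{d}\Omega
  + G_c \int_{\Omega} \left[ \frac{d^2}{2\ell} + \frac{\ell}{2} \, |\nabla d|^2 \right] \mathrm{d}\Omega,
```

where only the tensile part ψ⁺ of the elastic energy is degraded, reproducing the tension-only damage behaviour described above; G_c is the fracture toughness and ℓ the regularization length.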

References
1. J. Kiendl and M. Ambati and L. De Lorenzis and H. Gomez and A. Reali. Phase-field description
of brittle fracture in plates and shells. Computer Methods in Applied Mechanics and Engineering, Vol 312
(2016), pp 374-394.
2. Ch. Miehe and M. Hofacker and F. Welschinger. A phase field model for rate-independent crack
propagation: Robust algorithmic implementation based on operator splits. Computer Methods in Applied
Mechanics and Engineering, Vol 199, No. 45, pp 2765-2778.
3. J. Vignollet and S. May and R. de Borst and C. V. Verhoosel. Phase-field models for brittle
and cohesive fracture. Meccanica, Vol 49, No. 11, pp 2587-2601.

151
Polyharmonic Splines Generated by Multivariate Smooth
Interpolation

Karel Segeth
Institute of Mathematics, Czech Academy of Sciences, Praha, Czechia
segeth@math.cas.cz

Abstract

Splines can be derived in two different ways: the algebraic one (where splines are understood
as functions defined piecewise with smoothness conditions at points of change of the function
definition) and the variational one (where splines are obtained by minimization of a quadratic
functional with constraints), see, e.g., Segeth (2018) for the case of the cubic 1D spline.
We show that the general variational approach called smooth interpolation (first intro-
duced by Talmi and Gilat (1977)) can be extended from 1D interpolation (odd degree polyno-
mial splines) to the multivariate case and the order of the spline can be extended as compared
with Mitáš and Mitášová (1988).
To this end, we choose the system of functions exp(−ik · x) for the basis of the space
where we minimize functionals and measure the smoothness of the result. The general form
of the interpolation formula is then the linear combination of the values of some radial basis
functions and low-order polynomials (called the trend in Mitáš and Mitášová (1988)) at nodes,
see Segeth (2018). The dimension considered in this contribution is 1, 2, and 3.
We also mention the problem of smooth curve fitting (data smoothing) and present a
simple numerical example. Smooth approximation in 2D and 3D can be a very useful tool
e.g. in computer aided geometric design or geographic information systems.
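The radial-basis-function form of such an interpolant can be sketched in 1D with the cubic polyharmonic basis φ(r) = r³ and a linear trend (an illustrative pure-Python toy, not the construction derived in the talk):

```python
def solve(A, b):
    # naive Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

phi = lambda r: r ** 3  # cubic polyharmonic radial basis function

nodes = [0.0, 1.0, 2.0, 3.0]
values = [0.0, 1.0, 0.0, 2.0]
n = len(nodes)

# interpolant s(x) = sum_i c_i * phi(|x - x_i|) + a + b*x, with the usual
# side conditions sum_i c_i = 0 and sum_i c_i * x_i = 0 on the coefficients
A = [[phi(abs(xi - xj)) for xj in nodes] + [1.0, xi] for xi in nodes]
A.append([1.0] * n + [0.0, 0.0])
A.append(list(nodes) + [0.0, 0.0])
rhs = values + [0.0, 0.0]
coef = solve(A, rhs)

def s(x):
    return sum(coef[i] * phi(abs(x - nodes[i])) for i in range(n)) \
        + coef[n] + coef[n + 1] * x

print([round(s(x), 6) for x in nodes])  # the interpolant reproduces the data
```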

References
1. L. Mitáš and H. Mitášová. General Variational Approach to the Interpolation Problem. Comput. Math.
Appl. 16 (1988) 983-992.
2. K. Segeth. Some Splines Produced by Smooth Interpolation. Appl. Math. Comput. 319 (2018) 387-394.
3. A. Talmi and G. Gilat. Method for Smooth Approximation of Data. J. Comput. Phys. 23 (1977) 93-123.

152
DRBEM Solution to MHD Flow in Ducts With Thin Slipping
Side Walls and Separated by Conducting Thick Hartmann
Walls

P. Senel, M. Tezer-Sezgin
Middle East Technical University
psenel44@gmail.com, munt@metu.edu.tr

Abstract
In this study, the dual reciprocity boundary element method (DRBEM) solution to magneto-
hydrodynamic (MHD) flow of an electrically conducting fluid is given in a single duct and in two
ducts stacked in the direction of the external magnetic field. The duct walls perpendicular to the
applied magnetic field (Hartmann walls) are conducting thick walls whereas the horizontal
walls (side walls) are insulated thin walls allowing the velocity slip. The DRBEM transforms
the convection-diffusion-type MHD equations in the duct and the Laplace equation in the thick
walls to boundary integral equations which are discretized by using constant elements. The
resulting DRBEM matrix-vector equations are solved as a whole with the coupled boundary
conditions on the common boundaries between the fluid and the thick walls. The effects of
the slip length, thickness of conducting walls and the strength of the applied magnetic field
are shown on the flow and the induced magnetic field. It is found that, in the absence of
slip, as the Hartmann number (Ha) increases the flow concentrates in front of the side walls
in two side layers, and this separation happens at a much smaller value of Ha
when the thickness of the conducting walls is increased. The Hartmann layers are diminished
when both the Ha and wall thickness are increased. For large Ha, the velocity magnitude
drops and thus the flow is flattened especially in the core region. The continuation of induced
magnetic fields to the thick walls is well observed in both co-flow and counter flow cases.
When the velocity slips on the thin side walls, the flow tends to form symmetric vortices in
front of the side walls showing the slip phenomenon with an increasing velocity magnitude.
If the conductivity of the fluid is larger than the conductivity of the thick walls, the flow
returns to a one-vortex form, still keeping Hartmann layers of the same order. Meanwhile, the in-
duced magnetic field closes itself inside the duct as if the Hartmann walls were insulated. The
proposed numerical scheme DRBEM is capable of capturing the well known MHD flow char-
acteristics in the ducts coupled with thick walls as well as the perturbations in the behaviors
of the flow and the induced magnetic field due to the thin slip walls.

References
1. L. Dragos. Magnetofluid dynamics. Abacus Press, England, 1975.
2. P.W. Partridge and C.A. Brebbia and L.C. Wrobel. The Dual Reciprocity Boundary Element
Method. Computational Mechanics Publications, Boston, 1992.
3. M.J. Bluck and M.J. Wolfendale. An analytical solution to electromagnetically coupled duct flow in
MHD. Journal of Fluid Mechanics, 771 (2015) 595-623.

153
Simulating Complex Shaped Particles With DEM

Eva Siegmann
RCPE GmbH
eva.siegmann@rcpe.at

Johannes Khinast
RCPE GmbH; Institute for Process and Particle Engineering, Graz University of Technology
khinast@tugraz.at

Gundolf Haase
Institute for Mathematics and Scientific Computing, Karl-Franzens-University
gundolf.haase@uni-graz.at

Abstract

Simulations of granular flows are an effective tool to gain a deeper understanding and sub-
sequently, optimization of processes such as fluidized beds, mixing, powder transport, etc.
The discrete element method (DEM) allows simulating these kinds of processes. Each par-
ticle is treated individually and its motion is described by Newton’s equation of motion. A
soft-sphere approach is used, where colliding particles are allowed to slightly overlap. This
overlap results in a repulsive force. The particle shape plays a crucial role in modelling the
process as realistically as possible. There are several approaches for modelling the shapes
of more complex particles. The multisphere approach clusters several spheres to one rigid
particle (Ketterhagen). This approach is very flexible and can describe various shapes. Bi-
convex tablets can be represented as the overlap of three spheres. Superquadrics represent
many shapes, such as ellipsoids, cylinder- and box-like particles (Podlozhnyuk et al.). To
increase the accuracy, particles can be modelled as convex polyhedrons (Govender et al.).
This approach allows simulating sharply-edged non-spherical particles. Although it is
computationally expensive, it is the only way to model complex shapes realistically. Whereas the
collision of spheres is easy to handle, more complex shapes are challenging. This work presents
an accurate contact algorithm for arbitrarily shaped convex polyhedrons. The algorithm has
been implemented in the GPU based DEM software XPS, allowing simulation of a very large
number of particles (Jajcevic et al.).
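The soft-sphere contact described above can be sketched for the simplest case of two spheres (a linear-spring normal force is assumed here purely for illustration; XPS and the polyhedral contact algorithm are far more involved):

```python
import math

def contact_force(p1, r1, p2, r2, k=1.0e4):
    # soft-sphere contact: colliding particles may overlap slightly, and
    # the overlap generates a repulsive force (a linear spring is assumed)
    dx = [b - a for a, b in zip(p1, p2)]
    dist = math.sqrt(sum(d * d for d in dx))
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return [0.0, 0.0, 0.0]  # no contact, no force
    normal = [d / dist for d in dx]  # unit vector from particle 1 to 2
    # repulsive force acting on particle 1 (pointing away from particle 2)
    return [-k * overlap * c for c in normal]

force = contact_force([0.0, 0.0, 0.0], 1.0, [1.9, 0.0, 0.0], 1.0)
print(force)  # overlap of 0.1 gives a force of magnitude ~1000 along -x
```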

References

1. W. Ketterhagen. Modeling the motion and orientation of various pharmaceutical tablet shapes in a film
coating pan using DEM. International Journal of Pharmaceutics, Volume 409, Issues 1-2, May 2011, pp.
137-149.
2. A. Podlozhnyuk and S. Pirker and C. Kloss. Efficient implementation of superquadric particles in
Discrete Element Method within an open-source framework. Computational Particle Mechanics, 2016, pp.
1-18.

154
3. N. Govender and D.N. Wilke and S. Kok and R. Els. Development of a convex polyhedral discrete
element simulation framework for NVIDIA Kepler based GPUs. Journal of Computational and Applied
Mathematics, Volume 270, November 2014, pp. 386-400.
4. D. Jajcevic and E. Siegmann and C. Radeke and J. Khinast. Large-scale CFD-DEM simulations
of fluidized granular systems. Chemical Engineering Science, Volume 98, 2013, pp. 298-310.

155
TiGL, an Open Source Computational Geometry Library for
Parametric Aircraft Design

Martin Siggel
German Aerospace Center (DLR)
martin.siggel@dlr.de

Abstract

The design and optimization of aircraft typically involves several different simulation codes.
Many of them – such as CFD simulations – require a description of the outer or internal geometry
of the aircraft. Here, we present the open source library TiGL, which serves as a central
geometry modeler for all those tools involved in the conceptual and preliminary aircraft and
helicopter design phase. This library is a joint development from the open source commu-
nity, foremost the German Aerospace Center, Airbus Defense and Space, and RISC Software
GmbH. To create a full 3-dimensional model of the aircraft, it uses the parametric CPACS [1]
description as its input, which amongst other things includes geometry cross sections and
their relative positioning to each other in a hierarchical manner.
At its core, TiGL has a parametric geometry modelling kernel based on OpenCASCADE,
which is used to generate the NURBS based surfaces of the aircraft. It models the external
and internal geometry of an aircraft such as wings, flaps, fuselages, engines or structural
elements of the wing. The library offers many functions to interact with the generated geom-
etry. These include functions for geometry exports into common file formats (STEP, IGES,
STL, VTK), to sample points on the aircraft surface, to project points onto the surface, to
compute intersections of planes with the aircraft, or functions to create 3D surface and volu-
metric meshes of the model. To do so, many different algorithms are involved, which include
NURBS interpolation and approximation, surface skinning, computation of intersections and
the projection of points onto the surface.
Although TiGL is written in C++, it also ships with bindings to other programming
languages, which are currently C, Python, Java, and MATLAB. In addition to the library,
the application TiGL Viewer is part of the TiGL package, which is an OpenGL based GUI
that displays the 3-dimensional geometries created by TiGL. It allows a convenient access to
TiGL functions and to execute small scripts to e.g. convert file formats or create screenshots
and animations. We are using an open software development process that also allows external
contributors to fix and further extend it via pull requests. The TiGL library is published under
the Apache License 2.0 and can be downloaded from https://github.com/DLR-SC/tigl.

References
1. B. Nagel and D. Böhnke and V. Gollnick and P. Schmollgruber and A. Rizzi and G. La
Rocca and J.J. Alonso. Communication in Aircraft Design: Can we establish a Common Language?.
28th International Congress of the Aeronautical Sciences, Brisbane, Australia, 2012.

156
Statistical Test for Fractional Brownian Motion Based on
Detrending Moving Average Algorithm

Grzegorz Sikora
Faculty of Pure and Applied Mathematics, Wroclaw University of Science and Technology
grzegorz.sikora@pwr.edu.pl

Abstract

Motivated by contemporary and rich applications of anomalous diffusion processes, we propose a new statistical test for fractional Brownian motion, which is one of the most popular models for anomalous diffusion systems. The test is based on the detrending moving average statistic and its probability distribution, which we determine as a generalized chi-squared distribution using the theory of Gaussian quadratic forms. The proposed test can be generalized for statistical testing of any centered Gaussian process. Finally, we examine the test via Monte Carlo simulations for two exemplary scenarios of subdiffusive and superdiffusive dynamics.

References
1. E. Alessio and A. Carbone and G. Castelli and V. Frappietro. Second-order moving average and scaling of stochastic time series. Eur. Phys. J. B, 27, 197, 2002.
2. A. M. Mathai and S. B. Provost. Quadratic Forms in Random Variables: Theory and Applications. Marcel Dekker, New York, 1992.
3. P. G. Moschopoulos. The distribution of the sum of independent gamma random variables. Ann. Inst. Stat. Math., 37, 541–544, 1985.

157
A Novel Approach for Detecting Unexpected Model Results
in E3SM Climate Model

Balwinder Singh, Philip J. Rasch, Hui Wan


Pacific Northwest National Laboratory
balwinder.singh@pnnl.gov, philip.rasch@pnnl.gov, hui.wan@pnnl.gov

Abstract

Comprehensive model testing is essential for maintaining simulation quality and numerical accuracy in a computer model. This can be a challenging task for complex numerical models such as those used for climate simulation. Traditional approaches such as unit tests that evaluate small chunks of source code, or system-level testing (e.g. regression testing) that checks for bit-for-bit solution reproduction, are valuable tools for maintaining solution integrity. These approaches are less useful when modifications to the source code are expected to change the model results, because the model equations support chaotic solutions or contain conditional operators that can introduce large changes from small perturbations. Under these circumstances, traditional testing tools do not give any insight into whether the changes in model results caused by the source code modifications are acceptable. A common scenario complicating model testing occurs when model developers are re-factoring the source code (e.g. rearranging do-loops) to improve the model's performance. In this case, the results can be non-bit-for-bit, but they should not differ in an unexpected way.
A resource-intensive approach (computationally expensive as well as involving a significant number of person-hours) to finding these unexpected cases is to run ensembles of long simulations and analyze the model output to characterize the statistical similarity of those simulations to previous model behavior every time a source code modification is made. Rosinski and Williamson (1997) suggested that many pathologies introduced when models are changed (by changing compilers, optimization levels, code refactoring, hardware, etc.) can be detected by comparing two very short simulations and evaluating their solution divergence, which is much less expensive than a statistical evaluation of long simulations. In this talk, we will describe a new and even faster method to detect such issues in the very complex E3SM (Energy Exascale Earth System Model) climate model. Our method is based on the technique described by Rosinski and Williamson (1997), but it differs by using an ensemble of single time step simulations. The test is capable of detecting source code modifications, changes in model parameters, and other changes in the computational environment (compilers, OS, libraries, etc.) which can significantly alter the model simulations.
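The single-time-step ensemble idea can be sketched as follows. This is a toy illustration only: a forward-Euler step of the chaotic Lorenz-63 system stands in for one E3SM time step, and all names, seeds, and thresholds are ours, not from the talk.

```python
import numpy as np

def model_step(state, r=28.0, dt=0.01):
    """One 'model time step': a forward-Euler step of the chaotic
    Lorenz-63 system, standing in for a single climate model step."""
    x, y, z = state
    return state + dt * np.array([10.0*(y - x), x*(r - z) - y, x*y - 8.0/3.0*z])

def one_step_ensemble(state0, n_members, eps, rng):
    """Ensemble of single-time-step simulations started from randomly
    perturbed initial conditions; its spread defines acceptance bounds."""
    perturbed = state0 + eps * rng.standard_normal((n_members, 3))
    return np.array([model_step(s) for s in perturbed])

rng = np.random.default_rng(1)
state0 = np.array([1.0, 1.0, 20.0])
ens = one_step_ensemble(state0, 100, 1e-10, rng)
lo, hi = ens.min(axis=0), ens.max(axis=0)

# A benign change (here: the unmodified model) stays inside the ensemble
# spread after one step, while a genuine parameter change falls outside it.
benign = model_step(state0)
modified = model_step(state0, r=28.01)
```

The point of the one-step design is exactly this separation: round-off-level differences remain within the perturbation-ensemble bounds, whereas answer-changing modifications escape them after a single step.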

References
1. Rosinski J. M. and Williamson D. L. The accumulation of rounding errors and port validation for global atmospheric models. SIAM J. Sci. Comput., 18, 552–564 (1997), doi:10.1137/S1064827594275534.

158
Calculation of Linear Induction Motor Features by the Detailed
Equivalent Circuit Method Taking Into Account
Non-linear Electromagnetic and Thermal Properties

Fedor Sarapulov, Ivan Smolyanov, Fedor Tarasov


Ural Federal University
sarapulovfn@yandex.ru, i.a.smlianov@urfu.ru, F.E.Tarasov@urfu.ru

Abstract

The work analyzes the accuracy of calculating linear induction motor features by a detailed equivalent circuit, taking into account non-linear electromagnetic and thermal parameters. The computational load is reduced by considering the problem in only two spatial coordinates. The influence of the third coordinate is taken into account by Bolton's coefficient for the equivalent electrical conductivity of the conducting layers of the domain in the magnetic quasi-static problem. The heat problem considers short-term increased thermal loads on various parts of the linear electric motor. The method is validated by comparing its results with data calculated by the finite element method. Experience with this approach indicates good agreement between calculated and experimental data.

References
1. F. Sarapulov; S. Sarapulov; I. Smolyanov. Compensated linear induction motor characteristics re-
search by detailed magnetic equivalent circuit. 2017 International Conference on Industrial Engineering,
Applications and Manufacturing (ICIEAM).
2. F. Sarapulov; V. Frizen; I. Smolyanov; E. Shmakov. Dynamic study of thermal characteristics of
linear induction motors. 2017 15th International Conference on Electrical Machines, Drives and Power
Systems (ELMA).
3. F. Sarapulov; S. Sarapulov; I. Smolyanov. Research of thermal regimes of linear induction motor.
2017 18th International Conference on Computational Problems of Electrical Engineering (CPEE).

159
Numerical Analysis of Induction Heating by 3D Modeling

Vaclav Kotlan, Ivo Doležel, Pavel Karban


University of West Bohemia
vkotlan@kte.zcu.cz, karban@kte.zcu.cz, idolezel@kte.zcu.cz

Ivan Smolyanov
Ural Federal University
i.a.smolianov@urfu.ru

Abstract

The article compares calculations of a three-dimensional coupled problem of magnetic and temperature fields performed in COMSOL Multiphysics with results previously obtained with the Flux software. Induction hardening is formulated with the help of two non-linear partial differential equations. The magnetic and thermal properties of the steel billet are considered temperature-dependent, taking into account the saturation of the steel. Comments on optimizing the calculation time for this type of task are given. Building the model mesh is a difficult task in this problem; therefore, a number of comments on the optimal selection of elements and the construction of the mesh are given. The quality of hardening is improved by changing the parameters of the power source during the heating of the steel gear wheel. The resulting billet temperature profiles for one-frequency and two-frequency heating are analyzed.

References
1. J. Barglik and A. Smalcerz a and R. Przylucki and I. Doležel. 3D modeling of induction hard-
ening of gear wheels. Journal of Computational and Applied Mathematics 270 (2014) 231–240.
2. E. Wrona and B. Nacke. 3D-Modelling for the Solution of Sophisticated Induction Hardening Tasks.
International Workshop Simulation of Manufacturing Processes for Product Development Gothenburg,
May 20-21 (2003) 1-6.

160
Industry 4.0 Requires Education 4.0

Pavel Solin
University of Nevada, Reno
solin@unr.edu

Abstract

The fourth industrial revolution is changing not only the way we live, but also the way we
work. And the main changes are yet to come. According to McKinsey and Company [1],
currently demonstrated technologies could automate 45 percent of the activities people are
paid to perform. With automation taking place in most industries, education in areas such as
computer programming, 3D modeling, statistics, data analysis and machine learning is essential. However, the current education system is not ready for the change. The National Science Foundation reports that computer programming and STEM jobs are growing rapidly, but the education system is lagging behind the workforce demand due to the lack of trained teachers [2]. In this presentation we summarize our findings from 8 years of active collaboration with a wide range of entities including K-12 schools, colleges, universities, workforce
development agencies, government agencies, and employers. Our conclusions are compatible
with the National Science Report [3]. We present an overview of the weak points of the current
education system and introduce a new, active and student-centered approach which elimi-
nates them. We also present its concrete implementation via courses where students learn at
their own pace by solving problems, and show examples illustrating the positive impact on
students.

References
1. M. Chui and J. Manyika and M. Miremadi. Where Machines Could Replace Humans - And Where They Can't (Yet). McKinsey Quarterly, July 2016.
2. National Science Foundation. What Does the S&E Job Market Look Like for U.S. Graduates?.
https://www.nsf.gov/nsb/sei/edTool/data/workforce-03.html. Accessed March 28, 2018.
3. National Science Foundation. Enough With The Lecturing. NSF News Release 14-064, 2014.

161
Generalized Tellegen’s Theorem and Its Applications in
System Theory

Milan Stork
University of West Bohemia, Department of Applied Electronics and Telecommunications
stork@kae.zcu.cz

Daniel Mayer
University of West Bohemia, Department of Theory of Electrical Engineering
mayer@kte.zcu.cz

Abstract

The paper deals with the application of Tellegen's theorem to systems modeling and solutions. The proposed approach is based on a generalization of Tellegen's theorem, well known from electrical engineering. The novelty of this approach is that it is based on the abstract state-space energy for real linear, nonlinear and chaotic systems. The main aim of the contribution is to formulate the fundamental problem of detecting the physical correctness of system representations, and subsequently to propose how such detection is possible. It is derived that, for a system described by a special structure and appropriate mathematical equations, the generalized Tellegen's theorem is given as the scalar product of the vector of state-space variables and its derivative. Mathematically as well as physically correct results are obtained. Some well-known and often used system representation structures are discussed from the abstract state-space energy point of view. Most importantly, the time evolution of energy can be used for the classification of systems, e.g. periodic, non-periodic, chaotic. Examples of linear, nonlinear and chaotic systems are also included.
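A minimal numerical sketch of this idea (our own toy systems, not the paper's formulation): for x' = Ax, the abstract state-space energy E = ½xᵀx evolves with rate Ė = xᵀx', the scalar product of the state vector with its derivative, and the time evolution of E separates a periodic system from a dissipative one:

```python
import numpy as np

def energy_evolution(A, x0, dt=1e-3, steps=20000):
    """Integrate x' = A x with forward Euler and track the abstract
    state-space energy E = 0.5 * x^T x, whose rate dE/dt = x^T x' is
    the scalar product of the state vector with its derivative."""
    x = np.array(x0, dtype=float)
    E = np.empty(steps)
    for k in range(steps):
        E[k] = 0.5 * x @ x
        x = x + dt * (A @ x)
    return E

# Undamped oscillator: energy (nearly) conserved -> periodic system
A_periodic = np.array([[0.0, 1.0], [-1.0, 0.0]])
# Damped oscillator: energy decays -> dissipative system
A_damped = np.array([[0.0, 1.0], [-1.0, -0.5]])

E_p = energy_evolution(A_periodic, [1.0, 0.0])
E_d = energy_evolution(A_damped, [1.0, 0.0])
```

After 20 time units the undamped system retains essentially all of its energy (up to a small forward-Euler drift), while the damped system has lost nearly all of it, which is the kind of energy-based classification the abstract refers to.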

References
1. P. Ramachandran and V. Ramachandran. Tellegen's Theorem Applied to Mechanical, Fluid and Thermal Systems. Proceedings of the 2001 American Society for Engineering Education Annual Conference & Exposition, American Society for Engineering Education, 2001.
2. B. Jakoby. The relation of Tellegen's theorem to the continuous field equations revisited. International Journal of Circuit Theory and Applications, 39(4), 411–415, April 2011, doi:10.1002/cta.646.
3. J. Willems. Active Current, Reactive Current, Kirchhoff's Laws and Tellegen's Theorem. Electrical Power Quality and Utilisation Journal, XIII(1), 2007.

162
Numerical Solution of Nonlinear Aeroelastic Problems Using
Linearized Approach and Finite Element Approximations

Petr Sváček
Czech Technical University in Prague, Faculty of Mechanical Engineering, Department of
Technical Mathematics, Prague, Czech Republic
petr.svacek@fs.cvut.cz

Abstract

The mathematical modelling of fluid-structure interaction (FSI) problems is important in various applications, such as the aeroelastic tools used to investigate aircraft safety. The classical aeroelastic approach based on asymptotic aeroelasticity is fast, efficient and reliable, which is why it is still very popular in technical practice. The critical velocity can be determined, but in general asymptotic stability does not guarantee safety. Even if the system is aeroelastically stable, an external excitation can cause transient growth and consequently structural failure. In this case the use of computational methods is an alternative which can provide additional information.
The mathematical modelling of FSI problems in general is much more complicated, as viscous, possibly turbulent, flow needs to be modelled. Moreover, the flow field interacts with the nonlinear behavior of the elastic structure, and due to the vibrating structure the time changes of the flow domain have to be taken into account. Last, the coupled system for the fluid flow and for the oscillating structure needs to be solved simultaneously using a coupled strategy at every time instant. All this together makes high-fidelity aeroelastic models still very demanding to solve efficiently, and thus their direct use in industry is rather rare.
This paper is interested in the solution of two-dimensional aeroelastic problems. The classical approach of linearized aerodynamic forces is used to determine the aeroelastic instability and the aeroelastic response in terms of frequency and damping coefficient. This approach is compared to the coupled fluid-structure model solved with the aid of the finite element method used for the approximation of the incompressible Navier-Stokes equations. The finite element approximations are coupled to the non-linear equations of motion of a flexibly supported airfoil; the cases of two and three degrees of freedom are considered. Both methods are first compared for the case of small displacements, where the linearized approach can be well adopted. The influence of nonlinearities in the post-critical regime is tested and the numerical results are discussed.
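The linearized approach can be illustrated with a toy 2-DOF model (all matrices below are invented placeholders, not the model from the talk): the aeroelastic instability is found by tracking the eigenvalues of the first-order system as the airspeed parameter grows, and the critical (flutter) velocity is the first speed at which a mode starts to grow.

```python
import numpy as np

# Toy 2-DOF linearized aeroelastic system: M q'' + C q' + (K + V^2 Ka) q = 0,
# with a skew-symmetric (circulatory) aerodynamic stiffness coupling.
M = np.diag([1.0, 0.25])
K = np.diag([1.0, 4.0])
C = 0.04 * M                               # mass-proportional structural damping
Ka = np.array([[0.0, 1.0], [-1.0, 0.0]])   # illustrative aerodynamic coupling

def max_growth_rate(V):
    """Largest real part of the eigenvalues of the first-order system at speed V."""
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-Minv @ (K + V**2 * Ka), -Minv @ C]])
    return np.linalg.eigvals(A).real.max()

speeds = np.linspace(0.0, 3.0, 301)
growth = np.array([max_growth_rate(V) for V in speeds])
V_crit = speeds[np.argmax(growth > 0)]     # first speed with a growing mode
```

Below the critical speed every mode is damped; at V_crit two modes coalesce in frequency and one starts to grow, which is the frequency/damping-coefficient picture the linearized approach provides.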

References
1. E. H. Dowell and R. N. Clark. A modern course in aeroelasticity. Kluwer, Boston, 2004.
2. M. Feistauer and J. Horáček and P. Sváček . Numerical Simulation of Airfoil Vibrations Induced
by Turbulent Flow. Communications in Computational Physics 17(1), 146-188, 2015.

163
Gradient Methods of Training and Generalization Ability of a
Biological Neural Network Model

Aleksandra Świetlicka, Krzysztof Kolanowski


Poznan University of Technology
aleksandra.swietlicka@put.poznan.pl, krzysztof.kolanowski@put.poznan.pl

Abstract
In this research we focus on the generalization ability of a model of a biological neuron with
dendritic structure (which can be understood as a simple biological neural network). Our
previous work covers issues like development of the biological neural network model based on
Markov kinetic schemes [6] (discretization, implementation and simulation of the model) and
introduction of training in this model [4]. We also tested the generalization ability of a biological neuron model with a point-like structure in [5]. A natural continuation of this research is to examine the generalization ability of the kinetic model of a biological neural network, since generalization is one of the most important features of neural networks in general.
A commonly used method of testing the generalization ability of any neural network is to examine the Vapnik-Chervonenkis dimension. A huge disadvantage of this method is that it requires a sufficiently large training set. Another approach, which is more and more often a subject of research in this area, modifies the error function of a neural network by adding a performance index (regularizer). This performance index can take one of many forms, e.g. a Tikhonov functional [2], a penalty function [3] or the squared norm of the network curvature [1].
In this work, we examine three different forms of a regularizer to test the generalization
ability of the stochastic kinetic model of a biological neural network. Additionally, we consider
an improved training procedure, where - besides the gradient descent algorithm - we use
conjugate gradient, stochastic gradient and Newton’s methods. As an application we use a
problem of noise reduction in an image.
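The regularized-error idea can be sketched on a linear toy model (our own illustration, in the spirit of the weight-decay penalty of [3]; all data and names are invented): the error function is augmented with a performance index λ‖w‖², and gradient descent is run on the sum.

```python
import numpy as np

def train(X, y, lam, lr=0.1, epochs=500):
    """Gradient descent on the regularized error
    E(w) = ||X w - y||^2 / N + lam * ||w||^2,
    i.e. the usual error plus a performance index (weight decay)."""
    w = np.zeros(X.shape[1])
    N = len(y)
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / N + 2.0 * lam * w
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 10))
w_true = np.zeros(10); w_true[:2] = [1.0, -1.0]   # only 2 informative inputs
y = X @ w_true + 0.3 * rng.standard_normal(40)

w_plain = train(X, y, lam=0.0)
w_reg = train(X, y, lam=0.1)
# The penalty shrinks the weight vector, which tends to improve generalization
print(np.linalg.norm(w_reg) < np.linalg.norm(w_plain))   # prints True
```

The same scheme carries over to the other regularizer forms mentioned above: only the gradient of the performance index changes, while the training loop stays the same.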

References
1. M. Galicki and L. Leistritz and E.B. Zwick and H. Witte. Improving Generalization Capabilities of Dynamic Neural Networks. Neural Computation (2004), vol. 16, no. 6, pp. 1253–1282, doi: 10.1162/089976604773717603.
2. S. Haykin. Neural Networks: A Comprehensive Foundation. second edition, Pearson Education, 1998.
3. A. Krogh and J. Hertz. A simple weight decay can improve generalization. in: Advances in Neural
Information Processing Systems 4, Morgan Kaufmann, 1992, pp. 950-957.
4. A. Świetlicka. Trained stochastic model of biological neural network used in image processing task.
Applied Mathematics and Computation (2015), vol. 267, pp. 716-726, doi: 10.1016/j.amc.2014.12.082.
5. A. Świetlicka and K. Kolanowski and R. Kapela and M. Galicki and A. Rybarczyk. Investi-
gation of generalization ability of a trained stochastic kinetic model of neuron. Applied Mathematics and
Computation (2017), doi: 10.1016/j.amc.2017.01.058.
6. A. Świetlicka and K. Gugala and W. Pedrycz and A. Rybarczyk. Development of the Determin-
istic and Stochastic Markovian Model of a Dendritic Neuron. Biocybernetics and Biomedical Engineering
(2017), vol. 37, issue 1, pp. 201-216, DOI: 10.1016/j.bbe.2016.10.002.

164
Non-intrusive Parameter Identification of Transport Processes

Jan Sýkora, Jan Havelka


Czech Technical University in Prague
jan.sykora.1@fsv.cvut.cz, jan.havelka.1@fsv.cvut.cz

Abstract

In many fields it is advantageous to analyze a construction or a material sample without intervening in the structure itself. This contribution presents such a numerical procedure, relying solely on data gathered on the boundary. Our interest is focused on building materials and their properties when exposed to coupled heat and moisture transport. As the material model, we introduce Künzel's transport model for its relative simplicity and sufficient accuracy in describing the underlying physical phenomena of coupled transport processes. The material model parameters are identified from real climatic boundary conditions, considering a variety of domain shapes and parameter settings.
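The flavour of such a boundary-data-only identification can be sketched with a toy 1D heat conduction problem (our own drastic simplification; Künzel's coupled model and the real climatic data are far richer, and all names and values below are invented): an unknown conductivity is recovered by matching only the temperature history measured at one boundary.

```python
import numpy as np

K_MAX = 5.0

def boundary_response(k, n=30, steps=4000):
    """Forward model: explicit 1D heat conduction in a unit rod with the
    left end held at T=1; returns the temperature history at the
    insulated right end -- the only 'measurable' quantity."""
    dx = 1.0 / n
    dt = 0.4 * dx**2 / K_MAX           # one step size, stable for all k <= K_MAX
    T = np.zeros(n + 1); T[0] = 1.0
    hist = np.empty(steps)
    for s in range(steps):
        T[1:-1] += k * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
        T[-1] = T[-2]                   # insulated right boundary
        hist[s] = T[-1]
    return hist

# Synthetic 'measurement' generated with the true conductivity k = 2.0
measured = boundary_response(2.0)

# Non-intrusive identification: fit the boundary data only, via a grid search
candidates = np.linspace(0.5, 4.0, 36)
misfit = [np.sum((boundary_response(k) - measured)**2) for k in candidates]
k_identified = candidates[int(np.argmin(misfit))]
```

In the actual inverse problem the grid search would be replaced by a proper optimization or Bayesian update over the Künzel model parameters, but the structure is the same: a forward model evaluated repeatedly against boundary observations.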

References
1. A. P. Calderón. On an inverse boundary value problem. Computational & Applied Mathematics, 25
(2006), 133 - 138.
2. J. Sylvester and G. Uhlmann. A Global Uniqueness Theorem for an Inverse Boundary Value Problem.
Annals of Mathematics, 125(1) (1987), 153-169.
3. A. I. Nachman. Global Uniqueness for a Two-Dimensional Inverse Boundary Value Problem. Annals of
Mathematics, 143(1) (1996), 71-96.
4. R. E. Langer. An inverse problem in differential equations. Bulletin of the American Mathematical
Society, 39(10) (1933), 814-820.
5. J. Sýkora. Modeling of degradation processes in historical mortars. Advances in Engineering Software,
70 (2014), 203-214.
6. E. Somersalo and M. Cheney and D. Isaacson. Existence and Uniqueness for Electrode Models for
Electric Current Computed Tomography. SIAM Journal on Applied Mathematics, 52(4) (1992), 1023-
1040.

165
Parallel, High Performance, Fuzzy Logic Systems Realized in
Hardware

Tomasz Talaśka
UTP University of Science and Technology, Faculty of Telecommunication, Computer
Science and Electrical Engineering, ul. Kaliskiego 7, 85-796, Bydgoszcz, Poland
tomasz.talaska@gmail.com

Abstract

Fuzzy systems play an important role in many industrial applications [1]. Systems of this type can be implemented using different techniques [2, 3]. The most popular is realization in software, owing to the ease of such implementation, which facilitates modifications, corrections and testing. On the other hand, such realizations are usually not convenient when a high data rate, low cost per unit and a high degree of miniaturization are required. In this work we propose an efficient, fully digital, asynchronous (clock-less) realization of existing fuzzy logic (FL) operators [4] suitable for application in larger fuzzy systems implemented in low-power application-specific integrated circuits. The following FL operators are presented: bounded sum, bounded difference, bounded product, bounded complement, union (MAX), intersection (MIN), absolute difference, implication, and equivalence. All of them have been realized in a CMOS 130 nm technology and thoroughly verified in the Hspice environment. The proposed circuits can be scaled to any signal resolution and can be used in larger fuzzy systems working in parallel and fully asynchronously. In comparison with the analog approach, the digital realization presented in this work offers important advantages: circuits of this type feature high noise immunity and low sensitivity to the variation of transistor parameters. Furthermore, digital data can easily be stored even for long periods of time, unlike in typical analog solutions.
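For reference, the bounded (Łukasiewicz-style) operators listed above have simple arithmetic definitions. A plain software model could read as follows (floats in [0,1]; the hardware operates on n-bit fixed-point words instead, and this sketch says nothing about the asynchronous circuit realization itself):

```python
def bounded_sum(a, b):        # a (+) b = min(1, a + b)
    return min(1.0, a + b)

def bounded_difference(a, b): # a (-) b = max(0, a - b)
    return max(0.0, a - b)

def bounded_product(a, b):    # a (*) b = max(0, a + b - 1)
    return max(0.0, a + b - 1.0)

def complement(a):            # not a = 1 - a
    return 1.0 - a

def union(a, b):              # MAX
    return max(a, b)

def intersection(a, b):       # MIN
    return min(a, b)

def absolute_difference(a, b):
    return abs(a - b)

def implication(a, b):        # Lukasiewicz implication: min(1, 1 - a + b)
    return min(1.0, 1.0 - a + b)

def equivalence(a, b):        # 1 - |a - b|
    return 1.0 - abs(a - b)
```

Each operator is a saturating add, subtract, or compare, which is precisely what makes compact clock-less digital implementations attractive.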

References
1. I. J. Rudas and Ildar Z. Batyrshin and A. Hernandez Zavala and O. Camacho Nieto and L.
Horvath and L. Villa Vargas. Generators of Fuzzy Operations for Hardware Implementation of Fuzzy
Systems. Advances in Artificial Intelligence, Mexican International Conference on Artificial Intelligence
(MICAI), (2008).
2. R. Dlugosz and W. Pedrycz. Lukasiewicz fuzzy logic networks and their ultra-low power hardware
implementation. Neurocomputing, Elsevier, vol. 73, (2010).
3. T. Talaśka. Implementation of Fuzzy Logic Operators as Digital Asynchronous Circuits in CMOS Tech-
nology. International Conference on Microelectronics (MIEL 2017), Nis, Serbia, (2017).
4. T. Yamakawa and T. Miki. The Current Mode Fuzzy Logic Integrated Circuits Fabricated by the
Standard CMOS Process. IEEE Transactions on Computers, C-35, 2, (1986).

166
Towards Performance-Portability of the Albany/FELIX
Land-Ice Solver to New and Emerging Architectures Using
Kokkos

Irina Tezaur, Jerry Watkins


Sandia National Laboratories
ikalash@sandia.gov, jwatkin@sandia.gov

Irina Demeshko
Los Alamos National Laboratory
irina@lanl.gov

Abstract

As high performance computing (HPC) architectures become more heterogeneous, climate codes must adapt to take advantage of potential performance capabilities. This talk will focus on performance-portability of the Sandia Albany/FELIX finite element land-ice solver to machines with new and emerging architectures. The computational time for an ice sheet simulation in
FELIX is divided into two pieces, each comprising approximately 50 percent of the total
run time: finite element assembly (FEA), and linear solves. In the context of two climate
applications implemented within Albany, the FELIX land-ice solver and also the Aeras global
atmosphere dycore, we will discuss our efforts (Demeshko et al, 2017) in transitioning the
FEA in Albany from an MPI-only to an MPI+X programming model via the Kokkos library
(Edwards et al, 2014) and programming model. In this model, MPI is used for internode
parallelism and X denotes a shared-memory programming model for intranode parallelism
(e.g., X=OpenMP, CUDA). With Kokkos data layout abstractions, the same code can run
correctly and efficiently on current and future HPC hardware with different memory models.
We will describe some key performance developments in the finite element assembly process
within the FELIX land-ice model, as well as future performance goals in strong and weak
scalability on a variety of different architectures including NVIDIA GPUs and Intel Xeon Phis.
A perspective towards performance portability of the Trilinos-based linear solvers utilized by
FELIX will also be provided.

References
1. I. Demeshko and J. Watkins and I. Tezaur and O. Guba and W. Spotz and A. Salinger and
R. Pawlowski and M. Heroux.. Towards performance-portability of the Albany finite element analysis
code using the Kokkos library. J. HPC Appl. (2017).
2. H. Edwards and C. Trott and D. Sunderland. Kokkos: Enabling manycore performance portability
through polymorphic memory access patterns. J. Parallel and Distributed Computing, 74(12) 3202-3216,
2014.

167
The Jacobi-Davidson Eigensolver on GPU Clusters

Jonas Thies
German Aerospace Center (DLR), Simulation and Software Technology
Jonas.Thies@DLR.de

Dominik Ernst
Erlangen Regional Computing Center (RRZE)
dominik.ernst@fau.de

Abstract

Compared to multi-core processors, GPUs typically offer a higher memory bandwidth, which makes them attractive for memory-bound codes like sparse linear and eigenvalue solvers.
The fundamental performance issue we encounter when implementing such methods for mod-
ern GPUs is that the ratio between memory bandwidth and memory capacity is significantly
higher than for CPUs. When solving large-scale problems one therefore has to use more
compute nodes and is quickly forced into the strong scaling limit. In this paper we consider
an advanced eigensolver (the block Jacobi-Davidson QR method [1]), implemented in the
PHIST software (https://bitbucket.org/essex/phist/). We aim to provide a blueprint
and a framework for implementing other iterative solvers like Krylov subspace methods for
modern architectures that have relatively small high-bandwidth memory. The techniques we
explore to reduce the memory footprint of our solver include mixed precision arithmetic and
recalculating quantities ‘on-the-fly’. We use performance models to back our results theoret-
ically and ensure performance portability.
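One of the memory-footprint techniques can be illustrated in a few lines (a numpy toy of ours, not PHIST code): the large subspace basis is stored in single precision, halving its memory footprint, and only the small projected eigenproblem is promoted to double precision.

```python
import numpy as np

def rayleigh_ritz_mixed(A, V32):
    """Rayleigh-Ritz step with a float32 basis: the large vectors live in
    single precision, while the small projected problem runs in double."""
    V = V32.astype(np.float64)          # promote only for the computation
    H = V.T @ (A @ V)                   # projected (Rayleigh) matrix
    theta, s = np.linalg.eigh(H)
    return theta[0], V @ s[:, 0]        # smallest Ritz pair

rng = np.random.default_rng(2)
n = 500
A = np.diag(np.arange(1.0, n + 1))      # known spectrum 1..n
# Subspace spanned by slightly perturbed copies of the lowest eigenvectors
V, _ = np.linalg.qr(np.eye(n)[:, :10] + 1e-5 * rng.standard_normal((n, 10)))
theta, x = rayleigh_ritz_mixed(A, V.astype(np.float32))
```

Despite the single-precision storage, the smallest Ritz value recovers the smallest eigenvalue 1 to well below the float32 storage error of the basis, because the rounding enters only through the subspace, not through the accumulation.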

References
1. M. Röhrig-Zöllner and J. Thies and M. Kreutzer and A. Alvermann and A. Pieper and
A. Basermann and G. Hager and G. Wellein and H. Fehske. Increasing the Performance of the
Jacobi-Davidson Method by Blocking. SIAM J. Sci. Comput. 37(6), 2015.

168
Fast, Flexible Particle Simulations: An Introduction to
MercuryDPM

Anthony Thornton, Thomas Weinhart


University of Twente
a.r.thornton@utwente.nl, T.Weinhart@utwente.nl

Abstract

In this presentation we review some recent advances in discrete particle modelling (DEM/DPM)
undertaken at the University of Twente. We introduce the open-source package MercuryDPM
that we have been developing over the last few years.
MercuryDPM is an object-oriented C++ algorithm with an easy-to-use user interface and
a flexible core, allowing developers to quickly add new features. Its open-source developers’
community has developed many features, including moving and curved walls (polygons, cone
sections, helices, screw threads, etc); state-of-the-art granular contact models (wet, charged,
sintered, etc); specialised classes for common geometries (chutes, hoppers, etc); general in-
terfaces (particles/walls/boundaries can all be changed with the same set of commands);
restarting; visualisation (xBalls and Paraview); a large self-test suite; and numerous tutorials
and demos.
In addition, MercuryDPM has two major components that cannot be found in other DPM packages. Firstly, it uses an advanced contact detection method, the hierarchical grid. This algorithm has a lower complexity than the traditional linked-list algorithm for poly-dispersed flows, which for the first time allows large simulations with wide particle size distributions.
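The key idea of the hierarchical grid can be sketched in 2D (our own simplified illustration, not MercuryDPM's implementation): every particle is stored at a grid level whose cell size matches its diameter, and each particle scans only its own and coarser levels, so small particles never traverse cells sized for the largest particles.

```python
import numpy as np
from collections import defaultdict

def build_hgrid(positions, radii):
    """Hierarchical grid: each particle goes into the level whose cell
    size 2**lvl is the smallest power of two covering its diameter."""
    levels = defaultdict(lambda: defaultdict(list))
    for i, (p, r) in enumerate(zip(positions, radii)):
        lvl = int(np.ceil(np.log2(2 * r)))
        cell = tuple((p // 2.0**lvl).astype(int))
        levels[lvl][cell].append(i)
    return levels

def overlapping_pairs(positions, radii):
    """Each particle checks the 3x3 cell neighbourhood on its own level
    and on all coarser levels; small-vs-large pairs are found when the
    smaller particle scans the larger particle's level."""
    levels = build_hgrid(positions, radii)
    pairs = set()
    for i, (p, r) in enumerate(zip(positions, radii)):
        own = int(np.ceil(np.log2(2 * r)))
        for lvl in levels:
            if lvl < own:
                continue   # finer levels are covered when scanning from there
            base = (p // 2.0**lvl).astype(int)
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for j in levels[lvl].get((base[0] + dx, base[1] + dy), ()):
                        if j != i and np.linalg.norm(p - positions[j]) < r + radii[j]:
                            pairs.add(frozenset((i, j)))
    return pairs

positions = np.array([[0.0, 0.0], [0.5, 0.0], [10.0, 10.0]])
radii = np.array([0.4, 0.2, 3.0])
found = overlapping_pairs(positions, radii)
```

Because each particle's search volume is proportional to its own size, the cost per particle stays bounded even for very wide size distributions, which is where a single-level linked-cell grid degrades.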
Secondly, it uses coarse-graining, a novel way to extract continuum fields from discrete
particle systems. Coarse-graining ensures by definition that the resulting continuum fields
conserve mass, momentum and energy, a crucial requirement in continuum modelling. The
approach is flexible and has been applied to model both bulk and mixtures, boundaries and
interfaces, time-dependent, steady and static situations. It is available in MercuryDPM either
as a post-processing tool, or it can be run in real-time, e.g. to define pressure-controlled walls.
We illustrate these tools and a selection of other MercuryDPM features via various ap-
plications, including size-driven segregation in chute flows, rotating drums, and dosing silos.
For more information about MercuryDPM please visit http://MercuryDPM.org; training
and consultancy packages are available via our spin-off company MercuryLab (http://MercuryLab.org).

References
1. T. Weinhart, D. R. Tunuguntla, M. P. van Schrojenstein Lantman, A. van der Horn, I. F. C. Denissen, C. R. K. Windows-Yule, A. C. de Jong and A. R. Thornton. MercuryDPM: A Fast and Flexible Particle Solver Part A: Technical Advances. International Conference on Discrete Element Methods (2016) 1353-1360.

169
Simulation of Chloride Migration in Cracked Concrete

Pavel Trávnı́ček, Jiřı́ Němeček, Jaroslav Kruis, Tomáš Koudelka


Czech Technical University in Prague
pavel.travnicek@fsv.cvut.cz, jiri.nemecek@fsv.cvut.cz, kruis@fsv.cvut.cz,
tomas.koudelka@fsv.cvut.cz

Abstract
One of the key parameters influencing the durability of reinforced concrete structures is the increased concrete permeability caused by cracking. In largely cracked concrete, the diffusion of chlorides approaches the diffusion in free water, which can be further increased by convection currents due to the effect of temperature and/or a small hydraulic pressure [1].
This contribution is devoted to the numerical solution of a two-dimensional diffusion-convection problem applied to chloride migration in cracked concrete. In the numerical solution, chlorides are transported via two main mechanisms: diffusion (Fick's law), describing natural conditions, and electrical migration (the Nernst-Planck equation), modeling remediation of reinforced concrete by applying an external electric current [2]. The solution of the diffusion-convection problem is based on FEM and is implemented in an in-house open-source software. The effect of cracking in the mechanical model is coupled with the diffusion-convection model through an effective diffusivity that is based on the level of concrete damage. The concept of an isotropic damage model is used, where the scalar damage parameter evolves based on the effective mechanical strain and the concrete fracture energy. The damage parameter is converted to an equivalent crack width, and an equivalent crack volume is calculated for each finite element. Chloride diffusion is defined separately for cracked and sound concrete using material parameters such as the degree of saturation of cracks, crack constrictivity and tortuosity, and porosity. Diffusivity in the cracked part is formulated as a function of two predominant factors, i.e., the volume fraction of cracks and a parameter representing the crack width. Using this approach, both single and multiple cracks can be modeled.
The analysis is presented on a practical example of a reinforced beam, analysed in three stages. Firstly, a crack is created in the beam by an external mechanical load and the distribution of concrete damage is calculated. This is followed by natural chloride ion diffusion into the damaged beam, showing increased diffusion in the damaged parts. Finally, an extraction of chloride ions using an external electrical field (modeled by the Gauss law of electrostatics, [2]) is applied. The efficiency of the remediation technique is shown for both sound and cracked areas of the beam.
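The role of the damage-dependent effective diffusivity can be illustrated with a one-dimensional explicit finite-difference toy (our own simplification of the 2D FEM model; all parameter values, including the sound and cracked diffusivities, are invented for illustration):

```python
import numpy as np

def chloride_profile(n=100, L=0.1, days=30, D_sound=1e-11, D_crack=1e-9,
                     damage=None):
    """Explicit 1D finite-difference solution of Fick diffusion with an
    effective diffusivity interpolated between sound and cracked concrete
    by a scalar damage parameter omega in [0, 1]."""
    dx = L / n
    omega = np.zeros(n) if damage is None else damage
    D = D_sound + omega * (D_crack - D_sound)     # effective diffusivity
    dt = 0.4 * dx**2 / D.max()                    # explicit stability limit
    c = np.zeros(n)
    c[0] = 1.0                                    # chloride-exposed surface
    for _ in range(int(days * 86400 / dt)):
        flux = D[:-1] * np.diff(c) / dx
        c[1:-1] += dt / dx * np.diff(flux)
        c[0] = 1.0
    return c

sound = chloride_profile()
dmg = np.zeros(100); dmg[:20] = 0.8               # cracked zone near the surface
cracked = chloride_profile(damage=dmg)
# Damage raises the effective diffusivity, so chlorides penetrate deeper
print(cracked[30] > sound[30])   # prints True
```

In the full model the same coupling acts element-wise: the damage parameter computed by the mechanical model sets the effective diffusivity used by the transport solver on each finite element.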

References
1. K. Maekawa and T. Ishida and T. Kishi. Multi-Scale Modeling of Structural Concrete. Taylor &
Francis, (2008) 291-325.
2. J. Němeček and J. Kruis and T. Koudelka and T. Krejčı́. Simulation of chloride migration in
reinforced concrete. Applied Mathematics and Computation 319 (2018), 575–585.

170
Finite Volume Methods for Numerical Simulation of the
Discharge Motion Described by Different Physical Models

Jaroslav Fort, David Trdlicka


FME at CTU in Prague, Karlovo namesti 13, 121 35 Prague, Czech Republic
jaroslav.fort@fs.cvut.cz, david.trdlicka@fs.cvut.cz

Fayssal Benkhaldoun, Jean-Baptiste Montavon


LAGA, University Paris 13, 99 Av. J. B. Clement, 93430 Villetaneuse, France
fayssal@math.univ-paris13.fr, jean-baptiste.montavon@ens-cachan.fr

Abstract

The model for discharge simulation consists of the set of transport equations for charged
particles coupled with the elliptic equations for electric field. Values of unknowns change by
many orders of magnitude during discharge propagation resulting in very steep gradients
of particle density at the head of discharge. The head of discharge is a very small moving
region, therefore the dynamic grid adaptation is an essential tool, which makes a numerical
simulation of such phenomenon possible. The finite volume method [1] originally developed
for the simplest model of discharge propagation (the so-called 2D minimal model) has been
extended to more general and physically more relevant models. The photoionization
phenomenon (modelled by six additional PDEs) and complex "physical" boundary conditions
for charged particles on the electrodes are now considered. We present results of numerical
simulations of more complex problems from the point of view of discharge structure, such as
the interaction of the discharge with a conductive electrode. We also compare different
modifications of the numerical algorithm.
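The essential coupling in the minimal model — transport of charged particles in a field obtained from Gauss's law — can be illustrated by a 1-D toy sketch. This is not the adaptive 2-D scheme of [1]; the grid, boundary treatment and constant drift velocity are simplifying assumptions:

```python
import numpy as np

def gauss_field(rho_net, dx, eps0=8.854e-12, E_left=0.0):
    """Integrate Gauss's law dE/dx = rho/eps0 from the left electrode (1-D)."""
    return E_left + np.cumsum(rho_net) * dx / eps0

def upwind_step(n, v, dx, dt):
    """One explicit first-order upwind finite-volume step for
    dn/dt + d(v n)/dx = 0 with v > 0 and zero inflow at the left boundary."""
    flux = v * n                                  # flux at each cell's right face
    inflow = np.concatenate(([0.0], flux[:-1]))   # flux entering from the left
    return n - dt / dx * (flux - inflow)
```

In the actual model the drift velocity depends on the local field, several species are transported, and dynamic grid adaptation resolves the steep front; the sketch only shows the field-transport loop structure.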

References
1. F. Benkhaldoun and J. Fort and K. Hassouni and J. Karel. Simulation of planar ionization wave
front propagation on an unstructured adaptive grid. Journal of Computational and Applied Mathematics
236 (2012) 4623–4634.
2. M. Duarte and Z. Bonaventura and M. Massot and A. Bourdon. A numerical strategy to discretize
and solve the Poisson equation on dynamically adapted multiresolution grids for time-dependent streamer
discharge simulations. Journal of Computational Physics, 289, 129-148 (2015).

171
Constrained Derivative-free Optimization of a Two Shaft
Generic Turbofan Engine

Anke Tröltzsch, Martin Siggel


German Aerospace Center (DLR), SC-HPC
anke.troeltzsch@dlr.de, martin.siggel@dlr.de

Richard-Gregor Becker
German Aerospace Center (DLR), AT-TWK
richard.becker@dlr.de

Abstract

We would like to present our software package ECDFO (Equality-Constrained Derivative-


Free Optimization) which applies a model-based trust-region SQP algorithm. ECDFO was
extended to handle bound constraints as this is essential if we want to apply the optimizer
to real-life applications. Derivative-free optimization algorithms are widely used in practice
for several reasons: the explicit evaluation of the derivatives may be impossible, very time-
consuming or very inexact. The algorithm ECDFO has shown competitive performance on
analytical test problems, compared to other publicly available derivative-free optimization
software packages. Here, the software package ECDFO is applied to the optimization of an
aero engine performance model of a two shaft generic turbofan engine. The objective is to
minimize the thrust specific fuel consumption with respect to several thermodynamic design
parameters, subject to several bound and equality constraints. In order to provide some
insight into the optimization problem, results of a parametric study on the problem conducted
prior to the optimization are presented briefly. The simulation of the aero engine performance
model is performed by the simulation code GTlab-Performance, developed at the German
Aerospace Center (DLR). The optimization package ECDFO is compared to the two publicly
available optimization codes ALGENCAN (Augmented Lagrangian Line Search Method using
Finite Differences Gradients) and ALPSO (Augmented Lagrange Multiplier Particle Swarm
Optimizer as a purely derivative-free optimization method). Convergence histories of the
objective function and the non-linear constraints are presented.
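The problem class — derivative-free minimization under bound and equality constraints — can be illustrated with a generic sketch. Note this is a simple compass (pattern) search with a quadratic penalty, chosen for brevity; it is not ECDFO's model-based trust-region SQP algorithm:

```python
def compass_search(f, x0, lower, upper, step=0.5, tol=1e-8, max_sweeps=5000):
    """Derivative-free compass search with bound constraints via clipping."""
    clip = lambda v, lo, hi: min(max(v, lo), hi)
    x = [clip(v, lo, hi) for v, lo, hi in zip(x0, lower, upper)]
    fx = f(x)
    for _ in range(max_sweeps):
        improved = False
        for i in range(len(x)):              # poll along each coordinate
            for s in (step, -step):
                y = list(x)
                y[i] = clip(y[i] + s, lower[i], upper[i])
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                      # shrink the pattern
            if step < tol:
                break
    return x, fx

# Equality constraint x0 + x1 = 1 handled here by a quadratic penalty
# (ECDFO itself treats equality constraints directly in an SQP framework).
mu = 100.0
obj = lambda x: x[0]**2 + x[1]**2 + mu * (x[0] + x[1] - 1.0)**2
xopt, fopt = compass_search(obj, [0.9, 0.1], [0.0, 0.0], [1.0, 1.0])
```

The penalty approach only satisfies the constraint approximately (here to within O(1/mu)), which is one motivation for the SQP treatment of equality constraints in ECDFO.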

References
1. A. Tröltzsch. A Sequential Quadratic Programming Algorithm for Equality-Constrained Optimization
without Derivatives. Optimization Letters 10 (2), 383-399, Springer.

172
Modelling of Quasistationary Ionic Transport in Fluid
Saturated Deformable Porous Media

Jana Turjanicová, Eduard Rohan


University of West Bohemia
turjani@ntis.zcu.cz, rohan@kme.zcu.cz

Thibault Lemaire, Salah Naili


Université Paris-Est Créteil Val de Marne
thibault.lemaire@univ-paris-est.fr, salah.naili@univ-paris-est.fr

Abstract
Ionic transport in charged porous media is a problem widely studied across many
applications. It is often considered in the context of geosciences, where it describes
the swelling of clays. Other important applications are in fuel cell research and the
modelling of biological tissues.
This work explores the available mathematical models, describing ionic transport through
fluid saturated porous media with the deformable solid phase charged by small electrical
charge, [1], [2]. We focus on the homogenization of the microstructure constituted by elastic
solid skeleton and two-component electrolyte filling the pores so that a specific geometrical
arrangement is taken into account. Electrochemical phenomena occurring due to the electric
double layer which is formed by interaction between charged solid-fluid interface and ionized
solution are considered. Since the porous medium is deformable, there is a tight coupling be-
tween the mechanical response of the porous media and the ionic transport in the pore fluid
due to the convection-diffusion influenced by the electrochemical phenomena. The mathemat-
ical model describing mechanical and electrochemical interactions at the microscopic level is
treated by means of the homogenization yielding the local microscopic problems to be solved
for characteristic responses in the representative periodic cell. By virtue of these character-
istic responses, the resulting upscaled model respects material microstructure with stronger
coupling between the electrokinetic system and the poroelasticity. The upscaling procedure is
then implemented in the in-house developed FEM-based software SfePy [3], and the macroscopic
behavior of the homogenized body is illustrated by numerical simulations.
The research is supported by project GACR 16-03823S and in part by project LO 1506
of the Czech Ministry of Education, Youth and Sports.

References
1. G. Allaire and O. Bernard and J.-F. Dufrêche and A. Mikelić. Ion transport
through deformable porous media: derivation of the macroscopic equations using upscaling. Computational
and Applied Mathematics (2015) 1-32.
2. T. Lemaire and J. Kaiser and S. Naili and V. Sansalone. Modelling of the transport in electrically
charged porous media including ionic exchanges. Mechanics Research Communications 37 (2010) 495-499.
3. R. Cimrman. SfePy-write your own FE application. arXiv preprint arXiv:1404.6391 (2014).

173
A Discrete Element Sea-Ice Model for Climate Applications

Adrian Turner
Los Alamos National Laboratory
akt@lanl.gov

Kara Peterson, Dan Bolintineanu, Daniel Ibanez


Sandia National Laboratories
kjpeter@sandia.gov, dsbolin@sandia.gov, daibane@sandia.gov

Andrew Roberts, Travis Davis


Naval Postgraduate School
afrobert@nps.edu, tjdavis1@nps.edu

Abstract

The current sea-ice component of the Department of Energy’s Energy Exascale Earth System
Model (E3SM) approximates the sea-ice cover as a continuous material rather than as a
series of discrete floes and assumes sufficient cracks exist within each grid cell to ensure an
isotropic distribution of crack orientations. Such models were developed for grid resolutions
of ∼100 km, whereas current models, including E3SM, are routinely applying these physics
at much higher model resolutions of ∼5 km. Evidence from both remote sensing and in
situ observations suggests ∼10 km represents a transition scale below which the dynamics
of individual floes dominate the dynamics of sea ice [1]. To correct the deficiencies of the
current E3SM sea ice model, we are developing a new sea ice dynamical core based on the
Discrete Element Method (DEM). In this method collections of floes are explicitly modeled
as discrete elements, contact forces between the elements are determined and equations of
motion for individual elements are integrated in time [2]. This new model uses the Large-scale
Atomic/Molecular Massively Parallel Simulator (LAMMPS) model for its dynamical core
[3]. Here, we describe the development of the model, including the development of element
contact models suitable for sea ice at climate scales, attempts to improve model performance
by using the Kokkos framework to allow efficient computation on heterogeneous computing
architectures, and progress on methodologies to ameliorate the effect of element distortion
during deformation.
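The basic DEM ingredient named above — a contact force between discrete elements — can be sketched for two circular floes with a linear normal spring. This is a minimal stand-in with assumed stiffness; the contact models suitable for sea ice at climate scales are precisely what the talk develops:

```python
import math

def contact_force(x1, x2, r1, r2, k=1.0e5):
    """Force on element 1 from a linear-spring normal contact with element 2
    (circular elements in 2-D, stiffness k; no tangential or damping terms)."""
    dx, dy = x2[0] - x1[0], x2[1] - x1[1]
    dist = math.hypot(dx, dy)
    overlap = (r1 + r2) - dist
    if dist == 0.0 or overlap <= 0.0:
        return (0.0, 0.0)                  # elements not in contact
    nx, ny = dx / dist, dy / dist          # unit normal from 1 toward 2
    f = k * overlap                        # repulsive spring magnitude
    return (-f * nx, -f * ny)
```

In a full DEM time step, such pairwise forces are accumulated for all contacts and the equations of motion of each element are then integrated explicitly, as in [2].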

References
1. S. L. McNutt and J. E. Overland. Spatial hierarchy in Arctic sea ice dynamics. Tellus A, 55, (2003),
181-191.
2. P. A. Cundall and O. D. L. Strack. A discrete numerical model for granular assemblies. Geotechnique,
29, (1979), 47-65.
3. S. Plimpton. Fast parallel algorithms for short-range molecular dynamics. J. Comp. Phys., 117, (1995),
1-19.

174
Two-level Schemes for Solving Transient Problems of
Barotropic Fluid

Petr Vabishchevich
Nuclear Safety Institute of RAS, Moscow, Russia
vabishchevich@gmail.com

Abstract

Transient flows of a compressible inviscid fluid are simulated [1]. The Euler equations system
includes the scalar advection equation for the fluid density and the vector advection equation
for velocity. It is assumed that the pressure depends on the density in a power-law manner.
A new possibility in using the advection-reaction equation for the pressure instead of the
standard equation of state is highlighted. To construct the approximation in space, standard
Lagrange finite elements [2] are used for the density and Cartesian velocity components. The
focus is on constructing discretization in time [3]. For implicit approximations, the property
of mass conservation is exactly inherited whereas the property of the total mechanical energy
conservation is fulfilled only approximately. A nonlinear discrete problem at a new time
level is solved numerically using Newton’s iteration method. An iterative linearized scheme is
constructed. At each iteration, the linearization is carried out over the convective transport
field. Such schemes belong to the class of decoupling schemes, where splitting with respect to
physical processes is implemented. Namely, individual problems for the density and velocity
are solved separately. The theoretical analysis is supplemented by the results of numerical
experiments. Time-evolution of a 2D layer of a compressible fluid being initially at rest is
predicted after a density perturbation in a square computational domain.
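The structure of the nonlinear solve at each new time level can be shown on a scalar toy problem. The power nonlinearity mimics the power-law pressure p ~ ρ^γ; this is only an analogue of the Newton iteration described above, not the actual coupled density-velocity system:

```python
def implicit_euler_step(u_old, dt, gamma=1.4, tol=1e-12, max_iter=50):
    """Newton solve of the implicit Euler level for du/dt = -u**gamma,
    i.e. the root of g(u) = u - u_old + dt*u**gamma."""
    u = u_old                                   # previous level as initial guess
    for _ in range(max_iter):
        g = u - u_old + dt * u**gamma
        dg = 1.0 + dt * gamma * u**(gamma - 1.0)  # exact Jacobian of g
        step = g / dg
        u -= step
        if abs(step) < tol:
            break
    return u
```

In the talk's decoupling schemes the analogous linearization is performed over the convective transport field, so that the density and velocity subproblems can be solved separately at each iteration.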

References
1. E. Feireisl and T.G. Karper and M. Pokorny. Mathematical Theory of Compressible Viscous Fluids:
Analysis and Numerics. Springer, 2016.
2. S.C. Brenner and L.R. Scott. The Mathematical Theory of Finite Element Methods. Springer, 2008.
3. V. Thomee. Galerkin Finite Element Methods for Parabolic Problems. Springer, 2006.

175
Estimation of Parameters by Theory of Inverse Problems and
Search Metaheuristics for the Inversion of the Zoeppritz
Equations

Gerardo Alfredo Vargas-Contreras


Universidad Autonoma de Nuevo León, Facultad de Ciencias Fı́sico Matemáticas
galvac25@gmail.com

Abstract

The recognition of materials and structures inside the Earth is a problem that has always
been present in both science and industry. Seismics is a branch of geophysics that helps
in the characterization and analysis of the Earth's crust. In addition to the general
information that seismic surveys provide, particular methodologies are used to analyse
certain physical properties of the subsoil materials; these are known as seismic attributes.
AVO (Amplitude Versus Offset) is a seismic attribute that seeks to quantify physical
parameters of materials in the Earth's structure for the purpose of easy classification.
It is based on the Zoeppritz equations, which directly describe the relation between the
amplitudes of the reflected and transmitted waves at an interface between two media in
terms of three physical properties: the velocities at which the P and S waves travel
through the two media and their densities. Using evolutionary programming, the theory of
inverse problems and the a priori information of a controlled experiment, we seek to solve
the Zoeppritz equations as an ill-posed problem in order to estimate the geophysical
parameters of the different seismic bodies of a proposed model.
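The inversion loop can be sketched with an evolutionary-programming fit of synthetic AVO data. For brevity the sketch uses the two-term Shuey linearization of the reflection coefficient rather than the full Zoeppritz equations, and the population parameters are arbitrary choices:

```python
import math
import random

def shuey(R0, G, theta):
    """Two-term Shuey approximation of the PP reflection coefficient."""
    return R0 + G * math.sin(theta) ** 2

def misfit(params, angles, data):
    """Least-squares data misfit of the forward model."""
    return sum((shuey(params[0], params[1], t) - d) ** 2
               for t, d in zip(angles, data))

def evolve(angles, data, pop_size=30, generations=200, sigma=0.05, seed=1):
    """(mu + mu) evolutionary programming: Gaussian mutation, truncation selection."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)]
           for _ in range(pop_size)]
    for _ in range(generations):
        children = [[p + rng.gauss(0.0, sigma) for p in ind] for ind in pop]
        pop = sorted(pop + children,
                     key=lambda ind: misfit(ind, angles, data))[:pop_size]
    return pop[0]

angles = [math.radians(a) for a in range(0, 35, 5)]
data = [shuey(0.1, -0.2, t) for t in angles]   # synthetic "observations"
best = evolve(angles, data)
```

Because the inverse problem is ill-posed, in practice the misfit would be regularized with the a priori information from the controlled experiment mentioned above.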

References
1. Xin Yao and Yong Liu and Guangming Lin. Evolutionary programming made faster. Computational
Intelligence Group, School of Computer Science.
2. E. D. Mustafa. Estimation of Seismic Parameters from Multifold Reflection Seismic
Data by Generalized Linear Inversion of Zoeppritz Equations. University Microfilms International.

176
A Computational CSP Model for Generation Assessment

Felipe Dı́az, Daniel Santana


University Institute for Intelligent Systems and Numerical Applications in Engineering.
ULPGC
felipe.diaz@ulpgc.es, danitegue@gmail.com

Abstract

A computational Concentrating Solar Power (CSP) model is presented. In other works of our
team, a solar radiation numerical model was developed [1] starting from the works of [2], and
considering the terrain surface through 2-D adaptive meshes of triangles which are constructed
using a refinement/derefinement procedure in accordance with the variations of terrain surface
and albedo. The effect of shadows is considered in each time step. Once the solar resource is
modelled, the need arises to assess its possible applications. Solar thermal power is
one of the most promising options to reduce the consumption of fossil fuels. In this context,
the aim of this work is the implementation of a CSP computational model, which allows the
user to assess the electrical energy production to be injected into a power system. Given
the variability of the primary source, this could be a very interesting tool for good power
system management when accompanied by a good solar forecasting tool. Within
the different technologies available in this area, we have started modeling the most developed
and contrasted technology at a technical level, that is, parabolic troughs. In fact, the work
has been done with the Andasol (Andalusia, Spain) and the SEGS (California, USA)
power plants in mind. The operating strategy is based on solar tracking to maximize energy
efficiency. We need to establish, at every instant, the optimum slope of the collector to
minimize the incidence angle between the solar vector and the vector normal to the collector
opening, δexp,col . The one-axis solar tracking is managed through two reference systems, one
of them in the collector bench. Moreover, a parabolic trough solar field thermal model and
a steam turbine model are involved. Considering [3] and [4], the power block is assumed to be
a regenerative Rankine cycle with intermediate heating. The resolution of the thermodynamic
cycle, once the parameters are defined, is done through an iterative process. The whole
problem has been tested for Gran Canaria Island (Canary Islands - Spain).
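The one-axis tracking rule described above — choose the collector rotation that minimises the incidence angle between the solar vector and the aperture normal — can be sketched as follows, assuming a horizontal north-south axis and a unit solar vector in (east, north, up) coordinates (a common convention, not necessarily the paper's reference systems):

```python
import math

def trough_tracking_angle(sun):
    """One-axis tracking for a parabolic trough with a horizontal N-S axis.

    sun: unit solar vector (east, north, up). Returns the collector rotation
    rho (radians, about the N-S axis) that minimises the incidence angle,
    and the residual incidence angle caused by the along-axis component."""
    s_e, s_n, s_u = sun
    # Align the aperture normal with the sun's projection onto the east-up plane.
    rho = math.atan2(s_e, s_u)
    # cos(theta) = |projection|; the north component cannot be tracked out.
    cos_theta = min(1.0, math.sqrt(s_e ** 2 + s_u ** 2))
    return rho, math.acos(cos_theta)
```

The residual incidence angle vanishes only when the sun lies in the east-up plane; otherwise the along-axis component of the solar vector sets a lower bound that one-axis tracking cannot remove.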

References
1. F. DÍAZ and G. MONTERO and J.M. ESCOBAR and E. RODRÍGUEZ and R. MONTENEGRO.
An Adaptive Solar Radiation Numerical Model. Journal of Computational and Applied Mathematics 236
(18) (2012), pp. 4611–4622.
2. M. ŠÚRI and J. HOFIERKA. A New GIS-based Solar Radiation Model and its application to photo-
voltaic assessments. Transactions in GIS 8 (2) (2004), pp. 175–190.
3. V. DUDLEY. Test Results: SEGS LS-2 Solar Collector. Sandia (1994).
4. M.M. ROLIM and N. FRAIDENRAICH and C. TIBA. Analytic modeling of a solar power plant with
parabolic linear collectors. Solar Energy 83 (2009), pp. 126–133.

177
Multiobjective Optimisation of a Wave Energy Farm

Isabel Villalba, David Greiner, Felipe Dı́az


University Institute for Intelligent Systems and Numerical Applications in Engineering.
University of Las Palmas de Gran Canaria
isabel.villalba101@alu.ulpgc.es, david.greiner@ulpgc.es,
felipe.diaz@ulpgc.es

Marcos Blanco, Marcos Lafoz


Unit of Electric Power Systems, CIEMAT
marcos.blanco@ciemat.es, marcos.lafoz@ciemat.es

Juan I. Pérez
E.T.S.I.C.C.P., Polytechnic University of Madrid
ji.perez@upm.es

Abstract
This paper presents the optimisation of a wave energy farm configuration and its application
in electrical network stability analysis. The influence of the spatial distribution on a wave
energy farm's power output has been demonstrated. On the other hand, compliance with the
limit values imposed by the European Network of Transmission System Operators for
Electricity (ENTSO-E) for the stability of power systems is a problem to be solved by electrical generation devices with renewable
sources. The aim of the work is to design a farm configuration that simultaneously maximises the
output of electric power and minimises the frequency excursions. Taking this into account,
a multiobjective optimisation using evolutionary algorithms has been applied [1]. Hydrody-
namic and PTO computations have been done using the Boundary Element Method (BEM).
Through the methodology described by [2] for the assessment of the total output power pro-
duced by a wave farm, different spatial wave farm configurations have been tested, obtaining
the total produced power (first maximised fitness function). On the other hand, that electrical
power is injected in a weak and isolated power system, where the frequency excursion analysis
has been carried out in order to minimise its value (second fitness function). As a solution we
have obtained a non-dominated set of optimal wave farm configurations.
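Extracting the non-dominated set mentioned above amounts to a standard Pareto filter. A minimal sketch for the two objectives of this work (maximise power, minimise frequency excursion) follows; the tuples are illustrative placeholders for farm configurations:

```python
def non_dominated(points):
    """Pareto filter for points (power, excursion): power is maximised,
    excursion is minimised. Point a is dominated if some b is at least as
    good in both objectives and strictly better in one."""
    front = []
    for a in points:
        dominated = any(
            b[0] >= a[0] and b[1] <= a[1] and (b[0] > a[0] or b[1] < a[1])
            for b in points)
        if not dominated:
            front.append(a)
    return front
```

In the evolutionary algorithm of [1] this dominance test drives the selection, so the final population approximates the whole Pareto front of farm configurations rather than a single compromise.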

References
1. C.A. Coello Coello. Multi-objective Evolutionary Algorithms in Real-World Applications: Some Re-
cent Results and Current Challenges. In: Greiner D., Galván B., Périaux J., Gauger N., Giannakoglou K.,
Winter G. (eds) Advances in Evolutionary and Deterministic Methods for Design, Optimization and Con-
trol in Engineering and Sciences. Computational Methods in Applied Sciences, vol 36. Springer, Cham,
(2015).
2. A. Blavette and T. Kovaltchouk and F. Ronginre and M. Jourdain de Thieulloy and P.
Leahy and B. Multon and H. Ben Ahmed. Influence of the wave dispersion phenomenon on the flicker
generated by a wave farm. Proceedings of the 12th European Wave and Tidal Energy Conference, Cork,
Ireland, 27 Aug - 1 Sept (2017).

178
Treatment of Grounding Line Migration for Efficient Paleo-ice
Sheet Simulations

Lina von Sydow, Gong Cheng, Per Lötstedt


Uppsala University
lina@it.uu.se, cheng.gong@it.uu.se, perl@it.uu.se

Abstract

The full Stokes (FS) model for palaeo-ice sheet simulations has previously been highly impractical
due to the high computational cost. One way to lower this cost is to use approximations
such as the Shallow Shelf Approximation (SSA) and/or the Shallow Ice Approximation
(SIA), possibly coupling such approximations in some regions with FS in others. In order to
capture the important grounding line migration a FS model is required in a region around
the grounding line.
We propose and implement a new sub-grid method for grounding line migration in a FS
model with a fixed mesh. A key feature of this approach is that remeshing is avoided when the
grounding line moves through the computational mesh. A new boundary condition is introduced
to accommodate the discontinuity in the physical and numerical model. The method is implemented
in Elmer/ICE using the finite element method. We will present convergence results of the
sub-grid method as the mesh is refined.

References
1. G. Cheng and P. Lötstedt and L. von Sydow. Efficient numerical ice-sheet simulations over long
time spans. Draft report.

179
Adaptive Markov Chain Monte Carlo Methods in Infinite
Dimensions

Jonas Wallin, Sreekar Vadlamani


Lund University
jonas.wallin81@gmail.com, sreekar.vadlamani@gmail.com

Abstract

Markov Chain Monte Carlo (MCMC) algorithms are nowadays the standard method for
sampling from a generic density. In recent years much attention has been devoted to MCMC
for infinite dimensional problems, especially inverse problems. A new set of algorithms,
such as the Crank–Nicolson (CN) random walk, has been developed; these are needed since
the classical MCMC algorithms have been proven to break down in infinite dimensions. In
our work we focus on adaptive versions of the CN algorithm, which means that we learn the
parameters of the algorithm while running it, i.e. online learning. Adaptive MCMC
(AMCMC) is well established for finite dimensional problems; however, the standard
adaptation methods also break down in the infinite dimensional setting. We develop
modifications of them so that they work in the infinite dimensional setting. Finally, we
show that our algorithm outperforms state-of-the-art algorithms on the tested data sets.
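The preconditioned Crank–Nicolson proposal, and the kind of on-line tuning the abstract refers to, can be sketched as follows. The naive acceptance-rate adaptation below is a simplified stand-in — the dimension-robust adaptation analysed in the talk is more sophisticated — and the prior is assumed to be a standard Gaussian:

```python
import math
import random

def adaptive_pcn(phi, dim, n_steps=2000, target=0.23, seed=0):
    """pCN random walk x' = sqrt(1-beta^2) x + beta xi with adapted beta.

    phi: negative log-likelihood; the reference prior is N(0, I), so the
    acceptance ratio involves only phi and the proposal is well defined
    independently of the discretization dimension."""
    rng = random.Random(seed)
    x = [0.0] * dim
    beta, accepted = 0.5, 0
    for n in range(1, n_steps + 1):
        prop = [math.sqrt(1.0 - beta ** 2) * xi + beta * rng.gauss(0.0, 1.0)
                for xi in x]
        # Accept with probability min(1, exp(phi(x) - phi(x'))).
        if math.log(rng.random() + 1e-300) < phi(x) - phi(prop):
            x, accepted = prop, accepted + 1
        # Robbins-Monro style tuning of the jump parameter toward the target rate.
        beta = min(0.999, max(1e-3,
                   beta * math.exp((accepted / n - target) / math.sqrt(n))))
    return x, beta, accepted / n_steps
```

The key point is that the proposal mixes toward the prior rather than performing a plain random walk, which is why it remains valid as the dimension grows; only the scalar jump parameter is adapted.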

References
1. J. Wallin and S. Vadlamani. Adaptive Markov Chain Monte Carlo methods in infinite dimensions.
preprint.

180
MercuryCG - From Discrete Particles to Continuum Fields

Thomas Weinhart
University Of Twente
t.weinhart@utwente.nl

Abstract

Micro–macro transition methods are used to both calibrate and validate continuum models
from discrete data, obtained from either experiments or simulations. Such methods generate
continuum fields such as density, momentum, stress, etc., from discrete data, i.e. positions,
velocities, orientations and forces of individual elements. Performing this micro–macro transition
step is especially challenging for heterogeneous and dynamic situations.
Here, we present a mapping technique, called coarse-graining, to perform this transition.
This novel method has several advantages: by construction the obtained macroscopic fields
are consistent with the continuum equations of mass, momentum and energy balance. Ad-
ditionally, boundary interaction forces can be taken into account in a self-consistent way
and thus allow for the construction of locally accurate stress fields even within one element
radius of the boundaries. Similarly, stress and drag forces can be determined for individual
constituents, which is critical for several continuum applications, e.g. mixture theory-based
segregation models. Moreover, the method does not require ensemble-averaging and thus can
be efficiently exploited to investigate static, steady and time-dependent flows. The method
presented in this paper is valid for any discrete data, e.g. particle simulations, molecular
dynamics, experimental data, etc.; however, for the purpose of illustration we consider data
generated from discrete particle simulations of granular mixtures flowing over rough inclined
channels. We show how to practically use our coarse-graining extension for both steady and
unsteady flows using our open-source coarse-graining tool MercuryCG. The tool is available
as a part of an efficient discrete particle solver MercuryDPM (www.MercuryDPM.org).
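The core of the coarse-graining map can be shown for the simplest field, a 1-D mass density built from Gaussian kernels around particle positions. This scalar sketch is only an analogue of MercuryCG's fields — the tool also constructs momentum and stress and treats boundary forces self-consistently:

```python
import numpy as np

def cg_density(grid, positions, masses, w):
    """Coarse-grained 1-D mass density: each particle contributes a
    normalised Gaussian kernel of width w, evaluated at the grid points."""
    pos = np.asarray(positions, dtype=float)
    diff = grid[:, None] - pos[None, :]                       # grid x particles
    kernels = np.exp(-0.5 * (diff / w) ** 2) / (np.sqrt(2.0 * np.pi) * w)
    return kernels @ np.asarray(masses, dtype=float)
```

Because each kernel integrates to one, the field conserves the total mass by construction — the same consistency property the abstract highlights for the full set of balance equations.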

References
1. DR Tunuguntla and AR Thornton and T Weinhart. From discrete elements to continuum fields:
Extension to bidisperse systems. Computational Particle Mechanics 3(3), 349-365 (2016).

181
Discrete Element Simulation Based Investigation Into
Statistical Inference Problems for SAG Mill Operations

Daniel N. Wilke
University of Pretoria, South Africa
nico.wilke@up.ac.za

Nicolin Govender
University of Surrey, United Kingdom
govender.nicolin@gmail.com

Raj K. Rajamani
University of Utah, United States of America
rajkrajamani@gmail.com

Patrick Pizette
IMT Lille Douai, France
patrick.pizette@imt-lille-douai.fr

Abstract

As statistical inference tools have matured over the last five decades, scientists and engineers
have been embracing a statistical approach towards problem solving, rather than the con-
ventional deterministic approach. In particular, inference and characterisation problems in
engineering have matured considerably in this regard with the regular availability of Krig-
ing, Inverse Regression and Artificial Neural Networks (ANNs) to name a few strategies [2].
Instead of resolving a single value for a variable, the underlying probability distribution of
the variable is estimated. In this study, we consider statistical inference problems for the
comminution application of SAG mill operations when only sparsely sampled data is available.
Comminution is a large consumer of the total energy used on planet Earth, with estimates of up to 10
percent. Proper inference models may assist in improving mining operations, which may lead
to significant energy savings. In turn, discrete element simulations remain costly even with the
utilisation of general purpose graphical processing units (GPGPUs) [1]. Consequently, data
can only be sparsely sampled. This study explores potential statistical inference problems in
SAG mill operations when limited time series operation data is available.
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the
Titan X Pascal GPU used for this research.

References
1. N. Govender and D.N. Wilke and S. Kok and R. Els. Development of a convex polyhedral discrete
element simulation framework for NVIDIA Kepler based GPUs. J. Comp Appl. Math. 270(2014) 386-400.
2. W. Härdle and L. Simar. Applied Multivariate Statistical Analysis. Springer Verlag, 2003.

182
A Geometric Partitioning Scheme for the Direct Parallel
Solution of Steady CFD Problems on Staggered Grids

Sven Baars, Mark van der Klok, Fred Wubs


University of Groningen
S.Baars@rug.nl, mlvanderklok.jr@gmail.com, f.w.wubs@rug.nl

Jonas Thies
German Aerospace Center (DLR)
Jonas.Thies@dlr.de

Abstract

When solving stationary flow problems, it is often advantageous to use Newton-like methods
rather than performing long time integrations. In such an approach, linear systems with the
large and sparse Jacobian matrix have to be solved. In the case of some classical structured-
grid finite volume schemes (so-called Arakawa grids), the Jacobian of the incompressible
Navier-Stokes equations is a special type of saddle point matrix. Using sparse direct solvers,
one can use fill-reducing orderings to speed up the factorization process, e.g., PT-Scotch [1].
In general, such a black box approach cannot exploit the special properties of the saddle
point system, in this case that the Jacobian is related to a stable solution of the Navier-Stokes
equations. In [2] a fill-reducing ordering technique was proposed, which leads to a stable
factorization without the need of pivoting during the numerical factorization. In this work,
we turn to parallel sparse direct solvers and examine the effect of the partition shapes on
the efficiency of such methods. A novel geometric partitioning is proposed that leads to
a reduction of the size of the Schur complement compared to Cartesian or octree-based
partitioning. The new ordering may also be beneficial for preconditioning techniques based
on domain-decomposition.
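The effect of partition shape on the Schur complement can be illustrated with a toy edge-cut count on a structured grid. This is only a proxy comparison of strip-shaped versus square parts — not the novel geometric partitioning proposed in the talk:

```python
def cut_edges(n, part_of):
    """Edges of the n-by-n grid graph cut by a partition: a common proxy
    for the number of interface unknowns in the Schur complement."""
    cut = 0
    for i in range(n):
        for j in range(n):
            if i + 1 < n and part_of(i, j) != part_of(i + 1, j):
                cut += 1
            if j + 1 < n and part_of(i, j) != part_of(i, j + 1):
                cut += 1
    return cut

n, p, k = 64, 16, 4                # 16 parts: 16 strips vs a 4x4 block grid
strips = cut_edges(n, lambda i, j: i * p // n)
blocks = cut_edges(n, lambda i, j: (i * k // n) * k + (j * k // n))
```

Square parts cut far fewer edges than strips for the same part count, which is why partition shape, and not only balance, matters for the size of the interface system.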

References
1. C. Chevalier and F. Pellegrini. PT-SCOTCH: a tool for efficient parallel graph ordering. Parallel
Computing 34 (2008), 318-331.
2. A.C. de Niet and F.W. Wubs. Numerically stable LDLT-factorization of F-type saddle point matrices.
IMA Journal of Numerical Analysis, 29 (2009) 208-234.

183
How to Recognize Anomalous Diffusion?

Agnieszka Wylomanska
Wroclaw University of Science and Technology
agnieszka.wylomanska@pwr.edu.pl

Abstract

Anomalous diffusion in crowded fluids, e.g., in the cytoplasm of living cells, is a frequent
phenomenon. A common tool by which the anomalous diffusion of a single particle can be
classified is the time-averaged mean square displacement (TAMSD). However, other statistics
can also be useful in this problem. The validation of anomalous diffusion processes for
single-particle tracking data is of great interest to experimentalists. In this presentation
we demonstrate statistical methods useful in recognizing the anomalous diffusion property.
One example is a rigorous statistical test based on the TAMSD for a classical anomalous
diffusion process, namely fractional Brownian motion (FBM); another is a visual test for
recognizing light- and heavy-tailed behaviour. We also demonstrate the role of the
codifference, a general measure of dependence adequate for processes with infinite variance,
in detecting the anomalous diffusion property.
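The TAMSD statistic and the scaling exponent estimated from it can be sketched directly from the standard definitions (the rigorous test of [1] builds a full distributional test on top of this statistic):

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean square displacement of a 1-D trajectory at a lag."""
    x = np.asarray(x, dtype=float)
    return np.mean((x[lag:] - x[:-lag]) ** 2)

def scaling_exponent(x, lags=(1, 2, 4, 8, 16)):
    """Least-squares slope of log TAMSD vs log lag: ~1 for normal diffusion,
    <1 for subdiffusion, >1 for superdiffusion."""
    msd = [tamsd(x, lag) for lag in lags]
    return np.polyfit(np.log(lags), np.log(msd), 1)[0]
```

For a purely ballistic trajectory the exponent is exactly 2, while an ordinary Brownian trajectory yields an estimate close to 1; anomalous diffusion manifests as a systematic departure from 1.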

References
1. Sikora Grzegorz and Burnecki Krzysztof and Wylomanska Agnieszka. Mean-squared displace-
ment statistical test for fractional Brownian motion. Phys. Rev. E 95, 032110, 2017.
2. Sikora Grzegorz and Teuerle Marek and Wylomanska Agnieszka and Grebenkov Denis.
Statistical properties of the anomalous scaling exponent estimator based on time averaged mean square
displacement. Phys. Rev. E 96, 022132, 2017.
3. Burnecki Krzysztof and Wylomanska Agnieszka and Aleksei Chechkin. Discriminating between
light- and heavy-tailed distributions with limit theorem. PLoS ONE 10(12): e0145604. doi:10.1371/journal.
pone.0145604, 2015.
4. Burnecki Krzysztof and Wylomańska Agnieszka and Aleksei Beletskii and Vsevolod Gon-
char and Aleksei Chechkin. Recognition of stable distribution with Levy index alpha close to 2. Phys.
Rev. E 85, 056711, 2012 .

184
Efficient Evaluation of Space-Time Boundary Integral
Operators on SIMD Architectures

Jan Zapletal, Michal Merta


IT4Innovations, VŠB – Technical University of Ostrava
jan.zapletal@vsb.cz, michal.merta@vsb.cz

Stefan Dohr, Günther Of


Graz University of Technology
dohr@math.tugraz.at, of@tugraz.at

Abstract

The boundary element method (BEM) has become a well established tool for solving partial
differential equations. An inseparable part of any BEM code is the evaluation of boundary in-
tegral operators including singular integrands. In the talk we present our approach to efficient
implementation of the (semi-)analytic approach and of the regularized tensor Gauss quadra-
ture scheme. Although OpenMP threading has become more or less standard in scientific
codes, vectorization (now also included in OpenMP) is often neglected. We thus concentrate
on optimizations such as data alignment and padding, the array-of-structures to
structure-of-arrays transition, or loop collapsing, leading to optimal utilization of the available vector
processing units.
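The array-of-structures to structure-of-arrays transition mentioned above can be illustrated in a few lines. The sketch uses NumPy to stand in for the C++ data layout work (the actual codes operate on aligned C arrays), and the field names are illustrative:

```python
import numpy as np

# Array-of-structures: one record per quadrature point; accessing a single
# field then means strided loads, which defeats packed SIMD instructions.
aos_dtype = np.dtype([("x", "f8"), ("y", "f8"), ("z", "f8"), ("w", "f8")])

def to_soa(aos):
    """Structure-of-arrays transition: copy each field into its own
    contiguous (and thus vectorizable) array."""
    return {name: np.ascontiguousarray(aos[name]) for name in aos.dtype.names}

def weighted_norms(soa):
    # Every operand is a unit-stride double array, so this kernel maps
    # directly onto packed vector loads and arithmetic.
    return soa["w"] * np.sqrt(soa["x"] ** 2 + soa["y"] ** 2 + soa["z"] ** 2)
```

The same idea, combined with alignment and padding of the per-field arrays, is what lets the quadrature loops in the BEM kernels fill the vector units.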
The model problems used for the verification of the suggested approach include the steady-
state heat (Laplace) equation as well as the evolutionary heat equation. For the latter, a
time-stepping scheme is often used to treat the time and spatial derivatives separately. How-
ever, we employ a space-time boundary integral formulation leading to the discretization of
the whole space-time boundary at once. Although this approach leads to large systems of
linear equations, it naturally exposes more parallelism and is well suited for modern high
performance computing infrastructures.

References
1. J. Zapletal and M. Merta and L. Malý. Boundary element quadrature schemes for multi- and many-
core architectures. Computers & Mathematics with Applications, 2017, 74, 157–173.
2. J. Zapletal and G. Of and M. Merta. Parallel and vectorized implementation of analytic evaluation
of boundary integral operators. Preprint.

185
Algorithmic Patterns for H-matrices on Many-core Processors

Peter Zaspel
University of Basel
peter.zaspel@unibas.ch

Abstract

In this work, we consider the reformulation of hierarchical (H) matrix algorithms for many-
core processors with a model implementation on graphics processing units (GPUs). H ma-
trices approximate specific dense matrices, e.g., from discretized integral equations or kernel
ridge regression, leading to log-linear time complexity in dense matrix-vector products. The
parallelization of H matrix operations on many-core processors is difficult due to the com-
plex nature of the underlying algorithms. While previous algorithmic advances for many-core
hardware focused on accelerating existing H matrix CPU implementations with many-core
processors, here we aim to rely entirely on that processor type. As the main contribution of the
presentation, the necessary parallel algorithmic patterns allowing to map the full H matrix
construction and the fast matrix-vector product to many-core hardware are introduced. Here,
crucial ingredients are space filling curves, parallel tree traversal and batching of linear al-
gebra operations. The resulting model GPU implementation hmglib is the, to the best of
the authors knowledge, first entirely GPU-based Open Source H matrix library of this kind.
The presentation is concluded by an in-depth performance analysis and a comparative per-
formance study against a standard H matrix library, highlighting profound speedups of our
many-core parallel approach.
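To illustrate the space-filling curve ingredient mentioned above (an assumed sketch, not code from hmglib), the following Python example computes 2D Morton (Z-order) codes; sorting points by such codes groups spatially nearby points into contiguous index ranges, which is what makes cluster construction amenable to batched, data-parallel processing:

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of the integer coordinates x and y (Z-order code)."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # even bit positions from x
        code |= ((y >> i) & 1) << (2 * i + 1)   # odd bit positions from y
    return code

# Sorting by Morton code brings spatially close points next to each other,
# so cluster-tree leaves correspond to contiguous slices of the point array.
points = [(3, 1), (0, 0), (1, 1), (2, 2), (1, 0)]
ordered = sorted(points, key=lambda p: morton2d(*p))
print(ordered)  # [(0, 0), (1, 0), (1, 1), (3, 1), (2, 2)]
```

On a GPU, the sort itself and the per-slice work can both be dispatched as bulk-parallel operations, avoiding recursive tree descent on the device.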

References
1. P. Zaspel. Algorithmic patterns for H-matrices on many-core processors. arXiv preprint,
arXiv:1708.09707.

186
Sensitivity Analysis of a Non-Ideal Expanding Flow to
Perturbations of the Design Conditions

Giulio Gori, Pietro Marco Congedo


INRIA Bordeaux - Sud-Ouest, Team Cardamom
giulio.gori@inria.fr, pietro.congedo@inria.fr

Marta Zocca, Alberto Guardone


Department of Aerospace Science & Technology, Politecnico di Milano
martamaria.zocca@polimi.it, alberto.guardone@polimi.it

Olivier Le Maitre
LIMSI-CNRS
olm@limsi.fr

Abstract

This paper presents a sensitivity study aimed at quantifying the role of uncertainties affecting
the nominal geometrical layout of an experimental test-rig. The considered test case consists
of a converging-diverging nozzle used to produce a supersonic expansion of a siloxane MDM
flow in a non-ideal regime. Further details regarding the experiment can be found in the
provided reference. Due to the manufacturing process, small flaws necessarily affect the
actual geometry of the test section. The sensitivity analysis takes these uncertainties into
account; they are characterized and propagated through the CFD model. Moreover,
perturbations that necessarily affect the nominal operating conditions, such as the total
temperature and the total pressure at the inlet of the nozzle, are considered. An evaluation
of the Sobol indices makes it possible to assess the relevance of each source of uncertainty
and provides an indication of the major causes making the actual test-rig depart from the
designed behavior. Numerical predictions, complemented by the related error bars, are
eventually compared against experimental measurements.
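As a hedged illustration of how first-order Sobol indices can be estimated (a generic pick-freeze sketch with a hypothetical stand-in for the CFD model, not the authors' setup), consider a response depending linearly on two uncertain inputs, e.g. inlet total pressure and temperature perturbations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical stand-in for the expensive CFD model: a linear response in
# two independent uniform inputs p (pressure) and t (temperature).
def model(x):
    p, t = x[:, 0], x[:, 1]
    return p + 2.0 * t

# Pick-freeze estimator of the first-order Sobol index of input 0:
# sample C reuses sample B but freezes input 0 at the values from sample A.
a = rng.uniform(-1.0, 1.0, (n, 2))
b = rng.uniform(-1.0, 1.0, (n, 2))
c = b.copy()
c[:, 0] = a[:, 0]

ya, yb, yc = model(a), model(b), model(c)
s0 = (np.mean(ya * yc) - np.mean(ya) * np.mean(yb)) / np.var(ya)

# For this additive model the exact value is Var(p)/(Var(p)+4*Var(t)) = 0.2.
print(s0)
```

The estimate converges to the analytic index at the usual Monte Carlo rate; in practice, the plain Monte Carlo sampling here would be replaced by a surrogate-assisted or quasi-random scheme to keep the number of CFD evaluations tractable.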

References
1. G. Gori, M. Zocca, G. Cammi, A. Spinelli, and A. Guardone. Experimental Assessment
of the Open-Source SU2 CFD suite for ORC applications. Energy Procedia 129 (2017) 256–263.

187
