
Tree codes for vortex dynamics

Application of a programming framework

Bell Communications Research, Morristown, NJ, and
Computer Science, Rutgers University, Piscataway, NJ

Mechanical and Aerospace Engineering, Rutgers University, Piscataway, NJ

Abstract

This paper describes the implementation of a fast N-body algorithm for the study of multifilament vortex simulations. The simulations involve distributions that are irregular and time-varying. We adapt our programming framework, which was earlier used to develop high-performance implementations of the Barnes-Hut and Greengard-Rokhlin algorithms for gravitational fields, and we describe how the additional subtleties involved in vortex filament simulations are accommodated efficiently within the framework.

We describe the ongoing experimental studies and report preliminary results with simulations of one million particles on the Connection Machine CM-5. The implementation sustains a substantial fraction of the peak machine rate. These simulations are more than an order of magnitude larger than those previously studied using direct methods. We expect that the use of the fast algorithm will allow us to study the generation of small scales in the vortex collapse and reconnection problem with adequate resolution.

Introduction

Computational methods to track the motions of bodies which interact with one another, and possibly are subject to an external field as well, have been the subject of extensive research for centuries. So-called N-body methods have been applied to problems in astrophysics, semiconductor device simulation, molecular dynamics, plasma physics and fluid mechanics.

This paper describes the implementation of a tree code for vortex dynamics simulations, which we apply to the study of vortex collapse and reconnection. We expect that the use of the fast algorithm will allow us to study the generation of small scales with adequate resolution, and will make it possible to realistically simulate the flows around bodies of complex shape encountered in engineering applications. The advantages of fast particle (vortex) methods will be especially important in the high Reynolds number regime, which cannot be studied with the current grid-based methods (finite difference, finite volume and spectral) because of the resolution limitations imposed by current computers.

Computing the field at a point involves summing the contributions from each of the N particles. The direct method evaluates all pairs of two-body interactions. While this method is conceptually simple, vectorizes well, and is the algorithm of choice for many applications, its O(N^2) arithmetic complexity rules it out for large-scale simulations involving millions of particles iterated over many time steps. For vortex simulations the largest number of particles reported is around fifty thousand.

Larger simulations require faster methods involving fewer interactions to evaluate the field at any point. In the last decade a number of approximation algorithms have emerged. These algorithms exploit the observation that the effect of a cluster of particles at a distant point can be approximated by a small number of initial terms of an appropriate power series. This leads to an obvious divide-and-conquer algorithm in which the particles are organized into a hierarchy of clusters, which allows the approximation to be applied efficiently. Barnes and Hut applied this idea to gravitational simulations. More sophisticated schemes were developed by Greengard and Rokhlin, and subsequently refined by Zhao and Anderson. Better data structures have recently been developed by Callahan and Kosaraju.

Several parallel implementations of the algorithms mentioned above have been developed. Salmon implemented the Barnes-Hut algorithm on the NCUBE and Intel iPSC. Warren and Salmon reported experiments on the Intel Touchstone Delta and later developed hashed implementations of a global tree structure. They have used their codes for astrophysical simulations and also for vortex dynamics. This paper builds on our CM-5 implementation of the Barnes-Hut algorithm for astrophysical simulations, and contrasts our approach and conclusions with the aforementioned efforts.

This abstract is organized as follows. The next section describes the application problem in some detail and outlines the Barnes-Hut fast summation algorithm. The section after that describes how we use our framework for building implicit global trees in distributed memory, as well as methods for accessing and modifying these structures efficiently. The final sections describe experimental results on the Connection Machine CM-5.

Vortex methods in fluid dynamics

Many flows can be simulated by computing the evolution in time of vorticity using the Biot-Savart law. Biot-Savart models offer the advantage that the calculation effort concentrates in the small regions of the fluid that contain the vorticity, which are usually small compared to the domain that contains the flow. This not only reduces the computational expense considerably but also allows better resolution for high Reynolds number flows. Biot-Savart models also allow us to perform effectively inviscid computations with no accumulation of numerical diffusion.

Vortex methods are also appropriate for studying the many complex flows that present coherent structures. These coherent structures frequently are closely interacting tube-like vortices. Some of the flows of practical interest that present interacting tube-like vortex structures are wing-tip vortices, a variety of 3D jet flows of different shapes, and turbulent flows. A generic type of interaction between vortex tubes is collapse and reconnection, which is the close approach of vortex tubes that results in large vorticity and strain-rate amplification. Collapse and reconnection is an example of small-scale formation in high Reynolds number flows. Small-scale generation is very important for the energy transfer and dissipation processes in turbulent flows.

Figure: Induced velocity U vs. nondimensional grid size q/a, for a circular ring with N = 2.4576 x 10^6 particles and smoothing parameters delta = 0.005 to 0.06 (C = 0.07595); one family of curves corresponds to an overpredicting initial condition.

The study of small-scale formation in high Reynolds number flows tends to require a substantial amount of computational resources. In multifilament models, the O(N^2) nature of the computational expense of the Biot-Savart direct method, where N is the number of grid points, severely limits the vortex collapse simulations, leaving the most interesting cases of collapse beyond the cases that have been examined to date.

Recent multifilament simulations have raised doubts about the convergence of the results. The figure above presents a convergence study of the velocity U induced on itself by a circular vortex ring with thickness C and radius a. The resolution is represented along the horizontal axis by the nondimensional grid size q/a, defined in terms of the smoothing parameter delta in the simulation and the minimum distance h/a between grid points. Two types of discretization schemes are employed: one underpredicts and the other overpredicts the velocity U. We observe in the figure that, in order to obtain convergence, it is necessary to have both a sufficiently high number of grid points N and a sufficiently small value of the smoothing parameter delta, in agreement with the convergence theorems. The small values of delta necessary for convergence make it necessary to employ millions of particles for these particular values of the ratio C/a.

The Biot-Savart model

The version of the vortex method we use was developed by Knio and Ghoniem. The vorticity of a vortex tube is represented by a bundle of vortex filaments \chi_l(\xi, t), each of them with circulation \Gamma_l. The N_f filaments forming the bundle are advected according to the velocity field

u(x) = -\frac{1}{4\pi} \sum_{l=1}^{N_f} \Gamma_l \oint_{C_l} \frac{(x - \chi_l) \times \partial\chi_l/\partial\xi}{|x - \chi_l|^3}\, g\!\left(\frac{|x - \chi_l|}{\delta}\right) d\xi,    (1)

where \delta is the smoothing parameter and g is the smoothing function obtained by integrating the core profile, g(\rho) = 4\pi \int_0^{\rho} \hat{g}(s)\, s^2\, ds with \hat{g}(s) = (3/4\pi)\exp(-s^3), so that g(\rho) = 1 - \exp(-\rho^3).

Discretization

Each filament of the vortex ring is discretized by n grid points, or vortex elements. Once this is done, the order of the summations in Equation (1) is unimportant; i.e., (1) is solved numerically at the N discrete points (vortex elements) \chi_p by using the approximation

u(x) \approx -\frac{1}{4\pi} \sum_{p=1}^{N} \frac{(x - \chi_p) \times \Delta\chi_p\, \Gamma_p}{|x - \chi_p|^3}\, g\!\left(\frac{|x - \chi_p|}{\delta}\right).    (2)

The strength of the particles is \Delta\chi_p \Gamma_p, and the filament ordering has to be considered in the computation of the central difference \Delta\chi_p = (\chi_{p+1} - \chi_{p-1})/2. This is a characteristic of the filament approach in 3D vortex methods. In contrast with the vortex "arrow" approach, updating the strength of the vortex elements in the filament method does not require the evaluation of the velocity gradient, which would involve the computation of another integral over all of the particles. Also, filaments in the form of closed curves satisfy the divergence-free condition of the vorticity field; this is not always the case at all times in the vortex arrow approach.
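For reference, the direct O(N^2) evaluation of Eq. (2) over one closed filament can be sketched as follows. The data layout and the smoothing function follow the reconstruction above; treat the details as an illustrative serial sketch rather than the production kernel.

#include <math.h>

typedef struct { double x, y, z; } vec3;

/* One vortex element: position chi_p and circulation Gamma_p. */
typedef struct { vec3 chi; double gamma; } element;

/* Central difference along a closed filament of n elements:
   dchi_p = (chi_{p+1} - chi_{p-1}) / 2, indices taken cyclically. */
static vec3 strength(const element *f, int n, int p) {
    const vec3 a = f[(p + 1) % n].chi, b = f[(p - 1 + n) % n].chi;
    vec3 d = { 0.5 * (a.x - b.x), 0.5 * (a.y - b.y), 0.5 * (a.z - b.z) };
    return d;
}

/* Velocity induced at point x by one filament, Eq. (2); delta is the
   smoothing parameter and g(rho) = 1 - exp(-rho^3) the smoothing term. */
vec3 induced_velocity(vec3 x, const element *f, int n, double delta) {
    vec3 u = { 0.0, 0.0, 0.0 };
    for (int p = 0; p < n; p++) {
        vec3 r = { x.x - f[p].chi.x, x.y - f[p].chi.y, x.z - f[p].chi.z };
        double rn = sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
        if (rn == 0.0) continue;               /* skip the self term */
        vec3 d = strength(f, n, p);
        double g = 1.0 - exp(-pow(rn / delta, 3.0));
        double c = -f[p].gamma * g / (4.0 * M_PI * rn * rn * rn);
        u.x += c * (r.y * d.z - r.z * d.y);    /* (x - chi_p) x dchi_p */
        u.y += c * (r.z * d.x - r.x * d.z);
        u.z += c * (r.x * d.y - r.y * d.x);
    }
    return u;
}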

The velocity field in Eq. (2) can be computed more efficiently by using the multipole expansion

u(x) \approx \sum_{n=0}^{m} \sum_{j+k \le n} M_{j,k,n-j-k}\, D_{j,k,n-j-k}(x) + r^{m},

where the moments of a cluster of vortex elements are

M_{j,k,n-j-k} = \sum_{p=1}^{N} \Delta\chi_p\, \Gamma_p\, (\chi_p)_1^{\,j} (\chi_p)_2^{\,k} (\chi_p)_3^{\,n-j-k},

D_{j,k,n-j-k}(x) denotes the corresponding (j, k, n-j-k) partial derivative of the smoothed Biot-Savart kernel of Eq. (2), scaled by 1/(j!\,k!\,(n-j-k)!), and r^{m} is the truncation error that results from using a finite number of terms in the multipole expansion.

Initial Conditions

The multifilament ring is constructed around the center line

\chi_c(\theta) = (a \cos\theta,\; b \sin\theta,\; c \sin 2\theta), \qquad 0 \le \theta \le 2\pi.    (3)

The grid points are located at equally spaced intervals, or at variable intervals with the smaller intervals in the collapse region. We call this geometry the Lissajous elliptic ring because of its projections on two orthogonal planes for c different from zero. The thickness of the multifilament ring is C. The circulation distribution \Gamma_i and the initial filament core radius \delta_i also need to be specified. Besides the fact that its parameter space contains cases of very rapid collapse, the low number of parameters of this configuration allows a complete parameter study at less computational expense. The case a = b, c = 0 corresponds to the circular ring we use for the static studies (the configuration of the convergence study above). Low aspect ratio elliptic rings (a close to b) correspond to rings with periodic behavior that can be used for dynamic and long-time testing of the algorithm. After thorough testing is carried out, production simulations of collapsing vortex configurations with c different from zero will be performed.
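A sketch of how the center line of Eq. (3) might be discretized into grid points at equally spaced parameter values. The placement of individual filaments within the core, their circulations Gamma_i and core radii delta_i are specified separately and are not shown; the function and parameter names are illustrative.

#include <math.h>

typedef struct { double x, y, z; } vec3;

/* Center line of the Lissajous elliptic ring, Eq. (3). */
static vec3 centerline(double a, double b, double c, double theta) {
    vec3 p = { a * cos(theta), b * sin(theta), c * sin(2.0 * theta) };
    return p;
}

/* Discretize one filament following the center line with n grid points
   at equally spaced parameter values theta in [0, 2*pi). */
void discretize_centerline(vec3 *pts, int n, double a, double b, double c) {
    for (int i = 0; i < n; i++) {
        double theta = 2.0 * M_PI * (double)i / (double)n;
        pts[i] = centerline(a, b, c, theta);
    }
}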

In order to evaluate the quality of the simulations we have also implemented different types of diagnostics. These include the motion invariants (linear and angular impulse) and the energy. For low aspect ratio elliptic rings we measure the period of oscillation and the deviation from the initial planar shape of the ring at each period. Other types of diagnostics have been implemented to obtain physical insight into the collapse problem; these include measurements of vorticity, strain rate and vortex stretching. The efficient evaluation of these diagnostics requires the use of the fast summation algorithm employed in the simulation itself.

Fast summation method

To reduce the complexity of computing the sums in Equation (2) we use the Barnes-Hut algorithm. To organize a hierarchy of clusters, we first compute an oct-tree partition of the three-dimensional box region of space enclosing the set of bodies. The partition is computed recursively by dividing the original box into eight octants of equal volume, until each undivided box contains exactly one body. Alternative tree decompositions have been suggested; the Barnes-Hut algorithm applies to these as well.

Each internal node of the oct-tree represents a cluster. Once the oct-tree has been built, the moments of the internal nodes are computed in one phase up the tree, starting at the leaves. The next step is to compute the induced velocities: each particle traverses the tree in depth-first manner, starting at the root. For any internal node, if the distance D from the corresponding box to the particle exceeds the quantity R/theta, where R is the side length of the box and theta is an accuracy parameter, then the effect of the subtree on the particle is approximated by a two-body interaction between the particle and a point vortex located at the geometric center of the tree node. The tree traversal continues, but the subtree is bypassed.
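A minimal serial sketch of this traversal. The node layout and the helper routines for the far-field and direct evaluations are assumptions (declared but not shown), and the production code evaluates the interactions in bulk on the vector units rather than one particle at a time.

typedef struct node {
    double side;            /* side length R of the box            */
    double center[3];       /* geometric center of the box         */
    int    nchild;          /* 0 for a leaf                        */
    struct node *child[8];
    /* cluster moments, or the bodies themselves for a leaf ...    */
} node;

/* Assumed helpers (not shown): distance from the particle to the box,
   far-field (point-vortex/multipole) evaluation, direct leaf evaluation. */
double box_distance(const node *n, const double x[3]);
void   add_cluster_velocity(const node *n, const double x[3], double u[3]);
void   add_direct_velocity(const node *n, const double x[3], double u[3]);

/* Depth-first Barnes-Hut traversal for one particle at position x.
   A subtree is bypassed when D > R / theta.                        */
void bh_velocity(const node *n, const double x[3], double theta, double u[3]) {
    double D = box_distance(n, x);
    if (n->nchild == 0) {
        add_direct_velocity(n, x, u);        /* leaf: sum directly      */
    } else if (D > n->side / theta) {
        add_cluster_velocity(n, x, u);       /* well separated: cluster */
    } else {
        for (int i = 0; i < 8; i++)          /* otherwise descend       */
            if (n->child[i]) bh_velocity(n->child[i], x, theta, u);
    }
}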

Once the induced velocities of all the bodies are known, the new positions and vortex element strengths are computed. The entire process, starting with the construction of the oct-tree, is repeated for the desired number of time steps.

For convenience we refer to the set of nodes which contribute to the acceleration on a particle as the essential nodes for the particle. Each particle has a distinct set of essential nodes, which changes with time. One remark concerning distance measurements is in order: there are several ways to measure the distance between a particle and a box, and Salmon discusses the various alternatives in some detail. For consistency, we measure distances from bodies to the perimeter of a box. In our experiments we vary theta to test the accuracy of the code against solutions computed using the direct method at specific points.

The framework and its application

While the physical models of N-body applications differ greatly, the computational structure is more or less application independent. It is natural, therefore, to develop programming tools which reduce the time needed to develop high-performance application codes. This section describes our programming framework for hierarchical N-body algorithms; this framework has been used earlier for implementing the Barnes-Hut and Greengard-Rokhlin algorithms for gravitational fields. Warren and Salmon have developed a different framework; we contrast the two approaches in this section.

All tree codes resolve common issues of building, traversing, updating and load balancing large trees in distributed memory. The tension between communication overhead and computational throughput is of central concern to obtaining high performance. The challenges can be summarized as follows:

- The oct-tree is irregularly structured and dynamic: as the tree evolves, a good mapping must change adaptively.

- The data access patterns are irregular and dynamic: the set of essential tree nodes cannot be predicted without traversing the tree. The overhead of traversing a distributed tree to find the essential nodes can be prohibitive unless done carefully.

- The number of floating point operations necessary to update a position can vary tremendously between bodies; the difference often ranges over an order of magnitude. Therefore it is not sufficient to map equal numbers of bodies among processors; rather, the work must be equally distributed among processors. This is a tricky issue, since mapping the nodes unevenly can create imbalances in the work required to build the oct-tree.

The techniques we use to guarantee efficiency include: (a) exploiting physical locality; (b) an implicit tree representation, which requires only incremental updates when necessary; (c) sender-driven communication; (d) aggregation of computation and communication in bulk quantities; and (e) implementing the all-to-some communication abstraction.

Exploiting physical locality

We use orthogonal recursive bisection (ORB) to distribute bodies among processors. The space bounding all the bodies is partitioned into as many boxes as there are processors, and all bodies within a box are assigned to one processor. At each recursive step, the separating hyperplane is oriented to lie along the smallest dimension; the intuition is that reducing the surface-to-volume ratio is likely to reduce the volume of data communicated in later stages. Each separator divides the workload within the region equally. When the number of processors is not a power of two, it is a trivial matter to adjust the division at each step accordingly.

The ORB decomposition can be represented by a binary tree (the ORB tree), a copy of which is stored in every processor. The ORB tree is used as a map which locates points in space to processors. Storing a copy at each processor is quite reasonable when the number of processors is small relative to the number of particles.
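A serial sketch of the recursive bisection. The weighted split is shown here with a simple sort-and-scan, whereas the parallel code uses a binary search over candidate hyperplanes; the body type and the work field are illustrative assumptions.

#include <stdlib.h>

typedef struct { double pos[3]; double work; int owner; } body;

static int cmp_axis;                       /* axis used by the comparator */
static int cmp_body(const void *a, const void *b) {
    double d = ((const body *)a)->pos[cmp_axis] - ((const body *)b)->pos[cmp_axis];
    return (d > 0) - (d < 0);
}

/* Recursively assign bodies[0..n) to processors p0 .. p0+np-1.
   Splits need not be into equal halves, so np need not be a power of 2. */
void orb(body *b, int n, int p0, int np) {
    if (n == 0) return;
    if (np == 1) {
        for (int i = 0; i < n; i++) b[i].owner = p0;
        return;
    }
    /* Cut perpendicular to the axis of largest extent, so the separating
       plane lies along the smaller dimensions (better surface/volume).  */
    double lo[3], hi[3];
    for (int a = 0; a < 3; a++) lo[a] = hi[a] = b[0].pos[a];
    for (int i = 1; i < n; i++)
        for (int a = 0; a < 3; a++) {
            if (b[i].pos[a] < lo[a]) lo[a] = b[i].pos[a];
            if (b[i].pos[a] > hi[a]) hi[a] = b[i].pos[a];
        }
    cmp_axis = 0;
    for (int a = 1; a < 3; a++)
        if (hi[a] - lo[a] > hi[cmp_axis] - lo[cmp_axis]) cmp_axis = a;
    qsort(b, (size_t)n, sizeof *b, cmp_body);

    /* Split the total work in proportion to the processor split. */
    int npl = np / 2, cut = 0;
    double total = 0.0, acc = 0.0, target;
    for (int i = 0; i < n; i++) total += b[i].work;
    target = total * (double)npl / (double)np;
    while (cut < n - 1 && acc + b[cut].work <= target) acc += b[cut++].work;
    if (cut == 0) cut = 1;

    orb(b, cut, p0, npl);
    orb(b + cut, n - cut, p0 + npl, np - npl);
}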

We chose ORB decomposition for several reasons. It provides a simple way to decompose space among processors and a way to quickly map points in space to processors. This latter property is essential for sender-directed communication of essential data, for relocating bodies which cross processor boundaries, for finding successor particles along a filament, and also for building the global BH tree. Furthermore, ORB preserves data locality reasonably well and permits simple load balancing. (Clustering techniques which exploit the geometric properties of the distribution would preserve locality better, but might lose some of the other attractive properties of ORB.) While it is expensive to recompute the ORB at each time step, the cost of incremental load balancing is negligible.

The ORB decomposition is incrementally updated in parallel as follows. At the end of an iteration, each ORB tree node is assigned a weight equal to the total number of operations performed in updating the states of the particles in each of the processors that is a descendant of the node. A node is overloaded if its weight exceeds the average weight of nodes at its level by a small fixed percentage. We identify nodes which are not overloaded but one of whose children is overloaded; call each such node an initiator. Only the processors within the corresponding subtree participate in balancing the load for the region of space associated with the initiator. The subtrees for different initiators are disjoint, so that non-overlapping regions can be balanced in parallel. At each step of the load balancing it is necessary to move bodies from the overloaded child to the non-overloaded child. This involves computing a new separating hyperplane; we use a binary search, combined with a tree traversal on the local BH tree, to determine the total weight within a parallelepiped.
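A sketch of how overloaded nodes and initiators might be identified on the replicated ORB tree; the structure fields and the slack tolerance are illustrative assumptions.

typedef struct orb_wnode {
    struct orb_wnode *child[2];    /* NULL at leaves (processor domains)  */
    double weight;                 /* operations charged to this subtree  */
    int    level, overloaded, initiator;
} orb_wnode;

/* avg_at_level[l] holds the average weight of ORB nodes at level l;
   slack is the tolerated relative excess (e.g. a few percent).           */
void mark_initiators(orb_wnode *n, const double *avg_at_level, double slack) {
    if (!n) return;
    n->overloaded = (n->weight > avg_at_level[n->level] * (1.0 + slack));
    mark_initiators(n->child[0], avg_at_level, slack);
    mark_initiators(n->child[1], avg_at_level, slack);
    /* An initiator is not itself overloaded but has an overloaded child;
       only its subtree takes part in rebalancing that region of space.   */
    n->initiator = !n->overloaded &&
        ((n->child[0] && n->child[0]->overloaded) ||
         (n->child[1] && n->child[1]->overloaded));
}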

Virtual global trees

Unlike the original implementation of Warren and Salmon, we chose to construct a representation of a distributed global oct-tree. An important consideration for us was to investigate abstractions that allow the applications programmer to declare a global data structure (a tree, for example) without having to worry about the details of the distributed-memory implementation. For this reason we separated the construction of the tree from the details of the later stages of the algorithm. The interested reader is referred to our earlier work for further details concerning a library of abstractions for N-body algorithms.

Representation. We assign each BH tree node an owner as follows. Since each tree node represents a fixed region of space, the owner is chosen to be the processor whose domain contains a canonical point, say the center of the corresponding region. The data for a tree node (the multipole representation, for example) is maintained by the owning processor. Since each processor contains the ORB tree, it is a simple calculation to figure out which processor owns a tree node.

The only complication is that the region corresponding to a node can be spanned by the domains of multiple processors. In this case, each of the spanning processors computes its contribution to the node; the owner accepts all incoming data and combines the individual contributions. This can be done efficiently when the combination is a simple linear function, as is the case with all tree codes.
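A sketch of the owner computation: walk the replicated ORB tree with the node's canonical point (here, the center of its box). The field names are illustrative.

typedef struct orbnode {
    struct orbnode *child[2];  /* NULL for a leaf (one processor domain) */
    int    axis;               /* axis of the separating hyperplane      */
    double split;              /* coordinate of the hyperplane           */
    int    proc;               /* processor id, valid at leaves          */
} orbnode;

/* Locate the processor whose ORB domain contains point x. */
int orb_locate(const orbnode *n, const double x[3]) {
    while (n->child[0] && n->child[1])
        n = (x[n->axis] <= n->split) ? n->child[0] : n->child[1];
    return n->proc;
}

/* Owner of a BH tree node: the processor whose domain contains a
   canonical point of the node's box, here its geometric center. */
int bh_node_owner(const orbnode *orb_root, const double box_center[3]) {
    return orb_locate(orb_root, box_center);
}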

Construction. Each processor first builds a local oct-tree for the bodies which are within its domain. At the end of this stage the local trees will not, in general, be structurally consistent. For example, the figure below shows a set of processor domains that span a node; the node contains four bodies, one per processor domain. Each local tree will contain the node as a leaf, but this is inconsistent with the global tree.

Figure: An internal node which appears as a leaf in each local tree.

The next step is to make the local trees structurally consistent with the global oct-tree. This requires adjusting the levels of all nodes which are split by ORB lines. We omit the details of the level adjustment procedure in this paper; a similar process was developed independently elsewhere. An additional complication in our case is that we build the oct-tree until each leaf contains a number L of bodies. Choosing L to be much larger than one reduces the size of the BH tree, but makes level adjustment somewhat tricky.

The level adjustment procedure also makes it easy to update the oct-tree incrementally. We can insert and delete bodies directly on the local trees, because we do not explicitly maintain the global tree. After the insertion or deletion within the local trees, level adjustment restores coherence to the implicitly represented, distributed tree structure.

Computing multipole moments

For gravitational simulations, once level adjustment is complete, each processor computes the centers of mass and multipole moments on its local tree. This phase requires no communication. Next, each processor sends its contribution to a node to the owner of the node. Once the transmitted data have been combined by the receiving processors, the construction of the global oct-tree is complete. This method of reducing a tree computation into one local step that computes partial values, followed by one communication step that combines partial values at shared nodes, is widely useful.
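A sketch of the local upward pass, using only the lowest-order moment for brevity. The node layout and field names are illustrative, and the combine-at-owner step is indicated only by a comment since it is pure communication.

typedef struct { double x[3]; double alpha[3]; } elem;  /* alpha = dchi * Gamma */

typedef struct bhnode {
    struct bhnode *child[8];
    elem  *bodies; int nbodies;       /* leaf payload                     */
    double mono[3];                   /* lowest-order cluster moment      */
    /* higher-order (dipole, quadrupole) moments would be stored here     */
} bhnode;

/* One upward pass over a LOCAL tree: children are combined into parents.
   Because all moments combine by simple addition (a linear operation),
   partial values computed by different processors for the same global
   node can later be shipped to that node's owner and simply added.       */
void compute_moments(bhnode *n) {
    for (int k = 0; k < 3; k++) n->mono[k] = 0.0;
    if (n->nbodies > 0) {                      /* leaf: sum its bodies    */
        for (int i = 0; i < n->nbodies; i++)
            for (int k = 0; k < 3; k++) n->mono[k] += n->bodies[i].alpha[k];
        return;
    }
    for (int c = 0; c < 8; c++) {              /* internal: recurse       */
        if (!n->child[c]) continue;
        compute_moments(n->child[c]);
        for (int k = 0; k < 3; k++) n->mono[k] += n->child[c]->mono[k];
    }
}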

For multifilament vortex dynamics, computing the vortex element strengths requires an extra phase in the tree construction. The vortex element strength of a particle is defined by the displacement of its two neighbors along the filament (the central difference in Equation (2)). Once the vortex element strengths are computed, we compute the multipole moments on the local trees. Finally, each processor sends its contribution to a node to the owner of the node, so that the individual contributions are combined into globally correct information.

Finding neighbors along a filament requires communication when the neighbors are owned by other processors. To efficiently determine which particles have missing neighbors and require communication, each processor needs to maintain a linear ordering of the particles of the same filament within its domain. Once the linear ordering is computed, a processor can go through the list to compute the vortex element strengths and request information for the missing neighbors.

The linear ordering must be maintained dynamically as particles move across processor boundaries, either because of their induced velocity or during workload remapping. We use a splay search tree to store the indices of the particles of the same filament. When a particle enters or leaves a processor domain, its filament number and index within the filament (f, i) are sent to the new processor, so that i can be inserted into the search tree for filament f in the destination. We use a splay tree implementation because of its simplicity and its self-adjusting property.

Sender driven communication

Once the global oct-tree has been constructed, it is possible to start calculating accelerations. For the Barnes-Hut algorithm, each processor can compute the acceleration for its particles by traversing the oct-tree and requesting essential tree data from the owners whenever it is not locally available.

This naive strategy of traversing the tree and transmitting data on demand has several drawbacks: it involves two-way communication; the messages are fine-grain, so that either the communication overhead is prohibitive or the programming complexity goes up; and processors can spend substantial time requesting data for tree nodes that do not exist.

It is significantly easier and faster to first construct the locally essential trees. The owner of a node computes the destination processors for which the node might be essential; this involves the intersection of the annular region of influence of the node with the ORB map. Each processor first collects all the information deemed essential to other processors and then sends long messages directly to the appropriate destinations. Once all processors have received the data and inserted it into their local trees, all the locally essential trees have been built.
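A sketch of how an owner might compute the destination set for one of its nodes. The annulus bounds r_open and r_parent, the box type and the distance helpers are illustrative assumptions; the exact region of influence depends on the opening criterion and the distance measure used.

#include <math.h>

typedef struct { double lo[3], hi[3]; } box;     /* axis-aligned region */

static double mindist(const box *a, const box *b) {
    double s = 0.0;
    for (int k = 0; k < 3; k++) {
        double g = 0.0;
        if (a->lo[k] > b->hi[k])      g = a->lo[k] - b->hi[k];
        else if (b->lo[k] > a->hi[k]) g = b->lo[k] - a->hi[k];
        s += g * g;
    }
    return sqrt(s);
}

static double maxdist(const box *a, const box *b) {
    double s = 0.0;
    for (int k = 0; k < 3; k++) {
        double d1 = fabs(a->hi[k] - b->lo[k]);
        double d2 = fabs(b->hi[k] - a->lo[k]);
        double d  = d1 > d2 ? d1 : d2;
        s += d * d;
    }
    return sqrt(s);
}

/* A node's data can be essential only to particles whose distance from
   the node's box lies in an annulus [r_open, r_parent): closer particles
   open the node, farther ones already accepted an ancestor.  Mark every
   processor whose ORB domain meets the annulus.                          */
void essential_destinations(const box *node_box, double r_open, double r_parent,
                            const box *domain, int nproc, int *dest) {
    for (int p = 0; p < nproc; p++) {
        double dmin = mindist(node_box, &domain[p]);
        double dmax = maxdist(node_box, &domain[p]);
        dest[p] = (dmax >= r_open) && (dmin < r_parent);
    }
}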

Sender-driven communication is used not only in transferring essential data but also in all the other communication phases. This is true for the Greengard-Rokhlin algorithm as well. For example, to compute the vortex element strength for each particle in the multifilament vortex method, its two neighboring particles in the filament must be fetched through communication if they are not in the local tree. Since the data dependency is symmetrical, the owner of a particle can easily determine whether the particle should be sent out. By maintaining an upper bound on the maximum distance between two neighboring particles, it becomes possible to locate the neighbors of a particle quickly.

Bulk processing

We separate control into a sequence of alternating computation and communication phases. This helps maintain a simple control structure; efficiency is obtained by processing data in bulk. For example, up to a certain point it is better to combine multiple messages to the same destination and send one long message. Similarly, it is better to compute the essential data for several bodies at a time rather than for one at a time. Another idea that proved useful is sender-directed communication: send data wherever it might be needed, rather than requesting it whenever it is needed. Indeed, even without the use of the CM-5 vector units, we found that these two ideas kept the overhead due to parallelism minimal.

All to some communication

The communication phases can all be abstracted as the all-to-some problem: each processor contains a set of messages, and the number of messages with the same destination can vary arbitrarily. The communication pattern is irregular and unknown in advance. For example, level adjustment is implemented as two separate all-to-some communication phases, and the phase for constructing locally essential trees uses one all-to-some communication.

The first issue is detecting termination: when does a processor know that all messages have been sent and received? The naive method of acknowledging receipt of every message, and having a leader count the numbers of messages sent and received within the system, proved inefficient.

A better method is to use a series of global reductions (on the control network of the CM-5) to first compute the number of messages destined for each processor. After this, the send/receive protocol begins; when a processor has received the promised number of messages, it is ready to synchronize for the next phase.

We noticed that the communication throughput varied with the sequence in which messages were sent and received. As an extreme example, if all messages are sent before any is received, a large machine will simply crash when the number of virtual channels has been exhausted. In the CMMD message-passing library, each outstanding send requires a virtual channel, and the number of channels is limited.

Instead, we used a protocol which alternates sends with receives. The problem is thus reduced to ordering the messages to be sent. For example, sending messages in order of increasing destination address gives low throughput, since virtual channels to the same receiver are blocked. In an earlier paper we developed the atomic message model to investigate this phenomenon. Consistent with the theory, we find that sending messages in random order worked best.
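A sketch of the resulting all-to-some protocol, written against MPI as a stand-in for the CMMD calls used in the paper; the message layout, tags and buffer handling are illustrative. A reduction first tells every processor how many messages to expect; sends are then issued in random order and interleaved with receives.

#include <mpi.h>
#include <stdlib.h>

void all_to_some(void **msg, int *len, int *dst, int nmsg, MPI_Comm comm) {
    int nproc, expected = 0;
    MPI_Comm_size(comm, &nproc);

    /* How many messages are destined for each processor? */
    int *count = calloc((size_t)nproc, sizeof *count);
    for (int i = 0; i < nmsg; i++) count[dst[i]]++;
    MPI_Reduce_scatter_block(count, &expected, 1, MPI_INT, MPI_SUM, comm);
    free(count);

    /* Random send order avoids hammering one receiver with a burst. */
    int *order = malloc((size_t)nmsg * sizeof *order);
    for (int i = 0; i < nmsg; i++) order[i] = i;
    for (int i = nmsg - 1; i > 0; i--) {
        int j = rand() % (i + 1), t = order[i]; order[i] = order[j]; order[j] = t;
    }

    MPI_Request *req = malloc((size_t)nmsg * sizeof *req);
    int sent = 0, got = 0;
    while (sent < nmsg || got < expected) {
        if (sent < nmsg) {                     /* alternate a send ...    */
            int i = order[sent];
            MPI_Isend(msg[i], len[i], MPI_BYTE, dst[i], 0, comm, &req[sent]);
            sent++;
        }
        for (;;) {                             /* ... with any receives   */
            int flag; MPI_Status st;
            if (got >= expected) break;
            MPI_Iprobe(MPI_ANY_SOURCE, 0, comm, &flag, &st);
            if (!flag) break;
            int n; MPI_Get_count(&st, MPI_BYTE, &n);
            char *buf = malloc((size_t)n);
            MPI_Recv(buf, n, MPI_BYTE, st.MPI_SOURCE, 0, comm, MPI_STATUS_IGNORE);
            /* ... hand buf to the application ... */
            free(buf);
            got++;
        }
    }
    MPI_Waitall(nmsg, req, MPI_STATUSES_IGNORE);
    free(req); free(order);
}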

Application to vortex simulation

We implemented the multifilament vortex simulation using our framework; the figure below gives a high-level description of the code structure. Most of the code was taken directly from our earlier gravitational simulation code. For example, the ORB decomposition, load remapping, communication protocol and the BH tree module are exactly the same. The only modifications required for the vortex simulation are the extra phase to compute the vortex element strength of each particle, and the dynamic update of the filament trees, which contain the indices of the particles in one filament.

build local BH trees
for every time step do
    construct the BH tree representation:
        adjust node levels
        compute the strength of each particle
        compute partial node values on local trees
        combine partial node values at owners
    owners send essential data
    calculate induced velocity
    update velocities and positions of bodies
    update incremental data structures
        (local BH tree and filament trees)
    if the workload is not balanced
        update the ORB incrementally
enddo

Figure: Outline of the code structure.

The first step builds a local BH tree in each processor. After that, the local BH trees are maintained incrementally and never rebuilt from scratch. The tree-construction step within the loop builds an implicit representation of the global BH tree by combining information in the local trees, and the incremental-update step updates the local trees after particles have moved to their new positions.

The step in which owners send essential data broadcasts essential information and is the most communication-intensive phase. The sender-oriented communication avoids two-way traffic in fetching remote data and aggregates messages for higher communication throughput.

The step that calculates the induced velocity for each vortex element is the most time consuming. We use the Knio-Ghoniem algorithm to solve the Biot-Savart equations, using monopole, dipole and quadrupole terms in the multipole expansion to approximate the effect of a cluster, and we use CDPEAC assembly language to implement this computation-intensive routine. One significant optimization was to approximate the exponential smoothing term in the velocity evaluation: when the scaled distance is large we ignore the exponential, and for smaller values we compute the first seven terms of its Taylor expansion. This guarantees accuracy to within at least five decimal places.
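One plausible reading of this optimization, sketched in C. The cutoffs and the handling of intermediate arguments are illustrative assumptions, not the values used in the CDPEAC kernel.

#include <math.h>

/* Smoothing factor g(rho) = 1 - exp(-rho^3): the exponential is dropped
   once it falls below the target accuracy, and for small arguments
   1 - exp(-s) is taken from the first seven terms of its Taylor series
   (which also avoids cancellation).                                     */
static double smoothing(double rho) {
    double s = rho * rho * rho;
    if (s > 12.0)                      /* exp(-s) < 1e-5: ignore the term */
        return 1.0;
    if (s > 0.5)                       /* moderate arguments: call exp()  */
        return 1.0 - exp(-s);
    double term = s, g = s;            /* g = s - s^2/2 + s^3/6 - ...     */
    for (int k = 2; k <= 7; k++) {
        term *= -s / (double)k;
        g += term;
    }
    return g;
}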

The final step redistributes particles when the workload becomes imbalanced: the ORB is incrementally updated so that the workload is balanced among processors.

Comparisons with previous work

We sketch the important differences between our framework and the global hashed structure of Warren and Salmon. Warren and Salmon use a different criterion for applying approximations. They do not construct essential trees, thereby saving the space overhead associated with locally essential trees. Instead, they construct an explicit representation of the oct-tree: each body is assigned a key based on its position, and bodies are distributed among processors by sorting the corresponding keys. Besides obviating the need for the ORB decomposition, this also simplifies the construction of the oct-tree.

These advantages are balanced by other factors: the advantages of sender-directed communication are lost; the data structures are not maintained incrementally but rebuilt from scratch each iteration; and the computation stage is slowed down by communication (the latency is largely hidden by using multiple threads to pipeline tree traversals and to update accelerations, but the program control structure is complicated and less transparent). Finally, it is not clear how hashed oct-trees can be extended gracefully to multifilament simulations, where each filament must maintain a cyclic ordering. Indeed, their fluid dynamics simulation involves only free particles (vortex arrows).

While our vortex simulations are very different from those of Warren and Salmon, the experiments for gravitational simulations are quite similar, so meaningful comparisons are possible there. Our performance figures compare favorably with those reported by Warren and Salmon. For a uniform distribution of about a million particles, only a small fraction of the total time per iteration of our code on the CM-5 is devoted to managing the Barnes-Hut tree: constructing locally essential trees, preparing essential data via caching, and balancing the workload. In contrast, in the hashed implementation on the Touchstone Delta reported by Warren and Salmon, a much larger share of the time per iteration is spent constructing and manipulating the tree structure.

Performance

Our platform is the Connection Machine CM-5 with SPARC vector units. Each processing node has tens of Mbytes of memory and can perform floating point operations at a peak rate of well over a hundred Mflop/s. We use the CMMD communication library. The vector units are programmed in CDPEAC, which provides an interface between C and the DPEAC assembly language for the vector units; the rest of the program is written in C.

The vortex dynamics code, along with several diagnostics, is operational. Starting with the gravitational code, the effort involved adding the phase to compute the vortex element strengths and writing the computational CDPEAC kernel. As expected, the latter part took most of the effort; the data structures were adapted and debugged in less than a week.

We are currently running extensive tests to check the accuracy of the Barnes-Hut method as a function of the approximation parameter theta. Preliminary results show that with quadrupole expansions we can obtain up to five digits of accuracy for suitable (sufficiently small) values of theta and of the time step.

[Table: time in seconds per phase: adjust levels; compute vortex strengths and moments; construct the global tree; collect local essential data; compute the new velocities; update particle positions; move particles to the right processors; remap the workload; total time.]

Figure: Timing breakdown for the vortex simulation.

The figure above shows the timing breakdown for a 3D vortex simulation on the CM-5. The input configuration is a layered, multifilament vortex ring generated from Equation (3); each filament has the same number of particles, and the total number of particles is about one million. The cross-section radius and the core radius of the ring are fixed for this test. We choose theta in the opening test for the multipole approximation so that the error in the total induced velocity remains small. The vector units compute the induced velocity at a high sustained Mflop rate, and the simulation sustains a substantial fraction of the peak rate per node, averaged over the time steps (not including the first).

Since the locally essential trees overlap considerably, the total size of the local trees exceeds the size of the global tree. The figure below shows the memory overhead, i.e., the ratio of the total distributed memory used to the global tree size, versus granularity (the number of particles per node) on a 32-node CM-5. The ratio stays within a small constant factor, even for small theta, once the granularity reaches a few thousand particles per processor.

[Figure: ratio of total locally essential tree size to global tree size (P = 32) versus granularity (particles per node, in thousands), for theta = 0.25, 0.33 and 0.5; the plotted ratios lie between about 1.8 and 4.6.]

Figure: Memory overhead versus granularity for different theta.

Figure: Low aspect ratio elliptic ring (top view, side view). The ring oscillates between the states shown.

Figure: Collapsing vortex ring (top view, side view). The collapse state shown is reached after many time steps.

As part of the initial testing we have examined two cases. The first is a low aspect ratio elliptic ring generated from Equation (3). This ring presents an oscillating behavior between the two states shown in the first figure above; it was discretized with N_f filaments, and the final state shown is reached after many time steps. The second case tested corresponds to a collapsing vortex ring, generated from Equation (3) with c different from zero. The initial condition and the collapse state are shown in the second figure above; this ring was likewise discretized into N_f filaments of vortex elements.

Acknowledgments

We thank Apostolos Gerasoulis of Rutgers for helping to set up the collaboration and for helpful discussions. This work is supported in part by an ONR grant and by an ARPA contract (Hypercomputing and Design). The content of the information herein does not necessarily reflect the position of the Government, and official endorsement should not be inferred.

The code was developed and tested on the CM-5s at the Naval Research Laboratory, the National Center for Supercomputing Applications (University of Illinois at Urbana-Champaign), and the Minnesota Supercomputing Center.

References

A. S. Almgren, T. Buttke and P. Colella. A fast adaptive vortex method in three dimensions. Journal of Computational Physics.

C. Anderson. An implementation of the fast multipole method without multipoles. SIAM Journal on Scientific and Statistical Computing.

C. Anderson and C. Greengard. The vortex merger problem at infinite Reynolds number. Communications on Pure and Applied Mathematics, XLII.

R. Anderson. Nearest neighbor trees and N-body simulation. Manuscript.

J. Barnes and P. Hut. A hierarchical O(N log N) force calculation algorithm. Nature.

J. T. Beale and A. Majda. Vortex methods I: Convergence in three dimensions. Mathematics of Computation.

S. Bhatt, M. Chen, J. Cowie, C. Lin and P. Liu. Object-oriented support for adaptive methods on parallel machines. In Object Oriented Numerics Conference.

S. Bhatt and P. Liu. A framework for parallel N-body simulations. Manuscript.

P. Callahan and S. Kosaraju. A decomposition of multi-dimensional point sets with applications to k-nearest-neighbors and N-body potential fields. In Annual ACM Symposium on Theory of Computing.

S. C. Crow. Stability theory for a pair of trailing vortices. AIAA Journal.

S. Douady, Y. Couder and M. E. Brachet. Direct observation of the intermittency of intense vorticity filaments in turbulence. Physical Review Letters.

V. M. Fernandez. Vortex intensification and collapse of the Lissajous elliptic ring: Biot-Savart simulations and visiometrics. Ph.D. thesis, Rutgers University, New Brunswick, New Jersey.

V. M. Fernandez, N. J. Zabusky and V. M. Gryanik. Near-singular collapse and local intensification of a Lissajous elliptic vortex ring: Non-monotonic behavior and zero-approaching local energy densities. Physics of Fluids A.

C. Greengard. Convergence of the vortex filament method. Mathematics of Computation.

L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. Journal of Computational Physics.

H. S. Husain and F. Hussain. Elliptic jets, Part 2: Dynamics of the preferred mode coherent structure. Journal of Fluid Mechanics.

F. Hussain and H. S. Husain. Elliptic jets, Part 1: Characteristics of unexcited and excited jets. Journal of Fluid Mechanics.

M. Warren, J. Salmon and G. Winckelmans. Fast parallel tree codes for gravitational and fluid dynamical N-body problems. International Journal of Supercomputer Applications.

R. T. Johnston and J. P. Sullivan. A flow visualization study of the interaction between a helical vortex and a line vortex. Submitted to Experiments in Fluids.

S. Kida and M. Takaoka. Vortex reconnection. Annual Review of Fluid Mechanics.

O. M. Knio and A. F. Ghoniem. Numerical study of a three-dimensional vortex method. Journal of Computational Physics.

A. Leonard. Vortex methods for flow simulation. Journal of Computational Physics.

A. Leonard. Computing three-dimensional incompressible flows with vortex elements. Annual Review of Fluid Mechanics.

P. Liu, W. Aiello and S. Bhatt. An atomic model for message passing. In Annual ACM Symposium on Parallel Algorithms and Architectures.

P. Liu and S. Bhatt. Experiences with parallel N-body simulation. In Annual ACM Symposium on Parallel Algorithms and Architectures.

A. J. Majda. Vorticity, turbulence and acoustics in fluid flow. SIAM Review.

J. Salmon. Parallel Hierarchical N-body Methods. Ph.D. thesis, Caltech.

J. Singh. Parallel Hierarchical N-body Methods and their Implications for Multiprocessors. Ph.D. thesis, Stanford University.

R. E. Tarjan. Data Structures and Network Algorithms. Society for Industrial and Applied Mathematics.

Thinking Machines Corporation. The Connection Machine CM-5 Technical Summary.

Thinking Machines Corporation. CDPEAC: Using GCC to program in DPEAC.

Thinking Machines Corporation. CMMD Reference Manual.

Thinking Machines Corporation. DPEAC Reference Manual.

M. Warren and J. Salmon. Astrophysical N-body simulations using hierarchical tree data structures. In Proceedings of Supercomputing.

M. Warren and J. Salmon. A parallel hashed oct-tree N-body algorithm. In Proceedings of Supercomputing.

G. S. Winckelmans. Topics in vortex methods for the computation of three- and two-dimensional incompressible unsteady flows. Ph.D. thesis, California Institute of Technology, Pasadena, California.

F. Zhao. An O(N) algorithm for three-dimensional N-body simulation. Technical report, MIT.