
Computational Astrophysics

William Dorland, Department of Physics and University Honors Program, University of Maryland

In collaboration with
Ramani Duraiswami, Nail Gumerov, Kate Despain, Derek Juba, Yuancheng Luo, Amitabh Varshney, George Stantchev, Bill Burns, Frank Herrmann, John Silberholz, Matias Bellone, Manuel Tiglio

Computational Astrophysics
An intensive summer program for graduate students and postdoctoral fellows, July 13-24, 2009

Overview
Maryland astrophysics effort: what, why
Results of integration
Middleware library to accelerate development in Fortran 9x: Flagon
Maryland astromap


Binary Black Holes with GPUs


Gravitational wave astronomy requires source characterization -- a large-scale problem in numerical relativity
Project: determine the final relative orientations of black-hole spin axes, for black holes of different masses, as a function of initial conditions -- a 7-D ensemble, expensive to sample


Binary Black Holes with GPUs


Development on local Teslas [now production-ready]
60x speed-up achieved with MPI+CUDA-enabled Monte Carlo sampling of the 7-D space (systems of ODEs, post-Newtonian approximation); a toy sketch of this pattern follows below
Next step: 0.5M hours on the NCSA Lincoln cluster (1536 cores, 384 Teslas), a production cluster (in addition to a new local resource)
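
As a toy illustration of that pattern (my own sketch, not the Maryland production code): each CUDA thread draws one random initial condition and integrates a simple spin-precession ODE, ds/dt = Ω × s, standing in for the full post-Newtonian system.

#include <cuda_runtime.h>
#include <curand_kernel.h>

// Toy Monte-Carlo-over-ODEs pattern: one thread = one ensemble member.
__device__ void cross(const float a[3], const float b[3], float c[3]) {
    c[0] = a[1]*b[2] - a[2]*b[1];
    c[1] = a[2]*b[0] - a[0]*b[2];
    c[2] = a[0]*b[1] - a[1]*b[0];
}

__global__ void evolve_spins(float *final_sz, int nsteps, float dt,
                             unsigned long long seed, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Sample a random initial spin direction (one point of the ensemble).
    curandState rng;
    curand_init(seed, i, 0, &rng);
    float s[3] = {curand_normal(&rng), curand_normal(&rng), curand_normal(&rng)};
    float inv = rsqrtf(s[0]*s[0] + s[1]*s[1] + s[2]*s[2]);
    for (int k = 0; k < 3; ++k) s[k] *= inv;

    const float omega[3] = {0.0f, 0.0f, 1.0f};  // fixed precession axis (toy)

    // Forward-Euler time steps; a production code would use a higher-order scheme.
    for (int step = 0; step < nsteps; ++step) {
        float ds[3];
        cross(omega, s, ds);
        for (int k = 0; k < 3; ++k) s[k] += dt * ds[k];
    }
    final_sz[i] = s[2];  // record one component of the final orientation
}

int main() {
    const int n = 1 << 20;  // a million ensemble members
    float *d_sz;
    cudaMalloc(&d_sz, n * sizeof(float));
    evolve_spins<<<(n + 255) / 256, 256>>>(d_sz, 1000, 1.0e-3f, 1234ULL, n);
    cudaDeviceSynchronize();
    cudaFree(d_sz);
    return 0;
}

Because each trajectory is independent, the ensemble is embarrassingly parallel, which is what makes a 60x MPI+CUDA speed-up plausible for this workload.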

For more info today:


Frank Herrmann, John Silberholz, Matias Bellone, Manuel Tiglio

Milky Way in X-Rays from Chandra

Pulsar in Crab Nebula: Chandra, Hubble

Everything you can see in these pictures is associated with super-hot, turbulent plasma (ionized gas). To understand the details, one needs to understand how astrophysical objects turn gravitational energy into light. This requires high-performance computing, now being accelerated with NVIDIA GPUs.

Key Questions Addressed


How stable is the hardware + software platform?
Which major algorithms can successfully exploit GPU hardware?
What is the development and maintenance path for porting large-scale applications to this platform?
Is there a net benefit, or is the programming model too cumbersome?
Which algorithms can exploit clusters of GPUs?
Aiming for a $1-5M purchase in the 2009-10 timeframe: CPU? GPU? Which vendor?

Refuting the Moore's law argument


The argument: wait for processor performance to solve complex problems, without improving the algorithms. Is this true? Yes, for algorithms with linear asymptotic complexity. No, for algorithms with different asymptotic complexity.

Most scientific algorithms scale as N² or N³. For a million variables, we would need roughly log₂ N ≈ 20 generations of Moore's law before an O(N²) algorithm became comparable with an O(N) algorithm (see the arithmetic below). This implies a need for sophisticated algorithms -- but are they programmable on a GPU?
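
Written out, the arithmetic behind that claim (using the slide's N = 10⁶):

\[
\frac{N^2}{N} = N = 10^6, \qquad 2^k \ge 10^6 \;\Rightarrow\; k \ge \log_2 10^6 \approx 20 .
\]

That is, hardware alone would need about 20 doublings before an O(N²) algorithm matched today's O(N) algorithm at this problem size.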

Turbulence theory is guided by computation/simulation

Properties of plasma turbulence are important in laboratory experiments (such as fusion research), in space physics (the solar wind), and in astrophysics (the ISM, accretion-flow luminosity).
Most computations from Maryland are carried out at national supercomputing facilities, on hundreds to thousands of processors.
[Figure: fusion turbulence]

[Figure: accretion flow luminosity (Goldston, Quataert & Igumenshchev, 2005)]


GPU Goal

Conversion of legacy Fortran 9x code is important to the scientific community

Maintainability, portability
Developed Flagon, an F9x wrapper to ease the transition to CUDA
Flagon is available on SourceForge as an open-source project
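
For context, this is the kind of raw CUDA host-side boilerplate such a wrapper hides (a minimal sketch in plain CUDA C; it does not reproduce Flagon's actual interface):

#include <cuda_runtime.h>
#include <vector>

// Allocate on the device, copy in, launch, copy back, free -- the ceremony a
// Fortran-side wrapper such as Flagon can reduce to a few array operations.
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;  // element-wise update, typical of ported F9x array code
}

int main() {
    const int n = 1024;
    std::vector<float> h(n, 1.0f);

    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);

    cudaMemcpy(h.data(), d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);
    return 0;
}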

GPU Performance Achievable by Non-Expert


A plasma turbulence code (the Orszag-Tang problem) was ported from existing Fortran 95 code in one day by a GPU non-expert (me), achieving a 25x speedup; a GPU expert (Despain) then raised it to 32x. A sketch of the kind of kernel involved follows below.
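
To make that concrete, here is a hypothetical kernel of the element-wise type that dominates pseudo-spectral turbulence codes -- a spectral x-derivative, i.e. a pointwise multiply by i·k_x in Fourier space (the names and data layout are my own illustration, not the actual code):

#include <cuComplex.h>

// Hypothetical spectral-derivative kernel: d/dx in Fourier space is a
// pointwise multiply by i*k_x; one Fourier mode per GPU thread.
__global__ void ddx_spectral(cuFloatComplex *fhat, const float *kx, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    cuFloatComplex ik = make_cuFloatComplex(0.0f, kx[i]);  // i * k_x
    fhat[i] = cuCmulf(ik, fhat[i]);
}

Operations like this map one Fourier mode to one thread with no inter-thread communication, which is why even a first port by a non-expert can show large speedups.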

Starmap
Upgrading current clusters, testing Teslas, and looking at OpenCL
Major questions remaining to be answered:
1. Is multi-GPU computing necessary for the astro apps we care about? (Current and continuing focus)
2. Is the development platform rich enough for GPU non-experts?
