
NASA Technical Memorandum 102890

NASA's Supercomputing Experience

F. Ron Bailey

December 1990

National Aeronautics and Space Administration


NASA'S SUPERCOMPUTING EXPERIENCE

F. Ron Bailey
NASA Ames Research Center
Moffett Field, CA 94035, USA

ABSTRACT

A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by a discussion of activities carried out by the Numerical Aerodynamic Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. The paper concludes with a brief comment on future directions and NASA's new High Performance Computing Program.

1. Introduction

The National Aeronautics and Space Administration (NASA) has for many years been one of the pioneers in the development of supercomputing systems and in the application of supercomputers to solve challenging problems in science and engineering. Today supercomputing is an essential tool within the Agency. There are supercomputer installations at every NASA research and development center, and their use has enabled scientific discoveries and new aerospace technologies that could not have been accomplished without them.

NASA's experience with supercomputers began in the 1970s with the installation of the Illiac-IV parallel computer at Ames Research Center and the CDC STAR-100 vector processor at Langley Research Center. These two systems were the first of a new generation of high-performance computers designed to achieve dramatic performance increases through the architectural techniques of parallel and vector processing. Although they were only marginally successful, these systems provided a valuable testbed environment in which to learn to use parallel and vector computing effectively. Architectural innovations continued into the 1980s with the development of the Massively Parallel Processor, a massively parallel SIMD system, at Goddard Space Flight Center, and of the Cosmic Cube/Hypercube, pioneer parallel MIMD systems, at the California Institute of Technology and NASA's Jet Propulsion Laboratory. In the 1980s NASA also established the Numerical Aerodynamic Simulation (NAS) Program to pioneer supercomputing technologies and to make them available to the aerospace community. The NAS Program, for example, acquired the first full-scale Cray-2 and the first Cray YMP computer systems and developed the first supercomputer UNIX environment.

NASA's involvement in supercomputing applications also began in the 1970s. Acquisition of the Illiac-IV, STAR-100, and Cray-1 computers spurred the development of innovative parallel computing algorithms. New computational disciplines, such as computational fluid dynamics and computational chemistry, soon emerged, and it became clear that supercomputers were a truly enabling technology. Today, supercomputers are applied routinely to advance technologies in aeronautics, transatmospherics, space transportation, and space exploration, and to study the Earth, other planetary systems, and objects in outer space.
The purpose of this paper is to provide a brief overview of NASA's experience in supercomputing from two perspectives: supercomputing system development and advanced supercomputing applications. The NAS Program is used as an example of NASA's effort to remain at the leading edge of supercomputing system technology and to bring this technology to the aerospace community. NASA's use of supercomputers in advanced technology applications is illustrated with selected examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Applications in science research are illustrated with examples in atmospheric modeling and astrophysics. Finally, the paper concludes with a look into future directions in high-performance computing.

2. Numerical Aerodynamic Simulation Program

The NAS Program [1] has two major goals. The first goal is to provide a national computational capability, available to NASA, the Department of Defense (DOD), other government agencies, industry, and universities, as a necessary element in ensuring continued leadership in computational aerodynamics and related disciplines. The second goal is to act as a pathfinder in advanced, large-scale computing capability through systematic incorporation of state-of-the-art improvements in computer hardware and software technologies.

The NAS Program began full operation in 1986 and today provides service to over 1700 users at 133 locations in the United States. The NAS complex of computers, called the NAS Processing System Network (NPSN), is shown in Fig. 1. It currently includes Cray-2 and Cray YMP/8128 supercomputers, an Amdahl 5880 mainframe computer serving as a mass storage system with 2.3 terabytes of on-line capacity, 5 VAX minicomputers, 80 Silicon Graphics workstations with associated hardware, a 32,000-processor Connection Machine, and a 128-processor Intel Touchstone Gamma prototype parallel computer. All computer systems are linked through an extensive local area network combining Ethernet, HYPERchannel, and UltraNet technologies to provide data rates ranging from 10 to 800 Mbits/sec.

All computers operate under the UNIX operating system, with network communications provided via the DOD internet (TCP/IP) data protocols and the well-known Berkeley UNIX "r" commands. Thus, a user can access any computer, run jobs, and transfer data among computers using a single set of commands on all computers. A more complete description of the NPSN is given in references 2 and 3.

Fig. 1 NAS Processing System Network (NPSN).

Remote users access the NPSN through a wide choice of existing national communication networks, including the DOD-sponsored MILnet, the National Science Foundation-sponsored NSFnet, and NASA's Science Internet. NAS is also a member of the Bay Area Regional Research Network (BARRnet), which provides 1544 kbits/sec links between Ames Research Center, Bay Area universities, Department of Energy laboratories, and industry. MILnet is the main communication path to DOD laboratories, while NSFnet and BARRnet serve the university community. For NASA and aerospace industry users requiring fast response and high throughput to remote graphics workstations, NAS has developed NASnet, which currently provides service to 31 remote sites at bandwidths ranging from

56 to 1544 kbits/sec.

The NAS Program's experience in advanced supercomputer technology began in 1984, when it decided to acquire Cray Research, Inc.'s Cray-2 computer, which at the time was in prototype testing. NAS, in collaboration with the Department of Energy, began full-scale configuration testing in 1985. In late 1985, NAS took delivery of the first full-scale Cray-2, which had a central memory capacity of 2.048 gigabytes (over 268 million 64-bit words), 34 times the memory capacity of the Cray XMP/48. The Cray-2's very large memory, coupled with its ability to sustain a computing rate of 250 million floating-point operations per second, allowed computational fluid dynamics researchers to increase their application program size by more than an order of magnitude without significant code modification. The very large memory also virtually eliminated memory-disk transfer overhead during execution of large and complex programs. The net effect was decreased preparation time for researchers tackling increasingly large and complex problems, plus improved job turnaround time. More importantly, researchers were able to perform fluid dynamics simulations about realistic three-dimensional (3-D) aerospace vehicle configurations that were not possible just a few years before.
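The memory figures quoted above can be cross-checked with simple arithmetic. The sketch below (illustrative only; it merely re-derives the numbers quoted in the text) treats the Cray-2 memory as 2^28 sixty-four-bit words:

```python
# Cross-check of the Cray-2 memory figures quoted in the text.
words = 2**28            # 268,435,456 words -- the "over 268 million 64-bit words"
bytes_total = words * 8  # 8 bytes per 64-bit word

# Expressed in binary megabytes (2^20 bytes), this is 2048 MB, which is the
# quantity the text reports as "2.048 gigabytes".
megabytes = bytes_total // 2**20
print(words, megabytes)
```

Run as written, this prints 268435456 and 2048, confirming that the word count and the byte capacity quoted in the text describe the same memory.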

Providing a uniform user interface and operating environment across all NPSN computer subsystems, even as new generations of subsystems were added, was a major design goal. Furthermore, the environment had to remain free of vendor proprietary restrictions and be based on openly available hardware and software interface definitions. Only the UNIX operating system and the DOD internet protocols could meet this goal. Prior to this, UNIX had been implemented only on minicomputers and workstations, and many felt that it was not suitable for supercomputers. Today, UNIX is the standard operating system for supercomputers. The key to the successful implementation was to treat the Berkeley UNIX implementation on the Cray-2 as a definition that could be implemented on other systems and, if necessary, enhanced to meet supercomputing needs. For example, NQS, a batch management system, was added to provide users the flexibility of interactive UNIX when creating batch commands. Moreover, common UNIX and network software, coupled with a so-called seamless environment, enables users to move easily between systems, allows for easy file access across machine boundaries, and enhances user-code portability. Finally, the open architecture concept allows for modularity and has proved essential for the ready installation of new capabilities and new systems, including the new Cray YMP.

Enhanced user productivity was another major NPSN design goal. To meet this goal, the NPSN was designed to include scientific workstations as the researcher's interface to the supercomputers. The design was implemented by installing Silicon Graphics IRIS workstations and connecting them to the Cray-2 through a local area network. This was a unique concept at the time and has since been replicated at many supercomputer installations. Combined with UNIX and network software, the workstation becomes literally an extension of the supercomputer. The user resides at his workstation, where input data files are created and modified, where jobs are submitted to the supercomputer, and where results are displayed and analyzed. Because complex numerical simulations produce an enormous amount of data, the workstation's graphics display capability is especially beneficial. By using graphics workstations, researchers can digest results in a reasonable amount of time and gain a qualitative picture of the physical phenomenon simulated.

Numerous graphics packages have been developed to operate across workstations and Cray supercomputers in a distributed manner and to allow researchers to display any facet of their data for easy interpretation and evaluation [4-7]. Today, interactive graphical post-processing is an essential aspect of the sophisticated numerical simulation process.

Providing ready access by the national aerospace research community from their home locations was another NPSN design goal. This led to the development of NASnet, a unique high-performance nationwide communication network that links remote user locations to the NAS facility via land-links, with communication speeds up to 1544 kbits/sec. NASnet connects the communication lines from the NAS facility to local area networks at each remote site through Ethernet bridges and thereby creates an extension of the NPSN local network. The implementation of UNIX and network software on remote workstations provides an environment in which remote researchers have virtually the same interactive capability as users at the NAS facility itself. Presently, NASnet is being redesigned to utilize new internet gateway technology and thereby increase its performance to approximately 6200 kbits/sec.
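To put the NASnet bandwidths in perspective, the sketch below estimates how long one graphics-sized result file would take to ship over the link speeds quoted above. The 10-megabyte file size is my illustrative assumption, not a figure from the text:

```python
# Transfer-time estimate for a hypothetical 10-megabyte data file over
# NASnet links at the bandwidths quoted in the text (kbits/sec).
file_bits = 10 * 10**6 * 8   # 10 megabytes expressed in bits

for kbps in (56, 1544, 6200):
    seconds = file_bits / (kbps * 1000)
    print(f"{kbps:>5} kbits/sec -> {seconds:8.1f} seconds")
```

At 56 kbits/sec the transfer takes roughly 24 minutes; at 1544 kbits/sec it drops to under a minute, which is why the link upgrades mattered so much for interactive remote graphics.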
3. Computational Fluid Dynamics

Computational fluid dynamics has been one of the earliest drivers in advancing supercomputing within NASA. The introduction of numerical techniques in the late 1960s to treat highly nonlinear flows, such as transonic flow over lifting airfoils with embedded shock waves [8], led to greatly increased interest in computational solutions to fluid flow problems. The discipline grew rapidly with improvements in both computer systems and algorithms. Fig. 2 shows that improvements in computer performance have been closely paralleled by improvements in numerical methods [9]. The data shown here illustrate that, over a 15-year period, the cost of performing a given calculation decreased three orders of magnitude due to advances in numerical methods and another three orders of magnitude due to improvements in computers. While the example shown is for two forms of approximation to the Navier-Stokes equations that govern the fluid dynamics of a perfect gas, it is indicative of the trend in other disciplines as well.

Computational fluid dynamics is now a well-established discipline within which both NASA and the aerospace industry are involved [10,11]. While industry's main focus is on design applications, NASA's is more on research. The objectives of NASA's aerodynamics efforts are to (a) advance the fundamental understanding of basic fluid dynamic processes and (b) develop an advanced technology base for application to new generations of aerospace vehicles and propulsion systems. Current and future capabilities in the technology disciplines are discussed below.
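The two trends in Fig. 2 compound. The sketch below is my illustration of the arithmetic implied by the figures quoted above, not data taken from the figure itself:

```python
# Combined cost reduction implied by Fig. 2: three orders of magnitude from
# algorithm advances times three orders of magnitude from computer advances.
algorithm_factor = 1e3
computer_factor = 1e3
combined = algorithm_factor * computer_factor   # a million-fold cost reduction

# Equivalent steady year-over-year improvement across the 15-year period.
yearly = combined ** (1 / 15)
print(combined, round(yearly, 2))
```

Run as written, this prints 1000000.0 and 2.51: the combined trend is equivalent to a given calculation becoming roughly 2.5 times cheaper every year for fifteen years.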

[Log-log plot: relative computation cost versus year of availability (1950-1990), with curves for 2-D inviscid, 3-D inviscid, and 2-D viscous (Reynolds-averaged Navier-Stokes) methods on machines ranging from the IBM 704 to the Cray XMP.]

Fig. 2 Comparison of numerical simulation cost trend resulting from improvements in fluid dynamics algorithms and in computers.

3.1 Turbulence Physics

Turbulence is the most common fluid motion encountered in nature. It greatly affects the resistance of bodies moving through fluids, fluid mixing, and heat transfer. Because of its importance, the study of turbulence has attracted some of the best scientific minds for over a century. Despite this attention, turbulence is still not well enough understood to be effectively controlled in engineering applications. If it could be controlled, the benefits would be significant. For example, in the case of a fleet of transport aircraft, a 20 percent reduction in fuselage skin friction alone would save approximately $1 billion a year in fuel consumption [12].

With the development of supercomputers and highly accurate numerical algorithms, it is possible to obtain numerical solutions to the Navier-Stokes equations for simplified geometry and at low Reynolds numbers. These numerical simulations are based on first principles and can be treated as though they were data observed from a physical experiment. Used in this way, numerical simulations are providing new insights into interrelationships between flow variables, such as the relationship between pressure and strain, for which no measurement has yet been possible.

In addition to quantitative information of unprecedented detail, numerical solution offers, for the first time, the opportunity to visualize turbulent flows as they evolve in both space and time, without the averaging and filtering inherent in results obtained from laboratory experiments. Each structure in a turbulent flow can be inspected, and the statistical characteristics of significant flow features calculated. This visualization capability is valuable both for identifying new relationships among the various elements of dominant flow structures and for suggesting new types of physical experiments. Fig. 3 shows the calculated vortical structures near a flat plate [13]. These results were obtained from the database provided by direct (first-principles) numerical simulation of a flat-plate turbulent boundary layer [14].

Fig. 3 Calculated vortical flow structures in a turbulent boundary layer.

The direct simulation provided pressure and all three components of the flow velocity for each time step and for each of 9.4 million grid points in the computational domain. A problem of this magnitude requires about 200 hours of CPU time on a Cray-2 computer. Turbulence physicists have analyzed the results shown in Fig. 3 and have concluded that locations of significant contributions to Reynolds stresses occur adjacent to large hook-shaped vortical structures that are common in turbulent flows [13]. Information of this sort aids in the development of statistical models that accurately reflect the underlying physical behavior of turbulence.
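The scale of the direct simulation described above is easy to appreciate with a rough storage estimate. The sketch below uses the per-point variable count from the text (pressure plus three velocity components); storing them as 8-byte values is my assumption about the layout, not a detail from the memorandum:

```python
# Rough size of one stored time step of the flat-plate direct numerical
# simulation: 9.4 million grid points, four flow variables per point,
# eight bytes per value.
grid_points = 9_400_000
variables_per_point = 4     # p, u, v, w
bytes_per_value = 8         # assumed 64-bit storage

step_bytes = grid_points * variables_per_point * bytes_per_value
print(step_bytes / 1e9)     # gigabytes per stored time step
```

Run as written, this prints 0.3008: roughly 0.3 gigabytes per time step, which helps explain why the Cray-2's 2-gigabyte memory was so valuable for this class of problem.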

3.2 Aerodynamics

Computational aerodynamics is concerned with numerical simulations of aircraft flying at normal altitudes and at Mach numbers below five, where air behaves as a perfect gas. Due to computer limitations, the computational models employed differ in their levels of approximation to the governing Navier-Stokes equations and in geometric complexity; one is generally forced to trade solution accuracy for geometric complexity [9]. As the approximations become less severe and the geometries more complex, the computer resources needed for a solution grow rapidly [15]. Formulations that neglect viscosity (inviscid) or treat viscosity only in the boundary layer are in widespread use for aerodynamically clean configurations at or near cruise conditions. These approximations fail, and the full viscous terms become important, when treating flows in which the boundary layer is separated from the surface. Such flows occur, for example, near boundaries of the flight envelope, in engine inlets, and in flows associated with vortex-enhanced lift.

The level of modeling receiving the most attention within the NASA research community is the Reynolds-Averaged Navier-Stokes (RANS) formulation. RANS models the flow environment with all terms of the Navier-Stokes equations averaged over a time scale that is large with respect to the turbulent eddy fluctuations, yet small relative to the macroscopic flow changes. The averaging introduces terms representing the transport of energy and momentum between the mean and fluctuating components; these cannot be determined analytically and must be modeled phenomenologically. Turbulence modeling is the major pacing item in the full exploitation of the RANS approach and continues to be a focus of experimental and computational research.
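The averaging step described above can be written compactly. The sketch below uses my notation, not the memorandum's: each velocity component is split into mean and fluctuating parts, and averaging the incompressible momentum equation produces the unclosed term that turbulence models must supply.

```latex
% Reynolds decomposition and the averaged momentum equation (sketch).
\begin{align}
  u_i &= \bar{u}_i + u_i', \qquad \overline{u_i'} = 0 \\
  \frac{\partial \bar{u}_i}{\partial t}
    + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
  &= -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
    + \frac{\partial}{\partial x_j}
      \left( \nu \frac{\partial \bar{u}_i}{\partial x_j}
             - \overline{u_i' u_j'} \right)
\end{align}
```

The quantity -rho * (averaged u_i' u_j') is the Reynolds stress; it is exactly the mean-fluctuating transport term described in the text, and supplying a model for it is the pacing item referred to above.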

Increases in computer performance have dramatically affected the utility of the RANS formulation [16]. Fig. 4 illustrates the effect of increasing supercomputer performance on advancing RANS simulation capability during the last 15 years. In 1975, a two-dimensional (2-D) calculation of transonic flow over a simple airfoil strained the capability of the CDC 7600, one of the most powerful systems available at that time. The increased performance of the Cray-1 made it possible to compute simple 3-D wing flows by 1983.

Increases in supercomputer power enable advances in computational fluid dynamics (equations solved: Reynolds-averaged form of the Navier-Stokes equations; 1 MFLOPS = 1 million floating-point calculations per second):

1975   CDC 7600      effective speed    3 MFLOPS   computation time  5 hours
1983   Cray-1        effective speed   30 MFLOPS   computation time 20 hours
1986   Cray XMP/4    effective speed  200 MFLOPS   computation time 15 hours
1989   Cray YMP/8    effective speed 1000 MFLOPS   computation time  6 hours

Fig. 4 Advances in Reynolds-Averaged Navier-Stokes simulation capability with increases in supercomputer performance.
flows about

over very

wing complex side. results

bodies

on the Cray

XMP/4.

Today

the Cray

YMP/8

geometries

such as the space

shuttle

configuration

right-hand detailed about

Fig. 5 shows pute the transonic

more flow

from

the use of the RANS space shuttle obtained agreement along apparently in localized emergency

approximation

to comcom-

the integrated

in ascent

mode 17. Here,

puted surface pressures computed and measured complex compared with the tional configuration. with flight wind data

are compared with those results are in excellent A limited tunnel and Mach with there amount computed tunnel to predict proposed of flight

from wind tunnel tests. The over large portions of this data side is available of the and is in wall the fuselage agreement tunnel

test pressure

pressure data, found

Fig. 6. At this particular than interference. approach flight the current Although envelope

number the wind adequate

of 1.05,

the computation

is in better due to wind regions, loads abort

is disagreement

the computaneeded to refine

is considered

the aerodynamic

and to evaluate

maneuvers.

ORIGINAL OF POOR

PAGE

IS

QUALITY

Fig. 5 Comparison of pressure contours between computation (outermost) and wind tunnel (innermost), M∞ = 1.05, α = -3°, and Re = 4.0 x 10^6/ft (3% model). (a) Side view, (b) top view.

4. Aerothermodynamics

Aerothermodynamics combines the discipline of computational fluid dynamics with the modeling of real gas properties. Vehicles flying at Mach numbers greater than approximately five create shock waves strong enough to cause the chemistry of the air to change significantly. Therefore, to adequately simulate such flows, additional equations governing the behavior of the real gas chemistry must be added to the Navier-Stokes equations. Clearly, the level of computational complexity increases over that of computational fluid dynamics alone.


Fig. 6 Comparison of pressure coefficient (Cp) from computation (-), wind tunnel (o), and flight test (∇ right side, Δ left side) along the Θ = 70° line on the side of the orbiter fuselage, M∞ = 1.05, α = -3°, Re = 4.0 x 10^6/ft, and 10°/9° elevon deflection.

The goal of computational aerothermodynamics is to predict aerothermal environments, both in the case of external flow around vehicles and for the internal flow within propulsion units. The physical phenomena that must be addressed include aerodynamic forces and moments, convective and radiative heating rates, gas interactions with surface catalytic materials, active cooling of vehicle surfaces, and plasma layers and their effect on electronic communications from the vehicle. Understanding these phenomena is critical to the design of the Thermal Protection System (TPS) of the next generation of aerospace transportation systems and to ensuring that they are adequately protected from the very high thermal loads encountered in hypervelocity flight. At the current state of the art, the RANS equations coupled with real-gas effects form the basis for solutions to these phenomena.

Real-gas effects include chemical and thermochemical nonequilibrium phenomena, in which finite-rate reaction and energy-exchange processes occur. To account for reactions, conservation equations for each chemical species must typically be added to the RANS equation set. For a relatively simple case of dissociating and ionizing air, one must consider nine species. Including the associated conservation equations for each species nearly triples the number of equations to be solved. In addition, to account for thermal


nonequilibrium there are additional energy equations to describe the energy exchange between the various energy modes (translational, rotational, vibrational, and electronic). Finally, when thermal radiation is important, the radiative flux must be included in the energy equation.

Supercomputers have been essential in the development of computational aerothermodynamics and have permitted the modeling of complex 3-D flow fields with finite-rate chemistry. As an example, Fig. 7 shows the thermochemical nonequilibrium flow field about the Aeroassist Flight Experiment (AFE) vehicle at one of its trajectory points [18]. The left side of the figure shows computed normalized density contours. Superimposed on these are circles representing the location of the bow shock as determined from experimental shadowgraphs obtained from ballistic range tests. The experimental and computational bow shock locations are nearly identical. Shown on the right side of the figure is a comparison of the vibrational and translational stagnation-line temperatures. As illustrated, there is a significant difference between the two temperatures. This difference, depending on the entry conditions, can result in a significant increase in the total radiative heating to the vehicle [19].
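The growth of the equation set described above can be tallied explicitly. In the sketch below, the nine-species figure comes from the text, while the five-equation baseline (mass, three momentum components, energy) is my assumption about the underlying RANS set:

```python
# Growth of the equation set when real-gas chemistry is added to RANS.
base_equations = 5    # mass, momentum (x3), energy -- assumed baseline
species = 9           # dissociating and ionizing air (figure from the text)

with_chemistry = base_equations + species
print(with_chemistry, with_chemistry / base_equations)
```

Run as written, this prints 14 and 2.8: fourteen coupled equations instead of five, consistent with the text's statement that chemistry "nearly triples" the system, before any extra thermal-nonequilibrium energy equations are counted.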


Fig. 7 Thermochemical nonequilibrium flow field about the aeroassist flight experiment (AFE) vehicle. (a) Computed density contours, computed and experimental bow shock locations; (b) vibrational and translational temperatures along the stagnation line.


5. Computational Chemistry

During the past decade, computational chemistry has become a dynamic and versatile tool to assist in understanding the fundamental nature of a variety of substances, notably gases and materials, and their interactions. For example, computational chemistry methods are currently being used to accurately predict radiative intensity factors, high-temperature transport properties, reaction rates, and rates of energy exchange during particle collisions. These data are required by computational aerothermodynamics to simulate both the external and internal flow fields of aerospace vehicles. In addition, computational chemistry methods are being used to study and design space-based materials that are resistant to the harsh space environment and, in particular, to the impact of energetic oxygen atoms.

Computational chemistry involves solving the Schrodinger equation to determine the properties of atoms, molecules, and matter. The techniques used to solve this equation are well-documented and will not be described here. It should be noted, however, that supercomputers have had a profound influence on computational chemistry. While performance gains of an order of magnitude have naturally allowed much larger problems to be considered, supercomputers have had a far greater impact on computational techniques. For example, the ability to perform Full Configuration Interaction (FCI) calculations [20], using the Cray-2 and Cray YMP, has provided detailed benchmarks for the evaluation of approximate methods. FCI calculations with realistic basis sets have even produced results that are superior in accuracy to those determined from experimental techniques, and the FCI benchmarks have demonstrated that multireference CI wave functions reproduce the FCI results to very high accuracy. Finally, the high speed of supercomputers has tremendously reduced the time required to obtain a specific reaction rate constant prediction. Previously, even if a desired accuracy could be obtained, the real time required was frequently too long for the results to be useful.

High-temperature rate constants for reactions of air and air-plus-hydrogen species are crucial to the accurate modeling of the external flow around hypersonic vehicles and of the engine internal flow. Experimental rate constants are seldom available at high temperature, and when experimental data are available, they are often uncertain by several orders of magnitude. Fig. 8 is a comparison of the theoretical rate constant [21] for the three-body recombination of H + H + H2 with a variety of experimental data. As shown, the scatter in the experimental data at high temperatures is three orders of magnitude. Reaction rate calculations like these are providing data that do not exist elsewhere, as well as helping to clarify what is often a confusing maze of experimental data.

Fig. 8 Comparison of calculated and experimental rate constants for the three-body recombination of H + H + H2.

6. Computational Structural Mechanics

Computational structural mechanics is another major computational thrust within NASA and has been greatly influenced by supercomputers [22]. The finite element method, developed by the aerospace industry, is the predominant technique used for the analysis of solid and structural mechanics problems. In the early 1970s, NASA became heavily involved in finite element modeling techniques through the development of the widely used NASTRAN finite element program. Today, finite element analysis techniques and supercomputers are widely used to solve both linear and nonlinear problems in static and dynamic analysis. Application areas include structures employing advanced composite materials and the application of such materials to vehicle and propulsion system structures. The analysis of aerospace vehicle and propulsion systems, in particular, has required the development of much more sophisticated numerical models, which have in turn placed increased demands on computer capability.

An example of the application of supercomputers to a structural problem of critical importance is the space shuttle solid rocket booster (SRB) structural design assessment and redesign effort following the accident that destroyed the Challenger.


The accident was believed to have been caused by the failure of a case joint in the right rocket motor [23]. An analysis of the joint was performed at three levels of detail [24], shown in Fig. 9. The first level consisted of a 2-D stress analysis for a shell model of the entire SRB that included an equivalent stiffness model of the joint. At the second level, the displacements calculated at level 1 were used as boundary conditions for a more refined 2-D shell model of an SRB segment including the joint. Finally, the displacement condition of a 1-degree circumferential slice at level 2 was used as a boundary condition at the third level for a detailed 3-D analysis of the joint. The analysis was performed on both the original and the redesigned joints. The results showed that the redesigned joint substantially reduced gap motion at the O-ring surface and the sensitivity of O-ring performance.

The researchers used various computer systems. The structural finite element analyses were performed at Langley Research Center, using a minicomputer for model preparation and verification and for post-processing. The 2-D shell analysis, using 54,870 finite element equations, and the 3-D analysis were performed on Cray supercomputers located at Ames Research Center. This effort, performed in 1985 and 1986, was a graphic illustration of how effectively supercomputers can be used from far remote locations. The solution time for the full SRB model (level 1) exceeded 14 hours on a VAX 11/780 computer and was reduced to 14 minutes on a single Cray-2

Fig. 9 Three levels of modeling detail for solid rocket booster (SRB) case joint structural analysis.


processor when fully vectorized. More recent algorithm improvements have reduced the solution time to 13 seconds on the Cray-2 using 4 processors and microtasking, and to 6 seconds on the Cray Y-MP using 8 processors and microtasking. In the latter case the 9.2 billion floating-point calculations for matrix factorization took 6 seconds, a rate of 1.517 billion operations per second.
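As a quick sanity check on those figures, the quoted factorization rate follows directly from the stated operation count and wall time (illustrative arithmetic only, using the numbers given in the text):

```python
# The text quotes 9.2 billion floating-point operations for the matrix
# factorization, completed in 6 seconds on the 8-processor Cray Y-MP.
ops = 9.2e9
seconds = 6.0
rate = ops / seconds           # sustained operations per second
print(round(rate / 1e9, 2))    # about 1.53 billion, near the quoted 1.517
```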

7. Astrophysics

Supercomputers are also heavily used within NASA to perform astrophysical investigations. For example, the use of computational methods to investigate the formation and evolution of astronomical objects (stars, galaxies, radio sources, black holes, X-ray sources, interstellar clouds, etc.) has proven to be an essential part of the overall scientific approach to understanding the universe. Computation has affected nearly every area of astrophysics. It is a tool for interpreting observations that complements analytic theory, and it permits the study of complex phenomena that cannot be done in the laboratory.

Applications of supercomputers to astronomy is too broad a topic to cover here; thus, attention is focused on the computer study of galactic formation and evolution [25]. Three features drive present research efforts and give rise to the need for supercomputers: (a) complex nonlinear models of many interacting physical processes, requiring high-speed computational power; (b) greatly improved observational data at all wavelengths; and (c) the reduction of raw observational data, requiring data reduction, enhancement, and analysis.

The global dynamics of galaxies is the study of the dynamics of the stellar population. The stars make up the bulk of the mass in most galaxies of interest. Stars move freely through these systems and they practically never collide; the collision mean free path is large compared to the system dimensions. The study of the stellar dynamics is then the study of a collisionless, gravitational n-body problem. The n-bodies are the stars, and they act like mass-points, or particles, that interact with the 1/r^2 forces of Newtonian gravitation. The models used for the numerical experiments describe the time development of the self-gravitating particle system.

Early experiments dealt with 2-D systems, but, with the advent of supercomputers in the late 1970s, fully 3-D experiments were made possible. The number of particles followed in an experiment is typically 100,000. The forces necessary to integrate the equations of motion of the particles are obtained from potentials that are calculated by solving the Poisson equation from the tabulated star densities on a grid. These potentials are calculated at each step and the equations of motion are integrated by the time-centered leapfrog method.
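The time-centered leapfrog scheme can be sketched as follows. This is a minimal illustration that uses direct pairwise force summation for brevity; the production codes described here instead obtain the potentials from a grid-based Poisson solve:

```python
import numpy as np

def accelerations(pos, mass, eps=1e-2):
    # Pairwise softened 1/r^2 Newtonian forces (direct summation stand-in
    # for the grid-based Poisson solve used in the production codes).
    d = pos[None, :, :] - pos[:, None, :]      # displacement vectors
    r2 = (d ** 2).sum(-1) + eps ** 2           # softened squared distances
    np.fill_diagonal(r2, np.inf)               # no self-force
    return (d * (mass[None, :, None] / r2[..., None] ** 1.5)).sum(axis=1)

def leapfrog(pos, vel, mass, dt, steps):
    # Time-centered (kick-drift-kick) leapfrog integration.
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc                  # half kick
        pos += dt * vel                        # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc                  # half kick
    return pos, vel
```

Because the pairwise forces are equal and opposite, the total momentum of the particle system is conserved by construction, a useful check on any such integrator.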

The approach described above has been used to study galaxy formation and internal dynamics and the interaction of galaxies [26]. One example is the investigation of star formation stimulated by the collision of galaxies. Far infrared measurements from the


Infrared Astronomy Satellite have revealed a class of very luminous interacting galaxies which appear to be starburst galaxies. These results have stimulated research on the nature of the interactions and the ways in which the star-formation process can be stimulated through the interaction. An example of an optical photograph of an interacting pair of galaxies is shown in Fig. 10. The photograph was made in such a way as to emphasize the regions of high star formation. The observations show vividly the violent disruption of the two galaxies and the regions of high current star formation, particularly at the centers of the two galaxies. One of the computational approaches to understanding the starburst phenomenon lies in understanding the dynamical interaction of two spiral galaxies embedded in massive halos by numerical experiments. For example, one can follow the dynamical interaction of two disk galaxies to understand the nature of the contraction and subsequent expansion of the two galaxies. Fig. 11 shows four snapshots of a collision sequence in a numerical

Fig. 10 Interacting galaxies UGC 12914/12915. (Photograph courtesy of Dr. H. Bushouse, NASA Ames Research Center.)


Fig. 11 Time development of numerical experiment on interacting galaxies as viewed from above the orbital plane.

experiment [26]. The disks were oriented in the plane of the orbit and both the disk particles and the surrounding halo particles are shown. The particles shown (2048) are a sampling of the total number of particles (100,000). The counter shown in the bottom right of the frames is the elapsed time in the calculation in nondimensional units. The time sequence shown corresponds to roughly 150 million years. Understanding the nature of the interaction and the time variations of the densities and momenta in the two galaxies is the first step in a longer term project toward understanding the nature of the star-formation process in these systems.

8. Atmospheric Modeling

Atmospheric modeling on supercomputers has become a principal tool in predicting weather and studying climate change. The importance of this activity is clearly evident in light of the need to assess the influence of man's activity on the environment. NASA has


been a major contributor to advancing the applications of supercomputers to


including: ice; response assimilation engaged atmosphere. The study shown modeling of Mars' atmospheres. water cycle is one example missions vapor to Mars understanding global modeling of the interactions and chemistry layer from to changing spaceborne of the between chemical sensing The insight planets. the oceans, atmospheric instruments. study not only into the dynamics atmosphere, simulation of the dynamics ozone acquired of data in studying of the Earth's atmosphere

this field land and the and is also our

including NASA

of the Earth's

composition; increases

the atmospheres

of the planets,

but also lets us gain

of the Earth's

of the use of supercomputers during the fluctuates during

to

planetary

Spacecraft of water

1960s and '70s have the course has an that Mars

that the total amount by an amount

in its atmosphere

of a year

equivalent

to about

1 km3 of ice. This indicates

active hydrological ous other reservoirs. clear, waxes that finally but it is known and wanes remains sublimes

cycle in which water is exchanged between the atmosphere and variThe nature and distribution of these other reservoirs is not entirely that one of them the seasons the in late vapor. spring, summer. the summer's Surprisingly, is the "residual" that one words, behind however, In other it leaves north is made when polar the polar dioxide cap--not seasonal water cap serves survives the one that the one ice cap CO2 with because of CO2 frost--but

throughout away water

an underlying the seasonal carbon

ice cap that is as a source at the south

less volatile pole pole never

and can survive disappears.

heat. This remnant

for atmospheric all year long. The supplies does models proof. puters. tions The losses. years). as cloud easily the

CO2 cap at the south

For reasons

not understood,

significar'.ce water water that treat To model go?

of this asymmetry each been transport problem, transport and carry simulations may to transport, precipitation, just as much there boundary demand NASA There have

in polar summer, many so crudely atmospheric properly, is important are required

cap behavior and the south but

is that: cap does cannot

if the north not, then are based

cap on

to the atmosphere atmospheric of the atmospheric on a globe, aspect multi-year this, however,

where

suggestions,

all of them that to solve requires long

that the results transport, one needs in order (one

be considered supercomof time. gains two or Earth such could

It is this aspect of motion temporal Thus, Even

the full 3-D equaperiods net annual is about

out the calculations

for very to know Mars year model

of the problem

not be enough layer

to properly mixing, Adding

the problem.

The rea-

son is that in addition formation, of atmospheric increase water

are important

physical

processes

that occur,

etc., that can affect "physics"

the behavior

as transport.

to the problem

the computational years,

by an order researchers

of magnitude. at Ames problems. The of polar speed Research They and mixing Center have memory processes, have been a of these cloud

During using transport

the past several supercomputers as well have model

the Cray

at NAS the

to attack examination

these models.

constructed

as various enabled

physical

supercomputers

19

formation, andboundarylayerexchangeata level of detail not previouslypossible 27.For example, the ability of its atmosphereto mix water vapor into the polar regionsduring winter is a key issueconcerningthe currentMars climatesystem.In Fig. 12contoursof a nearly conserveddynamicalquantity (isentropic potential vorticity, IPV) are shown to illustrate this mixing process.The IPV field is taken from a simulation of the Martian winter circulation using a 3-D climate model being developedby researchers at Ames ResearchCenterandbasedon the work in reference27. The center of the figure is the north pole while the edgesare at about30 degreesnorth latitude.The large gradientsi;_

Fig.12Simulated isentropic potential vorticity (IPV)fieldattheMartian north poleinwinter.

20

ORIGINAL OF POOR

PAGE

IS

QUALITY

the IPV field just poleward of the dark ribbon surrounding the pole are evident, and must represent a barrier that small time-dependent motions must break down and penetrate if mixing is to occur. At issue in this particular experiment, therefore, is the mixing produced by a large-scale disturbance of wave number three; but its amplitude and spatial extent are limited, and the mixing it produces is weak. In this and similar experiments the Martian polar vortex is relatively stable during winter, similar in this respect to the Earth's Antarctic stratosphere. By running the transport code at high spatial and temporal resolution, for example, it was found that the wintertime circulation on Mars was sluggish in its ability to move water to the very high latitudes; this is contrary to earlier suggestions. Similarly, we have found that when clouds do form on Mars, they do not readily precipitate to the ground; this also is contrary to earlier suggestions.

Recently supercomputers have been used to solve one of the most perplexing problems of atmosphere-surface interactions on Mars [28]. Namely, how does water move into the surface? All previous studies indicate that once water is driven out of the soil, it does not readily return. By running a coupled boundary layer/soil model at very high vertical and temporal resolution, a physical process has been discovered that facilitates the return. This will significantly affect the field and open up new areas of research. Similar statements can be made about recent transport and cloud studies.

9. A Look to the Future

To meet the challenges of NASA's future missions, NASA has initiated the High Performance Computing (HPC) Program. The goal of the HPC Program is to accelerate the development and application of high-performance computing technologies to meet NASA's science and engineering requirements and for continued computing leadership in aerospace. Timely realization of the goals in many science and engineering disciplines will require significantly increased computing power. For example, the governing equations of computational fluid dynamics have been known for over a century, but it has required supercomputer-level performance to develop the tools for understanding the physics of aerodynamic flows and for aerospace vehicle and propulsion system design. Recently, multidisciplinary methods have appeared. These methods are used to

perform computer simulations of coupled phenomena such as those needed to study wing flutter. Flutter is the motion due to the complex interaction of the instantaneous changes in wing shape due to aerodynamic forces and the instantaneous changes in aerodynamic forces due to wing shape. The effect of flutter can be catastrophic if dynamic forces exceed structural limits. However, flutter can, in some cases, be suppressed by a coupled deflection of control surfaces. Simulation of flutter control requires combining fluid dynamics, structural mechanics, and control laws into a time-dependent formulation. One example of simulation of wing flutter control is presented in Fig. 13. In this work [29], the results were obtained by solving the nonlinear structural equations of motion, a simple control law, and the inviscid form of the fluid dynamic equations (full


Fig. 13 Computed pressures on a low aspect ratio wing with and without a control surface activated to suppress flutter. M = 0.9, alpha = 0, simulated altitude = 30,000 ft. (Top: aeroelastically oscillating wing; middle: control surface activated; bottom: active control suppresses oscillation.)


potential) in a fully coupled manner. Wing flutter is established with the control surface fixed (upper figure). Next, the control surface is activated (middle) so that its instantaneous deflections reduce and maintain the wing steady (bottom). The various shades of gray on the wing surface represent the levels of instantaneous pressure on the wing.
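A caricature of this coupled formulation, not the full-potential solver of the cited work and with every coefficient invented for illustration, shows how a rate-feedback control law can turn a growing oscillation into a decaying one:

```python
# One-degree-of-freedom wing pitch model. The aerodynamic moment term
# feeds energy into the motion (flutter); the control moment opposes the
# pitch rate and removes it. All coefficients are invented.
def pitch_energy(control_on, dt=1e-3, steps=20000):
    theta, omega = 0.01, 0.0            # initial pitch angle and rate
    k, c_aero, gain = 25.0, 0.05, 0.5   # stiffness, aero term, feedback gain
    for _ in range(steps):
        m_total = c_aero * omega - (gain * omega if control_on else 0.0)
        omega += dt * (-k * theta + m_total)   # semi-implicit Euler kick
        theta += dt * omega                    # position update
    return 0.5 * omega**2 + 0.5 * k * theta**2  # oscillation energy
```

With the control surface active, the oscillation energy decays instead of growing, mirroring the three panels of the figure.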

The example above was a highly approximate treatment for a simple shape. Today's aeroscience challenge is to provide integrated, multidisciplinary simulations of complete aerospace vehicle systems throughout their mission profiles. This requires the simulation of aerodynamics, propulsion systems, structural response, controls, and chemistry in an integrated manner. It is estimated that these calculations will take tens of hours on computer systems capable of sustaining a trillion floating-point operations per second (teraFLOPS).

In space and Earth sciences, NASA has made significant strides in weather/climate simulation, solid Earth modeling, stellar/galactic evolution simulation, and multi-spectral/multi-sensor data reduction and analysis. Today's grand challenges include multidisciplinary modeling of complex nonlinear phenomena, modeling and monitoring of the Earth environment and assessment of the potential impact of man on the future of the Earth, understanding the formation of the universe, and the origins of life. To begin addressing these interdisciplinary problems will require teraFLOPS of computational power and terabytes of memory capability. In many cases the computational power can be located in ground-based facilities, but in others, such as multi-sensor airborne or satellite systems, the computational power must be located in a flight environment. Highly autonomous space exploration systems, e.g., Mars Rover sample return, will require high-performance and ultra reliable spaceborne computers. These specific grand challenges and others facing the Agency provide a computational imperative for a NASA initiative in High Performance

Computing.

The basic approach to be taken by the HPC Program is to fully exploit highly parallel computer architecture, focusing research on system technology and parallel algorithms. One of the program's specific objectives is to develop technologies that will enable a sustained computing rate of a teraFLOPS; approximately 1000 times the sustained performance of an 8-processor Cray YMP. Highly parallel technology was chosen because it presents the best alternative to the conventional supercomputer approach. The conventional approach depends on a small number of very fast processors sharing a very fast common memory. This approach will be severely limited in the future by electronic technology limits (speed of light, circuit density, packaging), memory bandwidth, and high development costs. Highly parallel processors relax demands on electronic technology, greatly increase parallelism, and share development costs with the much larger microcomputer market. Parallel processing research has been ongoing within the NAS program and within other NASA programs for

the past few years. The primary thrust has been to evaluate parallel processing architectures and to develop efficient parallel applications. Results have been encouraging. For example, results have been obtained that show a 16,000-processor CM-2 will out-perform a Cray-2 single processor for certain large computational fluid dynamics applications [30].
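The trade-off between a few very fast processors and many slower ones is often framed with Amdahl's law; the following sketch is a standard illustration, not an analysis from the text:

```python
def amdahl_speedup(serial_fraction, processors):
    # Amdahl's law: the serial fraction of a job caps the speedup no
    # matter how many processors are applied to the parallel part.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even 1% serial work limits a 16,000-processor machine to under 100x,
# which is why fully parallel algorithms matter as much as hardware.
```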
Developing parallel algorithms capable of fully utilizing highly parallel concepts and architectures is another objective of the program. High-performance parallel processor testbed facilities will be implemented for developing both application and systems software. They will be scalable so that initial feasibility can be demonstrated at smaller scales and at reduced cost. The applicability of new algorithms and parallel architectures will be demonstrated on selected research applications. Three areas have been chosen:

(a) Computational Aerosciences: integrated, multidisciplinary simulation of aerospace vehicles throughout their mission profiles;

(b) Earth and Space Sciences: multidisciplinary modeling and monitoring of the Earth environment and assessment of changes and their global impact on the future of the Earth; and

(c) Remote Exploration and Experimentation: extended-duration human exploration missions and remote exploration and experimentation.

Finally, basic computer science research is crucial to achieving the goals of the initiative. Computationally oriented basic research will be enhanced both at NASA research centers and at several research institutes. Strengthening the basic research infrastructure is needed to provide the innovative approaches required to meet the initiative's goals and to exploit the potential of highly parallel capabilities.

In summary, the HPC Program is aimed at meeting today's needs and at accelerating the growth of state-of-the-art computer speed. NASA has unique needs in aeronautics and in Earth and space sciences for greatly increased computing speed, and highly parallel computers offer the potential to leap-frog the current achievement in computer speed. The initiative will provide the technology that before the end of the decade can achieve a sustained teraFLOPS capability, or a 1000-fold performance increase over today's capability. This will enable performance optimization, time and cost reduction, and increased safety and reliability in aerospace vehicles, and establish the ability to maximize the scientific benefits of space observations.


10. References

1. V. L. Peterson and W. F. Ballhaus, Jr., History of the Numerical Aerodynamic Simulation Program, NASA CP-2454 (1987).
2. B. T. Blaylock and F. R. Bailey, "Status and Future Developments of the NAS Processing System Network," Proceedings Third International Conference on Supercomputing, Vol. II (Boston, Mass., May 15-20, 1988) pp. 9-10.
3. F. R. Bailey and P. Kutler, "NAS-The First Year," Proceedings of the 16th Congress of the International Council of the Aeronautical Sciences (Jerusalem, Israel, August 28-September 2, 1988) pp. 1210-1223.
4. D. Choi and C. Levit, "An Implementation of a Distributed Interactive Graphics System for a Supercomputer Environment," International Journal of Supercomputing Applications, 1 (Winter 1987) pp. 82-95.
5. S. E. Rogers, P. C. Buning, and F. J. Merritt, "Distributed Interactive Graphics Applications in Computational Fluid Dynamics," International Journal of Supercomputing Applications, 1 (Winter 1987) pp. 96-105.
6. T. Lasinski, P. Buning, D. Choi, S. Rogers, G. Bancroft, and F. Merritt, Flow Visualization of CFD Using Graphics Workstations, AIAA Paper 87-1180-CP (1987).
7. V. Watson, P. Buning, D. Choi, G. Bancroft, F. Merritt, and S. Rogers, Use of Computer Graphics for Visualization of Flow Fields, AIAA Aerospace Engineering Conference and Show (Los Angeles, CA, February 17-19, 1987).
8. R. Magnus and H. Yoshihara, "Inviscid Transonic Flow Over Airfoils," AIAA J., 8 (1970) pp. 2157-2162.
9. V. L. Peterson, "Impact of Computers on Aerodynamic Research and Development," Proc. IEEE, 72 (1984) pp. 68-79.
10. P. Kutler, J. L. Steger, and F. R. Bailey, Status of Computational Fluid Dynamics in the United States, AIAA Paper 87-1135-CP (1987).
11. P. Kutler and A. R. Gross, Progress and Future Directions in Computational Fluid Dynamics, NASA TM-100091 (1988).
12. D. M. Bushnell, "NASA Research on Viscous Drag Reduction II," in Laminar Turbulent Boundary Layers, eds. E. M. Uram and H. E. Weber, Proc. Energy Sources Technology Conf. (New Orleans, LA, February 12-16, 1984).
13. S. K. Robinson, S. J. Kline, and P. R. Spalart, Spatial Character and Time Evolution of Coherent Structures in a Numerically Simulated Boundary Layer, First National Fluid Dynamics Congress (Cincinnati, OH, July 1988).
14. P. R. Spalart, Direct Simulation of a Turbulent Boundary Layer up to R_theta = 1410, NASA TM-89407 (1986).
15. V. L. Peterson, J. Kim, T. L. Holst, G. S. Deiwert, D. M. Cooper, A. B. Watson, and F. R. Bailey, "Supercomputer Requirements for Selected Disciplines Important to Aerospace," Proceedings of the IEEE, 77 (1989) pp. 1038-1055.
16. T. Holst, Navier-Stokes Computation Useful in Aircraft Design, AIAA Paper 90-1800 (1990).
17. P. G. Buning, I. T. Chiu, F. W. Martin, Jr., R. L. Meakin, S. Obayashi, Y. M. Rizk, J. S. Steger, and M. Yarrow, "Flow Field Simulation of the Space Shuttle Vehicle in Ascent," Proc. Fourth International Conference on Supercomputing and Third World Supercomputer Exhibition, Vol. II (International Supercomputing Institute, Inc., 1989) pp. 20-28.
18. G. Palmer, Enhanced Thermochemical Nonequilibrium Computations of Flow Around the Aeroassist Flight Experiment Vehicle, AIAA Paper 90-1702 (1990).
19. J. N. Moss, G. A. Bird, and K. D. Virendra, Nonequilibrium Thermal Radiation for an Aeroassist Flight Experiment, AIAA Paper 88-0081 (1988).
20. C. W. Bauschlicher, S. R. Langhoff, and P. R. Taylor, "Accurate Quantum Mechanical Calculations," Advances in Chemical Physics, 77 (1990) p. 103.
21. D. W. Schwenke, "Calculations of Rate Constants for the Three-Body Recombination of H2 in the Presence of H2," J. Chem. Phys., 89 (1988) p. 2076.
22. C. P. Blankenship and R. J. Hayduk, "Potential Supercomputing Needs for Structural Analysis," Proc. Second International Conference on Supercomputing, Vol. II (International Supercomputing Institute, Inc., 1987) pp. 180-202.
23. Report of the Presidential Commission on the Space Shuttle Challenger Accident (Washington, DC, June 1986).
24. W. H. Greene,N. F. Knight, Jr., and A. E. Stockwell, Structural


Space Shuttle SRM and Tang-Clevis Joint, "Galactic NASA TM-89018 and Institute, Approach 1987) (1986). Evolution," World Inc.,

Behavior

of the

25.

B. F. Smith International Exhibition,

R. H. Miller, on

Formation and

Proc.

Fourth

Conference

Supercomputing Supercomputing "A Computational

Third

Supercomputer pp. 1-6. Formation Vol. and II

Vol. II (International

1989)

26.

B. F. Smith and R. H. Miller, Evolution," (International Proc. Second Supercomputing R. M. Haberle, of the Martian

to Galaxy

International Institute,

Conference Inc.,

on Supercomputing, pp. 207-213. "Simulation

27.

J. B. Pollack, Circulation (1990)

J. Schaeffer, Atmosphere,

and I. Polar

H. Lee,

of the General Res., 95

Processes,"

J. Geophys.

pp. 1447-1473. and B. M. Jakosky, Cap on Mars," "Sublimation J. Geophys. and Res., Transport 95 (1990) of Water from the

28.

R. M. Haberle North Residual

Polar

pp. 1423-1437. of Wings

29.

G. P. Guruswamy, with Active Control and

E. L. Tu, and P. M. Goorjian, Surfaces, AIAA "Explicit Paper and

Transonic

Aeroelasticity

87-0709-CP Implicit

(1987). Solution of the Navier-Stokes

30.

C. Levit

D. Jesperson,

Equations on a Massively Parallel and Fluid Dynamics--Advances Structures, 30 (Pergamon Press,

Computer," Computational Structural Mechanics and Trends, Special Issue of Computers and 1988).

27

Report Documentation Page

1. Report No.: NASA TM-102890
4. Title and Subtitle: NASA's Supercomputing Experience
5. Report Date: December 1990
7. Author(s): F. Ron Bailey
8. Performing Organization Report No.: A-91024
9. Performing Organization Name and Address: Ames Research Center, Moffett Field, CA 94035-1000
10. Work Unit No.: 505-59
12. Sponsoring Agency Name and Address: National Aeronautics and Space Administration, Washington, DC 20546-0001
13. Type of Report and Period Covered: Technical Memorandum
15. Supplementary Notes: Point of Contact: F. Ron Bailey, Ames Research Center, MS 200-4, Moffett Field, CA 94035-1000, (415) 604-5065 or FTS 464-5065. Presented at Singapore Supercomputing Conference 1990, Singapore, December 11-12, 1990.
17. Key Words: Supercomputing; Computational science; Numerical simulation
18. Distribution Statement: Unclassified-Unlimited, Subject Category 59
19./20. Security Classification (of this report/page): Unclassified
21. No. of Pages: 30
22. Price: A03

For sale by the National Technical Information Service, Springfield, Virginia 22161
