
Running MSEAS

Wayne Leslie, editor


Pat Haley
Pierre Lermusiaux
Matt Ueckermann
Oleg Logutov
Jinshan Xu

Cambridge, MA
September 27, 2011
Contents

1 Introduction 2

2 Setting-up Modeling Domains 3

2.1 GRIDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2.2 Set up grid for large domain . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2.3 Mask the large domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.4 Set up grid for small domain . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.5 Mask the small domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

3 Data Acquisition and Preparing Data for use in MSEAS 7

3.1 Examples of Data Acquisition and Processing
(as used in OOI project) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

3.1.1 Rutgers Glider Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

3.1.2 Forcing (Data) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

3.1.3 Forcing (Model) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

3.1.4 SST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

3.1.5 SSH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

3.1.6 Gulf Stream Feature Analysis . . . . . . . . . . . . . . . . . . . . . . 14

4 Gridding Data in MSEAS (Objective Analysis – OAG) 15

4.1 Set up global objective analysis (OAG) . . . . . . . . . . . . . . . . . . . . 15

5 Preparing Fields for the Dynamical Models in MSEAS 17

5.1 PE initial: preparing OA fields for PE model . . . . . . . . . . . . . . . . . . 17

5.2 PE Forcing: Creating atmospheric forcing for the PE model . . . . . . . . . 18

5.2.1 Acquiring the METCAST (NOGAPS and COAMPS Real-Time) Data 18

5.2.2 Acquiring the NOGAPS Archive Data . . . . . . . . . . . . . . . . . 21

5.2.3 COAMPS Archive Data . . . . . . . . . . . . . . . . . . . . . . . . . 22

5.2.4 Review and cross-comparison . . . . . . . . . . . . . . . . . . . . . . 24

5.2.5 Preparing Forcing Data for the PE Model . . . . . . . . . . . . . . . 25

6 Setting up and running Barotropic Tide Calculations 28

7 PE: running PE model 29

7.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

7.2 Formulation Of The MSEAS Free Surface Primitive Equation Model . . . . . 31

7.2.1 Continuous Free Surface Primitive Equations . . . . . . . . . . . . . . 32

7.2.2 Control Volume Formulation of the Free Surface Primitive Equations 33

7.2.3 Temporal Discretization . . . . . . . . . . . . . . . . . . . . . . . . . 34

7.2.4 Time Dependent, Nonlinear “Distributed-σ” Spatial Discretization Of
The Free Surface Primitive Equations . . . . . . . . . . . . . . . . . . 37

7.3 Fully Implicit Nesting Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . 44

7.4 Domains, Initialization, Tidal Forcing and Surface Elevation: Algorithms and
Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

7.4.1 Setting Up Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

7.4.2 Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

7.4.3 Tidal Forcing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

7.4.4 Solving The Equation for the Surface Elevation . . . . . . . . . . . . 54

8 Model Products for the Web 55

9 Model Web Pages 55

1 Introduction

This document provides information on some of the steps necessary to set up and run portions
of the MSEAS system. In general it does not provide background information (equations,
references, etc.). That information can be found elsewhere. The section on the PE model does
go into some detail of the background of the model. This document provides the foundation
for, and will evolve into, a more general users manual. Currently much of the documentation
is exercise-specific, i.e., the directories named here refer to a particular historical exercise.

The critical components of the MSEAS system described herein include:

• Grids
• PE-mask
• Data acquisition
• OAG (global objective analysis)
• PE-initial (PE-model initialization)
• PE forcing (atmospheric forcing)
• Tides
• PE-model (primitive equation model)
• Web distribution

The Grids program is used to set up a model grid for a selected ocean domain. The PE-
mask program is used to produce land/sea mask grid points by modifying an existing GRIDS
NetCDF file. The objective analysis package (OAG) is used to map irregularly positioned
measurement data onto the regular grid produced by GRIDS. PE-initial is used to prepare
initialization (and assimilation) fields for the PE-model. The PE-model is used to predict
the ocean state (temperature, salinity, velocity, etc.) in time and space. Barotropic tides are
generated using a local tidal model.

2 Setting-up Modeling Domains

The initial step in running MSEAS is to define a modeling domain (or set of domains for
nesting). The setting-up of modeling domains is accomplished with three separate packages.

1. GRIDS: a package to perform the basic definition functions.
2. PE mask: a package to define land masks for the domains.
3. Cond Topo: a MATLAB package to condition the domain topography for improved
Primitive Equation performance.

2.1 GRIDS

The GRIDS package enables the user to design a model domain for MSEAS. In particular,
the GRIDS package allows the user to:

• set-up and view the horizontal extent and resolution of a domain
◦ define horizontally collocated, nested sub-domains
• define the vertical discretizations
• extract topography from a gridded, evenly spaced database (usually netCDF)
◦ extract topography between GRIDS files in nested domain configurations
• condition the extracted topography for improved simulations
◦ clipping high and low values
◦ filtering

The GRIDS package, itself, is split-up into three programs:

grids

This is the primary program. It performs most of the operations described above. A complete,
detailed GRIDS manual is available.
Input: netCDF topography file
ASCII parameter file
Output: netcdf domain descriptor (GRIDS file)
GMETA (NCAR graphics) plot file

crs2fne

A utility to help design collocated nested sub-domains.

Input: GRIDS file (from above)
ASCII parameter file
Output: ASCII text

extract grids

A utility to align topographic and masking data between one GRIDS file and a nested
sub-domain GRIDS file.
Input: 2 GRIDS files (nested)
Terminal input
Output: Overwrites one of the GRIDS files

PE mask

The program PE MASK appends/modifies the land/sea mask data to an existing GRIDS
NetCDF file. The land/sea data is first computed from the raw bathymetry contained in the
GRIDS NetCDF by specifying a transition isobath (e.g. zero meters) between Land and Sea.
The user then modifies the mask to remove problem spots (isolated ocean points, channels
too narrow, etc).

COND TOPO

COND TOPO is a series of MATLAB scripts designed to reduce the slope of the topography
for use in the Primitive Equation model. These scripts are designed to modify only the
topography in areas where the slope exceeds a user-specified value according to a user-
specified measure.

2.2 Set up grid for large domain

Create the directory /projects/projectname/Grids/Large Domain

Copy template files into the working directory:


cp /projects/template/Grids/make grid /projects/projectname/Grids/Large Domain
cp /projects/template/Grids/startup.m /projects/projectname/Grids/Large Domain
cp /projects/template/Grids/steal mask.m /projects/projectname/Grids/Large Domain

In addition to the files listed above, the following files will also be used during the procedure:

check mask.m
get dom.m
get coast.m
steal mask.m

You must ensure that these files exist within your MATLAB path, either in the local directory
or in a pre-defined folder on the path.

Step 1 - edit the script make grid

Make the appropriate edits to define the type of grid (COORD), the grid spacing (DX,
DY), the latitude and longitude of the center of the grid (LONC, LATC), grid center offsets
for nesting (DELX, DELY), rotation (ROTANG), number of grid points in each direction
(NO X, NO Y), number of vertical levels (NO LVL) and the thicknesses of the levels (DZT):

0 COORD grid type: [0] cart. [1] geo. sph. [2] rotated sph.
1000.0 DX (m) Grid spacing in x-direction.
1000.0 DY (m) Grid spacing in y-direction.
-72.7 LONC (deg) Longitude of center of grid.
39.1 LATC (deg) Latitude of center of grid.
0.0 DELX (m) X-offset, grid-trans. centers.
96000.0 DELY (m) Y-offset, grid-trans. centers.
60.0 ROTANG (deg) Angle grid is rotated with respect to East.
173 NO X Number of grid points in x direction.
156 NO Y Number of grid points in y direction.
30 NO LVL Number of vertical model levels.
1.0 DZT (m) Thicknesses of T boxes.
.
.
.
t HYBRID [t] hybrid system [f] step levels
14 KC coordinate interface level.
t OPTIMUM [t] change DZTs for hcf [f] do not change DZTs

DZT defines the thickness of the T boxes to specify the vertical grid. However, when
optimizing, which is normally the case, the thicknesses entered are just placeholders; in
effect, they serve as a count of the levels. KC defines the number of levels in the top sigma
layer.

Note that there are two steps in the make grid process. It is critical that the second pass (at
the bottom of the file) be consistent with the first pass. The first step generates a first-cut
grid; the second step provides cutoffs for the minimum and maximum allowable depths
(HSHLLW, HDEEP) and adds local Shapiro or median filtering.

Step 2 - run make grid

Once the code has been run, review the plotted results to verify the quality: idt gmeta.filename

• First page of plots depicts the grid
• Second page of plots shows the bottom topography
• Third page of plots identifies the locations of invalid coordinate systems; for a first cut,
all may be bad

Secondly, review the results for the filtering on the grid/topography: idt gmeta.filename-filt

• First page of plots shows the clipped and filtered bottom topography
• Second page of plots shows the hydrostatic consistency factor for the upper system
• Third page of plots shows the hydrostatic consistency factor for the lower system

Iterate the results as necessary. Problems may not be evident until the dynamical model is
run. In this case, iterate further.

2.3 Mask the large domain

Run pe mask within MATLAB

Using the GUI:

• select grids large domain.nc
• select large domain coastline data

(files can be found in /projects/template/Grids/Large Domain)

Use check coast.m to check the coastline.

Run steal mask within MATLAB

In order to create a land-mask that will have no problems, the script steal mask must be
run. Edit the steal mask.m file and insert the appropriate file names for gmskfile and gridfile.
Gmskfile contains a previously created mask; gridfile is the new output.

The automatically generated mask has a number of problems. The initial guess is not always
good, and there can be problems where there are single, isolated points, or very thin entries
to bays. Also, three points need to be the same on nested domains.
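The kinds of mask problems described above can be screened for automatically. Below is a minimal sketch (not part of the MSEAS distribution; the function name and 0/1 mask layout are illustrative) that flags isolated ocean points in a land/sea mask:

```python
def isolated_ocean_points(mask):
    """Return (row, col) of ocean points (1) with no ocean neighbors.

    mask is a list of rows; 1 = ocean, 0 = land.  Only the four
    cardinal neighbors are checked.
    """
    ny, nx = len(mask), len(mask[0])
    bad = []
    for j in range(ny):
        for i in range(nx):
            if mask[j][i] != 1:
                continue
            neighbors = 0
            for dj, di in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                jj, ii = j + dj, i + di
                if 0 <= jj < ny and 0 <= ii < nx and mask[jj][ii] == 1:
                    neighbors += 1
            if neighbors == 0:
                bad.append((j, i))
    return bad

mask = [[0, 0, 0, 0],
        [0, 1, 0, 0],   # isolated ocean point at (1, 1)
        [0, 0, 1, 1]]   # connected ocean points
print(isolated_ocean_points(mask))  # -> [(1, 1)]
```

A similar neighbor count (e.g. flagging ocean points with exactly one ocean neighbor) could be used to spot one-point-wide channel entries.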

2.4 Set up grid for small domain

Create /Grids/Small Domain


Copy make grid and *.m files
edit make grid
For nested domains, either X or Y (or both) may be shifted
Again, run pe mask within MATLAB

pe mask

• OOI grids file

• Coastline file from awacs
• make a blocky mask and save.

2.5 Mask the small domain

Extract grids - extract a mask from the large domain to the small domain
/home/phaley/PhilEx/Bin/extract grids
give large domain .nc file
give small domain .nc file
extract from large to small
extract the mask.

check coast.m gives a good start for the masking of the small domain. It is used to check the
mask of the big domain near, but outside, the small domain; this shows any problems with
alignment.

cp /projects/qpe/Grids/UNH4p5km/check coast.m
cp -r /projects/qpe/Grids/NestBeta/*.m .
edit check coast.m
Change file names for coarse and fine grids
crsfile = ’sw06/.....’
fnefile = ’OOI/.....’

run check coast.m - for OOI the initial run showed we needed an 18 km shift of the small
domain; otherwise there would have been problems.

Re-edit make grid to shift the grid by the correct amount (for OOI, DELY was changed to 114000).

The process of grid definition and subsequent masking is likely to be iterative. The entire
process could be repeated in order to eliminate potential problems and to define a grid which
covers an appropriate oceanic region.

3 Data Acquisition and Preparing Data for use in MSEAS

The examination and preparation of data for use in MSEAS is an important undertaking.
During our real-time forecasting operations, nearly half of our resources will be devoted to
it. Data should never be put into a dynamical model without an examination for consistency
and reliability.

The recommended data format is an ASCII format called MODS. This format is described
in Readme.datamng. The primary advantage of an ASCII format (besides portability) is the
ease it provides the user for “hands on” editing.

Datamng
This package is a series of programs to manage data for MSEAS. Currently this package is
geared towards in situ profile data. The main purpose of this package is to formalize the
most frequent operations performed on the data. Functionally, this package can be described
as:

Inquiry codes
minmax Reports min/max statistics of the data.
timestat Reports time statistics of the data.
File manipulation codes
cat hydro Concatenates multiple MODS data files.
select Extracts casts based on cast id.
select depth Extracts casts based on minimum allowed depth.
selpoly Extracts casts based on a polygon.
selpos Extracts casts based on position.
seltime Extracts casts based on time.
thinclimo Reduces gridded climatology resolution by factor of 2.
Data manipulation codes
add top Adds a surface value to data.
shifttime Adds a constant time shift to the data.
smooth casts Smooths the data with linear or Gaussian filters.
Data conversion codes
convcast Converts data file formats.
saclant2mods Convert SACLANT formatted data to MODS.

There are a large number of codes which will convert incoming data in ascii or netcdf format
to the MODS format. Generally, a new code must be written for each set of incoming data,
as data providers do not adhere to any common standard.

AddSalt
This program adds salinity to temperature-depth profiles. The purpose of the program is to
eliminate a particular type of inconsistency. Say a CTD survey (depth, temperature, salinity)
is augmented with XBTs (depth, temperature). The end result can be a mis-match between
the temperature and salinity data. The worst case scenario would be the creation of fictitious
density fronts from the mis-match. The goal is to create consistent salinities based on the
local water mass properties. This program should be used with caution. A poorly designed,
inconsistent salinity is worse than none at all.
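The idea behind AddSalt can be illustrated with a toy version: fit a local T-S relation from the CTD casts, then use it to assign salinities to the temperature-only XBT casts. This is a hedged sketch of the concept only (a simple linear fit; the actual AddSalt program treats the local water mass properties more carefully):

```python
def fit_ts_relation(temps, salts):
    """Least-squares linear fit S = a*T + b from CTD (T, S) pairs."""
    n = len(temps)
    mt = sum(temps) / n
    ms = sum(salts) / n
    cov = sum((t - mt) * (s - ms) for t, s in zip(temps, salts))
    var = sum((t - mt) ** 2 for t in temps)
    a = cov / var
    b = ms - a * mt
    return a, b

def add_salt(xbt_temps, a, b):
    """Assign salinities to temperature-only (XBT) profiles."""
    return [a * t + b for t in xbt_temps]

# Illustrative CTD data following S = 0.1*T + 33:
ctd_t = [5.0, 10.0, 15.0, 20.0]
ctd_s = [33.5, 34.0, 34.5, 35.0]
a, b = fit_ts_relation(ctd_t, ctd_s)
print(add_salt([12.0], a, b))  # approximately [34.2]
```

The caution in the text applies equally here: a T-S fit taken from the wrong water mass would produce exactly the fictitious density structure AddSalt is meant to avoid.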

3.1 Examples of Data Acquisition and Processing
(as used in OOI project)

3.1.1 Rutgers Glider Data

The directory in which this data is collected is /projects/ooi/Data/Synoptic/Rutgers.

Step 1, run ’./daily data get.com’

The script to collect the data is named “daily data get.com”. This script calls a matlab
script (opendap2mods XXXX.m) for each the four currently operating gliders. These scripts
download the glider data via OpenDAP and generate mods files. Under routine conditions
this script should not need to be modified. However, if RU15 is re-launched, it will need to be
re-activated in “daily data get.com”. If this happens, the OpenDAP file name for this glider
will need to be updated in “opendap2mods ru15.m”. The OpenDAP file names can be seen at
http://tashtego.marine.rutgers.edu:8080/thredds/catalog/cool/glider/mab/Gridded/catalog.html.

In order to allow all users to successfully run the script, specific matlab paths have been
included at the start of the script. At a future time, this will be generalized.

In each opendap2mods XXXX.m file, for example, “opendap2mods ru05.m”, there is a line:
url=’http://tashtego.marine.rutgers.edu:8080/thredds/dodsC/cool/glider/
mab/Gridded/20091030T0000 osse ru05 active.nc’; this is the filename accessed via
OpenDAP each time the script is executed. The filename is assumed to remain unchanged
throughout the exercise period. If the script fails to execute properly, check this line and
verify that the file still exists on the remote site.

It is also easy to see whether a new glider has been launched by checking the link above.

Step 2, edit and execute ’./daily data prep.com’

The next script is called “daily data prep”. There is a sequence of these for different sets of
dates. We plot the data together in groups of three days, so the “daily data prep” scripts
(.com) are set for groups of one, two or three days.

For example, on Nov 9, 2009, we have a three-day group to show (data from Nov07 to
Nov09), so we need to use a three-day file, which is named daily data prep Nov07-09.com.
The template for this three-day file was daily data prep Nov04-06.com. It was copied to
daily data prep Nov07-09.com. On Nov 10, 2009, we need to use one-day file, i.e. copy
daily data prep Nov07.com to daily data prep Nov10.com.

The most important editing necessary is to match the mods files names to those in the script.

The mods files are created with a string that includes year, month, day and time. These are
usually the same for all four gliders but can vary. Next the dates for which the data needs
to be aggregated must be edited. These edits occur throughout. It is the most tedious and
time-consuming part of the process. Be careful of dates in file names and Julian dates to be
specified for “seltime”.
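When editing these dates, a quick way to double-check the Julian day numbers passed to “seltime” is a few lines of Python. The day-of-year convention is assumed here; the exact Julian reference used by MSEAS may differ, so verify against a known value first:

```python
from datetime import date

def day_of_year(year, month, day):
    """Day-of-year style Julian date (Jan 1 = 1)."""
    return date(year, month, day).timetuple().tm_yday

print(day_of_year(2009, 11, 9))   # -> 313
print(day_of_year(2009, 10, 25))  # -> 298
```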

The aggregation of data for assimilation is done very early in the script. The data is com-
bined together using “cat hydro”. It is then selected for the period Oct 25 to present using
“seltime”. The data is extended to the near-surface (add top) and thinned (reduced to every
tenth cast) using “thin”.
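The aggregation steps above (concatenate, select by time, thin) can be mimicked in a few lines. This is an illustrative stand-in for the MODS utilities, assuming each cast is a (julian day, payload) pair:

```python
def seltime(casts, t0, t1):
    """Keep casts whose time falls in [t0, t1] (like 'seltime')."""
    return [c for c in casts if t0 <= c[0] <= t1]

def thin(casts, every=10):
    """Keep every n-th cast (like the factor-of-10 thinning)."""
    return casts[::every]

casts = [(t, "cast%d" % t) for t in range(290, 320)]   # days 290..319
kept = seltime(casts, 298, 313)        # Oct 25 - Nov 9, 2009 (day-of-year)
print(len(kept))                       # -> 16
print([c[0] for c in thin(kept, 10)])  # -> [298, 308]
```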

The plots generated are copied to sub-directories under /srv/www/htdocs/Sea exercises/OOI-OSSE09/Data.
Each three-day set of data goes into a sub-directory identified by the first
date of the three-day grouping. Today’s (Nov. 09) data will complete a three-day (Nov07-09)
cycle. It goes into the “Nov07” directory. The next grouping will go into a “Nov10” directory.

Local web page - http://mseas.mit.edu/Sea exercises/OOI-OSSE09/Data/index gliders.html.
Directory - /srv/www/htdocs/Sea exercises/OOI-OSSE09/Data

The index page is segmented for each three day window. The names of the images must be
edited in the index page to match the new plots. I usually move the images for the first day
(of the three days) into an “Older” sub-directory and the images for the second day into an
“Old” directory. This reduces the clutter.
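The clutter-reduction step (first day's images to “Older”, second day's to “Old”) can be scripted. A sketch in Python, assuming the image names contain the date string (e.g. “Nov07”); the directory layout is as described above, but the function itself is not part of MSEAS:

```python
import os
import shutil

def rotate_images(webdir, older_day, old_day):
    """Move day-1 images to Older/ and day-2 images to Old/."""
    for sub in ("Older", "Old"):
        os.makedirs(os.path.join(webdir, sub), exist_ok=True)
    for name in os.listdir(webdir):
        src = os.path.join(webdir, name)
        if not os.path.isfile(src):
            continue  # skip the Older/Old sub-directories themselves
        if older_day in name:
            shutil.move(src, os.path.join(webdir, "Older", name))
        elif old_day in name:
            shutil.move(src, os.path.join(webdir, "Old", name))
```

For the Nov07-09 grouping, this would be called with older_day="Nov07" and old_day="Nov08", leaving the Nov09 images in place.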

3.1.2 Forcing (Data)

PC to mseas: The NOGAPS (and COAMPS) forcing is being retrieved automatically every
three hours by a program (Metcast) running on my PC. I have turned off automatic updates
and the screen saver on my PC so it should always be available. There is a Desktop Icon for
this program; currently it is 5 icons up from the lower left corner in the first column. There
is a small square blue icon (a quick launch icon) in the system tray at the bottom to the
far right. It is entitled the “Metcast Retriever Monitor”. When double-clicked it will show
the status of the current session. Towards the top it should say something like “Retrieval
transaction worked properly” and “Now sleeping until 1000”. With luck this will continue
to work throughout the week. It usually does. If not, we have to restart the session.

To restart the Metcast session:


On the left of the monitor is an icon (see above) that says “Metcast Client”. Open it. There
should be a map icon identified as “ORION” with a big check mark in it. Look towards the
top where it says “Area/List”. Click on that. The drop-down should have an entry that
says “Schedule” and it should have a check mark next to it. To restart the session, click the
“Schedule” entry to turn off the check mark and then click it again to restart it. Metcast
should now be successfully running once again. To verify that the session is running properly,
open the “Metcast Retriever Monitor” and ensure that there is a message “Now sleeping until
<next time>.”.

There is a “WinSCP” session continually running to keep moving files from the PC to mseas.
It is set up to keep the PC folder and my directory on mseas synchronized. However, once
files are processed from the incoming format the synchronization can get out of sync. When
that happens a big window comes up on the PC asking what to do. Click on “Skip All” and
things will get back into sync.

Conversion from grib format: The files are received in “grib” format. They are processed in
the directory /home/wgleslie/jmvwin/noddsfls/ORION/GRIB. In this directory is a script
called “wgrib.com”. Execute the “wgrib.com” script and the files will be converted and
moved.

Subsampling: The incoming files cover a region larger than we need for the project. Therefore
they must be sub-sampled and moved into the OOI area. This takes place in
/projects/ooi/Data/Synoptic/Nodds/2009/Nov/NOGAPS. There is a script in this directory
called “files to script.com”. It needs one small edit. There is a date attached to a script that
this script creates and runs. The whole thing could be changed but this is the current set-up.
Edit the date to match today’s date. Then execute the script. The files will be sub-sampled
and moved to their home, which is /projects/ooi/Data/Atm/NOGAPS/2009/Nov.
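The sub-sampling step amounts to extracting the sub-grid inside the project box. A sketch, assuming simple 1-D coordinate lists (the real files are GRIB/netCDF; the function name and values here are illustrative only):

```python
def subsample(lons, lats, field, lon_box, lat_box):
    """Extract the sub-grid of 'field' falling inside the lon/lat box."""
    icols = [i for i, x in enumerate(lons) if lon_box[0] <= x <= lon_box[1]]
    jrows = [j for j, y in enumerate(lats) if lat_box[0] <= y <= lat_box[1]]
    sub_lons = [lons[i] for i in icols]
    sub_lats = [lats[j] for j in jrows]
    sub = [[field[j][i] for i in icols] for j in jrows]
    return sub_lons, sub_lats, sub

lons = [-80.0, -77.0, -74.0, -71.0, -68.0, -65.0]
lats = [33.0, 36.0, 39.0, 42.0, 45.0]
field = [[10 * j + i for i in range(len(lons))] for j in range(len(lats))]
sub_lons, sub_lats, sub = subsample(lons, lats, field, (-77, -67), (35, 44))
print(sub_lons)  # -> [-77.0, -74.0, -71.0, -68.0]
print(sub_lats)  # -> [36.0, 39.0, 42.0]
```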

3.1.3 Forcing (Model)

Makeflux: The next step occurs in the directory /projects/ooi/Data/Atm. We need to run a
code to make the fluxes. The first step is to edit a script called “new day.com”. This creates
the files necessary to make the fluxes. The script is taking the last working set of files and
making a new set from them. This is usually done by naming the output files today’s date
and the input files are the most recently run date. There are two lines which look like this:

sed --in-place -e 's/11 12/11 13/g' mkmkflux nogaps 08Nov2009.in

This line is changing the ending date for the forcing. The pair of numbers stands for a
month/date combination. Under normal circumstances we would increment the date by one.
Running the “new day.com” script will generate the next script to be run. For today it will
be called “FrcJob 09Nov2009”. Execute this script. It will create the flux files for today.
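Because these edits only ever advance the date by one day, the sed substitution can be generated rather than typed by hand. A sketch (the “11 12” → “11 13” pattern matches the month/day pair in the example above; the helper name is illustrative):

```python
from datetime import date, timedelta

def next_day_sed(year, month, day):
    """Build the sed substitution that advances the forcing end date."""
    old = date(year, month, day)
    new = old + timedelta(days=1)
    return "s/%d %d/%d %d/g" % (old.month, old.day, new.month, new.day)

print(next_day_sed(2009, 11, 12))  # -> s/11 12/11 13/g
print(next_day_sed(2009, 11, 30))  # month rollover: -> s/11 30/12 1/g
```

Using datetime arithmetic instead of simply incrementing the day avoids mistakes at month boundaries.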

PE forcing: The final step is to create the PE forcing file for the PE model. This is done in
domain-specific and date-specific sub-directories under /projects/ooi/Data/Atm/NC Files.
There is a “sed file” and “sed script” pair that does most of the work necessary to produce
the files necessary. “sed file” contains the dates that need to be changed. Usually these are
each incremented by one. “sed script” contains the names of the files to be changed. Edit
“sed script” so that the newest directories will be created from the most recent directories.

Again, this is usually accomplished by incrementing the dates by one.

Now run sed script. sed script will produce a “submit jobs” script. This script should have
no need for edits. Once sed script has been run, execute the newly modified “submit jobs”
script. This will submit jobs to the compute nodes to make the PE forcing file.

Plots of the wind should be made in the “SW06” sub-directory for the day. This is simple.
Execute the “PlotJob nogaps wind” script to create the gmeta file for the plots. Edit
the “movem wind” script to create gifs of the winds and move those gifs to the directory
from which these can be viewed on the web (/srv/www/htdocs/Sea exercises/OOI-OSSE09/Winds).
“movem wind” will have to be edited to ensure that the plot of the last day of interest is
moved. For example (Nov 10): add the line “mv med101.gif nogaps wind Nov13.gif”; note
that the previous line is “mv med097.gif nogaps wind Nov12.gif”, so add 4 to 97 to get 101,
and change Nov12 to Nov13.
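The “add 4” bookkeeping (four plot frames per day) can be captured in a small helper. A sketch, with the step of 4 and the file-name pattern taken from the Nov12/Nov13 example above (spelling of the gif name should of course match the actual “movem wind” script):

```python
def next_wind_move(prev_frame, next_day, step=4):
    """Build the next 'movem wind' line from the previous frame number."""
    frame = prev_frame + step
    return "mv med%03d.gif nogaps wind %s.gif" % (frame, next_day)

print(next_wind_move(97, "Nov13"))  # -> mv med101.gif nogaps wind Nov13.gif
```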

The daily averages of the winds are created by running the script “coamps ave.m”. This
should not need any editing. Run this script and then move the png files to the wind web
directory (specified above). Move to this directory and edit the two index files to include the
last date.

3.1.4 SST

• Jet Propulsion Lab

Local web site - http://mseas.mit.edu/Sea exercises/OOI-OSSE09/Background/
Comparisons/SSTJPL/index sst.html
Directory - /srv/www/htdocs/Sea exercises/OOI-OSSE09/Background/
Comparisons/SSTJPL
There is a “wget.com” file in the directory. Edit the date in the filename and execute
the script. The data files usually lag the current day by two days. The index page
already has the appropriate dates and image names in place. It is only necessary to
move the comment pointers

• JHU-APL – 3-day Composites

Local web site - http://mseas.mit.edu/Sea exercises/OOI-OSSE09/Background/
Comparisons/SSTJHU/index sst.html
Directory - /srv/www/htdocs/Sea exercises/OOI-OSSE09/Background/
Comparisons/SSTJHU
Images are downloaded from - http://fermi.jhuapl.edu/avhrr/gs n/09nov/index thumb.html
Right-click on the most recent “3-Day” and save the image in the original name. Copy
the image to the appropriate directory and edit the index page as necessary.

• JHU-APL – Snapshots

Local web site - http://mseas.mit.edu/Sea exercises/OOI-OSSE09/Background/
Comparisons/SSTJHU/Snapshots/index sst.html
Directory - /srv/www/htdocs/Sea exercises/OOI-OSSE09/Background/
Comparisons/SSTJHU/Snapshots
Pick an image that appears to be significantly clear. Click on the thumbnail to bring
up a full-size image. Right-click on the image and “Save image as” the original file
name. To have the potential to be turned into a netcdf file, the png image must be
converted into a gif image. On a PC this can be done with the Microsoft Office Picture
Manager. Under “File” click on “Export”. There will be a pull-down on the right that
says “Export with this file format”. Set this to “GIF” and click on OK. Move both
files to the appropriate directory and edit the index page as necessary.

• Oceanwatch

Local web site - http://mseas.mit.edu/Sea exercises/OOI-OSSE09/Background/
Comparisons/OceanWatch/index.html
Directory - /srv/www/htdocs/Sea exercises/OOI-OSSE09/Background/
Comparisons/OceanWatch
Images and netcdf files are downloaded from -
http://oceanwatch.pifsc.noaa.gov/las/servlets/constrain?var=66
Choose the output form as “Color Plot”. Set the longitude and latitude ranges as
(35N, 44N, 77W, 67W). Where it says “Show reference map”, set it to “no”. Once
these are set, click on the red “Next”. An image will pop up. Save the image in the
form “ooi oceanwatch sst 102709.gif”. Copy the file to the appropriate directory. Edit
the index file and add the date to one row and copy, paste and edit the image name in
the following row.

3.1.5 SSH

• University of Colorado

Local web site - http://mseas.mit.edu/Sea exercises/OOI-OSSE09/Background/
Comparisons/Altimeter/index ssh.html
Directory - /srv/www/htdocs/Sea exercises/OOI-OSSE09/Background/
Comparisons/Altimeter
Images downloaded from - http://argo.colorado.edu/ realtime/global realtime/geovel.html
For the track data: http://argo.colorado.edu/ realtime/global realtime/
At the Colorado web site, the date for the image is chosen from the drop-down menu.
Longitudes are set to 283 and 293 (minimum and maximum, respectively). Latitudes
are set to 35 and 44 (minimum and maximum, respectively). Where it says “Plot
Velocity Vectors” choose “No”. Click on the button at the bottom that says “Submit
Values”. This will bring up a page with a picture of a satellite and text that says “Click
on the picture of TOPEX to plot.”. Click on the picture. Right click on the picture and
save it in the form “ooi ssh 110709.gif”. The index page already has the appropriate
dates and image names in place. It is only necessary to move the comment pointers.

• Oceanwatch

Local web site - http://mseas.mit.edu/Sea exercises/OOI-OSSE09/Background/
Comparisons/OceanWatch/index.html
Directory - /srv/www/htdocs/Sea exercises/OOI-OSSE09/Background/
Comparisons/OceanWatch
Images and netcdf files are downloaded from -
http://oceanwatch.pifsc.noaa.gov/las/servlets/constrain?var=49
Choose the output form, either “Color Plot” or “Netcdf file”. Set the longitude and
latitude ranges with the same values as above but different format (35N, 44N, 77W,
67W). Where it says “Show reference map”, set it to “no”. Once these are set, click on
the red “Next”. An image or downloadable netcdf file will pop up. Save the image in
the form “ooi oceanwatch ssh 102709.gif” and the netcdf file with the same file name
but with an “nc” extension.

3.1.6 Gulf Stream Feature Analysis

Local web site - http://mseas.mit.edu/Sea exercises/OOI-OSSE09/Background/Comparisons/
GSNCOFA/index.html
Directory - /srv/www/htdocs/Sea exercises/OOI-OSSE09/Background/Comparisons/GSNCOFA

Images downloaded from -
https://oceanography.navy.mil/legacy/web/LIBRARY/Metoc/Atlantic/Regional+NATL/
SATANAL/OFA/Color+Composite/index.html

The images all have the same original name “gsncofa.gif” - I then move them to a name of
the form “gsncofa 110909.gif”. On the local web page, the images are put three in a row.
Add the date to one row and copy, paste and edit the image name in the following row.

Note that if the images are not downloaded in a timely fashion, they will be lost. No
repository for these images has been located, as yet.

4 Gridding Data in MSEAS (Objective Analysis – OAG)

The mapping of irregularly positioned observations onto three-dimensional gridded fields is
accomplished using Objective Analysis techniques. Objective Analysis utilizes the Gauss-Markov
or minimum error variance criterion to map the available data onto horizontal grids. The
process repeats for different vertical levels and analysis times. MSEAS includes two flavors
of Objective Analysis: the full-matrix (global) Objective Analysis (OAG) and a local
approximation (OA). Both OA’s are actually 2-level OA’s, producing first a slowly varying
“mean” field from synoptic data and/or climatology; a second-level OA then maps the
synoptic data onto this mean field.

The climatology used may vary, depending on the particular application. We generally use
the World Ocean Atlas, a product of the Ocean Climate Laboratory of the NODC
(http://www.nodc.noaa.gov/OC5/indprod.html). The current version of the 1-degree
resolution product is the World Ocean Atlas 2009. A higher resolution version (1/4 degree)
is available at http://www.nodc.noaa.gov/OC5/WOA01/qd ts01.html. The climatologies are
available on a monthly, seasonal and annual basis.

It is very important to review the climatological profiles. This is necessary both for
comparison with the available in situ data and to ensure that the climatological profiles
properly represent the area in question. In regions where there are land masses, there are
circumstances where the climatological field is recovered from ocean areas not applicable to
the area under study.

OA The local objective analysis program uses a local approximation to the full correlation
matrix. In particular, it allows the user to limit the contributions to a pre-specified number
of the most strongly correlated points (historically, this package was developed after a
student “burned up” an entire year’s allocation on a supercomputer for one analysis). This
approximation gives the local OA an advantage in speed, but tends to produce noisier output.

OAG The global objective analysis program inverts the entire correlation matrix. This
produces naturally smooth fields, at the cost of increased run time and memory. In cases
where these costs are acceptable, the global OA is highly recommended over the local OA.

4.1 Set up global objective analysis (OAG)

Move to appropriate local directory:


cd /projects/projectname/OAG/year/date (e.g. /projects/projectname/OAG/2009/Nov09)

cp existing/* date (e.g. cp Nov08/* Nov09)

Files in the directory to be aware of:
AddSSTjob: pastes SST onto OA
CstDenJob: creates mean TS profile
day.table: contains information regarding the runs in this directory
OaJob: script which runs the OA code
Setupjob: script to create the sub-directories within this directory
SubScript: script to record job submissions and any parameters

Next, copy one sample directory for each domain (MAB and NJS refer to a pair of nested
domains):
CpPE Nov08/MAB01 MAB01
CpPE Nov08/NJS01 NJS01

Examine the data file for various statistics by running the following two programs:
minmax – overall statistics; note the number of profiles
timestat – starting time, median time and final time

Now set up the OA runs:

• cd into the first directory (e.g. MAB01)

• edit oag.in to set the time for the first OA and the new data file

• edit PlotJob to set the time (same as oag.in)

• cd into the next directory (e.g. NJS01) and repeat the above edits

• cd back up to the day directory (e.g. Nov09) and edit setupjob

1. set cntst (count-start) to the index of the first directory (usually 1)

2. set jdayst (julian date start) to the integer julian date set in oag.in
3. set frcst (fractional day starting index) to the index of jdfrac that corresponds to
the fractional part of the julian date set in oag.in
4. set lastfullday to the julian date of the last desired OA day that will have a full
set of fractional days (e.g. if setting up for 6 OAs at a quarter-day interval, we'll
have a complete set on the first day followed by 2 more OAs on the second day;
lastfullday would then be set equal to jdayst)
5. set frcfnl (fractional day final index) to the index of jdfrac that corresponds to the
fractional part of the last OA day desired (e.g. following the example for lastfullday,
frcfnl would be 2)
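The worked example above (6 OAs at quarter-day intervals) can be expressed concretely. The variable names follow setupjob, but the values and the closing consistency check are illustrative, not part of the real script:

```shell
# Illustrative setupjob settings for 6 OAs at quarter-day
# intervals, with a complete set of 4 OAs on the first day.
# Assumes jdfrac lists the fractions 0.00 0.25 0.50 0.75
# at indices 1-4; the julian date itself is made up.
cntst=1                 # index of the first directory
jdayst=55136            # integer julian date of the first OA (illustrative)
frcst=1                 # first OA at fraction 0.00 -> jdfrac index 1
lastfullday=$jdayst     # only the first day has all 4 fractional OAs
frcfnl=2                # last OA at fraction 0.25 on day 2 -> index 2

# consistency check: these settings imply 6 OAs in total
nfrac=4
ndays=$(( lastfullday - jdayst + 1 ))
total=$(( ndays * nfrac - (frcst - 1) + frcfnl ))
echo "$total OAs"
```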

Run setupjob to create additional subdirectories based on the first ones.

Edit SubScript, updating the stopping number in the while loop to correspond to the number
of directories created by setupjob.

Run SubScript to launch the OaJobs on the cluster.
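SubScript itself is not reproduced in this manual; a minimal sketch of the loop it implements (the directory naming and the submission command are assumptions, with echo standing in for the actual cluster submit) is:

```shell
# Sketch of a SubScript-style submission loop. The stopping
# number (nlast) is the hand-edited count of directories that
# setupjob created; echo stands in for the real job submission.
cnt=1
nlast=2
while [ "$cnt" -le "$nlast" ]; do
    dir=$(printf 'MAB%02d' "$cnt")       # e.g. MAB01, MAB02
    echo "submitting OaJob in $dir"      # stand-in for: (cd "$dir" && qsub OaJob)
    cnt=$(( cnt + 1 ))
done
```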

5 Preparing Fields for the Dynamical Models in MSEAS

These packages represent the final stage before running a dynamical model.

5.1 PE initial: preparing OA fields for PE model

This package takes gridded volume fields and prepares them for insertion into the MSEAS
PE model as either initialization and boundary condition data or as assimilation fields. Its
specific tasks include:

1. Assembly of the objectively analyzed fields with the model grid definitions.
2. Interpolation of the data from flat analysis levels to terrain-following model levels.
3. Construction of velocity fields via geostrophy. These velocity fields are then vertically
interpolated and decomposed into baroclinic and barotropic components. The determination
of the barotropic component is particularly sensitive, and PE_initial devotes considerable
machinery to it.
4. Scaling of error fields.
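As a reminder of what step 3 computes (the balance itself, not PE_initial's exact discretization), the velocities follow from hydrostatic pressure and horizontal geostrophic balance:

```latex
% Hydrostatic integration of the analyzed density, then horizontal
% geostrophic balance (f: Coriolis parameter, \rho_0: reference density):
\[
  \frac{\partial p}{\partial z} = -\rho g ,
  \qquad
  f\,\hat{\mathbf{k}}\times\mathbf{u}_g = -\frac{1}{\rho_0}\,\nabla_h p .
\]
```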

Move to appropriate local directory:


cd /projects/projectname/PE_initial/year/date (e.g. /projects/projectname/PE_initial/2009/Nov09)

cp existing/* date (e.g. cp Nov08/* Nov09)

Files in the directory to be aware of:


day.table contains information regarding the runs in this directory
PiJob script which runs the PE_initial code
Setupjob script to create the sub-directories within this directory
SubScript script to record job submissions and any parameters

Next, copy one sample directory for each domain:


CpPE ../Nov08/MAB01 MAB01
CpPE ../Nov08/NJS01 NJS01
And one concatenation directory for each domain:
CpPE ../Nov08/MABcat MABcat
CpPE ../Nov08/NJScat NJScat

Edit setupjob in the same manner as the one for the OA fields. One additional parameter to
set in this job – set moid (month ID) to the date directory name for the OAs (e.g. in the
OAG examples, set moid to "Nov09")

cd into first directory (e.g. MAB01)


edit pi_ass.in to set tstart and tstop to the modified julian date of the first OA and set
the OA input file

edit PlotJob to set the time (same as in pi_ass.in)
cd into the next directory (e.g. NJS01) and repeat the above edits
cd back up to the day directory (e.g. Nov09) and run setupjob

edit SubScript, updating the stopping number in the while loop to correspond to the number
of directories created by setupjob

run SubScript to launch the PiJobs on the cluster.

cd into the first concatenation directory (e.g. MABcat)


edit cat_ass_job to include the new assimilation file, eliminating any previous overlap files
cd into the next concatenation directory (e.g. NJScat) and repeat the above edits

When the PiJobs are done, submit the cat_ass_jobs

5.2 PE Forcing: Creating atmospheric forcing for the PE model

This package takes gridded surface flux fields (wind stress, net heat flux, evaporation-
precipitation, and shortwave radiation) and interpolates them onto the model grid.

5.2.1 Acquiring the METCAST (NOGAPS and COAMPS Real-Time) Data

Steps:

1. acquire the data


2. transfer the data from the PC to mseas
3. process the data – convert from grib to nodds format
4. sub-sample the data for the region of interest

Code/Scripts:

1. acquire the data – Metcast


2. transfer the data – WinSCP
3. process the data – wgrib_list.F, wgrib.com
4. sub-sample the data – files_to_script.com, ext_nogaps.F

Acquiring the data

Set-up: The Navy Operational Global Atmospheric Prediction System (NOGAPS) and Coupled
Ocean/Atmosphere Mesoscale Prediction System (COAMPS) forcing fields are retrieved
automatically by the METCAST program, which is currently running only on one Windows
PC. The potential for running on a Linux system is being investigated. There is a desktop
icon for this program; it is entitled "Metcast Client" and has the word "METOC" across the
image. This program was written by and acquired from the US Navy and is used to: create
domains for which to retrieve forcings, select products, set up requests (schedule retrieval),
and retrieve the forcings.

1. Start the program by clicking on the desktop icon – a screen entitled "METCAST
Requestor" is displayed (image at left, below).
2. To create a domain, click on "Area/List" and then "Create New Area" – the steps to
complete this task are self-explanatory.
3. To select products, a domain must first be chosen. Highlight one of the available domains
by clicking on it, then click on "Select Products" under "Area/List". We use "Grids:
US Navy FNMOC" products. When this is clicked, all potentially available products
are shown (image at center, below), along with the possible levels and available time
intervals (taus). Review the set of available products in order to select the appropriate
group. A product is selected by highlighting it and ensuring that the asterisk is located
next to it. Once done, click "OK" to finish the process.
4. To schedule the retrieval of the fields, click on "Setup Requests" under "Area/List".
The time interval at which the download is requested is flexible; it is set by specifying
the times at which the data is desired (image at right, below).

For most projects the NOGAPS and COAMPS forcing fields are set up to be automatically
retrieved via METCAST every three hours (see Step 4 above). The retrieval usually works
without problem, but there are instances when the retrieval process fails for unknown
reasons, so the status of the retrieval process must be monitored. When METCAST is
running, there is a small, square, blue icon (a quick-launch icon) in the system tray at the
bottom right of the screen, entitled the "Metcast Retriever Monitor". When the icon is
double-clicked, it shows the status of the current session. Towards the top it should say
something like "Retrieval transaction worked properly." and "Now sleeping until 1000". If
this text is not displayed, there is a problem and the METCAST session has to be restarted.

To restart the Metcast session (using a project named "ORION" as an example):


As described above, on the left side of the PC's desktop is an icon that says "Metcast
Client". Open it. There should be a map icon identified as "ORION" with a big check mark
in it. Look towards the top where it says "Area/List" and click on that pull-down. The
pull-down should have an entry that says "Schedule" with a check mark next to it. To
restart the session, click the "Schedule" entry once to turn off the check mark and then
click it again to turn it back on. Metcast should once again be running. To verify that the
session is running properly, open the "Metcast Retriever Monitor" and ensure that there is
a message "Now sleeping until <next time>".

Transferring the data from the PC to MSEAS

The files downloaded via METCAST must be transferred from the PC to mseas in a timely
fashion or they will be deleted from the PC. The automatic removal of the files is embedded
in the software; the time interval has been set to 72 hours. The METCAST system does not
archive the data at the originating end: once the files are gone from the PC, it is not
possible to recover them from METCAST. A "WinSCP" session is run continually to keep
moving files from the PC to the desired location on mseas. It is set up to keep the PC
folder and a chosen directory on mseas synchronized. However, the directories will get out
of synchronization as files are automatically deleted from the PC. When that happens,
WinSCP displays a large dialog on the PC asking what to do. Click on "Skip All" and
things will get back into sync.

Processing the data

Conversion from grib format: The files from the METCAST system are received in "grib"
format and processed into an ascii format called "nodds" format. The "grib" format was
developed by meteorologists; the name refers to "GRIdded Binary". The "nodds" format is a
very simple ascii format used by the Navy. The data files are processed (again using ORION
as an example) in the directory /home/username/jmvwin/noddsfls/ORION/GRIB. This directory
will be project-specific; it has been set up to mirror the folder structure of the METCAST
file acquisition on the PC (the "/jmvwin/noddsfls/ORION/GRIB" part). Scripts and codes
have been written to: determine what files are present, generate a script that will process
the "grib" formatted files into "nodds" format, convert the files, and then move the
processed files into a directory for storage. Once set up for the individual project, this
process can be automated, as shown below. The codes are project-specific for a few reasons:

1. filenames for the incoming METCAST files vary depending on the project location
globally and the product being downloaded – this requires the wgrib_list code to be
written to read that particular filename format
2. the code specifies where the output of the conversion process should go, and this is
specific to each individual project
3. the directory to which the data is written is generally specified by year and month – the
code must be edited to properly recognize the file names, times and directory locations

For ORION, this code is called wgrib_list_ORION.F and the script to run the resulting
executable is called wgrib.com. When the wgrib.com script is executed, the grib files are
converted into nodds format, the converted files are written to an appropriate directory,
and the original grib files are moved into an archive directory. If the files were not moved
to the archive directory, they would accumulate quickly during any lengthy experiment and
would be re-processed each time the script was run, which would soon become unwieldy and
consume unnecessary time. In addition, shell commands such as ls * can fail once a very
large number of files accumulate, so files must be moved to prevent this from happening.
The wgrib.com script can be run automatically via cron. An entry like the one below in a
crontab will run the script at 0530 daily:

30 5 * * * /home/username/jmvwin/noddsfls/ORION/GRIB/wgrib.com >> wgrib.log
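In outline, a wgrib.com-style script lists what arrived, converts each file, and archives the original so it is not reprocessed. The sketch below shows only that control flow; the `convert` function is a placeholder for the real wgrib_list/wgrib conversion, and all filenames are invented:

```shell
# Outline of the list/convert/archive cycle (illustrative only;
# the real script drives the wgrib utility via a generated script).
set -e
work=$(mktemp -d)
cd "$work"
mkdir -p archive nodds
: > US058.grib1           # fake incoming METCAST files
: > US059.grib1

convert() {               # placeholder for the grib -> nodds conversion
    base=$(basename "$1" .grib1)
    echo "nodds data for $base" > "nodds/$base.txt"
}

ls US*.grib1 > ls.out     # 1. list what is present
while read -r f; do
    convert "$f"          # 2. convert to nodds format
    mv "$f" archive/      # 3. archive so it is not re-processed
done < ls.out

nconv=$(ls nodds | wc -l)
narch=$(ls archive | wc -l)
```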

Subsampling: Incoming files generally cover a region much larger than is needed for a
particular project. It is helpful to sub-sample them before they are moved into the
appropriate final area. For example, for ORION, also known as OOI, this takes place in the
directories /projects/ooi/Data/Synoptic/Nodds/2009/Nov/[COAMPS,NOGAPS]. There is a script
in this directory called files_to_script.com. Currently the script requires one small edit
on a daily basis, to update the date embedded in the filename of a script that it creates
and subsequently runs; this will be changed, but it is the current set-up. Edit the date to
match today's date, then execute the script. The script creates a listing of the files and
executes a sub-sampling code (ext_ooi); the sub-sampled files are written to a sub-directory
and then moved to their home, which, in this case, is
/projects/ooi/Data/Atm/[COAMPS,NOGAPS]/2009/Nov/. The directory structure for the final set
of files, either COAMPS or NOGAPS, follows a set pattern, to allow the next step of the
process (mkmkflux) to properly locate the available files. The critical part of the
directory structure is to have /year/mon/ as the end of the directory tree. In general, we
have been using the convention
/projects/project_name/Data/Atm/forcing_type/year/mon. The ext_ooi code reads the full-
domain nodds output and produces output files for a smaller domain.
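The sub-sampling codes themselves (ext_ooi and relatives) are Fortran, but the operation reduces to keeping the grid points inside the target box. In miniature, on an invented three-column lon/lat/value listing (not the actual nodds record layout):

```shell
# Keep only points inside an illustrative sub-domain box
# (lon in [-75,-70], lat in [38,42]); the input layout is
# invented for the example, not the real nodds structure.
printf '%s\n' \
    '-74.0 40.0 15.2' \
    '-60.0 40.0 14.8' \
    '-72.5 39.0 16.1' \
    '-72.5 55.0  2.3' |
awk '$1 >= -75 && $1 <= -70 && $2 >= 38 && $2 <= 42' > subdomain.txt
nkept=$(wc -l < subdomain.txt)
```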

File information: Appendix I describes the files that were used for the QPE project of
Aug/Sep 2009 and gives a general synopsis of the directory structure and file names. This
information provides an example of what may be available for other projects. Other projects
will likely use a different set of products and may also find that a different set of
products is available for their particular time and location.

5.2.2 Acquiring the NOGAPS Archive Data

Steps:

1. acquire the data from archive via wget (ftp)


2. process the data – convert from grib to nodds format
3. sub-sample the data for the region of interest

Code/Scripts:

1. acquire the data – wget (ftp)


2. process the data – wgrib_list.F, wgrib.com
3. sub-sample the data – files_to_script.com, ext_nogaps.F

Acquiring the data

NOGAPS global data files can be individually acquired from the USGODAE archives. This
is a process entirely unrelated to the real-time METCAST process and has, to this point,
generally been carried out during a project reanalysis period. It could, however, be carried
out during a real-time experiment to provide an additional or alternative source of data. The
NOGAPS files are stored at USGODAE in daily directories. For example, for 1 August 2009,
the directory from which the files are acquired is
http://www.usgodae.org/ftp/outgoing/fnmoc/models/nogaps/2009/2009080100. An example
script to acquire all the data for the month of August 2009 is available as
wget_nogaps_godae_daily.com. This script uses wget to gather all the desired files. The
files from the NOGAPS archive are received in "grib" format and must be processed into the
"nodds" format.
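wget_nogaps_godae_daily.com is not reproduced here, but generating the daily directory URLs for a month follows directly from the naming pattern above; this sketch only echoes the URLs rather than fetching them:

```shell
# Build the USGODAE daily-directory URLs for August 2009.
# A real script would hand each URL to wget; here they are
# only collected and counted.
base=http://www.usgodae.org/ftp/outgoing/fnmoc/models/nogaps
year=2009
mon=08
urls=""
day=1
while [ "$day" -le 31 ]; do
    dd=$(printf '%02d' "$day")
    urls="$urls $base/$year/$year$mon${dd}00"
    day=$(( day + 1 ))
done
nurl=$(echo $urls | wc -w)
first=$(echo $urls | awk '{print $1}')
echo "$nurl daily directories, first: $first"
```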

Processing the data

Once the files are acquired:

1. create a listing of all the grib files (e.g. ls US* > ls.out)
2. run wgrib_list_qpe_nogaps (the source for this executable is wgrib_list_qpe_nogaps.F).
The ls.out file from step one is the input for wgrib_list_qpe_nogaps and the output
is the script that you will use to convert the grib files (e.g. wgrib_august_complete.com).
3. make the output script from step two (e.g. wgrib_august_complete.com) executable
and run it, directing the standard output to a log file (e.g. ./wgrib_august_complete.com
>& wgrib_august_complete.log). The log file allows the meta-data of the grib files to
be reviewed. The directory to which the converted files are written is specified in the
conversion software (wgrib_list_qpe_nogaps.F).

Subsampling: Incoming files from the NOGAPS archive are global. Therefore, as for the real-
time METCAST files, they must be sub-sampled and moved into an appropriate area. For
QPE, this takes place in /projects/qpe/Data/Forcing/Nodds/2009/[Aug,Sep]. Once again the
script is called files_to_script.com and, as described in the "Subsampling" section for the
real-time data, it needs its date specification edited on a daily basis; this date is
attached to the filename of a script that this script creates and runs. Once the script is
executed the files will be sub-sampled and moved to their home, which, in this case, is
/projects/qpe/Data/Atm/Archive/NOGAPS/2009/Aug.

File information: Appendix II describes the files that were used for the QPE project of
Aug/Sep 2009. This information provides an example as to what will likely be available for
other projects. Other projects will likely use a different set of products and may also find
that there is a different set of products available for the particular time and location.

5.2.3 COAMPS Archive Data

Steps:

1. acquire the data files from source


2. process the data – convert from binary to nodds format, mask/extrapolate over land,
interpolate to regular grid, sub-sample the data for the region of interest

Code/Scripts:

1. acquire the data – wget (ftp), ftp_coamps.com


2. process and sub-sample the data – rewrite_coamps_interp.m

Acquiring the data

COAMPS archive data files have been specially provided to us by NRL-Monterey. Specif-
ically, the COAMPS files are retrieved as a tar file for each available set of forecasts. Our
primary contact for these archived files has been Dr. Jason Nachamkin, Research Scientist
at NRL-Monterey (jason.nachamkin@nrlmry.navy.mil).

The tar files are put on a special server for us and we use wget to retrieve the files. An
example script to acquire the data is ftp_coamps.com. The tar files are then moved to
directories for each forecast realization, as shown in movem_aug_021210. This script creates
the individual sub-directories, moves the tar files into their respective sub-directories and
then extracts the individual forcing files from each tar file.

Processing the data

The files from the COAMPS archive are received as big-endian, 32-bit IEEE floating-point
binary and must be processed into the "nodds" format. In addition, these files are received
at 18 km resolution, and the makeflux software that subsequently processes this data into an
appropriate form for the PE model needs it at fractional-degree resolution; the data must
therefore be interpolated to a regular decimal-degree grid. Furthermore, the data files need
to be masked and extrapolated where there are land masses, as many of the quantities (e.g.
temperature) vary significantly between land and water. The format conversion,
masking/extrapolation, interpolation and sub-sampling to a desired region are performed
using Matlab. The script used is rewrite_coamps_interp.m. This script proceeds from
sub-directory to sub-directory and processes all available fields for all available times.
It should be noted that the process can take as long as an hour for each forecast
realization. The sub-sampled data has a final home in the
/projects/qpe/Data/Atm/Archive/COAMPS/Interp/2009/[Aug,Sep] directories.

For the PHILEX project, data was received at both 9 km and 27 km resolution, covering
differing spatial domains. These were interpolated, as described above, to a decimal-degree
grid in the directory /projects/philex/Data/Forcing/COAMPS/ via the Matlab scripts
rewrite_coamps_complete_mask_interp_[09,27]km.m. In addition, the two sets of fields have
been combined using the Matlab script rewrite_coamps_mask_combo_09km_27km.m. This script
creates a data set which encompasses the 27 km domain but is written at decimal-degree
resolution, utilizing the higher resolution 9 km data where available.

File information: Appendix III describes the files that were used for the QPE project of
Aug/Sep 2009. This information provides an example as to what will likely be available for
other projects. Other projects will likely use a different set of products and may also find
that there is a different set of products available for the particular time and location.

5.2.4 Review and cross-comparison

Steps:

1. Review product descriptions and meta-data


2. Review log files
3. Plot individual snapshots
4. Compare snapshots from different products

Code/Scripts:

1. plot_nodds.m
2. compare_plots.m

Once the data from the various data sources has been converted into the nodds format, the
data units, values, structures, etc. must be reviewed and cross-compared. Most variables are
in the appropriate standard units (temperatures in degrees Kelvin, relative humidity in
percent, pressure in pascals, etc.) but this must be verified in whatever manner is possible
(meta-data, documentation, consultation with experts, etc.). There are instances in which,
for example, cloud cover is provided in tenths (or oktas) as opposed to percent, or pressure
is provided in millibars as opposed to pascals.

The table below identifies the units that the makeflux codes expect when reading the fields
and what makeflux converts them to after reading. Any fields acquired must be made to
match the units expected as input for makeflux.

To date we have found that the variable with the greatest inconsistency is precipitation.
Usually the measurement value is in mm or kg/m2 (which is equivalent). The difficulty,
however, generally lies in the time period of interest and whether the precipitation is an
accumulation or a snapshot. Some archived precipitation received at one-hour intervals is a
snapshot – the amount of precipitation predicted to fall during that one-hour period. Other
products are summed over the previous 12 hours: after 12 hours the values reset to zero and
then accumulate for the subsequent 12 hours. In this case, each succeeding instance must
have the previous instance's value subtracted from it to provide a value for that time
interval. Note in the table above that precipitation is expected to be an amount during a
12-hour period, so the COAMPS archived products at one-hour intervals must be scaled by a
factor of 12 to be appropriate. It is critical that differences between the variables be
identified and that the variables eventually be made uniform in units; if not, the
forcing fields from the different sets of products will be significantly different.
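The 12-hour de-accumulation described above can be illustrated on made-up values (integers, in tenths of mm, to keep the arithmetic exact); a drop in the accumulated value marks the reset:

```shell
# De-accumulate a running precipitation total that resets every
# 12 h: interval amount = current - previous, except at a reset
# (current < previous), where the current value is the amount.
# The input numbers are invented.
echo '5 12 20 3 9' | awk '{
    prev = 0
    for (i = 1; i <= NF; i++) {
        d = $i - prev
        if (d < 0) d = $i        # accumulation reset detected
        printf "%s%d", (i > 1 ? " " : ""), d
        prev = $i
    }
    print ""
}' > deacc.out
cat deacc.out
```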

The real-time METCAST products and the NOGAPS archive products, because they are
received in grib format, include meta-data. When converted using wgrib, the log file of that
process includes information which looks like this:

rec 1:0:date 2009081412 TMP kpds5=11 kpds6=105 kpds7=2 levels=(0,2) grid=240 2 m above g
TMP=Temp. [K]
timerange 0 P1 0 P2 0 TimeU 1 nx 360 ny 181 GDS grid 0 num_in_ave 0 missing 0
center 58 subcenter 0 process 58 Table 3
latlon: lat -90.000000 to 90.000000 by 1.000000 nxny 65160
long 0.000000 to -1.000000 by 1.000000, (360 x 181) scan 64 mode 128 bdsgrid 1
min/max data 201.01 318.35 num bits 14 BDS_Ref 20101 DecScale 2 BinScale 0

This particular file is the analysis field on 2009/08/14 at 12Z for air temperature at 2 m.
There are 360 points in the X direction and 181 points in the Y direction. The field covers
-90 to 90 in latitude at 1-degree intervals and the full globe in longitude (the log's "0 to
-1" is 0 to 359 degrees, wrapped). The min and max of the data values are 201.01 and 318.35
degrees Kelvin (-72.14 and 45.20 degrees C; -97.85 and 113.36 degrees F). While these values
may seem extreme, this is a file which spans the entire globe and thus can have a very wide
range.
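The Kelvin-to-Celsius conversion of those extremes is easy to check with a one-liner:

```shell
# Sanity-check the grib min/max (in K) by converting to degrees C.
echo '201.01 318.35' |
awk '{ printf "%.2f %.2f\n", $1 - 273.15, $2 - 273.15 }' > degc.out
cat degc.out
```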

Each file can be checked (if necessary or possible) to ensure data correctness. The log files
from the conversion process should be reviewed to identify possible problems regarding data
quality.

Representative snapshots of each product should be plotted to ensure product fidelity. As
there are too many individual fields to plot all fields for all times, a few representative
days should be identified (e.g. sunny, windy, rainy, etc.) for which a complete set of
snapshots of the available products should be made for cross-comparison. The Matlab script
compare_plots.m will generate a sequence of comparison plots for a desired set of fields.
The Matlab script plot_nodds.m will plot a set of nodds-formatted files.

Comparisons which should be carried out, either numerically or as images, include:

• fields from each available product to one another


• sea surface temperature fields to in situ data, imagery, global fields, etc.
• air temperature fields to station observations
• wind fields to station observations and satellite winds
• fields calculated from bulk formulae (e.g. heat flux, wind stress, etc.) with fields
available directly from archives

5.2.5 Preparing Forcing Data for the PE Model

Steps:

1. modify set-up script to ease preparation of files


2. locate available forcing files for the time period of interest and write the input cards
for the next step

3. create ascii input files of wind stress, solar radiation, net heat flux, and evaporation-
precipitation (E-P)
4. verify correctness of input/output – iterate as necessary
5. create netcdf file for PE model
6. plot output
7. create daily averages

Code/Scripts:

1. modify set-up script – new_day.com, FrcJob


2. locate files and write input cards – mkmkflux
3. create ascii input files – makeflux
4. create netcdf file – pe_forcing
5. plot output – PlotJob, pe_ccnt
6. create daily averages – coamps_ave.m

Once the data from any of the data sources has been converted into the "nodds" format,
bulk formulae are used to prepare inputs for the PE model. Specifically, four ascii data files
are generated which contain: wind stress, solar radiation, net heat flux, and evaporation-
precipitation (E-P). Following that, a netcdf file is created which is the input to the PE
model. The text below uses the ORION (OOI) project as an example.

Mkmkflux and Makeflux: The next procedure occurs in the directory (e.g.) /projects/ooi/Data/Atm.
The first step is to edit a script called new_day.com, which creates the files necessary to
make the fluxes for that particular day. The script works by taking the last working set of
files and making a new set from them; the output files are named with today's date while the
input files carry the date of the most recent run. There are two lines in the new_day.com
script which look like this:

sed --in-place -e 's/11 12/11 13/g' mkmkflux_nogaps_08Nov2009.in

This line changes the ending date for the forcing. The pair of numbers stands for a
month/day combination; under normal circumstances, when data is received daily, we
increment the day by one. Running the new_day.com script generates the next script to be
run, usually named something like FrcJob_07Apr2010, where 07Apr2010 is the date on which
the script is run. The new_day.com script is normally
invoked only once per day to set up the appropriate files.
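The date-increment edit that new_day.com applies can be seen in isolation; the file content here is invented, but the sed invocation matches the form shown above (GNU sed's --in-place option):

```shell
# Demonstrate the new_day.com-style date bump: advance the
# ending month/day pair "11 12" (12 Nov) to "11 13" (13 Nov)
# inside a card file. The file and its content are illustrative.
set -e
tmp=$(mktemp)
echo 'end date: 11 12' > "$tmp"
sed --in-place -e 's/11 12/11 13/g' "$tmp"
result=$(cat "$tmp")
echo "$result"
```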

When the FrcJob script is executed, it runs two jobs. The first uses a code called mkmkflux
to search for the available forcing files for the desired time period and then outputs the
set of input cards for the second program, makeflux, which creates the set of four ascii
flux files for the time period.

After the FrcJob script is run, it is imperative to examine the log file which is generated.
For unknown reasons, it is possible for the incoming files from METCAST or an archive
to contain incorrect data. These problems have appeared in cloud cover, SST and relative
humidity. The log file contains the maximum value found for each variable at all times
during the period of interest. Cloud cover and relative humidity should not exceed 1.0;
inappropriate values for air temperature, SST, wind, surface pressure or rainfall will
generally be obvious. Once such errors are found, the makeflux input file is examined to
identify the data files which contain the spurious information. An entry must then be made
in the FrcJob to gzip each offending data file so that it is not found by mkmkflux and will
not be provided again to makeflux as input. The gzip is necessary because the METCAST
downloading/conversion process often reprocesses a file, which could place a new version of
the bad data back into place. Once the entry is made in the FrcJob, future FrcJob
incarnations will include those entries and the data file with spurious values should not
re-appear. After these corrections have been made, the FrcJob should be run again. This is
an iterative process until problems no longer appear in the makeflux log file.

PE forcing: The final step is to create the forcing file for the PE model. This is done in
domain-specific and date-specific sub-directories under (e.g.) /projects/ooi/Data/Atm/NC_Files.
There is a sed_file and sed_script pair that does most of the work required to produce the
necessary files. sed_file contains the dates that need to be changed (usually each
incremented by one); sed_script contains the names of the files to be changed. Edit
sed_script so that the newest directories (and the files within them) will be created from
the most recent directories; again, this is usually accomplished by incrementing the dates
by one. The use of sed eliminates the need to edit multiple files by hand.

Now run sed_script. sed_script will produce a submit_jobs script, which should need no
edits. Once sed_script has been run, execute the newly created submit_jobs script. This
will submit jobs to the compute nodes to make the PE forcing file.

Plotting forcing: Plots of the wind should be made in the sub-directory of interest.
Execute the PlotJob_nogaps_wind script to create the gmeta file for the plots. Edit
the movem_wind script to create gifs of the winds and move those gifs to the directory
from which they can be viewed on the web (e.g. /srv/www/htdocs/Sea_exercises/OOI-
OSSE09/Winds). movem_wind will have to be edited to ensure that the plot of the
last day of interest is moved. For example (Nov 10): add the line "mv med101.gif
nogaps_wind_Nov13.gif"; the previous line is "mv med097.gif nogaps_wind_Nov12.gif", so
add 4 to 97 to get 101 and change Nov12 to Nov13. The daily averages of the winds
are created by running the script coamps_ave.m, which should not need any editing. Run
this script, move the resulting png files to the wind web directory (specified above), then
move to that directory and edit the two index files to include the last date.
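The frame-number arithmetic in movem_wind (four gmeta frames per day in this example) can be scripted rather than done by hand; the names follow the example above:

```shell
# Compute the next day's mv line from the last one: med097 for
# Nov12 plus 4 frames/day gives med101 for Nov13.
last_frame=97
frames_per_day=4
next_frame=$(( last_frame + frames_per_day ))
cmd=$(printf 'mv med%03d.gif nogaps_wind_%s.gif' "$next_frame" Nov13)
echo "$cmd"
```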

At this point, if a review of the plots indicates that all is well, there is a netcdf file ready for
use in the PE model.

6 Setting up and running Barotropic Tide Calculations

Codes are currently kept in sub-directories under /home/logutov/export/

/ibtm
/topo – topography
/tpxo – open boundary
/t_tide

Tidal calculations are performed for specific regions; each region is kept in a separate
directory:
cd /home/logutov/export/ibtm/domains
asap/
hawaii/
philex/
qpe/

For OOI, use a new sub-directory


cp -r qpe ooi

cd /home/logutov/export/ibtm/domains/ooi/work (cd ooi/work)

1. Create the topography file (make_topo_local_nc.m).

2. Run mkgrid.m (creates the cgrid structure file).

3. Prepare the open boundary conditions from TPXO (the OSU global tide model):
cd /home/logutov/data/tpxo
Atlantic/DATA/Atlantic_local.mat is created from make_tpxo_local.m
topofile = '/home/oleg/data/awacs/toposmithnoaa-20-60N-20-90W.mat'
A 25 m/s velocity cut-off is applied.

4. Run specify_obc.m.

5. Run fed model.

6. tide4obc.m – main program.

7. Compare against data.

8. plot_ibtm_udata_tseries.m uses t_tide.m to reconstruct the tides.
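The reconstruction in step 8 works by summing harmonic constituents: t_tide fits an amplitude, frequency and phase per constituent, and the tidal series is rebuilt as a sum of cosines. A minimal sketch with illustrative (not fitted) constituent values:

```python
# Sketch of a t_tide-style harmonic reconstruction. Constituent amplitudes
# and phases below are illustrative placeholders, not fitted values.
import math

# (name, amplitude [m], frequency [cycles/hour], Greenwich phase [deg])
constituents = [
    ("M2", 0.50, 1.0 / 12.4206, 110.0),
    ("S2", 0.12, 1.0 / 12.0000, 140.0),
    ("K1", 0.08, 1.0 / 23.9345, 60.0),
]

def reconstruct(t_hours):
    """Sea level at time t (hours) as a sum of harmonic constituents."""
    return sum(a * math.cos(2.0 * math.pi * f * t_hours - math.radians(g))
               for _, a, f, g in constituents)

series = [reconstruct(t) for t in range(0, 49)]   # two days, hourly
print(min(series) < 0 < max(series))              # oscillates about zero
```

The reconstructed series is then compared against the observed velocity/elevation time series at the data locations.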

7 PE: running PE model

Move to the appropriate local directory: cd /projects/PNAME/PE/2009/Nov09


cp Nov08/* Nov09

Files in the directory to be aware of:

day.table - contains information regarding the runs in this directory


PeJob - script which runs the PE code
SubScript - script to record job submissions and any parameters

Copy a pair of previous nested run directories (preferably the pair most similar to the desired
central simulation for today):
CpPE ../Nov08/PJH01 PJH01
CpPE ../Nov08/PJH02 PJH02

cd into the large domain directory (e.g. PJH01). Edit oi.plan (an annotated version of oi.dat) to
design today's assimilation strategy, then incorporate the changes from oi.plan into oi.dat. Edit
pe_PB.in to set the run duration, the location of the subdomain directory
(e.g. /projects/PNAME/PE/2009/Nov09/PJH02) and the desired run parameters.

Now go to the small domain directory (e.g. PJH02) and copy oi.dat and oi.plan from the large domain.
Edit pe_PB.in with the run duration, the location of this directory and the run parameters.

Go back to date directory (e.g. Nov09) and edit day.table to plan additional test runs.

Copy PJH01 and PJH02. Edit the files in the resulting directories as planned in day.table.

Edit SubScript to launch all runs

Run SubScript to submit PeJobs (for large domains only, small domain runs are spawned by
large domain runs).

7.1 Background

Haley and Lermusiaux (2010), (hereafter referred to as HL2010), derive conservative time-
dependent structured discretizations and two-way embedded (nested) schemes for multiscale
ocean dynamics governed by primitive-equations (PEs) with a nonlinear free surface. They
first provide an implicit time-stepping algorithm for the nonlinear free surface PEs and then
derive a consistent time-dependent spatial discretization with a generalized vertical grid. This
leads to a novel time-dependent finite volume formulation for structured grids on spherical
or Cartesian coordinates, second order in time and space, which preserves mass and tracers
in the presence of a time-varying free surface. They then introduce the concept of 2-way
nesting, implicit in space and time, which exchanges all of the updated field values across

grids, as soon as they become available. Comparisons of various nesting schemes as well as a
detailed error analysis can be found in HL2010.

This manual section describes the derivation of robust and accurate two-way embedding (nest-
ing) schemes for telescoping ocean domains governed by primitive-equation (PE) dynamics
with a nonlinear free-surface. The intent of the methodology is to resolve tidal-to-mesoscale
dynamics over large multi-resolution domains with complex coastal geometries from embay-
ments and shallow seas with strong tidal flows to the steep shelfbreaks and the deep ocean
with frontal features, jets, eddies and other larger-scale current systems.

References to other ocean models and modeling systems, as well as applications of the MIT
Multidisciplinary Simulation, Estimation and Assimilation System (MSEAS; MSEAS Group,
2010) can be found in HL2010. To our knowledge, none of the structured models referenced
therein include fully implicit two-way embedding schemes for nonlinear free-surface PEs.
With fully implicit and 2-way embedding, all of the updated field values are exchanged
across scales among nested domains, as soon as they become available, within the same
time step. Specific new developments described in detail in HL2010 include: a nonlinear
formulation of the free surface and its boundary conditions; a modification of an implicit
time-stepping algorithm (Dukowicz and Smith, 1994) to handle the nonlinear formulation;
a consistent spatial discretization for a time-dependent finite volume method; a generalized
vertical grid; and a fully implicit 2-way nesting scheme for the nonlinear free surface PE.

A recent and comprehensive review of nesting algorithms (both one-way and two-way) can
be found in Debreu and Blayo (2008), including discussions on time stepping and time split-
ting issues. Additional issues reviewed include schemes to control the open boundaries of a
modeling domain; interpolation of normal velocities from the coarse to fine domains; “scale
oriented” one-way multi-model nesting; and a modeling system based on a semi-Lagrangian
scheme: a fully non-hydrostatic simulation embedded in a larger weakly non-hydrostatic sim-
ulation which, in turn, can be embedded in a still larger compatible hydrostatic simulation.

The nesting schemes mentioned above fall under the categories we define as “explicit” or
“coarse-to-fine implicit” nesting. As shown in Fig. 1, in explicit 2-way nesting, the coarse
and fine domain fields are only exchanged at the start of a discrete time integration or time-
step: the 2-way exchanges are explicit. In “coarse-to-fine implicit” 2-way nesting, the coarse
domain feeds the fine domain during its time-step: usually, fine domain boundary values are
computed from the coarse domain integration but the fine domain interior values are only
computed from the coarse domain integration but the fine domain interior values are only
fed back at the end of the coarse time-step. In “fine-to-coarse implicit” 2-way nesting, it is
the opposite: fine domain updates are fed to the coarse domain during its integration but the
coarse domain feedback only occurs at the end of the fine domain discrete integration.

HL2010 derive 2-way nested schemes, fully implicit in space and time: the fine and large
domains exchange all updated information during their time integration, as soon as updated
fields become available. A type of such scheme consists of computing fine domain bound-
ary values from the coarse domain but with feedback from the fine domain. Some of the
algorithmic details of our multiscale fully implicit two-way nesting schemes are specific to
MSEAS, but the approach and schemes are general and applicable to other modeling systems.


Figure 1: Schematic of (a) explicit, (b) coarse-to-fine implicit, (c) fine-to-coarse implicit and (d)
fully implicit 2-way nesting. Green arrows sketch coarse-to-fine transfers, red arrows fine-to-coarse.
The left arrow indicates discrete time-integrations or time-steps (n − 1, n and n + 1). Nesting
transfers occur before (explicit) or during (implicit) discrete time-step n. If the time-steps of two
nested models are not equal, the duration of step n would in general be the longest of the two.

In section 7.2 we give the equations of motion, provide an implicit time discretization for
the nonlinear free surface PEs, and develop a time dependent, spatial discretization of the
PEs, providing details on vertical and horizontal discretizations and fluxes, open boundary
conditions and conservation properties. In section 7.3, we present the fully implicit 2-way
nesting scheme and contrast it with traditional explicit and coarse-to-fine implicit schemes.
Multiscale nesting procedures for setting-up multi-grid domains and bathymetries, for multi-
resolution initialization, for tidal forcing and for solving the free-surface equation are given
in section 7.4.

7.2 Formulation Of The MSEAS Free Surface Primitive Equation Model

In this section we present the discretized equations of motion for our new nested nonlinear
free surface ocean system. We have encoded both the spherical and Cartesian formulations
and most often use the spherical one, but for ease of notation, we present the equations in
only one form, the Cartesian one. In section 7.2.1, we give the differential form of the free
surface PEs. In section 7.2.2, we recast these equations in their integral control volume form
in order to easily derive a mass preserving scheme. In section 7.2.3, we introduce our novel
implicit time discretization of these PEs. Finally, in section 7.2.4, we derive the corresponding
time-dependent, spatial discretization which preserves mass and tracers in the presence of a
time-varying free surface.

7.2.1 Continuous Free Surface Primitive Equations

The equations of motion are the PEs, derived from the Navier-Stokes equations under the
hydrostatic and Boussinesq approximations (e.g. Cushman-Roisin and Beckers, 2010). Under
these assumptions, the state variables are the horizontal and vertical components of velocity
$(\vec{u}, w)$, the temperature, $T$, and the salinity $S$. Denoting the spatial positions as $(x,y,z)$ and
the temporal coordinate with $t$, the PEs are:
\[
\begin{aligned}
&\text{Cons. Mass} &\quad \nabla\cdot\vec{u} + \frac{\partial w}{\partial z} &= 0 \ , &\quad &(1)\\
&\text{Cons. Horiz. Mom.} &\quad \frac{D\vec{u}}{Dt} + f\hat{k}\times\vec{u} &= -\frac{1}{\rho_0}\nabla p + \vec{F} \ , &\quad &(2)\\
&\text{Cons. Vert. Mom.} &\quad \frac{\partial p}{\partial z} &= -\rho g \ , &\quad &(3)\\
&\text{Cons. Heat} &\quad \frac{DT}{Dt} &= F^T \ , &\quad &(4)\\
&\text{Cons. Salt} &\quad \frac{DS}{Dt} &= F^S \ , &\quad &(5)\\
&\text{Eq. of State} &\quad \rho &= \rho(z,T,S) &\quad &(6)
\end{aligned}
\]

where $\frac{D}{Dt}$ is the 3D material derivative, $p$ is the pressure, $f$ is the Coriolis parameter, $\rho$ is
the density, $\rho_0$ is the (constant) density from a reference state, $g$ is the acceleration due to
gravity and $\hat{k}$ is the unit direction vector in the vertical direction. The gradient operators,
$\nabla$, in eqs. (1 & 2) are two dimensional (horizontal) operators. The turbulent sub-gridscale
processes are represented by $\vec{F}$, $F^T$ and $F^S$.

Since we are considering free surface applications in regions with strong tides, we need a
prognostic equation for the evolution of the surface elevation, η. We integrate equation (1)
over the vertical column and apply the kinematic conditions at the surface and bottom to
arrive at the nonlinear free surface transport constraint
\[
\frac{\partial \eta}{\partial t} + \nabla\cdot\int_{-H}^{\eta} \vec{u}\, dz = 0 \qquad (7)
\]

where $H = H(x,y)$ is the local water depth in the undisturbed ocean.

We decompose the horizontal velocity into a depth-averaged (“barotropic”) component, $\vec{U}$,
and a remainder (“baroclinic”), $\vec{u}'$:

\[
\vec{u} = \vec{u}' + \vec{U} \ ; \qquad \vec{U} = \frac{1}{H+\eta}\int_{-H}^{\eta} \vec{u}\, dz . \qquad (8)
\]
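The decomposition in eq. (8) can be checked numerically: by construction, the depth average of the baroclinic remainder vanishes. A small sketch with an illustrative velocity profile:

```python
# Check of eq. (8): removing the depth average U from u leaves a baroclinic
# remainder u' whose own depth average is zero. Profile values are illustrative.
import numpy as np

H, eta = 50.0, 1.5                          # depth and surface elevation [m]
z = np.linspace(-H, eta, 201)               # vertical coordinate from -H to eta
u = 0.3 + 0.2 * np.tanh((z + 20.0) / 5.0)   # some sheared horizontal velocity

dz = z[1] - z[0]
trap = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1])) * dz)   # trapezoid rule

U = trap(u) / (H + eta)    # barotropic (depth-averaged) component
u_prime = u - U            # baroclinic remainder

print(abs(trap(u_prime) / (H + eta)) < 1e-10)   # zero to round-off
```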
To further isolate the effects of the free surface, we decompose the pressure into a hydrostatic
component (employing the terminology of Dukowicz and Smith, 1994), $p_h$, and a surface
component, $p_s$:

\[
p = p_s + p_h \ ; \qquad p_h(x,y,z,t) = \int_z^{\eta} g\rho\, d\zeta \ ; \qquad p_s(x,y,t) = \rho_0 g \eta . \qquad (9)
\]

Note that the definition of the hydrostatic pressure automatically enforces equation (3).
Using (8) and (9), we split (2) into two equations, one for $\vec{U}$ obtained by taking the vertical
average of (2) and one for $\vec{u}'$ obtained by removing the vertical average from (2):

\[
\frac{\partial \vec{U}}{\partial t} - \frac{\vec{u}'|_{\eta}}{H+\eta}\,\frac{\partial \eta}{\partial t} + f\hat{k}\times\vec{U} = \bar{F} - g\nabla\eta \qquad (10)
\]
\[
\frac{\partial \vec{u}'}{\partial t} + \frac{\vec{u}'|_{\eta}}{H+\eta}\,\frac{\partial \eta}{\partial t} + f\hat{k}\times\vec{u}' = F - \bar{F} . \qquad (11)
\]

In equations (10-11), we now have additional terms of the form $\frac{\vec{u}'|_{\eta}}{H+\eta}\frac{\partial \eta}{\partial t}$. These small terms
are often neglected, but are kept here since our dynamical focus ranges from the deep ocean
to the very shallow ocean with strong tides. In (10-11), we have introduced the following
notation for the terms we group on the RHS:

\[
F = -\frac{1}{\rho_0}\nabla p_h - \vec{\Gamma}(\vec{u}) + \vec{F} \ ; \qquad \bar{F} = \frac{1}{H+\eta}\int_{-H}^{\eta} F\, dz
\]

and for the advection operator

\[
\vec{\Gamma}(\vec{u}) = \begin{pmatrix}\Gamma(u)\\[2pt] \Gamma(v)\end{pmatrix} \ ; \qquad \Gamma(\phi) = u\frac{\partial\phi}{\partial x} + v\frac{\partial\phi}{\partial y} + w\frac{\partial\phi}{\partial z} .
\]

Note that instead of directly solving for $\vec{u}'$ using equation (11), we instead solve for $\vec{u}$ using
equation (2) recast in the following form

\[
\frac{\partial \vec{u}}{\partial t} + f\hat{k}\times\vec{u} = F - g\nabla\eta , \qquad (12)
\]

then obtain $\vec{u}'$ from definition (8). By using (12) and (8) instead of (11) we reduce the
truncation error for our time splitting procedure in section 7.2.3.

7.2.2 Control Volume Formulation of the Free Surface Primitive Equations

We now rewrite the governing equations (1, 4, 5, 12) in a conservative integral formulation.
With this transformation at the continuous level, it is easier to derive a new discrete system
that correctly accounts for the temporal changes in the ocean volume due to a moving free
surface.

We integrate (1) and the conservative forms of (4, 5, 12) over a control volume $\mathcal{V}$ and use
the divergence theorem to arrive at the following system of equations:

\[
\int_{S} (\vec{u},w)\cdot d\vec{A} = 0 , \qquad (13)
\]
\[
\frac{\partial}{\partial t}\left(\int_{\mathcal{V}} \vec{u}\, d\mathcal{V}\right) + \int_{\mathcal{V}} f\hat{k}\times\vec{u}\, d\mathcal{V} = \tilde{F} - \int_{\mathcal{V}} g\nabla\eta\, d\mathcal{V} , \qquad (14)
\]
\[
\vec{u}' = \vec{u} - \frac{1}{H+\eta}\int_{-H}^{\eta} \vec{u}\, dz , \qquad (15)
\]
\[
\frac{\partial \vec{U}}{\partial t} - \frac{\vec{u}'|_{\eta}}{H+\eta}\,\frac{\partial \eta}{\partial t} + f\hat{k}\times\vec{U} = \bar{\tilde{F}} - g\nabla\eta , \qquad (16)
\]
\[
\frac{\partial}{\partial t}\left(\int_{\mathcal{V}} T\, d\mathcal{V}\right) + \tilde{\Gamma}(T) = \int_{\mathcal{V}} F^T\, d\mathcal{V} , \qquad (17)
\]
\[
\frac{\partial}{\partial t}\left(\int_{\mathcal{V}} S\, d\mathcal{V}\right) + \tilde{\Gamma}(S) = \int_{\mathcal{V}} F^S\, d\mathcal{V} , \qquad (18)
\]
\[
\rho = \rho(z,T,S) , \qquad (19)
\]
\[
\frac{\partial \eta}{\partial t} + \nabla\cdot\left[(H+\eta)\,\vec{U}\right] = 0 \qquad (20)
\]

where

\[
\tilde{F} = -\frac{1}{\rho_0}\int_{S} p_h\, \hat{n}_h\cdot d\vec{A} - \tilde{\vec{\Gamma}}(\vec{u}) + \int_{\mathcal{V}} \vec{F}\, d\mathcal{V} \ ; \qquad
\bar{\tilde{F}} = \frac{1}{H+\eta}\int_{-H}^{\eta} \tilde{F}\, dz ,
\]

$S$ is the surface of the control volume, and $d\vec{A}$ is an infinitesimal area element vector pointing
in the outward normal direction to $S$. In equations (14-18) we have introduced the following
notation for the surface advective fluxes:

\[
\tilde{\vec{\Gamma}}(\vec{u}) = \begin{pmatrix}\tilde{\Gamma}(u)\\[2pt] \tilde{\Gamma}(v)\end{pmatrix} \ ; \qquad \tilde{\Gamma}(\phi) = \int_{S} \phi\, (\vec{u},w)\cdot d\vec{A}
\]

where $\phi\,(\vec{u},w)$ denotes the local advective flux of $\phi$.

7.2.3 Temporal Discretization

We now derive our novel implicit time discretization for the nonlinear free surface PEs
(13-20). Using the following discrete time notation:

\[
t^n = n\,\Delta t \ ; \qquad \phi(t^n) = \phi^n ,
\]

where $\Delta t$ is the discrete time step, and using the second order leap-frog time differencing
operator

\[
\delta(\phi) = \phi^{n+1} - \phi^{n-1} ,
\]

we obtain the following temporal discretization of (13-20):

\[
\int_{S^n} (\vec{u}^{\,n}, w^n)\cdot d\vec{A} = 0 , \qquad (21)
\]
\[
\frac{1}{\tau}\,\delta\!\left(\int_{\mathcal{V}} \vec{u}\, d\mathcal{V}\right) + \left(\int_{\mathcal{V}} f\hat{k}\times\vec{u}\, d\mathcal{V}\right)^{\!\alpha} = \tilde{F}^{n,n-1} - \left(\int_{\mathcal{V}} g\nabla\eta\, d\mathcal{V}\right)^{\!\alpha} , \qquad (22)
\]
\[
\vec{u}'^{\,n+1} = \vec{u}^{\,n+1} - \frac{1}{H+\eta^{n+1}}\int_{-H}^{\eta^{n+1}} \vec{u}^{\,n+1}\, dz , \qquad (23)
\]
\[
\frac{\delta(\vec{U})}{\tau} - \frac{\vec{u}'^{\,n}|_{\eta}}{H+\eta^n}\,\frac{\delta\eta}{\tau} + f\hat{k}\times\vec{U}^{\alpha} = \bar{\tilde{F}}^{n,n-1} - g\nabla\eta^{\alpha} , \qquad (24)
\]
\[
\frac{1}{\tau}\,\delta\!\left(\int_{\mathcal{V}} T\, d\mathcal{V}\right) = \int_{\mathcal{V}^n} F^{T\,n}\, d\mathcal{V} - \tilde{\Gamma}(T^n) , \qquad (25)
\]
\[
\frac{1}{\tau}\,\delta\!\left(\int_{\mathcal{V}} S\, d\mathcal{V}\right) = \int_{\mathcal{V}^n} F^{S\,n}\, d\mathcal{V} - \tilde{\Gamma}(S^n) , \qquad (26)
\]
\[
\frac{\eta^{n+1}-\eta^n}{\Delta t} + \nabla\cdot\left[(H+\eta^n)\,\vec{U}^{\theta}\right] = 0 \qquad (27)
\]

where

\[
\tilde{F}^{n,n-1} = -\frac{1}{\rho_0}\int_{S^n} p_h^n\, \hat{n}_h\cdot d\vec{A} - \tilde{\vec{\Gamma}}(\vec{u}^{\,n}) + \int_{\mathcal{V}^n} \vec{F}^n\, d\mathcal{V} + \int_{\mathcal{V}^{n-1}} \vec{F}^{n-1}\, d\mathcal{V} ,
\]
\[
\bar{\tilde{F}}^{n,n-1} = \frac{1}{H+\eta^n}\int_{-H}^{\eta^n}\left\{-\frac{1}{\rho_0}\int_{S^n} p_h^n\, \hat{n}_h\cdot d\vec{A} - \tilde{\vec{\Gamma}}(\vec{u}^{\,n}) + \int_{\mathcal{V}^n} \vec{F}^n\, d\mathcal{V}\right\} dz
+ \frac{1}{H+\eta^{n-1}}\int_{-H}^{\eta^{n-1}}\left(\int_{\mathcal{V}^{n-1}} \vec{F}^{n-1}\, d\mathcal{V}\right) dz ,
\]

and $\tau = 2\Delta t$ is twice the time step. Following the results of the stability analyses in Dukowicz
and Smith (1994), we have introduced semi-implicit time discretizations for the Coriolis force

\[
\phi^{\alpha} = \alpha\phi^{n+1} + (1-2\alpha)\phi^n + \alpha\phi^{n-1}
\]

and for the barotropic continuity:

\[
\phi^{\theta} = \theta\phi^{n+1} + (1-\theta)\phi^n .
\]

In practice we run using the stabilizing choices $\alpha = \frac{1}{3}$ (C. Lozano and L. Lanerolle, private
communication) and $\theta = 1$ (Dukowicz and Smith, 1994). A stability analysis of the explicit
leap-frog algorithm can be found in Shchepetkin and McWilliams (2005), while Dukowicz
and Smith (1994) analyze the linearized implicit algorithm. Note that even though our
discretization parallels Dukowicz and Smith (1994), we do not make the linearizing assumption
$\eta \ll H$ in equations (8, 9, 27). This generalization allows our system to be deployed in littoral
regions of high topographic variations and strong tides.
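The two semi-implicit weightings can be written as plain functions; for any $\alpha$ the weights $\alpha + (1-2\alpha) + \alpha$ sum to one, so a steady field is reproduced exactly. A minimal sketch:

```python
# The semi-implicit time weightings phi^alpha and phi^theta from the text,
# with the stabilizing defaults alpha = 1/3 and theta = 1.
def phi_alpha(phi_np1, phi_n, phi_nm1, alpha=1.0 / 3.0):
    """Semi-implicit Coriolis weighting over three time levels."""
    return alpha * phi_np1 + (1.0 - 2.0 * alpha) * phi_n + alpha * phi_nm1

def phi_theta(phi_np1, phi_n, theta=1.0):
    """Semi-implicit barotropic-continuity weighting over two time levels."""
    return theta * phi_np1 + (1.0 - theta) * phi_n

print(phi_alpha(2.0, 2.0, 2.0))   # steady field: returns 2.0
print(phi_theta(5.0, 3.0))        # theta = 1 is fully implicit: returns 5.0
```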

A couple of observations are worth making. First, we are considering the case in which the
control volume is time dependent. Therefore, in the new time discretizations (21-26) all terms
involving control volume integrals must be evaluated at the proper discrete times as a whole,
not just the integrands. The second is that equations (22-24) and (27) form a coupled system
of equations to solve for $\vec{u}^{\,n+1}$, $\vec{u}'^{\,n+1}$, $\vec{U}^{n+1}$ and $\eta^{n+1}$. We decouple these equations using a
time splitting algorithm. Another approach would have been to use an iterative method
(e.g. Newton solver). However, time splitting is usually more efficient and for their similar
time-splitting approach, Dukowicz and Smith (1994) showed that no significant physics was
lost, provided f ∆t ≤ 2. Our time steps are always much smaller than that limit.

Time Splitting Procedure  Similar to Dukowicz and Smith (1994), we employ a time-splitting
approach by first introducing the splitting variables, $\widehat{\left(\int_{\mathcal{V}}\vec{u}\, d\mathcal{V}\right)}^{n+1}$ and $\hat{\vec{U}}^{n+1}$:

\[
\widehat{\left(\int_{\mathcal{V}} \vec{u}\, d\mathcal{V}\right)}^{n+1} \equiv \left(\int_{\mathcal{V}} \vec{u}\, d\mathcal{V}\right)^{n+1} + \alpha\tau\,\delta\!\left(\int_{\mathcal{V}} g\nabla\eta\, d\mathcal{V}\right) , \qquad (28)
\]
\[
\hat{\vec{U}}^{n+1} \equiv \vec{U}^{n+1} + \alpha\tau\, g\nabla\delta\eta - \frac{\vec{u}'^{\,n}|_{\eta}}{H+\eta^n}\,\delta\eta . \qquad (29)
\]

The novel portions of this, needed to deal with the full nonlinear free surface dynamics, are
the introduction of equation (28) and the last term in (29). Substituting (28) and (29) into
(22) and (24) we obtain

\[
\hat{\delta}\!\left(\int_{\mathcal{V}} \vec{u}\, d\mathcal{V}\right) + \alpha\tau\,\hat{\delta}\!\left(\int_{\mathcal{V}} f\hat{k}\times\vec{u}\, d\mathcal{V}\right) = \tau\tilde{F}^{n,n-1} - \tau\left(\int_{\mathcal{V}} g\nabla\eta\, d\mathcal{V}\right)^{\!\tilde{\alpha}} - \tau\left(\int_{\mathcal{V}} f\hat{k}\times\vec{u}\, d\mathcal{V}\right)^{\!\tilde{\alpha}} + \alpha^2\tau^2\,\delta\!\left(\int_{\mathcal{V}} gf\hat{k}\times\nabla\eta\, d\mathcal{V}\right) , \qquad (30)
\]
\[
\hat{\delta}\vec{U} + \alpha f\tau\,\hat{k}\times\hat{\delta}\vec{U} = \tau\left\{\vec{F}^{n,n-1} - g\nabla\eta^{\tilde{\alpha}}\right\} + \alpha^2 gf\tau^2\,\hat{k}\times\nabla\delta\eta + \alpha f\tau\,\delta\eta\,\hat{k}\times\frac{\vec{u}'^{\,n}|_{\eta}}{H+\eta^n} , \qquad (31)
\]

where we have introduced the following notation

\[
\phi^{\tilde{\alpha}} = (1-2\alpha)\phi^n + 2\alpha\phi^{n-1} ,
\]
\[
\hat{\delta}\vec{U} = \hat{\vec{U}}^{n+1} - \vec{U}^{n-1} \ ; \qquad \hat{\delta}\!\left(\int_{\mathcal{V}} \vec{u}\, d\mathcal{V}\right) = \widehat{\left(\int_{\mathcal{V}} \vec{u}\, d\mathcal{V}\right)}^{n+1} - \left(\int_{\mathcal{V}} \vec{u}\, d\mathcal{V}\right)^{n-1} ,
\]
\[
\vec{F}^{n,n-1} = \bar{\tilde{F}}^{n,n-1} - f\hat{k}\times\vec{U}^{\tilde{\alpha}} .
\]

To decouple equations (30-31) we first notice that the last term in equation (30) and the
second to last term in equation (31) are both $O(\tau^2\delta\eta)$. These terms are the same order
as the second order truncation errors already made and hence can be discarded. The last
term in (31) is $O(\tau\,\delta\eta)$. Although this represents a first order error term in the free surface
elevation, it is still comparable to the error in the free surface integration scheme (equation
27). Furthermore, the term is divided by $H+\eta$, meaning that it is $O\!\left(\frac{\tau\,\delta\eta}{H+\eta}\right)$, which is never
larger than $O\!\left(\frac{\tau\,\delta\eta}{\eta}\right)$ in a single time step, and often much smaller. Hence we discard this
term too. Discarding these terms results in the following decoupled momentum equations

\[
\hat{\delta}\!\left(\int_{\mathcal{V}} \vec{u}\, d\mathcal{V}\right) + \alpha\tau\,\hat{\delta}\!\left(\int_{\mathcal{V}} f\hat{k}\times\vec{u}\, d\mathcal{V}\right) = \tau\tilde{F}^{n,n-1} - \tau\left(\int_{\mathcal{V}} g\nabla\eta\, d\mathcal{V}\right)^{\!\tilde{\alpha}} - \tau\left(\int_{\mathcal{V}} f\hat{k}\times\vec{u}\, d\mathcal{V}\right)^{\!\tilde{\alpha}} , \qquad (32)
\]
\[
\hat{\delta}\vec{U} + \alpha f\tau\,\hat{k}\times\hat{\delta}\vec{U} = \tau\left\{\vec{F}^{n,n-1} - g\nabla\eta^{\tilde{\alpha}}\right\} . \qquad (33)
\]

To finish the decoupling, we take equation (27), average it with itself evaluated a time step
earlier, and substitute equation (29) for $\vec{U}^{n+1}$. The result is the following decoupled equation
for $\eta^{n+1}$:

\[
\alpha\theta g\tau\,\nabla\cdot\left[(H+\eta^n)\,\nabla\delta\eta\right] - \theta\,\nabla\cdot\left(\vec{u}'^{\,n}|_{\eta}\,\delta\eta\right) - \frac{2\,\delta\eta}{\tau} = \nabla\cdot\left[(H+\eta^n)\left(\theta\hat{\vec{U}}^{n+1} + \vec{U}^n + (1-\theta)\vec{U}^{n-1}\right)\right] . \qquad (34)
\]

In conclusion, the new elements of temporal discretization are in eqns. (28-29, 32, 34). In
particular, the nonlinear free-surface parameterization is maintained by the $H+\eta^n$ factors
in the divergences in (34) and by the second term on the left-hand side of (34).

Note that it is this decoupling procedure that inspired us to keep the full momentum eqn. (12)
instead of the baroclinic eqn. (11) (see §7.2.1). Had we worked with the baroclinic momentum
equation directly, the barotropic eqns. (29, 31, 33) would have been unchanged; however, the
truncation term in going from (30) to (32) would have been $\alpha\tau f\,\delta\!\left(\int_{\mathcal{V}} \frac{\hat{k}\times\vec{u}'^{\,n}|_{\eta}}{H+\eta^n}\,\eta\, d\mathcal{V}\right)$ instead
of the higher order term we obtained in eq. (30). Further, the error term in eq. (30) is more
uniform, while the error term that would have been obtained from the baroclinic equations
would have grown as the topography shoaled.

7.2.4 Time Dependent, Nonlinear “Distributed-σ” Spatial Discretization Of The Free Surface Primitive Equations

Using temporal discretization (21, 23, 25-26, 32-34), we can derive our new, time dependent,
spatial discretization. This discretization distributes with depth the temporal volume changes
in the water column due to the time-variable free surface. We found that these variations
of cell volumes must all be accounted for to avoid potentially large momentum and tracer
errors in regions of strong tides and shallow topography.

Following Bryan (1969) we discretize (21, 23, 25-26, 32-34) on the staggered Arakawa B-grid
(Arakawa and Lamb, 1977). We retain the B-grid of the PE model of MSEAS based on its
ability to simulate geostrophy and any potentially marginally resolved fronts and filaments

Figure 2: B-grid indexing scheme. (a) Horizontal lay-out. Here $T$ stands for variables centered
in tracer cells ($T$, $S$, $\eta$) and $\vec{u}$ represents variables centered in velocity cells ($\vec{u}$, $\vec{u}'$, $\vec{U}$). (b) Vertical
lay-out. Tracer cells are shown; velocity cells have the same lay-out, merely shifted $\frac{1}{2}$ grid-point.
$w$ represents the vertical velocity.

in our multiscale simulations (Webb et al., 1998; Griffies et al., 2000; Wubs et al., 2006). We
employ a finite volume discretization in which the average of a variable over the volume is
approximated by the value of the variable at the center of the finite volume. As shown in
figure 2, the tracers and free surface ($T$, $S$, $\eta$) are horizontally located at the centers of “tracer
cells” while velocities ($\vec{u}'$, $\vec{U}$, $\hat{\vec{U}}$) are located at the centers of “velocity cells” which are offset
$\frac{1}{2}$ grid-point to the northeast from the “tracer cells”. In the vertical, the three dimensional
tracers and velocities ($T$, $S$, $\vec{u}'$) are, again, located at the centers of their respective cells,
while the vertical velocities are calculated at the tops of the tracer and velocity cells. By
choosing this type of discretization, the control volumes of (21, 23, 25-26, 32-34) become
structured-grid finite volumes.

Vertical Grid  In section 7.2.4 we introduced our vertical discretization, defining first a set
of terrain-following depths for the undisturbed mean sea level, $z^{MSL}_{i,j,k}$. Here we present the
details of $z^{MSL}_{i,j,k}$. We can currently employ five different schemes for defining these vertical
levels, two of which are new:

(a) σ-coordinates (Phillips, 1957)

\[
z^{MSL}_{i,j,k} = -\sigma_k H_{i,j} \qquad (35)
\]

where $0 \le \sigma_k \le 1$

(b) hybrid coordinates (Spall and Robinson, 1989)

\[
z^{MSL}_{i,j,k} = \begin{cases} \tilde{z}_k & \text{if } k \le k_c \\ -h_c - \sigma_k\,(H_{i,j} - h_c) & \text{if } k > k_c \end{cases} \qquad (36)
\]

where $\tilde{z}_k$ are a set of constant depths and $h_c$ is the sum of the top $k_c$ flat level depths

(c) double σ-coordinates (Lozano et al., 1994)

\[
z^{MSL}_{i,j,k} = \begin{cases} -\sigma_k \tilde{f}_{i,j} & \text{if } k \le k_c \\ -\tilde{f}_{i,j} - (\sigma_k - 1)\left(H_{i,j} - \tilde{f}_{i,j}\right) & \text{if } k > k_c \end{cases} \qquad (37)
\]
\[
\tilde{f}_{i,j} = \frac{z_{c1}+z_{c2}}{2} + \frac{z_{c2}-z_{c1}}{2}\,\tanh\!\left[\frac{2\alpha}{z_{c2}-z_{c1}}\,(H_{i,j} - h_{ref})\right]
\]
\[
\sigma_k \in \begin{cases} [0,1] & \text{if } k \le k_c \\ [1,2] & \text{if } k > k_c \end{cases}
\]

where $\tilde{f}_{i,j}$ is the variable interface depth between the upper and lower σ-systems; $z_{c1}$
and $z_{c2}$ are the shallow and deep bounds for $\tilde{f}_{i,j}$; $h_{ref}$ is the reference topographic depth
at which the hyperbolic tangent term changes sign; and $\alpha$ is a nondimensional slope
parameter ($||\nabla\tilde{f}|| \le \alpha||\nabla H||$).

(d) multi-σ-coordinates  This new system is a generalization of the double σ system in which,
for $P$ σ-systems, we define $P+1$ non-intersecting interface surfaces. Then the depths
are found from

\[
z^{MSL}_{i,j,k} = -\tilde{f}^{\,p-1}_{i,j} - (\sigma_k - p + 1)\left(\tilde{f}^{\,p}_{i,j} - \tilde{f}^{\,p-1}_{i,j}\right) \quad \text{for } k_{p-1} < k \le k_p \qquad (38)
\]
\[
\tilde{f}^{\,0}_{i,j} = 0 \ ; \qquad \tilde{f}^{\,P}_{i,j} = H_{i,j}
\]
\[
\sigma_k \in [\,p-1,\ p\,] \quad \text{for } k_{p-1} < k \le k_p
\]

The intermediate interfaces are free to be chosen from many criteria, including key $\sigma_\theta$
surfaces (e.g. top of mean thermocline) or large mean vertical gradients.

(e) general coordinates  For this new system we provide a three dimensional field of level
thicknesses, $\Delta z^{MSL}_{i,j,k}$, under the constraint

\[
\sum_{k=1}^{K} \Delta z^{MSL}_{i,j,k} = H_{i,j} .
\]

The unperturbed levels are then found from

\[
z^{MSL}_{i,j,k} = \begin{cases} -\frac{1}{2}\,\Delta z^{MSL}_{i,j,1} & \text{if } k = 1 \\ z^{MSL}_{i,j,k-1} - \frac{1}{2}\left(\Delta z^{MSL}_{i,j,k-1} + \Delta z^{MSL}_{i,j,k}\right) & \text{if } k > 1 \end{cases} \qquad (39)
\]

Note that our new general coordinate scheme contains schemes (a-d) as special cases.
Hence, schemes (a-d) are now implemented by specifying $\Delta z^{MSL}_{i,j,k}$ outside the model,
according to their respective rules, and using the resulting $\Delta z^{MSL}_{i,j,k}$ as input to the
general coordinate scheme.
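Schemes (a) and (e) can be sketched in a few lines; the depth, level count and thicknesses below are illustrative:

```python
# Sketch of vertical-level construction: (a) uniform sigma levels, and
# (e) general-coordinate level depths built from prescribed thicknesses
# via eq. (39). All numerical values are illustrative.
import numpy as np

H = 100.0        # local undisturbed depth [m]
K = 5            # number of levels

# (a) sigma-coordinates: cell-center depths z_k = -sigma_k * H
sigma = (np.arange(K) + 0.5) / K
z_sigma = -sigma * H

# (e) general coordinates: thicknesses summing to H, centers from eq. (39)
dz = np.array([5.0, 10.0, 20.0, 30.0, 35.0])     # sum equals H
z_gen = np.empty(K)
z_gen[0] = -0.5 * dz[0]
for k in range(1, K):
    z_gen[k] = z_gen[k - 1] - 0.5 * (dz[k - 1] + dz[k])

print(np.isclose(dz.sum(), H))                        # thickness constraint
print(np.isclose(z_gen[-1], -H + 0.5 * dz[-1]))       # bottom cell center
```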

Finite Volume Discretization  In the vertical, our new time dependent, terrain-following
coordinates are defined as follows. First, the terrain-following depths for the (undisturbed)
mean sea level, $z^{MSL}_{i,j,k}$, are set (see section 7.2.4). We then define the time variable model
depths such that the change in cell thickness is proportional to the relative thickness of the
original (undisturbed) cell. Hence, along model level $k$, the depths can be found from

\[
z_k(x,y,t) = \eta(x,y,t) + \left(1 + \frac{\eta(x,y,t)}{H(x,y)}\right) z^{MSL}_k(x,y) . \qquad (40)
\]

By distributing the temporal change in the free surface across all the model levels we simplify
the discretization in shallow regions with large tides (e.g. we avoid requiring that the top
level be thick enough to encompass the entire tidal swing, which in the case of very shallow
depth can mean most of the total depth). An additional computational benefit is that the
time dependence of the computational cell thickness decouples from the vertical index. This
provides us the following properties

\[
\frac{1}{H+\eta^n}\sum_{k=1}^{K} \phi^n_{i,j,k}\, dz^n_{i,j,k} = \frac{1}{H}\sum_{k=1}^{K} \phi^n_{i,j,k}\, dz^{MSL}_{i,j,k} \ ; \qquad \frac{\Delta\mathcal{V}^n_{i,j,k}}{\Delta\mathcal{V}^{MSL}_{i,j,k}} = 1 + \frac{\eta^n_{i,j}}{H_{i,j}} ,
\]

both of which are used to derive equation (44) below.
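Equation (40) can be verified directly: the top of the column follows $\eta$, the bottom stays at $-H$, and every level thickness scales by the same factor $(1 + \eta/H)$. A small numerical check with illustrative values:

```python
# Check of eq. (40): distributed-sigma depths follow the free surface at the
# top, stay fixed at -H at the bottom, and stretch uniformly in between.
import numpy as np

H, eta = 20.0, 2.0
z_msl = np.linspace(0.0, -H, 11)          # undisturbed level depths
z_t = eta + (1.0 + eta / H) * z_msl       # eq. (40)

print(np.isclose(z_t[0], eta))            # surface follows eta
print(np.isclose(z_t[-1], -H))            # bottom stays at -H
# every cell thickness scales by the same factor (1 + eta/H)
print(np.allclose(np.diff(z_t) / np.diff(z_msl), 1.0 + eta / H))
```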

Since our vertical grid is both terrain-following and time variable, we also define a new vertical
flux velocity, $\omega$, normal to the top, $\zeta$, of finite volumes as

\[
\omega = w - \vec{u}\cdot\nabla\zeta - \frac{\partial\zeta}{\partial t} . \qquad (41)
\]

An important consequence of this definition is that the kinematic conditions at the surface
and bottom reduce to

\[
\omega|_{\eta} = 0 \ ; \qquad \omega|_{-H} = 0 .
\]

Using these definitions, along with the second order mid-point approximation

\[
\int_{\mathcal{V}} \phi\, d\mathcal{V} = \phi\,\Delta\mathcal{V} + O\!\left(\Delta\mathcal{V}^2\right) ,
\]

we discretize equations (21, 23, 25-26, 32-34) as


\[
\int_{S^n_{lat}} \vec{u}\cdot d\vec{A} + \int_{S^n_{TB}} \omega\cdot d\vec{A} = 0 , \qquad (42)
\]
\[
\frac{\hat{\delta}(\vec{u}\Delta\mathcal{V})}{\tau} + \alpha f\hat{k}\times\hat{\delta}(\vec{u}\Delta\mathcal{V}) = \hat{F}^{n,n-1} - g\,(\Delta\mathcal{V}\,\nabla\eta)^{\tilde{\alpha}} - f\hat{k}\times(\vec{u}\Delta\mathcal{V})^{\tilde{\alpha}} , \qquad (43)
\]
\[
\left(\vec{u}'\Delta\mathcal{V}\right)^{n+1} = \widehat{(\vec{u}\Delta\mathcal{V})}^{n+1} - \frac{\Delta\mathcal{V}^{MSL}}{H}\sum_{k=1}^{K} \frac{\widehat{(\vec{u}\Delta\mathcal{V})}^{n+1}}{\Delta\mathcal{V}^{MSL}}\, dz^{MSL} , \qquad (44)
\]
\[
\frac{\delta(T\Delta\mathcal{V})}{\tau} = F^{T\,n}\,\Delta\mathcal{V}^n - \breve{\Gamma}(T^n) , \qquad (45)
\]
\[
\frac{\delta(S\Delta\mathcal{V})}{\tau} = F^{S\,n}\,\Delta\mathcal{V}^n - \breve{\Gamma}(S^n) , \qquad (46)
\]
\[
\hat{\delta}\vec{U} + \alpha f\tau\,\hat{k}\times\hat{\delta}\vec{U} = \tau\left\{\vec{F}^{n,n-1} - g\nabla\eta^{\tilde{\alpha}}\right\} , \qquad (47)
\]
\[
\alpha\theta g\tau\,\nabla\cdot\left[(H+\eta^n)\,\nabla\delta\eta\right] - \theta\,\nabla\cdot\left(\vec{u}'^{\,n}|_{\eta}\,\delta\eta\right) - \frac{2\,\delta\eta}{\tau} = \nabla\cdot\left[(H+\eta^n)\left(\theta\hat{\vec{U}}^{n+1} + \vec{U}^n + (1-\theta)\vec{U}^{n-1}\right)\right] , \qquad (48)
\]
\[
\vec{U}^{n+1} = \hat{\vec{U}}^{n+1} - \alpha\tau g\,\nabla\delta\eta + \frac{\vec{u}'^{\,n}|_{\eta}}{H+\eta^n}\,\delta\eta \qquad (49)
\]

where

\[
\breve{\vec{\Gamma}}(\vec{u}) = \begin{pmatrix}\breve{\Gamma}(u)\\[2pt] \breve{\Gamma}(v)\end{pmatrix} \ ; \qquad \breve{\Gamma}(\phi) = \int_{S^n_{lat}} \phi\,\vec{u}\cdot d\vec{A} + \int_{S^n_{TB}} \phi\,\omega\cdot d\vec{A} ,
\]
\[
\hat{F}^{n,n-1} = -\frac{1}{\rho_0}\int_{S^n} p_h^n\, \hat{n}_h\cdot d\vec{A} - \breve{\vec{\Gamma}}(\vec{u})^n + \vec{F}^n\Delta\mathcal{V}^n + \vec{F}^{n-1}\Delta\mathcal{V}^{n-1} ,
\]
\[
\bar{\hat{F}}^{n,n-1} = \frac{1}{H_{i,j}+\eta^n_{i,j}}\int_{-H_{i,j}}^{\eta^n_{i,j}}\left\{-\frac{1}{\rho_0}\int_{S^n} p_h^n\, \hat{n}_h\cdot d\vec{A} - \breve{\vec{\Gamma}}(\vec{u})^n + \vec{F}^n\Delta\mathcal{V}^n\right\} dz
+ \frac{1}{H_{i,j}+\eta^{n-1}_{i,j}}\int_{-H_{i,j}}^{\eta^{n-1}_{i,j}}\left\{\vec{F}^{n-1}\Delta\mathcal{V}^{n-1}\right\} dz ,
\]
\[
\vec{F}^{n,n-1} = \bar{\hat{F}}^{n,n-1} - f\hat{k}\times\vec{U}^{\tilde{\alpha}} ,
\]

$S^n_{lat}$ are the lateral surfaces of a computational cell and $S^n_{TB}$ represents the top and bottom
surfaces of the computational cell.

With our new choice of vertical discretization, all cell volumes are functions of time. In regions
with relatively high tides (compared to the total water depth), not correctly accounting for the
time dependence of the volume change can lead to large errors in the tracer and momentum
fields. Focusing on the computational aspects, this time dependency of the cell volume means
that we solve the tracer and baroclinic velocity fields in two steps. Using temperature as an
example, we first solve for $(T\Delta\mathcal{V})^{n+1}$. Then, after we have solved for $\eta^{n+1}$, we update the
cell volume and compute $T^{n+1}$. A second computational property is that we do not maintain
separate storage for $\widehat{(\vec{u}\Delta\mathcal{V})}^{n+1}$ and $(\vec{u}'\Delta\mathcal{V})^{n+1}$. Instead, immediately after solving equation
(43) we remove the vertical mean according to (44). All details of the discretization of the
fluxes through the boundaries of the computational volumes are given below.
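The two-step tracer solve can be illustrated with a consistency check: a uniform tracer must remain uniform when the free surface (and hence every cell volume) changes, provided the volume gained through the cell faces carries tracer at the ambient concentration. A minimal sketch with illustrative values (dx = dy = 1 is assumed, so thicknesses and volumes coincide):

```python
# Sketch of the two-step tracer update: first advance the conserved quantity
# (T * dV), then divide by the updated cell volume once eta^{n+1} is known.
import numpy as np

H = 10.0
dV_msl = np.array([2.0, 3.0, 5.0])     # undisturbed cell volumes (sum = H)
T0 = 12.5                              # uniform tracer value

eta_nm1, eta_np1 = 0.0, 1.0            # free surface rises by 1 m
dV_nm1 = dV_msl * (1.0 + eta_nm1 / H)  # distributed-sigma volume scaling
dV_np1 = dV_msl * (1.0 + eta_np1 / H)

# For a uniform tracer, the advective flux convergence into each cell equals
# T0 times that cell's volume gain (volume only enters through its faces).
flux_div = -T0 * (dV_np1 - dV_nm1)

TdV_np1 = T0 * dV_nm1 - flux_div       # step 1: update conserved content
T_np1 = TdV_np1 / dV_np1               # step 2: divide by updated volume
print(np.allclose(T_np1, T0))          # uniform tracer stays uniform
```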

Fluxes Through Boundaries of Computational Cells  To complete the conservative
spatial discretizations of §7.2.4, we first establish some notation. Values taken at the centers
of tracer volumes have integer indices, e.g. $T_{i,j,k}$, while values taken at the centers of velocity
volumes have odd-half integer indices, e.g. $\vec{u}_{i+\frac{1}{2},j+\frac{1}{2},k}$. In the vertical, values taken at either
the centers of tracer or velocity volumes have integer indices while those at the tops or
bottoms of the computational volumes have odd-half integer indices, e.g. $\omega_{i,j,k+\frac{1}{2}}$. Using
these rules, we define the following averaging and differencing operators:

\[
\langle\phi\rangle^x_{i,j,k} = \tfrac{1}{2}\left(\phi_{i+\frac{1}{2},j,k} + \phi_{i-\frac{1}{2},j,k}\right) \qquad
\langle\phi\rangle^y_{i,j,k} = \tfrac{1}{2}\left(\phi_{i,j+\frac{1}{2},k} + \phi_{i,j-\frac{1}{2},k}\right)
\]
\[
\langle\phi\rangle^z_{i,j,k} = \tfrac{1}{2}\left(\phi_{i,j,k+\frac{1}{2}} + \phi_{i,j,k-\frac{1}{2}}\right)
\]
\[
\delta^x(\phi)_{i,j,k} = \phi_{i+\frac{1}{2},j,k} - \phi_{i-\frac{1}{2},j,k} \qquad
\delta^y(\phi)_{i,j,k} = \phi_{i,j+\frac{1}{2},k} - \phi_{i,j-\frac{1}{2},k}
\]
\[
\delta^z(\phi)_{i,j,k} = \phi_{i,j,k-\frac{1}{2}} - \phi_{i,j,k+\frac{1}{2}} .
\]

Note that in the above, $i$ and $j$ increase with increasing $x$ and $y$ while $k$ increases with
decreasing depth (negative below sea level).
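In one dimension these operators reduce to simple shifts. A minimal sketch, mapping the half-integer positions to consecutive array slots:

```python
# 1-D versions of the averaging and differencing operators above: input values
# live at half-integer positions (i-1/2, i+1/2, ...), outputs at integer points.
import numpy as np

def avg_x(phi_half):
    """<phi>^x at integer points from values at half-integer points."""
    return 0.5 * (phi_half[1:] + phi_half[:-1])

def delta_x(phi_half):
    """delta^x(phi) at integer points: phi_{i+1/2} - phi_{i-1/2}."""
    return phi_half[1:] - phi_half[:-1]

phi = np.array([1.0, 3.0, 7.0, 13.0])   # values at i-1/2, i+1/2, i+3/2, i+5/2
print(avg_x(phi))                        # averages: 2, 5, 10
print(delta_x(phi))                      # differences: 2, 4, 6
```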

Now we can define the fluxes through the sides of the computational cells. We start with the
“flux velocities” evaluated at the centers of the sides. Following Dukowicz and Smith (1994,
appendix E), we define the integrated flows through the “East” and “North” lateral walls of
the tracer volumes as

\[
\upsilon^n_{i+\frac{1}{2},j,k} = \Delta y_j\,\Big\langle \Delta z^n u' + \tfrac{1}{2}\Delta z^{n-1}\left(U^n + U^{n-1}\right)\Big\rangle^y_{i+\frac{1}{2},j,k} ,
\]
\[
\nu^n_{i,j+\frac{1}{2},k} = \Delta x_i\,\Big\langle \Delta z^n v' + \tfrac{1}{2}\Delta z^{n-1}\left(V^n + V^{n-1}\right)\Big\rangle^x_{i,j+\frac{1}{2},k} ,
\]

while at the velocity boxes, we define the integrated flows through the “East” and “North”
lateral walls as

\[
\upsilon^n_{i+1,j+\frac{1}{2},k} = \Delta y_{j+\frac{1}{2}}\,\Big\langle\Big\langle\Big\langle \Delta z^n u' + \tfrac{1}{2}\Delta z^{n-1}\left(U^n + U^{n-1}\right)\Big\rangle^y\Big\rangle^x\Big\rangle^y_{i+1,j+\frac{1}{2},k} ,
\]
\[
\nu^n_{i+\frac{1}{2},j+1,k} = \Delta x_{i+\frac{1}{2}}\,\Big\langle\Big\langle\Big\langle \Delta z^n v' + \tfrac{1}{2}\Delta z^{n-1}\left(V^n + V^{n-1}\right)\Big\rangle^x\Big\rangle^y\Big\rangle^x_{i+\frac{1}{2},j+1,k} .
\]

These particular spatial averagings are chosen to match the discrete transport constraint (eq.
54 in section 7.2.4). The new aspect here is the temporal evaluations. The baroclinic velocity
components are evaluated at time $n$. However, the timings for the barotropic components
are, again, chosen to match the transport constraint (54). Also note that these timings
assume $\theta = 1$. To get the corresponding flows through the “West” (“South”) lateral walls,
simply decrement $i$ ($j$) by one.

To evaluate the fluxes through the tops of the computational volumes, we use the above
definitions in (42). At tracer volumes this yields

\[
\delta^z(\omega^n)_{i,j,k}\,\Delta x_i\,\Delta y_j + \delta^x(\upsilon^n)_{i,j,k} + \delta^y(\nu^n)_{i,j,k} + \frac{\Delta\mathcal{V}^n_{i,j,k}}{H_{i,j}+\eta^n_{i,j}}\,\frac{\delta(\eta_{i,j})^{n,n-2}}{\tau} = 0 \qquad (50)
\]

while at velocity volumes we get

\[
\delta^z(\omega^n)_{i+\frac{1}{2},j+\frac{1}{2},k}\,\Delta x_{i+\frac{1}{2}}\,\Delta y_{j+\frac{1}{2}} + \delta^x(\upsilon^n)_{i+\frac{1}{2},j+\frac{1}{2},k} + \delta^y(\nu^n)_{i+\frac{1}{2},j+\frac{1}{2},k}
+ \frac{\Delta\mathcal{V}^n_{i+\frac{1}{2},j+\frac{1}{2},k}}{H_{i,j}+\langle\langle\eta^n\rangle^x\rangle^y_{i+\frac{1}{2},j+\frac{1}{2}}}\,\frac{\delta\big(\langle\langle\eta\rangle^x\rangle^y_{i+\frac{1}{2},j+\frac{1}{2}}\big)^{n,n-2}}{\tau} = 0 . \qquad (51)
\]

Using these definitions of the fluxes through the boundaries of the computational volumes,
we can now simply write the discrete advection operator as

\[
\breve{\Gamma}(\phi)^n_{i,j,k} = \delta^x\big(\langle\phi^n\rangle^x\,\upsilon^n\big)_{i,j,k} + \delta^y\big(\langle\phi^n\rangle^y\,\nu^n\big)_{i,j,k} + \delta^z\big(\langle\phi^n\rangle^z\,\omega^n\big)_{i,j,k}\,\Delta x_i\,\Delta y_j .
\]

This formulation is valid for both tracer and velocity computational volumes, with the
understanding that for velocity volumes the $i$, $j$ indices are shifted by one half.

We have evaluated the pressure force term, $-\frac{1}{\rho_0}\int_{S^n} p_h^n\,\hat{n}_h\cdot d\vec{A}$, both by directly discretizing
the integrals of pressure along the cell walls (including the horizontal contributions from the
sloping cell tops and bottoms) and by interpolating the pressure to the corresponding velocity
depths and evaluating the differential gradient. Both give similar results, but the integral
evaluation is conservative and produces less noise in the resulting velocities (especially near
sloping bottoms).

Open Boundary Conditions  For $\vec{u}'$, $T$, $S$ and $\eta$, the application of boundary conditions
is straightforward. Our options (see Haley et al., 2009; Lermusiaux, 1997) include using
values based on data, applying radiation conditions (Orlanski, 1976; Spall and Robinson,
1989) or, following Perkins et al. (1997), using radiation conditions to correct the provided
values. For nested sub-domains, we have first used the interpolated values directly or with
Perkins et al. (1997) corrections. Some other promising options we have explored with nested
sub-domains include using the coarse grid values in a narrow buffer zone around the fine
domain, which reduces discontinuities. Another important multiscale conservative boundary
condition option is to feed-back the averages of the fluxes across the boundary walls shared
with the large domain (fig. 3). These include the advective fluxes of momentum and tracers;
the pressure force; and the diffusive fluxes of momentum and tracers.

We still need an additional boundary condition for $\bar{\hat{F}}^{n,n-1}$ since we are unable to directly
evaluate equation (47) at the boundaries. To derive this boundary condition, we recast
equation (47) in the form of equation (24) and solve for $\bar{\hat{F}}^{n,n-1}$:

\[
\bar{\hat{F}}^{n,n-1} = \frac{\delta(\vec{U})}{\tau} + f\hat{k}\times\vec{U}^{\alpha} + g\nabla\eta^{\alpha}_{i,j} . \qquad (52)
\]

Now, the right hand side of (52) is made up entirely of quantities that can be directly
evaluated at the boundary of the velocity grid. For the free surface, we have found that it is
more stable to rewrite (52) in terms of transports:

\[
\bar{\hat{F}}^{n,n-1} = \frac{1}{H+\eta^n}\left\{\frac{\delta\big[(H+\eta)\,\vec{U}\big]}{\tau} + f\hat{k}\times\big[(H+\eta)\,\vec{U}\big]^{\alpha}\right\} + g\nabla\eta^{\alpha}_{i,j} . \qquad (53)
\]

Note: when evaluating (53), only values at time $t^{n+1}$ are taken from the provided fields (or
nesting interpolations). The fields at times $t^n$ and $t^{n-1}$ are both already in memory and in
primitive equation balance. They are combined with the $t^{n+1}$ fields to evaluate (53).

Following the algorithm of Perkins et al. (1997), corrections to the provided values (and
nesting interpolation values) are obtained by applying the Orlanski radiation algorithm to
the difference between the PE model values and these provided values, and using these
differences to correct the boundary values.

For the barotropic transport, however, this is only done for the component tangential to the
boundary. The correction to the normal component is derived from the correction obtained
for the surface elevation, Δη, and the barotropic continuity equation

  ∂(Δη)/∂t + ∇·[(H + η) ΔU] = 0 .
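As an illustration of the Perkins et al. (1997) idea, here is a minimal 1-D sketch (hypothetical helper functions, not the MSEAS code): the Orlanski phase speed is estimated from the difference field d = (model − provided), the difference is radiated out of the domain by upwind advection, and the radiated result is added back to the provided boundary value.

```python
import numpy as np

def orlanski_step(d_now, d_prev, dt, dx):
    """Advance the boundary value of the difference field d = (model - provided)
    one time step with an Orlanski (1976) radiation condition (1-D sketch).
    Index -1 is the boundary point; -2 and -3 are the first interior points."""
    num = d_now[-2] - d_prev[-2]      # time tendency one point inside the boundary
    den = d_prev[-2] - d_prev[-3]     # upwind spatial difference at the old time
    c = 0.0 if abs(den) < 1e-12 else -num / den * (dx / dt)
    c = min(max(c, 0.0), dx / dt)     # clip the phase speed to 0 <= c <= dx/dt
    r = c * dt / dx
    # upwind advection radiates the boundary difference out of the domain
    return (1.0 - r) * d_now[-1] + r * d_now[-2]

def corrected_boundary(provided_next, d_now, d_prev, dt, dx):
    """Provided (or nesting-interpolated) boundary value plus the radiated
    model-minus-provided difference."""
    return provided_next + orlanski_step(d_now, d_prev, dt, dx)
```

A wave in the difference field that propagates toward the boundary is advected out rather than reflected; where the difference field is steady, the boundary keeps the model-minus-provided offset.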

Maintaining the Vertically Integrated Conservation of Mass To see how the free
surface algorithm maintains the vertically integrated conservation of mass, start from equation (49), multiply by θ(H + η^n) and take the divergence of the result to get

  ∇·[(H + η^n) θU^{n+1}] = ∇·[(H + η^n) θÛ^{n+1}] − αθgτ ∇·[(H + η^n) ∇δη] + θ ∇·( u′|_η δη/τ ) .

Substitute for the right-hand side of the above equation from (48) and rearrange to obtain
  δη/τ + (1/2) ∇·{ (H + η^n) [ θU^{n+1} + U^n + (1−θ)U^{n−1} ] } = 0 .   (54)
Equation (54) represents the discrete form of the barotropic continuity enforced by the free
surface algorithm. Imbalances in (54) produce unrealistic vertical velocities via (50,51).
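The imbalance in (54) can be monitored directly. A sketch of such a diagnostic, assuming simple centered differences on a collocated grid (the actual MSEAS discretization is finite-volume on a B-grid):

```python
import numpy as np

def continuity_residual(deta, tau, H, eta_n, U_np1, U_n, U_nm1, theta, dx, dy):
    """Point-wise residual of the discrete barotropic continuity (54),
    sketched with centered differences via np.gradient.
    Each U_* is a tuple (u, v) of 2-D barotropic velocity arrays."""
    u = 0.5 * (theta * U_np1[0] + U_n[0] + (1.0 - theta) * U_nm1[0])
    v = 0.5 * (theta * U_np1[1] + U_n[1] + (1.0 - theta) * U_nm1[1])
    hu = (H + eta_n) * u                  # zonal transport
    hv = (H + eta_n) * v                  # meridional transport
    div = np.gradient(hu, dx, axis=1) + np.gradient(hv, dy, axis=0)
    return deta / tau + div               # zero when (54) is satisfied
```

A nonzero residual flags exactly the spurious mass source that would appear as unrealistic vertical velocity through (50,51).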

However, as illustrated by the above derivation, equation (54) is only satisfied to the same
degree that equations (48,49) are satisfied. This places restrictions on the valid avenues for
nesting. For example, we can safely replace the coarse domain estimates of (H + η)Û^{n+1}
with averages from the fine domain without disturbing (54). Moving this exchange one step
later in the algorithm, and trying to average (H + η)[θÛ^{n+1} + U^n + (1−θ)U^{n−1}], would
violate (48) (in the sense that we would not be able to make the last substitution leading
to 54) and hence we would violate (54).

7.3 Fully Implicit Nesting Scheme

This section discusses the fully implicit (in space and time) 2-way nesting scheme. Deriving
this scheme required a detailed exploration of the choices of variables to exchange and the
specific algorithms. Considering first traditional “explicit” and “coarse-to-fine implicit” 2-
way nesting (Debreu and Blayo, 2008), fields are often interpolated from a coarser resolution
domain to provide boundary values for a finer resolution domain. Then fields from the finer
domain are averaged to replace corresponding values in the coarser domain. This is a natural
order of operations in the sense that often a refined (smaller) time step is used for the finer

domain and hence not all refined time steps have corresponding coarse field values. However,
once updated, the coarse domain fields are no longer the same fields that were interpolated
for the finer domain boundaries. This results in a weakened coupling between the domains
which can be rectified either with an iteration scheme or with fully implicit nesting.

In our new implicit nesting, the goal is to exchange all of the updated field values as soon
as they become available. This is analogous to an implicit time stepping algorithm, which
simultaneously solves for all unknowns. It is only analogous because here updated values are
exchanged across multiple scales and nested grids within the same time step, for several fields.
Hence, we refer to such schemes as being implicit in space and time; the nested solutions are
intertwined. Such tightly coupled implicit nesting can, in some sense, be seen as refining grids
in a single domain (e.g. Ginis et al., 1998). However, there are some advantages to the nesting
paradigm. First, the time stepping can be easily refined for the finer domains. Second, the
model dynamics can be tuned for the different scales in the different domains. Most notably,
different sub-gridscale physics can easily be employed in the different domains and we have
used this in several regions. Finally, fundamentally different dynamics can be employed in
the different domains (e.g. Shen and Evans, 2004; Maderich et al., 2008). To implement our
implicit nesting, we observe that most of our prognostic variables in our free-surface PE model
(42-49) are coded with explicit time stepping. Therefore, reversing the order of operations
(updating the coarse domain fields with averages from the interior of the fine domain before
interpolating to the boundaries of the fine domain) ensures that, for these fields, the updated
field values are in place as soon as they are needed. For the remaining variables, such implicit
nesting is more complex. The free-surface η has implicit time stepping (48), while U is
coupled to η through (49) and boundary conditions (section 7.2.4). Furthermore, additional
constraints are imposed on η and U to maintain the vertically integrated conservation of
mass (section 7.2.4). Much of the research was centered around these two variables. The
final results are presented next, assuming a two-domain configuration (coarse and fine).

We start by defining collocated grids for the coarse and fine domains as shown in figure 3.
Our nesting algorithm is suitable for arbitrary odd refinement ratios (r:1), subject to the
known issues of scale matching (e.g., Spall and Holland, 1991). Here we illustrate the nesting
with 3:1 examples. We denote fields evaluated at coarse grid nodes with the indices (ic , jc )
and fields evaluated at fine grid nodes with (if , jf ). We distinguish two special subsets of fine
grid nodes: (a) fine grid nodes collocated with coarse grid nodes (if c , jf c ) and (b) fine grid
nodes at the outer boundary of the fine domain (if b , jf b ). In this presentation, we assume
that we have the same number of model levels and distribution of vertical levels in both
domains (i.e. no vertical refinement). However, the topography can be refined in the finer
domains (it is refined in all of our examples), subject to the constraints described in 7.4.1.
The algorithms apply to (and are coded for) both Cartesian and spherical coordinates.

At each time step, our nesting algorithm proceeds as follows (also shown graphically in figure
4).

1. Solve (42-47) simultaneously in each domain for (u′^{n+1}Δz^{n+1}, Û^{n+1}, T^{n+1}Δz^{n+1}, S^{n+1}Δz^{n+1}).


Figure 3: The basic collocated nesting finite volume domains are shown (for a 3:1 example) with
the coarse domain nodal points indicated by open circles and the boundaries of the corresponding
coarse domain computational cells in solid lines. The fine domain nodal points are marked with
plus signs and the boundaries of the corresponding fine domain computational cells in dashed lines.
(a) The r × r array of fine grid cells averaged to update a single coarse grid cell are highlighted. (b)
The 4×4 stencil of coarse grid nodes bi-cubically interpolated to update boundary nodes of the fine
domain are highlighted, as are the updated fine grid cells.

2. Replace (u′^{n+1}Δz^{n+1}, (H + η^n)Û^{n+1}, T^{n+1}Δz^{n+1}, S^{n+1}Δz^{n+1}, η^n) in the coarse
domain at overlap nodes with the corresponding averages from the fine domain:
  φ^{n+1}_{ic,jc,k} Δz^{n+1}_{ic,jc,k} = (1/ΔA_{ic,jc}) Σ_{j=jfc−rh}^{jfc+rh} Σ_{i=ifc−rh}^{ifc+rh} φ^{n+1}_{i,j,k} ΔV^{n+1}_{i,j,k} ,   (55)

  η^n_{ic,jc} = (1/ΔA_{ic,jc}) Σ_{j=jfc−rh}^{jfc+rh} Σ_{i=ifc−rh}^{ifc+rh} η^n_{i,j} ΔA_{i,j} ,   (56)

  (H_{ic,jc} + η^n_{ic,jc}) Û^{n+1}_{ic,jc} = (1/ΔA_{ic,jc}) Σ_{j=jfc−rh}^{jfc+rh} Σ_{i=ifc−rh}^{ifc+rh} (H_{i,j} + η^n_{i,j}) Û^{n+1}_{i,j} ΔA_{i,j} ,   (57)

where rh = ⌊r/2⌋ (the greatest integer less than or equal to r/2),

  φ = u′, T, S ;   ΔV^n_{i,j,k} = Δx_{i,j} Δy_{i,j} Δz^n_{i,j,k} ;   ΔA_{i,j} = Δx_{i,j} Δy_{i,j} .

3. In the coarse domain, recompute U^n from (49) and the updated η^n. When the coarse
domain estimate of U^n was computed from (49) in the n−1 time step, the coarse domain
estimate η^n had not yet been updated from the fine domain (eq. 56 in step 2).


Figure 4: Present MSEAS-nesting algorithm, 2-way implicit in space and time. The nesting
algorithm is shown schematically (a) on the discrete structured finite-volume equations (42-49)
and (b) in words. Solid lines indicate averaging operators from fine domain to coarse. Dashed lines
indicate interpolation operators from the coarse domain to the boundary of the fine domain.

4. In the coarse domain, solve (48-49) for η^{n+1}, U^{n+1}, Δz^{n+1}, u′^{n+1}, T^{n+1}, S^{n+1}.

5. Using piece-wise bi-cubic Bessel interpolation, B, replace values in the fine grid bound-
ary with values interpolated from the coarse grid
 
  φ^{n+1}_{ifb,jfb,k} = B[ φ^{n+1}_{ic,jc,k} ] ,   (58)

  u′^{n+1}_{ifb,jfb,k} Δz^{n+1}_{ifb,jfb,k} = B[ u′^{n+1}_{ic,jc,k} Δz^{n+1}_{ic,jc,k} ] ,   (59)

  U^{n+1}_{ifb,jfb} = [1/(H_{ifb,jfb} + η^{n+1}_{ifb,jfb})] B[ (H_{ic,jc} + η^{n+1}_{ic,jc}) U^{n+1}_{ic,jc} ]   (60)

where
  φ = T, S, η^n, η^{n+1} .
Note that equations (59-60) are written in terms of transports rather than velocities.
This is done to generate a consistent mass flux as seen by both domains. We have
implemented this scheme to either use the interpolated values in (58-60) directly or
to correct them to allow the outward radiation of scales unrepresented in the coarse
domain. The radiation scheme is an extension of Perkins et al. (1997) and updates our
previous radiation schemes (Lermusiaux, 2007; Haley et al., 2009).

6. In the fine domain, solve (48-49) for η^{n+1}, U^{n+1}, Δz^{n+1}, u′^{n+1}, T^{n+1} and S^{n+1}.
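Steps 2 and 5 above hinge on averaging and interpolating transports rather than velocities. A minimal sketch of the fine-to-coarse averages (55)-(57), simplified so that the r × r fine cells exactly tile each coarse cell (the text uses the collocated i_fc ± r_h stencils instead):

```python
import numpy as np

def average_to_coarse(phi_f, dA_f, r):
    """Area-weighted fine-to-coarse average, in the spirit of (56).
    phi_f, dA_f : fine-grid field and cell areas, shape (r*ncy, r*ncx)."""
    ncy, ncx = phi_f.shape[0] // r, phi_f.shape[1] // r
    num = (phi_f * dA_f).reshape(ncy, r, ncx, r).sum(axis=(1, 3))
    den = dA_f.reshape(ncy, r, ncx, r).sum(axis=(1, 3))
    return num / den

def average_transport_to_coarse(H_f, eta_f, Uhat_f, dA_f, r, H_c, eta_c):
    """Transport-weighted average, in the spirit of (57): average the fine
    transports (H+eta)*Uhat, then divide by the coarse total depth so both
    grids see a consistent mass flux."""
    flux_c = average_to_coarse((H_f + eta_f) * Uhat_f, dA_f, r)
    return flux_c / (H_c + eta_c)
```

Averaging the transport and only then recovering the velocity is what keeps the mass flux seen by the two domains identical even where the coarse and fine total depths differ.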

As written in steps 1-6, the new fully implicit nesting scheme requires that both domains
be run with the same time step. This is an outgrowth of the applications we have been
running, which have strong thermoclines, haloclines and pycnoclines over shallow areas, steep
shelfbreak and/or open ocean. These applications require a relatively large number of vertical
levels (e.g. from 50 to 100 or more). Satisfying the Courant-Friedrichs-Lewy (CFL; Courant
et al., 1928) restrictions from the resulting vertical discretizations requires a small enough
time step such that the maximum horizontal velocities only reach about 10% of their own
CFL limits. Hence decreasing the horizontal grid spacing by a factor of 3 or 5 does not affect
the total CFL limitation much or require a smaller time step.
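A back-of-the-envelope version of this time-step budget, with made-up illustrative numbers:

```python
def dt_from_vertical_cfl(dz_min, w_max):
    """Largest advective time step allowed by the vertical discretization."""
    return dz_min / w_max

def horizontal_courant(u_max, dx, dt):
    """Fraction of the horizontal advective CFL limit used at time step dt."""
    return u_max * dt / dx

# Illustrative numbers only: a 1 m minimum layer with 1 cm/s vertical velocity
# forces dt <= 100 s; at dx = 300 m and 30 cm/s horizontal currents that step
# uses only ~10% of the horizontal limit, so a 3:1 horizontal refinement still
# leaves the same time step stable (~30% of the limit).
dt = dt_from_vertical_cfl(1.0, 0.01)              # 100 s
nu_coarse = horizontal_courant(0.3, 300.0, dt)    # ~0.1
nu_fine = horizontal_courant(0.3, 100.0, dt)      # ~0.3 after 3:1 refinement
```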

It is a straightforward problem to restructure this algorithm to handle refined time stepping.


First, split the data transfer from the horizontal interpolation in step 5. Before step 2 the
values from the coarse grid in the 2 bands outside of the overlap region (i.e. all the coarse
grid points in the interpolation stencil but outside of the overlap region) would be passed to
some auxiliary storage in the fine grid model. In the fine grid, these external values would
be time interpolated to the current refined time step then spatially interpolated with the
averaged fine grid values to the outer boundary. An advantage of our scheme over one with
refined time stepping is that the fine grid fields are available to make the update in equation
(57), which increases the coupling of the barotropic modes between the domains.

Our scheme is directly applicable to an arbitrary number of non-overlapping, telescoping
domains. First, iterate step 2 over all domains from finest to coarsest. Then, apply the series
of steps 3-5 for all domains from coarsest to finest.

Finally, since we allow refinement in the topography, our undisturbed vertical terrain-following
grid, z^MSL_{i,j,k}, requires constraints to maintain consistent interpolation and averaging operations
in the above nesting rules. Specifically, in the portion of the coarse domain supported by
averages from the fine domain, z^MSL_{ic,jc,k} are computed from averages of z^MSL_{if,jf,k} following equation
(56). Along the boundary of the fine domain, z^MSL_{ifb,jfb,k} are interpolated from z^MSL_{ic,jc,k} following
equation (58). These restrictions, along with the nesting couplings on η, keep the computational
cells consistent between domains which, in turn, keeps the averaging operations in
equation (55) consistent (i.e. as long as the coarse cell is equivalent to the sum of the fine
cells then the integral of a field over the coarse cell is conceptually the same as the sum of
the integrals of the same field over the corresponding fine cells).

7.4 Domains, Initialization, Tidal Forcing and Surface Elevation: Algorithms and Implementation

7.4.1 Setting Up Domains

Topography There are two main issues when defining topographies for nested simula-
tions. The first is that the finer resolution grid can support finer topography scales, including
sharper gradients. The bathymetry on the finer grid is not an interpolation of the coarser
grid bathymetry, but the coarser grid bathymetry is a coarse-control-volume average of the
finer grid bathymetry. The refinement in topographic scales can lead to abrupt artificial
discontinuities in the topography where the fine and coarse domains meet. This can be exac-
erbated by conditioning the topography (Haley and Lozano, 2001) to control the hydrostatic
consistency condition (Haney, 1991). For a given value of the hydrostatic consistency factor
(roughly proportional to (dx⃗ · ∇h)/h), the finer resolution domain can support steeper bathymetric
features (e.g. shelfbreak). To ensure a smooth transition, we define a band of points around
the outer edge of a fine domain (e.g. a band from the boundary to 6 points inside the bound-
ary, see also Penven et al., 2006). In this band, we replace the fine grid topography with a
blend of the coarse and fine grid topographies:

  h_blend = α h_fine + (1 − α) h_coarse   (61)

where α varies from zero at the boundary to one at the inner edge of the band (e.g. 6 points).
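A sketch of the blending (61), assuming the coarse topography has already been interpolated onto the fine grid and using the distance (in grid points) to the nearest boundary to build α:

```python
import numpy as np

def blend_topography(h_fine, h_coarse, nband=6):
    """Blend topographies near the fine-domain edge following (61):
    h = alpha*h_fine + (1-alpha)*h_coarse, with alpha ramping linearly from
    0 on the boundary to 1 at nband points inside (sketch)."""
    ny, nx = h_fine.shape
    i = np.arange(ny)[:, None]
    j = np.arange(nx)[None, :]
    # distance, in grid points, to the nearest edge of the fine domain
    dist = np.minimum(np.minimum(i, ny - 1 - i), np.minimum(j, nx - 1 - j))
    alpha = np.clip(dist / float(nband), 0.0, 1.0)
    return alpha * h_fine + (1.0 - alpha) * h_coarse
```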

The second issue comes about from the nesting algorithm itself. As mentioned in section 7.3,
we force the undisturbed vertical grid, z^MSL_{i,j,k}, to satisfy the nesting rules of equations (56,58).
To ensure that the topographies in nested domains satisfy (56,58) and the blending (61), we
usually follow these steps:

1. Apply the nesting constraints on the unconditioned topographies. Starting from the
smallest domain, average the fine grid topographies onto the successively larger domains
according to (56). Then, starting from the coarsest domain, interpolate the topographies
to the boundaries of the successively smaller domains according to (58).

2. Starting from the largest domain, apply the conditioning. After the largest domain is
conditioned, apply the blending (61) to the second largest. Condition that domain and
repeat the blending-conditioning cycle with the successively smaller domains.

3. Reapply the nesting constraints on the conditioned topography. Repeat step 1.

Land Masking The first constraint for masking occurs at the boundaries of the finer
domains. Considering any two nested domains, we want continuity of the masks across the
domain boundary. In other words, a coastline that crosses the boundary of the fine domain
should not have a jump or jog at the boundary of the fine domain. Enforcing this consistency,
along with boundary constraints on the topography, enforces consistent estimates of the areas
of the lateral boundaries of the fine domain as measured in both the coarse and fine grids.

The second constraint is to have a certain degree of consistency in defining land and sea in the
interior of the fine domain. This is a less exact statement because the fine domain supports
a more detailed resolution of the land/sea boundary than the coarse domain. Because of the
superior resolution, we take the view that the land mask in the interior of the fine domain
is “more correct” than the coarse domain mask. Since we use collocated grids, this provides
us with a simple algorithm for resetting the coarse mask. For each coarse grid point fully
supported by fine grid points, we count how many of the supporting fine grid points are land
and how many are sea. If at least one half the fine grid points are sea, the coarse grid point
is marked as sea, otherwise it is masked as land.
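The counting rule can be sketched as follows, again simplified so the fine cells exactly tile the coarse cells:

```python
import numpy as np

def reset_coarse_mask(fine_sea, r):
    """Reset the coarse land/sea mask from the fine mask by majority vote:
    a coarse point is sea iff at least half of its r x r supporting fine
    points are sea.  fine_sea : boolean array, True = sea (sketch)."""
    ncy, ncx = fine_sea.shape[0] // r, fine_sea.shape[1] // r
    counts = fine_sea.reshape(ncy, r, ncx, r).sum(axis=(1, 3))
    return 2 * counts >= r * r   # "at least one half ... are sea"
```

For odd r the vote never ties exactly at one half, so the rule is unambiguous.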

Our general procedure is to first define the land mask for the largest (coarsest) domain. Then
use that mask to define a crude first guess for the mask in the fine domain. We then reset
the interior nodes of the fine mask to better resolve the coasts (leaving a narrow band around
the exterior untouched to ensure continuity through the boundary). If we have more than
two domains we use the current domain to initialize the mask for the next finest domain and
repeat. When we finish the mask in the smallest (finest) domain we use that mask to reset
the mask in the next coarser domain, using the above sea/land counting procedure. We then
examine the modified mask in that next coarser domain to eliminate any spurious artifacts
that may have been created (e.g. a narrow mouth of a bay may have been closed leaving an
isolated “lake” that we do not need to maintain). We repeat with the next coarser domain
and so on until we get back to the coarsest domain.

7.4.2 Initialization

Our most common initialization scenario is to estimate the best initial synoptic state from
temperature and salinity data (in situ, climatologies, satellite, etc.) but with little or no
direct velocity data. Our initialization scheme for this situation is described next, focusing
mainly on the nesting considerations, first briefly for the rigid-lid procedures and then the
extensions for initializations with a free surface.

Rigid Lid Our procedures for rigid lid initializations in nested grids follow, e.g., Haley
et al. (2009). Starting from temperature and salinity data, climatologies, etc., we
create three dimensional estimates of temperature and salinity, often using objective analyses
(Carter and Robinson, 1987; Agarwal and Lermusiaux, 2010). From these three dimensional
temperature and salinity estimates, we construct density (6) and the hydrostatic pressure
(9). We then estimate the total velocity using the rigid lid geostrophic relation
  f k̂ × (u − u_ref) = (1/ρ₀) ∇p_h = (g/ρ₀) ∫_{Z_ref}^{z} ∇ρ dζ

where Z_ref is a suitably chosen reference level, which can be a “level of no motion”, u_ref is the
absolute velocity at that depth and we have interchanged the horizontal gradient with the
vertical integral. When evaluating ∇ρ at a particular depth, if any of the ρ values used for the
gradient would be below topography, we set ∇ρ to zero. To enforce no penetration of land,
we find a streamfunction, ψ, which satisfies ∇²ψ = ∇ × u, with ψ set to be constant along
coasts. From this ψ we recompute the velocity. We decompose this velocity into barotropic
and baroclinic parts (8). The baroclinic portion is fine as is, but barotropic velocities at this
stage generally do not satisfy the non-divergence of transport. To enforce this, we define a
transport streamfunction, k̂ × ∇Ψ = H U, and fit it to our estimated barotropic velocities via
the Poisson equation

  ∇ × [ (1/H) k̂ × ∇Ψ ] = ∇ × U .
We derive Dirichlet boundary conditions for the above by first noting that the derivative
of Ψ tangential to the boundary equals the normal component of transport, H U, through
the boundary. We then integrate this relation around the boundary to obtain the Dirichlet
values. For domains with islands, we also need to provide constant values for Ψ along the
island coasts. We do this in a two stage process in which we first compute Ψ assuming all the
islands are open ocean. We then use that initial guess to derive constant island values that
minimize the relative inter-island transports using Fast-Marching-Methods (Agarwal, 2009).
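The geostrophic shear step of this initialization can be sketched via thermal wind for one velocity component (no land masking or streamfunction projection; variable names are illustrative):

```python
import numpy as np

def geostrophic_v(rho, z, x, f=1e-4, g=9.81, rho0=1025.0, kref=0, v_ref=0.0):
    """v(z, x) relative to a reference level from the rigid-lid geostrophic
    relation f k x (u - u_ref) = (g/rho0) int_{Zref}^{z} grad(rho) dzeta.
    rho has shape (nz, nx); z increases upward; kref indexes the reference
    level (here the deepest level)."""
    drho_dx = np.gradient(rho, x, axis=1)
    integ = np.zeros_like(rho)
    # cumulative trapezoidal integral of d(rho)/dx upward from the reference level
    for k in range(kref + 1, len(z)):
        integ[k] = integ[k - 1] + 0.5 * (drho_dx[k] + drho_dx[k - 1]) * (z[k] - z[k - 1])
    return v_ref - (g / (f * rho0)) * integ
```

With a density field increasing eastward, the sketch reproduces the standard Northern-Hemisphere thermal-wind shear ∂v/∂z = −(g/(fρ₀)) ∂ρ/∂x.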

Nesting Considerations: For nesting the initial temperature, salinity, other tracers and
baroclinic velocity, we can directly enforce some conservation constraints by averaging esti-
mates from finer to coarser grids. For the transport streamfunction, we go to the additional
step of generating the Dirichlet boundary values for the Poisson equation in the fine domain
by interpolating the streamfunction values from the coarse domain. This ensures that the
same constant of integration is used for both domains and that the net flows through the
fine domain are consistent in both the coarse and fine grids. For island values, if the island
is represented in both the coarse and fine domains, the coarse domain value is used. If the
island is only in the fine domain, then the procedure of the preceding paragraph is used.

Free Surface The starting point for the free surface initialization scheme is the above rigid-
lid initialization. We start by explicitly computing the final, rigid-lid barotropic velocities
from

  U = (k̂ × ∇Ψ)/H .

We next create an equation for the initial surface elevation. We start from the geostrophic
approximation with the full pressure

  f k̂ × u = g∇η + (1/ρ₀) ∇p_h .

Integrating this equation in the vertical from −H to 0 and isolating η results in

  gH∇η = f k̂ × H U − (1/ρ₀) ∫_{−H}^{0} ∇p_h dz .   (62)

Finally we take the divergence of (62) to get

  ∇·(gH∇η) = ∇ × (f H U) − (1/ρ₀) ∇·( ∫_{−H}^{0} ∇p_h dz ) .   (63)

To generate Dirichlet boundary values for (63) we integrate the tangential components of (62)
around the boundary. Because the coastal boundary condition is zero normal derivative, no
special action needs to be taken for islands.

Once an initial value for η is constructed, then, by (40), the initial depths are recomputed.
The tracers (temperature, salinity, etc) and baroclinic velocity are re-interpolated to these
new initial depths. Finally the barotropic velocities from the rigid-lid calculation are rescaled
to preserve the transports:

  U_free surface = [H/(H + η)] U_rigid lid .

Nesting Considerations: These are the same as for the rigid-lid case. The additional
detail is that now we also interpolate the coarse grid estimate of η to generate Dirichlet
boundary values for solving (63) in the fine domain.

7.4.3 Tidal Forcing

Constructing The Tidal Forcing When adding tidal forcing to our simulations, our
underlying assumption is that our regional domains are small enough so that the tidal forcing
through the lateral boundaries completely dominates the local body force effects. To model
these lateral forcings we employ the linearized barotropic tidal model (Logutov, 2008; Logutov
and Lermusiaux, 2008). We use a shallow water spectral model and generate two dimensional
fields for the amplitude and phase of tidal surface elevation and the barotropic tidal velocity.
We dynamically balance these barotropic tidal fields with our best available topographic and
coastal data along with the best exterior barotropic tidal fields (e.g. Egbert and Erofeeva,
2002). Once we have constructed our tidal fields for the desired modes, we can simply evaluate
them for any time.

The above procedures can provide tidal fields on different grids than those used by our PEs. For
example, the models of Logutov (2008) and Logutov and Lermusiaux (2008) are formulated on a C-grid, instead

of the B-grid being used here. In particular, this means that tidal fields interpolated from
these grids will not, in general, exactly satisfy the same discrete continuity as in our grid. Our
experience shows that satisfying the same discrete continuity leads to more robust solutions.
To enforce this constraint, we solve the constrained minimization problem

  J = ∫ { α η₁* η₁ + θβ U₁*·U₁ + φβ U*·U + λ ℜ[iωη + ∇·(H U)] + γ ℑ[iωη + ∇·(H U)] } dV

where η₀, U₀ are the complex tidal surface elevation and barotropic tidal velocity interpolated
from the original grid, η₁, U₁ are the additive “correction” complex tidal surface elevation
and barotropic tidal velocity that minimize J,

  η = η₀ + η₁ ;   U = U₀ + U₁ ,

α and β are the weights (including nondimensionalizing factors), λ and γ are the Lagrange
multipliers, the superscript * indicates complex conjugation, ℜ and ℑ refer to the real and
imaginary parts and θ, φ are penalty parameters to inhibit unreasonably large total velocities.
Using the calculus of variations, the above minimization is equivalent to solving the following
system of equations

  ωη₁ − ∇·[ H²/((θ + φ)ωβ) ∇(αη₁) ] = −ωη₀ + i [θ/(θ + φ)] ∇·(H U₀)

  α|_{open boundary} = 0

  ( iωη₁ + √(gH) ∂η₁/∂n )|_{open boundary} = 0

  U₁ = − [φ/(θ + φ)] U₀ − i [H/((θ + φ)ωβ)] ∇(αη₁) .
Note that the radiation boundary condition does not come from the variations but is a useful
addition we are free to make after obtaining α = 0 from the variations.

Applying The Tidal Forcing We use the barotropic tides both for initialization and
for boundary forcing. For the surface elevation we simply superimpose the tidal surface
elevation with the subtidal elevation estimated in 7.4.2. For initialization, this superposition
is done over the entire area before the final vertical interpolation of tracers and u~0 . For
lateral forcing this is done at run-time in the PE model at the exterior boundaries (and also
along 2 bands inside these boundaries for Perkins et al. (1997) boundary conditions). The
resulting boundary values are used to generate Dirichlet boundary conditions for (48). A
similar procedure is used for the barotropic velocities with two notable differences. First, the
superposition is performed to preserve transport:
  (H + η_superimposed) U_superimposed = (H + η_subtidal) U_subtidal + H U_tidal .
~ tidal .
Note that the tidal velocity is only multiplied by the undisturbed water depth. This is
because we are using a linearized tidal model. The second difference is that the run-time
boundary values of the barotropic velocity are used for equation (53), not directly applied to
the final barotropic velocities.
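The transport-preserving superposition can be sketched as (scalar sketch; the fields would be arrays in practice):

```python
def superimpose_tides(H, eta_sub, U_sub, eta_tide, U_tide):
    """Superimpose subtidal and tidal barotropic fields while preserving
    transport: (H+eta_sup)*U_sup = (H+eta_sub)*U_sub + H*U_tide.
    The tidal velocity multiplies the undisturbed depth H because the
    underlying tidal model is linearized."""
    eta_sup = eta_sub + eta_tide
    U_sup = ((H + eta_sub) * U_sub + H * U_tide) / (H + eta_sup)
    return eta_sup, U_sup
```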

Nesting Considerations: For initialization, the process is as for the unnested case. The
superpositions described above are done for the initial conditions of each domain. For the
lateral forcing, however, the barotropic tidal fields are only applied at the boundaries of the
coarsest domain. This is because forcing the tides at the boundaries of the coarsest domain
already produces the full tidal response in the interior, so separate barotropic tidal forcing is
unnecessary for the nested subdomains.

7.4.4 Solving The Equation for the Surface Elevation

Equation (48), with Dirichlet boundary conditions, represents an elliptic system of equations
for the surface elevation, η. To numerically solve this system we use a preconditioned conjugate
gradient solver for sparse matrices (e.g. SPARSKIT; Saad, 2009). A typical convergence
test for such an iterative solver is an integrated measure of the reduction in the norm of the
residual over all points. Specifically, if r is the residual of the current solver iteration and r₀
is the residual of the initial guess, the convergence test is

  ‖r‖ ≤ τ_r ‖r₀‖ + τ_a

where τ_r is the relative tolerance and τ_a is the absolute tolerance. In practice we tend to
use very small values (10^−12 and 10^−25, respectively) to ensure a tight convergence. We also
supplement this global constraint with the following point-wise constraint:

  |∂δη^k/∂x − ∂δη^{k−1}/∂x| ≤ τ_rg |∂δη^k/∂x| + τ_a ;   |∂δη^k/∂y − ∂δη^{k−1}/∂y| ≤ τ_rg |∂δη^k/∂y| + τ_a

  |δη^k − δη^{k−1}| ≤ τ_rg |δη^k| + τ_a

where the superscript k refers to the iteration number and τ_rg is the relative tolerance for the
gradient test (typically around 10^−8). Here we test on both δη and its gradients to ensure
the relative convergence of the barotropic velocities (49).
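The two convergence tests can be sketched as follows (np.gradient standing in for the model's discrete gradients):

```python
import numpy as np

def global_converged(r, r0, tau_r=1e-12, tau_a=1e-25):
    """Integrated test: ||r|| <= tau_r*||r0|| + tau_a."""
    return np.linalg.norm(r) <= tau_r * np.linalg.norm(r0) + tau_a

def pointwise_converged(deta_k, deta_km1, dx, dy, tau_rg=1e-8, tau_a=1e-25):
    """Point-wise tests on delta-eta and its gradients between successive
    solver iterations k-1 and k (sketch)."""
    gy_k, gx_k = np.gradient(deta_k, dy, dx)
    gy_m, gx_m = np.gradient(deta_km1, dy, dx)
    ok = np.abs(deta_k - deta_km1) <= tau_rg * np.abs(deta_k) + tau_a
    okx = np.abs(gx_k - gx_m) <= tau_rg * np.abs(gx_k) + tau_a
    oky = np.abs(gy_k - gy_m) <= tau_rg * np.abs(gy_k) + tau_a
    return bool(np.all(ok & okx & oky))
```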

Since we have discretized our equations on the B-grid, both (48) and, especially, (63) possess a
well known checkerboard mode in their null spaces (Deleersnijder and Campin, 1995; le Roux
et al., 2005; Wubs et al., 2006). For realistic geometries we found that applying a Shapiro
filter (Shapiro, 1970) to the solution was sufficient to suppress the noise while maintaining
the correct physical features. The one case where this approach failed was in creating an
initialization for an idealized flow in a periodic channel. The lack of Dirichlet boundary
values in that case, and the corresponding lack of structure they would have imposed, allowed
the checkerboard mode to overwhelm all other structures. To control this error, the matrix in
(63) was augmented with a Laplacian filter (Deleersnijder and Campin, 1995; Wickett, 1999)
to prevent the appearance of this mode. Again, this filter was only needed for the idealized
periodic channel flow.
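A minimal second-order Shapiro filter of the kind described, applied once in each horizontal direction:

```python
import numpy as np

def shapiro2(phi):
    """One pass of a second-order Shapiro (1970) filter, applied in x then y
    at interior points (boundary values untouched).  It removes the
    2-grid-interval checkerboard mode exactly while only weakly damping
    well-resolved scales."""
    out = phi.copy()
    out[:, 1:-1] = phi[:, 1:-1] + 0.25 * (phi[:, 2:] - 2.0 * phi[:, 1:-1] + phi[:, :-2])
    tmp = out.copy()
    out[1:-1, :] = tmp[1:-1, :] + 0.25 * (tmp[2:, :] - 2.0 * tmp[1:-1, :] + tmp[:-2, :])
    return out
```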

8 Model Products for the Web

Once a model pair (MAB and NJS) has been determined for distribution, the appropriate
gifs must be made. There are two steps to this process:

1. the model fields must be plotted, with a resulting “gmeta” file

For step 1), there needs to be a “PlotJob” script. Currently these have been located
in “WGL Plotting” subdirectories and have been named “PlotJob Physics” and
“PlotJob Physics Sections”. These plotting scripts are modified to plot the appropriate
variables, for the appropriate times, with the appropriate minimum and maximum values.

2. gifs must be created and moved into the appropriate web directory
For step 2), there are conversion scripts. As an example, look in the PE model output
directories
/projects/ooi/PE/2009/Nov06/PJH01/WGL Plotting (for MAB) and
/projects/ooi/PE/2009/Nov06/PJH02/WGL Plotting (for NJS). Vertical sections are
only made in the NJS directories. For this example, there are three scripts of interest

(a) convert gmetas 1107,


(b) convert gmetas Physics 1107, and
(c) convert gmetas Physics Sections 1107.

The convert gmetas 1107 script references another key file, in this case
/share/stage/OOI/convgmetajob.1107. This file specifies the names of the gifs
which will result from the conversion process. The convert gmetas Physics 1107
and convert gmetas Physics Sections 1107 scripts specify where the gifs will
be placed.

9 Model Web Pages

Each day has a separate web page and directory.

Example web page - http://mseas.mit.edu/Sea exercises/OOI-OSSE09/Maps/Nov08/index.html


Directory - /srv/www/htdocs/Sea exercises/OOI-OSSE09/Maps

In this directory there are three important files - mkdir.com, sed file and sed script. “mkdir.com”
creates the necessary primary and sub-directories for each date. The file “sed file” is input
to the “sed script”. “sed file” determines the changes in day values. Normally each value
is increased by one for each succeeding day. “sed script” determines the operations for each
day. The index files are modified and copied into their new directories.

The only change which should then be necessary in the main daily index page is for the name
of the appropriate netcdf model output files.

References
Agarwal A (2009) Statistical field estimation and scale estimation for complex coastal regions and
archipelagos. Master’s thesis, Massachusetts Institute of Technology, Cambridge, MA

Agarwal A, Lermusiaux PFJ (2010) Statistical field estimation for complex coastal regions and
archipelagos. Ocean Modelling, in preparation

Arakawa A, Lamb VR (1977) Computational design of the basic dynamical processes of the UCLA
general circulation model. Methods in Computational Physics 17:173–265

Bryan K (1969) A numerical method for the study of the circulation of the world ocean. Journal of
Computational Physics 4(3):347–376

Carter EF, Robinson AR (1987) Analysis models for the estimation of oceanic fields. Journal of
Atmospheric and Oceanic Technology 4(1):49–74

Courant R, Friedrichs K, Lewy H (1928) Über die partiellen differenzengleichungen der mathema-
tischen physik. Mathematische Annalen 100(1):32–74

Cushman-Roisin B, Beckers JM (2010) Introduction to Geophysical Fluid Dynamics: Physical and
Numerical Aspects. Academic Press

Debreu L, Blayo E (2008) Two-way embedding algorithms: a review. Ocean Dynamics 58(5-6):415–
428

Deleersnijder E, Campin JM (1995) On the computation of the barotropic mode of a free-surface
world ocean model. Annales Geophysicae 13(6):675–688

Dukowicz JK, Smith RD (1994) Implicit free-surface method for the Bryan-Cox-Semtner ocean model.
Journal of Geophysical Research 99(C4):7991–8014

Egbert GD, Erofeeva SY (2002) Efficient inverse modeling of barotropic ocean tides. Journal of
Atmospheric and Oceanic Technology 19(2):183–204

Ginis I, Richardson RA, Rothstein LM (1998) Design of a multiply nested primitive equation ocean
model. Monthly Weather Review 126(4):1054–1079

Griffies SM, Böning C, Bryan FO, Chassignet EP, Gerdes R, Hasumi H, Hirst A, Treguier AM,
Webb D (2000) Developments in ocean climate modelling. Ocean Modelling 2(3-4):123–192

Haley PJ Jr, Lermusiaux PFJ (2010) Multiscale two-way embedding schemes for free-surface
primitive-equations in the multidisciplinary simulation, estimation and assimilation system.
Ocean Dynamics 60:1497–1537, DOI 10.1007/s10236-010-0349-4

Haley PJ Jr, Lozano CJ (2001) COND_TOPO: Topography conditioning in MATLAB. URL
http://mseas.mit.edu/archive/HOPS/Cond_Topo/cond_topo.ps.gz

Haley PJ Jr, Lermusiaux PFJ, Robinson AR, Leslie WG, Logutov O, Cossarini G, Liang
XS, Moreno P, Ramp SR, Doyle JD, Bellingham J, Chavez F, Johnston S (2009) Forecasting
and reanalysis in the Monterey Bay/California Current region for the Autonomous
Ocean Sampling Network-II experiment. Deep Sea Research II 56(3-5):127–148, DOI
10.1016/j.dsr2.2008.08.010

Haney RL (1991) On the pressure gradient force over steep topography in sigma coordinate ocean
models. Journal of Physical Oceanography 21(4):610–619

Lermusiaux PFJ (1997) Error subspace data assimilation methods for ocean field estimation: Theory,
validation and applications. PhD thesis, Harvard University, Cambridge, MA

Lermusiaux PFJ (2007) Adaptive sampling, adaptive data assimilation and adaptive modeling.
Physica D 230:172–196, special issue on "Mathematical Issues and Challenges in Data Assimi-
lation for Geophysical Systems: Interdisciplinary Perspectives", Christopher K.R.T. Jones and
Kayo Ide, Eds.

Logutov OG (2008) A multigrid methodology for assimilation of measurements into regional tidal
models. Ocean Dynamics 58(5-6):441–460, DOI 10.1007/s10236-008-0163-4

Logutov OG, Lermusiaux PFJ (2008) Inverse barotropic tidal estimation for regional ocean appli-
cations. Ocean Modelling 25(1-2):17–34, DOI 10.1016/j.ocemod.2008.06.004

Lozano CJ, Haley PJ, Arango HG, Sloan NQ, Robinson AR (1994) Harvard coastal/deep water
primitive equation model. Harvard Open Ocean Model Reports 52, Harvard University, Cam-
bridge, MA

Maderich V, Heling R, Bezhenar R, Brovchenko I, Jenner H, Koshebutskyy V, Kuschan A, Terletska
K (2008) Development and application of 3D numerical model THREETOX to the prediction
of cooling water transport and mixing in the inland and coastal waters. Hydrological Processes
22(7):1000–1013

MSEAS Group (2010) The Multidisciplinary Simulation, Estimation, and Assimilation Systems
(http://mseas.mit.edu/ , http://mseas.mit.edu/codes). Reports in Ocean Science and Engineer-
ing 6, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge,
Massachusetts

Orlanski I (1976) A simple boundary condition for unbounded hyperbolic flows. Journal of Compu-
tational Physics 21(3):251–269

Penven P, Debreu L, Marchesiello P, McWilliams JC (2006) Evaluation and application of the
ROMS one-way embedding procedure to the central California upwelling system. Ocean Modelling
12(1-2):157–187, DOI 10.1016/j.ocemod.2005.05.002

Perkins AL, Smedstad LF, Blake DW, Heburn GW, Wallcraft AJ (1997) A new nested boundary
condition for a primitive equation ocean model. Journal of Geophysical Research 102(C2):3483–
3500

Phillips NA (1957) A coordinate system having some special advantages for numerical forecasting.
Journal of Meteorology 14(2):184–185

le Roux DY, Sène A, Rostand V, Hanert E (2005) On some spurious mode issues in shallow-water
models using a linear algebra approach. Ocean Modelling 10(1-2):83–94

Saad Y (2009) SPARSKIT: A basic tool-kit for sparse matrix computations. URL
http://www-users.cs.umn.edu/~saad/software/SPARSKIT/sparskit.html

Shapiro R (1970) Smoothing, filtering and boundary effects. Reviews of Geophysics and Space
Physics 8(2):359–387

Shchepetkin AF, McWilliams JC (2005) The regional ocean modeling system (ROMS): A split-
explicit, free-surface, topography-following coordinates ocean model. Ocean Modelling 9(4):347–
404

Shen CY, Evans TE (2004) A free-surface hydrodynamic model for density-stratified flow in the
weakly to strongly non-hydrostatic regime. Journal of Computational Physics 200(2):695–717

Spall MA, Holland WR (1991) A nested primitive equation model for oceanic applications. Journal
of Physical Oceanography 21(2):205–220

Spall MA, Robinson AR (1989) A new open ocean, hybrid coordinate primitive equation model.
Mathematics and Computers in Simulation 31(3):241–269

Webb DJ, de Cuevas BA, Coward A (1998) The first main run of the OCCAM
global ocean model. Internal Document 34, Southampton Oceanography Centre, URL
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.5811&rep=rep1&type=pdf

Wickett ME (1999) A reduced grid method for a parallel global ocean general circulation model.
PhD thesis, University of California, Davis, Davis, CA

Wubs FW, de Niet AC, Dijkstra HA (2006) The performance of implicit ocean models on B- and
C-grids. Journal of Computational Physics 211(1):210–228
