Climatological Oceanographic Atlas of the Bering Sea
Providers:
Gleb Panteleev, Vladimir Luchin, Phyllis Stabeno, Dmitri Nechaev, Takashi Kikuchi

Data

The variational data assimilation procedure depends heavily on data availability and quality. Its formulation requires not only the observations themselves but also estimates of the data error variances. For this reason we first analyze the available data and then describe the data assimilation technique. The reconstruction of the circulation in the Bering Sea is based upon the following data sets:

a) 500 satellite-tracked drifter trajectories from the Fisheries Oceanography Coordinated Investigations (FOCI) database and 84 drifter trajectories from the Global Drifter Program (GDP) database

The FOCI surface drifters (www.pmel.noaa.gov/foci) had drogues at approximately 40 m. The GDP drifters (www.aoml.noaa.gov/phod/dac) had drogues at 15 m. Preliminary analysis of these data included: (i) temporal low-pass filtering with a 7-day cutoff period; (ii) spatial interpolation and smoothing of the filtered drifter velocity components onto the model grid with a correlation radius of 30 km; (iii) estimation of the error variance of the gridded velocity components. We assimilated only "reliable" velocities obtained by averaging at least three different surface drifters. Velocity estimates derived from the FOCI surface drifters are illustrated in Figure 1a.
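
Below is a minimal Python sketch of preprocessing steps (i)-(iii). It illustrates the general approach only and is not the authors' processing code: the filter order, the Gaussian weighting, the 6-hourly sampling interval and the function names are assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(u, dt_hours=6.0, cutoff_days=7.0):
    """Low-pass filter one drifter velocity component sampled every dt_hours."""
    nyq = 0.5 / dt_hours                          # Nyquist frequency, cycles per hour
    wn = (1.0 / (cutoff_days * 24.0)) / nyq       # normalized 7-day cutoff
    b, a = butter(4, wn)
    return filtfilt(b, a, u)

def grid_velocities(x, y, u, drifter_id, xg, yg, radius_km=30.0):
    """Average filtered drifter velocities onto grid nodes (xg, yg) with Gaussian
    weights of e-folding scale radius_km; return the mean, a rough error variance
    of the mean, and a mask of "reliable" nodes sampled by at least 3 drifters."""
    mean = np.full(xg.shape, np.nan)
    var = np.full(xg.shape, np.nan)
    reliable = np.zeros(xg.shape, bool)
    for idx in np.ndindex(xg.shape):
        d = np.hypot(x - xg[idx], y - yg[idx])    # distances to the node, km
        sel = d < 3.0 * radius_km
        if np.unique(drifter_id[sel]).size < 3:   # require >= 3 different drifters
            continue
        w = np.exp(-(d[sel] / radius_km) ** 2)
        mean[idx] = np.average(u[sel], weights=w)
        var[idx] = np.average((u[sel] - mean[idx]) ** 2, weights=w) / w.sum()
        reliable[idx] = True
    return mean, var, reliable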

b) 57 monthly mean velocity records from moorings

Most of these data came from the Alaska Ocean Observing System (AOOS) database (www.aoos.org). Preliminary analysis involved averaging the velocity time series to obtain monthly mean values. Many of the AOOS velocity time series are only 2-4 months long and cannot be treated as reliable estimates of climatological currents. To prevent the presumably strong "sub-seasonal" signal in these data from contaminating the data assimilation solution, the STDs of the monthly mean velocity data were prescribed as the largest of the following three values: 5 cm/s (an estimate of the monthly variability of the velocity amplitude), 20% of the monthly mean velocity amplitude, and the conventional STD estimate from the velocity time series. Velocity estimates derived from the AOOS data are illustrated in Figure 1b.
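
The STD prescription quoted above amounts to a simple rule; a sketch of it (hypothetical function name, velocities in cm/s) is:

import numpy as np

def mooring_velocity_std(u_series):
    """STD assigned to one monthly mean velocity component (cm/s): the largest of
    5 cm/s, 20% of the monthly mean amplitude, and the conventional STD."""
    u_mean = np.mean(u_series)
    return max(5.0,                          # estimate of monthly variability of velocity amplitude
               0.2 * abs(u_mean),            # 20% of the monthly mean velocity amplitude
               np.std(u_series, ddof=1))     # conventional STD of the time series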

Figure 1. Velocity observations utilized in data assimilation: (a) thin arrows show averaged velocities at 40 m derived from the FOCI drifter data; shaded areas show the spatial distribution of the zonal velocity STD, which ranges from 5 cm/s to 25 cm/s. (b) Trajectories and 2-day mean velocities of the four ARGO drifters parked at 1000 m during 2002-2004; circles and asterisks designate the initial and final locations of the drifters. Thick arrows show averaged velocities from the AOOS moorings.

c) Estimates of the mean transport through the Bering Strait

An estimate of the mean transport of 0.9±0.2 Sv was taken from [Woodgate et al., 2005].

d) NCEP/NCAR wind stress and surface heat/salt flux climatology

The NCEP/NCAR climatologies (www.cdc.noaa.gov/cdc/data.ncep.reanalysis.derived.html) were found to be extremely smooth. To allow for the adjustment of spatial details in the model forcing fields, we used the wind stress and heat/salt flux data with a relatively high error variance equal to 40% of their spatial and temporal variability in the Bering Sea. Significant errors in the NCEP/NCAR forcing were also noticed by Ladd and Bond [2002]. Roughly, this variance covers a possible 70 W m-2 overestimation of the NCEP heat flux data [Ladd and Bond, 2002].

e) temperature/salinity profiles collected in the Bering Sea and adjacent regions over more than 70 years

Variational reconstruction of the Bering Sea circulation is based upon the oceanographic observations collected in the Bering Sea and the adjacent part of the Pacific Ocean. The major part of these data was provided by the following oceanographic centers and organizations:
- Global Oceanographic Data Center, Obninsk, Russia (RIHMI-WDC http://meteo.ru);
- National Oceanographic Data Center (Silver Spring, USA -
http://www.nodc.noaa.gov/OC5/SELECT/dbsearch/dbsearch.html);
- Alaska Ocean Observing System (AOOS) UAF
(http://penguin.sfos.uaf.edu/data/).

An additional significant part of the oceanographic information came from various Russian sources and organizations, such as RosHydroMet, fishery departments (TINRO-Center, TURNIF), hydrographic services, and the Academy of Sciences of Russia.

The database includes bottle data, mechanical bathythermograph data, high-resolution CTD profiles, expendable bathythermograph (XBT) data, and PALACE and ARGO drifter data.

The data acquired from all these sources were quality controlled and processed. Quality control and processing involved the elimination of duplicate stations, which are unavoidable when a large volume of observations is compiled from different sources. Quality control was performed by statistical methods tuned to the specific characteristics of the Bering Sea region. Currently the database contains information from 120,128 stations deployed in the period from 1928 to 2008.
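
The duplicate-elimination step can be illustrated by the following sketch, in which two stations are treated as duplicates when their positions and times agree within prescribed tolerances. The tolerances and the function name are assumptions for illustration; the actual quality control also relied on statistical checks tuned to the region.

import numpy as np

def drop_duplicate_stations(lat, lon, time_days, tol_deg=0.01, tol_days=0.05):
    """Return indices of stations kept after duplicate elimination (first occurrence wins)."""
    keep = []
    for i in range(len(lat)):
        is_duplicate = any(abs(lat[i] - lat[j]) < tol_deg and
                           abs(lon[i] - lon[j]) < tol_deg and
                           abs(time_days[i] - time_days[j]) < tol_days
                           for j in keep)
        if not is_duplicate:
            keep.append(i)
    return np.array(keep)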

These data were used to build climatological estimates of temperature and salinity, together with estimates of the standard deviations (STD) of these climatological mean values, which were then used in the data assimilation. The temperature and salinity STDs vary within the ranges 0.5°C - 1.5°C and 0.1 - 2.0 ppt near the ocean surface and decrease to 0.1°C and 0.03 ppt, respectively, in deeper layers (1000 m).

The spatial density of observations in the Bering Sea accumulated in the database over the entire period of observations is presented in Figure 2. As expected, the density of observations is low in the open sea and increases toward the continental slope and the coast of the Bering Sea.

The distribution of oceanographic stations over different months and years is shown in Figures 3 and 4. Naturally, the highest density of oceanographic observations corresponds to the summer months, while the number of observations during the cold part of the year is significantly lower. The smaller number of winter observations is related not only to unfavorable weather conditions but also reflects the fact that a significant part of the sea is covered by ice. The number of observations before 1950 is insufficient.

The number of observations drops dramatically with depth. At 50 m depth the number of observations is reduced by one third, and at 100 m it is only half of the number at the surface. At 500 m the total volume of observations is just 14% of the surface data, and at 2000 m the number of observations is smaller than at the surface by a factor of 100.

Figure 2. Spatial distribution of all available oceanographic data (120,128 stations deployed during the period from 1928 to 2008).

Figure 3. Time distribution of all available oceanographic data (120,128 stations deployed during the period from 1928 to 2008).

Figure 4. Spatial distribution of all available oceanographic stations over the seasons of the year.

Data Assimilation Technique

Forward and adjoint models

To derive the Bering Sea circulation correctly from the data, the observational information has been combined with the information imposed by dynamical constraints. In this study, the dynamical constraints are formulated as a numerical model based on a set of conventional primitive equations under the Boussinesq and hydrostatic approximations.

The model is a modification of the C-grid, z-coordinate OGCM developed at the Laboratoire d'Oceanographie Dynamique et de Climatologie [Madec et al., 1999]. The model is implicit for both the barotropic and baroclinic modes, permitting model runs with relatively large time steps. The model is used in a "climatological" non-eddy-resolving mode on a relatively coarse grid with a 3-hour time step. A detailed description of the numerical scheme can be found in Nechaev et al. [2005] and Panteleev et al. [2006a].

The model grid resolution is 0.1° along meridians and 0.35° along latitudes (approximately 10x10 km). The model bathymetry is taken from the ETOPO02 data set (http://www.ngdc.noaa.gov/mgg/fliers/01mgg04.html). There are 18 unequally spaced levels in the vertical direction with a minimum resolution of 3 m near the surface and 50 m near the bottom.

The model is also subject to a number of simplifications which do not compromise the validity of the major dynamical balances of the Bering Sea circulation. First, we do not use an ice model. This can be justified by the fact that most of the basin is ice-free in summer. Moreover, the surface fluxes, which can be modified by the presence of ice, are free parameters of the model to be tuned in data assimilation. Thus the surface fluxes obtained in the data assimilation experiments can be interpreted as fluxes at the ice/water interface, not necessarily as atmospheric fluxes.

The second simplification is related to the turbulence closure. The mixing processes in the model are parameterized by prescribed vertical and horizontal eddy diffusion coefficients and a hydrostatic adjustment algorithm. The non-uniform and non-stationary vertical temperature/salinity diffusivity coefficients for the data assimilation experiments were obtained as an average over an ensemble of forward runs of the model with the Pacanowski and Philander [1981] turbulence closure. These coefficients were then smoothed in time and space and used as fixed parameters in the experiments outlined below. The values of the coefficients range between 5·10-4 m2 s-1 at the surface and 2·10-5 m2 s-1 near the bottom. The vertical momentum diffusivity coefficient is constant and equal to 5·10-5 m2 s-1. The horizontal momentum and temperature/salinity diffusivity coefficients are taken to be 10 m2 s-1. The data assimilation experiments indicate that this simplification may be inaccurate in the shallow regions near the coast of Russia, resulting in larger model-data misfits there, but we had to use this sub-optimal parameterization of the mixing processes because adjoint code for more advanced turbulence closures is not yet available.
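
For reference, a sketch of the Pacanowski and Philander [1981] Richardson-number-dependent closure used to generate the ensemble of forward runs is given below. The parameter values are the commonly quoted ones and are assumed here, not taken from the atlas; unstable stratification (Ri < 0) is handled separately by the hydrostatic adjustment mentioned above.

import numpy as np

def pp81_diffusivities(Ri, nu0=1.0e-2, nu_b=1.0e-4, kappa_b=1.0e-5, alpha=5.0):
    """Vertical viscosity nu and T/S diffusivity kappa (m2/s) as functions of the
    Richardson number Ri, following Pacanowski and Philander [1981]."""
    Ri = np.maximum(Ri, 0.0)                      # Ri < 0 treated by hydrostatic adjustment
    nu = nu0 / (1.0 + alpha * Ri) ** 2 + nu_b     # momentum viscosity
    kappa = nu / (1.0 + alpha * Ri) + kappa_b     # temperature/salinity diffusivity
    return nu, kappa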

The adjoint code of the model was constructed analytically by transposition of the operator of the tangent linear model, linearized in the vicinity of a given solution of the forward model [Penenko, 1981; Wunsch, 1996]. Application of the implicit scheme with large time steps considerably reduces the storage requirements of variational data assimilation, since for nonlinear problems running the adjoint model requires storing the solution of the forward model at every time step. The tangent linear model was obtained by direct differentiation of the forward model code. Therefore, the tangent linear and adjoint models are exact analytical consequences of the forward model.
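
The essence of this construction can be illustrated with a toy linear model x_{k+1} = M x_k (this is not the OGCM code; M, H, W and the function name are placeholders): a single forward sweep stores the trajectory, and a backward sweep with the transposed operator M^T returns the gradient of a least-squares cost with respect to the initial condition.

import numpy as np

def cost_and_gradient(x0, M, H, y_obs, W):
    """J = 0.5 * sum_k (H x_k - y_k)^T W (H x_k - y_k); gradient dJ/dx0 via the adjoint."""
    n_steps = len(y_obs)
    traj = [x0]
    for _ in range(n_steps - 1):                  # forward run: store the trajectory
        traj.append(M @ traj[-1])
    misfits = [H @ xk - yk for xk, yk in zip(traj, y_obs)]
    J = 0.5 * sum(d @ (W @ d) for d in misfits)
    lam = np.zeros_like(x0)
    for k in reversed(range(n_steps)):            # adjoint sweep, backward in time
        lam = H.T @ (W @ misfits[k]) + M.T @ lam
    return J, lam                                 # lam is the gradient of J with respect to x0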

In the course of data assimilation, the model solution is optimized by tuning the free parameters (the so-called control vector) of the model. The control vector is composed of the values of the functions specifying the initial conditions, the open boundary conditions, and the surface fluxes. The set of initial conditions includes the fields of sea surface height (SSH), T, S, and the horizontal velocity components. The boundary components of the control vector comprise the distributions of T, S, and the normal velocity component specified on the open boundaries. A free-slip boundary condition for the tangential velocity component is set at the open boundary.

Cost function

The variational data assimilation can be formulated as a traditional least-squares problem [Marchuk, 1974; Penenko, 1981; Le Dimet and Talagrand, 1986; Thacker and Long, 1988]. The optimal model solution is found through constrained minimization of a quadratic cost function over the space of model control vectors, where the cost function measures the squared weighted distances between the model solution and the observations.

The statistical interpretation of the least-squares method [Thacker, 1989; Wunsch, 1996] treats the cost function as the argument of a Gaussian probability distribution, with the cost function weights being the inverse covariances of the corresponding data errors. Under this interpretation, the optimal solution is the most probable model state for the given data realization and prior error statistics. Because of the sparseness of the oceanographic data and the limited duration of the observational time series, practical data assimilation methods commonly rely on the assumption that the errors of different observations are delta-correlated [Thacker, 1989]. Under this assumption, the cost function weights are represented by diagonal matrices whose diagonal elements are the reciprocals of the corresponding data error variances.
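
In practice this convention is straightforward to apply; a minimal illustration (the factor 0.5 and the function name are conventions assumed here, not taken from the atlas) is:

import numpy as np

def data_misfit_term(model_values, observations, error_std):
    """Contribution of one data set to the cost function under delta-correlated errors."""
    weights = 1.0 / np.asarray(error_std) ** 2              # diagonal of the weight matrix
    misfit = np.asarray(model_values) - np.asarray(observations)
    return 0.5 * np.sum(weights * misfit ** 2)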

In the present study, we use the cost function J, which, in addition to the "real" climatological data described above, contains the so-called "bogus" data [Thacker, 1989] in the form of smoothness and "stationarity" terms:

Here u denotes the horizontal velocity; ζ is the sea surface height; C = (T, S); τ is the wind stress; B represents the surface heat and salt fluxes; Γn denote the segments of the open boundary; Vn is the estimate of the transport through the n-th segment of the open boundary; and W are the variances of the corresponding data. Asterisks denote the observed fields. The smoothness or "bogus" data terms in J are proportional to the squared Laplacians of the model fields. These terms were introduced into the cost function to regularize the data assimilation problem.

Two groups of terms, JC and Ju, constrain the baroclinic (T, S, and heat/salt fluxes at the surface) and barotropic (SSH, velocity, and wind stress) variables of the model, respectively. The physical meaning of the different terms in JC and Ju is straightforward: minimization of these terms enforces smoothness of the model solution and attracts it to the observed data. Following Tziperman and Thacker [1989], the third group, Jstat, forces the model solution to be quasi-stationary, to a degree defined by the weights Wstat.

The spatial and temporal distributions of the "real" data error variances W have been discussed in the Data section. The variances of the "bogus" data are determined through scale analysis:

The scales Vs = 5 cm/s and ζs = 10 cm are defined as typical variations of these variables in the first-guess solution, while the parameters Cs2, Bs2, and τs2 are derived from the characteristic spatial variability of the corresponding data. We use a uniform spatial scale for the wind stress fields, Lτ = 500 km, while the spatial scales LC and Lu are functions of the local depth gradient and vary within the ranges 100 - 300 km and 50 - 150 km, respectively. This choice of the characteristic scales allows us to avoid over-smoothing of the model solution in regions with strong topographic steering.
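
The corresponding weights can be sketched as follows. In this sketch the variance assigned to the Laplacian "bogus" data of a field with characteristic amplitude F_s and spatial scale L is taken as (F_s / L^2)^2, the natural dimensional estimate; the mapping from the local depth gradient to the length scale, including the linear ramp between the quoted bounds, is an assumption for illustration only.

import numpy as np

def smoothness_variance(field_scale, length_scale_m):
    """Variance prescribed for the Laplacian 'bogus' data of one model field."""
    return (field_scale / length_scale_m ** 2) ** 2

def local_length_scale(depth_gradient, l_min_km, l_max_km):
    """Shorter scales over steep topography, longer scales over a flat bottom (metres)."""
    steepness = np.clip(depth_gradient / np.max(depth_gradient), 0.0, 1.0)
    return (l_max_km - steepness * (l_max_km - l_min_km)) * 1.0e3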

The procedure of the first-guess initialization and the technique of sequential minimization of the cost function J are outlined in Panteleev et al. [2006a]. Starting from a prior estimate of the model control vector, the model is run to obtain the first-guess solution. Given the solution of the forward model, the value of the cost function J is computed, and the adjoint model is integrated backward in time to estimate the gradient of the cost function with respect to the control vector. This gradient is then used in the quasi-Newtonian optimization algorithm [Gilbert and Lemarechal, 1989] to find a better estimate of the control vector. The procedure is repeated until the norm of the cost function gradient is sufficiently small.
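
Schematically, one iteration of this procedure consists of a forward model run, a cost evaluation, a backward adjoint run, and a quasi-Newton update. The sketch below conveys the loop only: the atlas uses the quasi-Newton code of Gilbert and Lemarechal [1989], for which SciPy's L-BFGS-B routine stands in here, and forward_model, adjoint_model, and cost_terms are hypothetical placeholders.

from scipy.optimize import minimize

def evaluate(control_vector, forward_model, adjoint_model, cost_terms):
    """One forward run for J and one backward (adjoint) run for dJ/d(control)."""
    trajectory = forward_model(control_vector)              # forward integration
    J = cost_terms(trajectory, control_vector)              # data + bogus + stationarity terms
    gradient = adjoint_model(trajectory, control_vector)    # adjoint run, backward in time
    return J, gradient

def optimize_control(first_guess, forward_model, adjoint_model, cost_terms):
    result = minimize(evaluate, first_guess,
                      args=(forward_model, adjoint_model, cost_terms),
                      jac=True, method="L-BFGS-B",
                      options={"gtol": 1.0e-6})             # stop when the gradient norm is small
    return result.x                                         # optimized control vector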

Because of the model's nonlinearity, the cost function J may have multiple local minima. The quasi-Newtonian optimization algorithm finds only one local minimum, the one closest to the first-guess solution.