An Overview of Recent Developments for Computationally Efficient Applications in Operational Oceanography

Pierre Brasseur

CNRS, LEGI, Université de Grenoble, BP 53X, 38041 Grenoble, France
e-mail: [email protected]

Abstract Ensemble-based methods have become very popular for data assimilation in numerical models of oceanic or atmospheric flows. Unlike the deterministic Extended Kalman Filter which explicitly describes the evolution of the best estimate of the system state and the associated error covariance, ensemble filters rely on the stochastic integration of an ensemble of model trajectories that are intermittently updated according to data, using the forecast error covariance represented by the ensemble spread. In this chapter, we present an overview of recent developments of ensemble-based assimilation methods that were motivated by the need for cost-effective algorithms in operational oceanography. We finally discuss a number of standing issues related to temporal assimilation strategies.

15.1 Introduction

Over the past 15 years, ensemble-based methods have become very popular for data assimilation in numerical models of geophysical flows and have matured to the point that observations are now operationally assimilated into ocean circulation models to produce a variety of ocean state estimates in real time and in delayed mode (Cummings et al. 2009). Among these methods, the Ensemble Kalman Filter (EnKF, Evensen 1994) is probably the most famous stochastic estimation algorithm; it was historically introduced in oceanography to overcome some of the problems encountered with the deterministic Kalman filter extended to non-linear models. The EnKF was further developed and applied to data assimilation into atmospheric models (e.g., Houtekamer and Mitchell 2001), hydrological models (e.g., Reichle et al. 2002) and even geological reservoir applications (e.g., Chen and Zhang 2006).

Considering the intrinsic properties shared by geophysical fluids such as chaotic temporal evolution and finite predictability (Brasseur et al. 1996), ensemble-based methods are in essence well designed to address estimation problems with ocean or atmospheric systems involving non-linear mesoscale dynamics. For such systems, ensemble model integrations can be used to compute the probability distribution functions (pdfs) of predictions based on imperfect models and uncertain initial and/or boundary conditions.

The development of ensemble methods for data assimilation in oceanography has been motivated by several additional factors, such as:

• the need to reduce the computational complexity of the conventional Kalman filter for applications to numerical problems involving very large dimensions;

• the flexibility of implementation and operation with models whose numerical codes are in perpetual evolution;

• the possibility to conduct sensitivity experiments at very low cost using algorithmic simplifications, e.g. for testing parameterizations of the different assimilation steps;

• requirements for error estimates on the solutions, which can be easily computed as a by-product of ensemble algorithms.

Many review papers, textbooks (e.g. Evensen 2007) and application papers dealing with ensemble filters have been published since the seminal work by Evensen (1994). The fundamentals of the Kalman filter and the derivation of low-rank implementations have been presented in a previously published book chapter (Brasseur 2006) and will not be repeated here. In the present chapter, we present an overview of recent developments that were stimulated by the need for more cost-effective methods in the context of operational oceanography. We will illustrate the utility of explicitly integrating the ensemble statistics, for instance to cope with non-Gaussian error distributions, to compute smoothed estimation solutions and to assimilate asynchronous observations.

The basic concepts of ensemble filtering are briefly recalled in the next section. In Sect. 15.3, we present several approaches that have been proposed in the literature to generate and propagate the error statistics in time. Different formulations of the observational update are then discussed in Sect. 15.4. Issues related to temporal assimilation strategies are addressed in Sect. 15.5. In the conclusion of the chapter, we finally discuss the implications of ensemble techniques for model development strategies in operational oceanography systems.

15.2 Ensemble Data Assimilation Methods Derived From the Kalman Filter

The Kalman Filter (Kalman 1960) provides the basic framework for sequential assimilation methods based on the least-squares estimation principle. The Kalman Filter (KF) is a statistical recursive algorithm designed for systems with linear dynamics, in which prior information (i.e., the numerical model prediction) is merged with information from the actual system (i.e., observations) to produce a corrected, posterior system estimate. An extended version of the KF has been developed for non-linear models, known as the Extended Kalman Filter (EKF, Jazwinski 1970).

Fig. 15.1 Conceptual filtering process in sequential assimilation. Three forecast-update cycles are shown, with data assimilated at times i + 1, i + 2 and i + 3. The vertical bars represent the model forecast and analysis errors (in red and blue) and the observation errors (in green)

The implementation of the KF follows a sequence of forecast-update cycles that involve two main steps: the forecast step for transitioning the model state and the associated error covariance between two successive times i and i + 1, and the observational update for correcting the forecast using observations available at time i + 1 (Fig. 15.1). A somewhat heuristic derivation of the KF and EKF equations is presented by Brasseur (2006). During the forecast step, the uncertainty of the system estimate is expected to grow due to initial errors and imperfect model dynamics, while the uncertainty is reduced whenever measurements are assimilated. As only data from the past influence the best estimate at a given time, the assimilation process belongs to the class of filtering methods.
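To make the two steps concrete, here is a minimal sketch of one forecast-update cycle of the linear KF in Python/NumPy. The symbols M, Q, H and R (model transition operator, model error covariance, observation operator and observation error covariance) are generic notation introduced for this illustration only, not quantities defined elsewhere in this chapter.

```python
import numpy as np

def kf_cycle(x, P, M, Q, H, R, y):
    """One forecast-update cycle of the linear Kalman filter.

    x, P : prior state estimate and its error covariance at time i
    M, Q : linear model operator and model error covariance
    H, R : observation operator and observation error covariance
    y    : observations available at time i + 1
    """
    # Forecast step: propagate the state and its error covariance
    xf = M @ x
    Pf = M @ P @ M.T + Q

    # Observational update: the Kalman gain weights forecast against data
    S = H @ Pf @ H.T + R                 # innovation covariance
    K = Pf @ H.T @ np.linalg.inv(S)      # Kalman gain
    xa = xf + K @ (y - H @ xf)           # analysis (corrected) state
    Pa = (np.eye(len(xf)) - K @ H) @ Pf  # analysis error covariance
    return xa, Pa
```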

In spite of its conceptual simplicity, the EKF is often impractical to apply to non-linear ocean circulation models, even for problems of modest size. One of the main issues is the explicit computation of the forecast error covariance in the baseline algorithm, which can only be achieved at the price of n model integrations (where n designates the dimension of the state vector of the discretized system). Since n is typically ~10^7-10^9 in operational applications, a brute-force implementation is simply not feasible and alternative formulations must be sought.

In his initial work on the Ensemble Kalman Filter, Evensen (1994) showed that Monte Carlo methods could be used as an alternative to the approximate error covariance evolution equation used in the EKF to compute forecast error estimates at a significantly lower computational cost. Unlike the deterministic EKF which explicitly describes the evolution of the best estimate of the system state and the associated error covariance, the EnKF relies on the stochastic integration of an ensemble of model states followed by observational updates using the forecast error covariance implicitly represented by the ensemble spread. The size of the ensemble (denoted m hereafter), and thus the CPU requirements to run the EnKF, depends on the actual shape of the probability distributions that need to be sampled, but the literature suggests that an ensemble of size 50-100 is often adequate for real ocean systems. The accuracy of the state estimates as a function of ensemble size remains, however, an important research question that will be discussed further in the following sections.
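As a minimal illustration of how the ensemble spread implicitly carries the forecast error statistics, the following sketch assumes a NumPy array E holding the m members as columns; in realistic applications the full n × n covariance is never formed explicitly, and the algorithms manipulate only the n × m anomaly matrix.

```python
import numpy as np

def ensemble_statistics(E):
    """Forecast error statistics implied by an ensemble E of shape (n, m),
    with n the state dimension and m the ensemble size."""
    m = E.shape[1]
    xf = E.mean(axis=1)            # ensemble mean = best estimate
    A = E - xf[:, None]            # anomalies (the ensemble spread)
    Pf = A @ A.T / (m - 1)         # sample forecast error covariance
    # Pf (n x n) is shown here for clarity only; in practice one works with A.
    return xf, A, Pf
```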

15.3 Ensemble Generation and Forecast

Several categories of ensemble-based assimilation techniques can be identified, which essentially differ in the strategy adopted to generate the initial ensemble members describing the uncertainty of the estimated state, and to propagate this uncertainty over the assimilation time window.

In the native formulation of the EnKF, the ensemble members are generated as purely random samples of the prior probability distribution of the system state, while in other schemes such as the Singular Evolutive Extended Kalman (SEEK) filter introduced by Pham et al. (1998) the uncertainty is described in terms of "well chosen" perturbations of a given reference trajectory. The basic principle of the SEEK filter is to make corrections only in the directions for which the error is amplified or not sufficiently attenuated by the system dynamics. Instead of ensembles of model realizations, the SEEK filter thus considers perturbation ensembles that span and track the scales and processes where the dominant errors occur. This motivates the representation of uncertainty using Empirical Orthogonal Functions (EOFs) of the system variability (which are often approximated by EOFs of the model variability) to characterize and predict the largest uncertainties. As illustrated in detail by Nerger et al. (2005), an EOF-based approach generally requires a smaller ensemble size than strategies based on purely random sampling for the same performance. Based on this work and the reformulated SEEK filter by Pham (2001), a revised sampling strategy for the EnKF was proposed by Evensen (2004).
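A hedged sketch of how such an EOF basis can be built in practice is given below, assuming a hypothetical array X of stored model snapshots used as a proxy for the system variability; this is only one of several possible constructions.

```python
import numpy as np

def eof_modes(X, r):
    """Leading EOFs of a state history X (n x N snapshots), used here as a
    proxy for the dominant directions of forecast error (SEEK-like basis)."""
    A = X - X.mean(axis=1, keepdims=True)       # anomalies about the time mean
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    modes = U[:, :r]                            # r leading EOFs (orthonormal columns)
    variance = s[:r] ** 2 / (s ** 2).sum()      # fraction of variance explained
    return modes, variance
```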

The SEEK approach is closely related to the concept of Error Subspace Statistical Estimation (ESSE) introduced by Lermusiaux and Robinson (1999). In applications of ESSE methodologies, the error modes are obtained by a singular value decomposition of the error covariance matrix, which can be specified by means of analytical functions. Other methods have been proposed that utilize singular, Lyapunov or breeding vectors of the transition matrix (e.g., Miller and Ehret 2002; Hamill et al. 2003). The leading Lyapunov vectors are computed by applying the tangent linear model to perturbations of the non-linear model trajectory, whereas the bred vectors are a generalization of the Lyapunov vectors computed with the non-linear model. Figure 15.2 illustrates schematically the convergence of an ensemble of initial perturbations toward the leading Lyapunov vectors.
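The breeding idea mentioned above can be sketched as follows, with `model` a hypothetical non-linear propagator over one breeding cycle and `amplitude` an illustrative rescaling parameter; this is a generic illustration, not a specific operational scheme.

```python
import numpy as np

def breed_vector(model, x0, delta0, n_cycles, amplitude):
    """Bred-vector style cycling: integrate control and perturbed runs with the
    full non-linear model, then rescale the difference to a fixed amplitude."""
    x, dx = x0.copy(), delta0.copy()
    for _ in range(n_cycles):
        x_next = model(x)                     # control (non-linear) forecast
        xp_next = model(x + dx)               # perturbed forecast
        dx = xp_next - x_next                 # grown perturbation
        dx *= amplitude / np.linalg.norm(dx)  # rescaling (breeding step)
        x = x_next
    return dx
```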

Fig. 15.2 Schematic representation of the evolution of an ensemble of initially random perturbations converging toward the leading Lyapunov vectors. (Redrawn from Kalnay 2003)

A common denominator between the EnKF, the SEEK, the ESSE and similar methods is the rank-deficient property of the error covariance matrix associated with the ensemble spread since, in practice, the size of the ensemble is much smaller than the dimension of the system space. The concept of reduced rank was first introduced in the KF framework by Todling and Cohn (1994). The reformulation of the analysis and forecast equations of the KF in the presence of rank-deficient error covariance matrices is described in Brasseur (2006).

In the EKF, the time evolution of the error statistics over the assimilation window is computed using the tangent linear model to update the error covariance matrix. The same approach was proposed in the native formulation of the SEEK filter (Pham et al. 1998). In order to better capture non-linear evolutions of the error field, a finite-difference solution of the forecast error equation can be substituted, as proposed by Brasseur et al. (1999), thereby avoiding the development and implementation of a tangent linear operator in the filter. A similar strategy is used to evolve the subspace in the ESSE methodology (Lermusiaux 2001) as well as in the SEIK variant of the SEEK filter (Pham 2001): in these schemes, a central forecast and an ensemble of stochastic ocean model integrations are carried out starting from perturbed states. This technique is ultimately very close to the EnKF, in which the integration of all ensemble members is performed using the non-linear model without any obligation to identify a central forecast.
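A minimal sketch of the finite-difference forecast of the error modes around a central forecast, in the spirit of the schemes described above; the propagator `model` and the perturbation size `epsilon` are illustrative assumptions, not quantities defined in this chapter.

```python
import numpy as np

def ensemble_error_forecast(model, xc, modes, epsilon):
    """Finite-difference forecast of error modes: integrate a central state and
    states perturbed along each mode with the non-linear model, then rebuild
    the forecast modes from the differences."""
    xc_f = model(xc)                              # central forecast
    modes_f = np.empty_like(modes)
    for j in range(modes.shape[1]):
        xp_f = model(xc + epsilon * modes[:, j])  # perturbed forecast
        modes_f[:, j] = (xp_f - xc_f) / epsilon   # finite-difference mode
    return xc_f, modes_f
```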

The computational resources needed to evolve the error statistics with these methods are proportional to the rank r of the error subspace, or alternatively to the size m of the ensemble. In either case, this requires at least the equivalent of several tens of model integrations, which is not always affordable in operational ocean systems. It is therefore natural to fall back on simplified versions of the assimilation schemes where the error statistics are not explicitly evolved using the model dynamics: examples are the Ensemble Optimal Interpolation (EnOI) presented in Evensen (2003), or the SEEK filter with fixed error modes (Brasseur and Verron 2006). An operational implementation of EnOI in the Bluelink operational forecasting system is described by Oke et al. (2008), whereas in the Mercator system the assimilation scheme is a SEEK filter with a stationary EOF basis (Brasseur et al. 2005).

These methods still allow the computation of multivariate analyses and are numerically extremely efficient, but a larger ensemble may be required to ensure that it spans a large enough space to properly capture the relevant analysis increments. Further, the sub-optimal analysis solutions are provided without consistent error estimates. We will identify in the next sections additional disadvantages of sequential assimilation methods based on stationary error statistics.
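To fix ideas, here is a schematic EnOI-type analysis in which a single forecast is corrected using a stationary anomaly matrix (e.g. built from EOFs or snapshots of a long model run); the scaling factor `alpha` is an illustrative tuning parameter often used in such schemes, and the sketch should not be read as the exact formulation of any particular operational system.

```python
import numpy as np

def enoi_analysis(xf, A_static, H, R, y, alpha=0.5):
    """EnOI-type analysis: a single forecast xf is corrected using a stationary
    anomaly matrix A_static (n x m); alpha scales the static covariance toward
    a typical forecast error level."""
    m = A_static.shape[1]
    HA = H @ A_static                               # anomalies in observation space
    S = alpha * HA @ HA.T / (m - 1) + R             # innovation covariance
    K = alpha * (A_static @ HA.T / (m - 1)) @ np.linalg.inv(S)
    return xf + K @ (y - H @ xf)                    # corrected state
```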

15.4 Observational Update

In this section, an overview is given of different approaches to perform the observational update (or analysis step) of the system state. The Kalman filter updates the prediction with new measurements using a weighted combination of the model forecast and measurement values. The computation of the weights relies on an optimality principle that involves the covariance matrices of the forecast and observation errors.

In the so-called "stochastic" implementations of the EnKF, the repetitive update of all forecast members is achieved using perturbed observations to avoid the problem of systematic underestimation of the analysis covariance that occurs when the same data and the same gain are used in the ensemble of analysis equations (Burgers et al. 1998). The perturbations are drawn from a distribution with zero mean and covariance equal to the measurement error covariance matrix. An alternative technique is the so-called "deterministic" or "square root" analysis scheme, which consists of a single analysis based on the ensemble mean, and where the update of the perturbations is obtained from the square root of the Kalman filter analysis error covariance (Verlaan and Heeming 1997; Tippett et al. 2003).

When the error covariance matrices are in square root form, the inverse operations required to compute the analysis increments are performed in the reduced space rather than in the observation space. The Kalman filter analysis scheme is thus transformed to become linear in the number of observations y (instead of being proportional to the cube of y as in the original formulation), provided that the inverse of the observation error covariance matrix is available. This condition is a severe limitation to the use of square root algorithms in operational settings, and it often leads to assuming uncorrelated observation errors for the sake of numerical efficiency. In a recent paper, Brankart et al. (2009) show that the linearity of the square root algorithm in y can be preserved for a very broad class of non-diagonal observation error covariance matrices. This can be achieved by augmenting the observation vector with discrete measurement gradients. The proposed technique is shown to be beneficial with regard to the quality of the observational updates and the accuracy of the associated error estimates (Fig. 15.3). It can also be combined with adaptive techniques (e.g., Brankart et al. 2010a) for tuning key parameters of the prescribed error statistics. Based on these results, the restriction to a diagonal observation error covariance matrix in the square root formulation of the analysis step should no longer be an obstacle for operational implementations with huge observation sets to assimilate.
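The augmentation idea can be illustrated schematically as follows for an along-track observation vector: the observations and the observation operator are extended with discrete first differences, so that part of the error correlation structure can be represented while keeping a diagonal error covariance in the augmented space. The variances to be assigned to the gradient observations must be derived from the assumed correlation model (see Brankart et al. 2009); they are not specified in this sketch, which is an illustration of the principle rather than the published algorithm.

```python
import numpy as np

def augment_with_gradients(y, H):
    """Augment an along-track observation vector y (length p) and its linear
    observation operator H (p x n) with discrete first differences."""
    p = len(y)
    D = np.eye(p - 1, p, k=1) - np.eye(p - 1, p)   # first-difference operator
    y_aug = np.concatenate([y, D @ y])             # original obs + along-track gradients
    H_aug = np.vstack([H, D @ H])                  # corresponding augmented operator
    return y_aug, H_aug
```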


Fig. 15.3 Results of observational updates performed in a 1/4° NEMO simulation of the North Brazil Current system. The first line shows a snapshot of the circulation on December 14th: the sea-surface height (SSH) in meters (left column), the SSH gradient in meters per grid point (middle column), and the sea-surface velocity in m s⁻¹ (right column). The error standard deviation as measured by the ensemble of differences with respect to the true state is shown in the second line, while the same quantities as estimated by the square-root scheme with a parameterization of correlated observation errors are shown in the third line. The bottom line shows the results obtained when the observation error is parameterized as a diagonal matrix (i.e. neglecting correlated observation errors), which significantly differ from the previous error estimates. (Redrawn from Brankart et al. (2009))


In typical operational forecasting applications, the dimension of the reduced space (e.g., implied by the ensemble size m) is much smaller than the dimension of the state vector n, and also smaller than the number of positive Lyapunov exponents of the system. Hence, not all unstable modes of the dynamical system can be controlled, and this may lead to unreliable forecasts. A related issue is the inaccurate representation of weakly correlated variables at long distance when using ensembles of only ~100 members. Different localization techniques have been introduced to overcome this problem, such as the application of a Schur product to modify the ensemble covariance using local-support correlation functions (e.g., Houtekamer and Mitchell 2001), the computation of a local analysis at each grid point using only nearby measurements (Evensen 2003), or the approximate local error parameterization proposed by Brankart et al. (2010b) which preserves the computational complexity of square root algorithms. The localization process can be interpreted as a means of increasing the rank of the error covariance matrix without increasing the number of members in the ensemble (i.e. the cost of the forecast error computation). In the example of Fig. 15.4, it is shown that the local parameterization proposed by Brankart et al. (2010b) can efficiently remove the spurious covariances associated with remote observations and improve the accuracy of the analysis error estimated by the filter. This topic is still the subject of active research aiming to improve the efficiency of ensemble-based assimilation techniques in realistic oceanographic applications.

Fig. 15.4 Representers for one observation (identified by the blue dot) of sea-surface height in a relatively quiet region of an idealised mid-latitude, double-gyre model: (top left panel) using a 5000-member ensemble covariance without localization, (top central panel) using a 200-member ensemble covariance without localization, and (top right panel) using a 20-member ensemble covariance with localization. The bottom panels show the corresponding error standard deviation estimated by the square root filter. (From Brankart et al. 2010b)
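A schematic illustration of the Schur-product localization mentioned above, for a one-dimensional grid; the compactly supported taper used here is a simple stand-in for the Gaspari-Cohn type functions commonly used in practice, and the full covariance matrix is formed only for clarity.

```python
import numpy as np

def schur_localize(Pf, positions, length_scale):
    """Element-wise (Schur) product of a sample covariance with a compactly
    supported correlation function, damping spurious long-range covariances."""
    d = np.abs(positions[:, None] - positions[None, :])  # pairwise distances (1-D grid)
    taper = np.maximum(0.0, 1.0 - d / length_scale)      # simple compact-support taper
    return taper * Pf                                     # localized covariance
```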

Until now, the default in KF-based algorithms has been to assume that the background error pdfs are normal distributions. This is a convenient choice because normal pdfs are fully determined by only two parameters (the mean and the standard deviation), and remain Gaussian after linear operations. In addition, the least squares solution obtained by the linear update (as discussed above) corresponds to the maximum likelihood estimate if the errors have a normal distribution. In many applications, however, the Gaussian assumption is a very crude approximation of the actual error distributions and a more general framework compliant with the concept of non-linear analysis is required. A simple example is the estimation of tracer concentrations, which are positive-definite quantities and therefore cannot be treated as Gaussian variables (as positive-definiteness is not preserved by a linear analysis scheme).

An adaptation of the EnKF to account for non-Gaussian errors was first proposed by Bertino et al. (2003), who introduced the concept of anamorphosis to transform the set of original state variables in the physical space into modified variables that are hopefully more suitable for linear updates. This concept has been further explored and applied to assimilate synthetic data in a coupled physical-ecosystem model of the Arctic Ocean (Simon and Bertino 2009). An even more general transformation method has been proposed recently by Beal et al. (2010), still in the context of data assimilation into coupled physical-biogeochemical models. The underlying idea is to take full advantage of the ensemble forecast statistics and compute transformation functions locally by mapping the ensemble percentiles of the distribution of each state variable onto the Gaussian percentiles. The results of idealized experiments indicate that this anamorphosis method can significantly improve the estimation accuracy with respect to classical computations based on the Gaussian assumption, opening new prospects, e.g. for assimilation into coupled models (physics-biology or ocean-ice-atmosphere). A key aspect of the anamorphosis approach is that it does not induce any significant extra cost compared to the linear analysis scheme. However, the full benefit of adaptive anamorphosis as proposed by Beal et al. (2010) is obtained when the error statistics are explicitly propagated using the model dynamics.
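The anamorphosis idea can be sketched as follows for a single state variable: the empirical percentiles of the ensemble are mapped onto the percentiles of a standard Gaussian, and values are transformed by piecewise-linear interpolation between the two sets of percentiles. SciPy is assumed here only to evaluate Gaussian quantiles; this is a generic illustration of the principle, not the exact algorithm of the papers cited above.

```python
import numpy as np
from scipy.stats import norm

def anamorphosis(ensemble_values, x):
    """Map a value x to the standard Gaussian quantile corresponding to its
    rank within the empirical (ensemble) distribution."""
    q = np.linspace(0.01, 0.99, 99)             # percentile levels
    ens_q = np.quantile(ensemble_values, q)     # ensemble percentiles
    gauss_q = norm.ppf(q)                       # standard Gaussian percentiles
    # The inverse transform back to physical space swaps ens_q and gauss_q.
    return np.interp(x, ens_q, gauss_q)         # piecewise-linear mapping
```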

15.5 Temporal Strategies

In the conceptual assimilation problem described by Fig. 15.1, two major simplifications have been considered: (i) the observations are available at discrete time intervals, and (ii) the analysis is performed at the exact time of the measurements. In real-world oceanographic and atmospheric problems, the situation is quite different since the flow of observations can be considered as almost continuous in time (as, for instance, the sampling of along-track altimeter data). It would not be appropriate to interrupt the model forecast every time a new piece of data becomes available because very frequent model updates based on too few data would be too expensive and detrimental to the numerical time integration of the model. In practice, assimilation windows in operational systems are 3-7 days for mesoscale ocean current predictions, and 10-30 days for initialization of coupled ocean-atmosphere seasonal prediction systems.

Hence, intermittent assimilation methods necessarily involve approximations. For example, the FGAT (First Guess at Appropriate Time) method initially introduced in meteorology can be used to evaluate the innovation vector more accurately: instead of computing the difference between the time-distributed data set and the model forecast at the analysis time, the innovation is evaluated "on the fly" by accumulating the differences between each observation and the corresponding element of the model forecast at the measurement time. This approach has been tested with 3D-VAR assimilation systems (Weaver et al. 2003).
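A minimal sketch of the FGAT innovation computation, assuming that the model trajectory is stored at a set of forecast times and that `obs_operator` is a hypothetical function returning the model equivalent of an observation.

```python
import numpy as np

def fgat_innovations(trajectory, times, obs_values, obs_times, obs_operator):
    """First Guess at Appropriate Time: each innovation is computed against the
    model forecast at (or nearest to) the measurement time, and the corrections
    are then applied together at the analysis time."""
    d = []
    for y, t in zip(obs_values, obs_times):
        k = int(np.argmin(np.abs(times - t)))      # nearest stored forecast time
        d.append(y - obs_operator(trajectory[k]))  # innovation at measurement time
    return np.array(d)
```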

A rigorous way of taking into account the temporal distribution of data is offered by "4D" assimilation methods. 4D-VAR or ensemble methods indeed have the capacity to assimilate asynoptic data at their exact observation time, within an assimilation window. In the Ensemble Kalman Smoother (EnKS) introduced by Evensen and van Leeuwen (2000), it is possible to assimilate non-synoptic measurements by exploiting the time correlations in the ensemble: the EnKF solution is used as the first guess for the analysis, which is propagated backward in time using the ensemble covariances. This so-called 4D-EnKF formulation was further discussed by Hunt et al. (2004), and more recently revisited by Sakov et al. (2010). In line with these works, Cosme et al. (2010) have developed a reduced-rank, square-root smoother derived from the SEEK formulation. The CPU requirements for the EnKF, the EnKS, the SEEK filter or the SEEK smoother are similar when m = r. Compared to 4D-VAR, however, no backward integrations in time and no adjoint operators are needed. The storage requirements of smoothers may nevertheless become huge for long time intervals with many analysis times, since the ensemble trajectory has to be stored at all observation instants.
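The way ensemble time correlations allow asynchronous observations to be used can be sketched as follows: the model equivalents of the observations are computed along each member trajectory at the observation time, and the ensemble at the analysis time is updated through the cross covariance between the current anomalies and the observed-equivalent anomalies. A perturbed-observation form is used here for simplicity; it is a generic illustration rather than the exact EnKS or 4D-EnKF algorithm.

```python
import numpy as np

def asynchronous_update(E_now, HE_obs_time, R, y, rng):
    """Update the ensemble at the analysis time (E_now, n x m) with observations
    taken earlier in the window, whose member-wise model equivalents
    HE_obs_time (p x m) were stored at the observation time."""
    n, m = E_now.shape
    A = E_now - E_now.mean(axis=1, keepdims=True)            # current anomalies
    HA = HE_obs_time - HE_obs_time.mean(axis=1, keepdims=True)
    S = HA @ HA.T / (m - 1) + R                               # innovation covariance
    K = (A @ HA.T / (m - 1)) @ np.linalg.inv(S)               # cross-time gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
    return E_now + K @ (Y - HE_obs_time)
```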

A second consequence of intermittency is the discontinuity of the forecast/analysis estimates, which is recognized as a major drawback of both variational and sequential assimilation methods that require repeated assimilation cycles. Two related problems, shocks to the model and data rejection, arise with intermittent corrections. It is found that observations assimilated into models may introduce transient waves excited by the impulsive insertion. These waves are often the result of imperfections in the corrected state associated with physically unbalanced error covariances. In order to incorporate analysis increments in a more gradual manner, an algorithm based on Incremental Analysis Updates (IAU) was proposed by Bloom et al. (1996), which combines aspects of intermittent and continuous assimilation schemes. Using the classical KF equations, the IAU algorithm first computes the analysis correction; this correction is then distributed (uniformly or not) over the assimilation window and inserted gradually into the model evolution (Ourmieres et al. 2006). The state obtained at the end of the assimilation window can be used as initial conditions for the next assimilation cycle, leading to time-continuous filtered trajectories. The IAU temporal strategy can be complemented by the FGAT scheme which computes the innovation "on the fly". More rigorous techniques that combine localization and the processing of observations arriving continuously in time are the subject of new developments suited to large numerical systems (e.g., Bergemann and Reich 2010).
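A minimal sketch of the IAU forcing, with `model` a hypothetical one-step integrator and the analysis increment distributed with uniform weights by default.

```python
import numpy as np

def iau_forecast(model, x0, increment, n_steps, weights=None):
    """Incremental Analysis Update: the analysis increment is not added at once
    but distributed over the assimilation window as a gradual forcing term."""
    if weights is None:
        weights = np.full(n_steps, 1.0 / n_steps)   # uniform distribution in time
    x = x0.copy()
    for k in range(n_steps):
        x = model(x) + weights[k] * increment       # model step plus IAU forcing
    return x                                        # time-continuous filtered state
```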

15.6 Conclusions

Outstanding advances have been accomplished since the first applications of ensemble-based methods to assimilate data into oceanic or atmospheric models. A broad variety of reduced-rank Kalman filters exists today, developed with the aim of reducing the computational complexity of the native algorithms while making it possible to assimilate complex and heterogeneous data sets into non-linear models.

In this chapter, we have shown that ensemble-based methods are becoming very competitive with respect to 4D-VAR while their flexibility for implementation into numerical codes that are in perpetual evolution remains a major asset. In addition, ensemble methods provide an elegant and powerful statistical methodology to quantify uncertainty as requested by users of operational oceanography products.

However, most operational systems in place today are still based on sub-optimal estimation methods (e.g. EnOI) that do not explicitly propagate the error statistics. The transition toward 4D methods is challenging in a context where the increase of operational model resolution in space and time is strongly encouraged by user requirements, scientific arguments (e.g. role of submesoscale processes) and the refined resolution of data sets available today.

For applications such as the production of multi-decadal reanalyses, the requirement for dynamical consistency over periods of time longer than the predictability time scales of the simulated flow remains an issue at the conceptual level. This is especially true with eddy-resolving ocean models, for which 4D "weak-constraint" assimilation methods (i.e. assuming that the model equations are not strictly verified) represent the relevant framework to properly reconcile imperfect models with imperfect data.

Acknowledgements I thank the organizers of the GODAE Summer School on Operational Oceanography held in Perth, Australia, for inviting me to give these lectures in such a wonderful place. This work has been partly supported by the MyOcean project of the European Commission under Grant Agreement FP7-SPACE-2007-1-CT-218812-MYOCEAN.

References

Beal D, Brasseur P, Brankart J-M, Ourmieres Y, Verron J (2010) Characterization of mixing errors in a coupled physical biogeochemical model of the North Atlantic: implications for nonlinear estimation using Gaussian anamorphosis. Ocean Sci 6:247-262
Bergemann K, Reich S (2010) A localization technique for ensemble Kalman filters. Quart J R Meteor Soc 136:701-707
Bertino L, Evensen G, Wackernagel H (2003) Sequential data assimilation techniques in oceanography. Int Stat Rev 71:223-241
Bloom SC, Takacs LL, Da Silva AM, Ledvina D (1996) Data assimilation using incremental analysis updates. Mon Wea Rev 124:1256-1271
Brankart J-M, Ubelmann C, Testut C-E, Cosme E, Brasseur P, Verron J (2009) Efficient parameterization of the observation error covariance matrix for square root or ensemble Kalman filters: application to ocean altimetry. Mon Wea Rev 137:1908-1927. doi:10.1175/2008MWR2693.1
Brankart J-M, Cosme E, Testut C-E, Brasseur P, Verron J (2010a) Efficient adaptive error parameterizations for square root or ensemble Kalman filters: application to the control of ocean mesoscale signals. Mon Wea Rev 138:932-950. doi:10.1175/2009MWR3085.1
Brankart J-M, Cosme E, Testut C-E, Brasseur P, Verron J (2010b) Efficient local error parameterizations for square root or ensemble Kalman filters: application to a basin-scale ocean turbulent flow. Mon Wea Rev (in revision)
Brasseur P (2006) Ocean data assimilation using sequential methods based on the Kalman filter. In: Chassignet E, Verron J (eds) Ocean weather forecasting: an integrated view of oceanography. Springer, Netherlands, pp 271-316
Brasseur P, Verron J (2006) The SEEK filter method for data assimilation in oceanography: a synthesis. Ocean Dyn 56:650-661. doi:10.1007/s10236-006-0080-3
Brasseur P, Blayo E, Verron J (1996) Predictability experiments in the North Atlantic Ocean: outcome of a QG model with assimilation of TOPEX/Poseidon altimeter data. J Geophys Res 101(C6):14161-14174
Brasseur P, Ballabrera J, Verron J (1999) Assimilation of altimetric observations in a primitive equation model of the Gulf Stream using a singular evolutive extended Kalman filter. J Mar Syst 22(4):269-294
Brasseur P, Bahurel P, Bertino L, Birol F, Brankart J-M, Ferry N, Losa S, Remy E, Schröter J, Skachko S, Testut C-E, Tranchant B, van Leeuwen PJ, Verron J (2005) Data assimilation for marine monitoring and prediction: the MERCATOR operational assimilation systems and the MERSEA developments. Quart J R Meteor Soc 131:3561-3582
Burgers G, van Leeuwen P, Evensen G (1998) Analysis scheme in the ensemble Kalman filter. Mon Wea Rev 126:1719-1724
Chen Y, Zhang D (2006) Data assimilation for transient flow in geologic formations via ensemble Kalman filter. Adv Water Resour 29(8):1107-1122
Cosme E, Brankart J-M, Verron J, Brasseur P, Krysta M (2010) Implementation of a reduced-rank, square root smoother for high-resolution ocean data assimilation. Ocean Model 33:87-100. doi:10.1016/j.ocemod.2009.12.004
Cummings J, Bertino L, Brasseur P, Fukumori I, Kamachi M, Martin M, Morgensen K, Oke P, Testut CE, Verron J, Weaver A (2009) Ocean data assimilation systems for GODAE. Oceanography 22(3):96-109
Evensen G (1994) Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J Geophys Res 99(C5):10143-10162
Evensen G (2003) The Ensemble Kalman Filter: theoretical formulation and practical implementation. Ocean Dyn 53:343-367
Evensen G (2004) Sampling strategies and square root analysis schemes for the EnKF. Ocean Dyn 54:539-560
Evensen G (2007) Data assimilation: the ensemble Kalman filter. Springer, New York, p 279
Evensen G, van Leeuwen PJ (2000) An ensemble Kalman smoother for non-linear dynamics. Mon Wea Rev 128:1852-1867
Hamill TM, Snyder C, Whitaker JS (2003) Ensemble forecasts and the properties of flow-dependent analysis-error covariance singular vectors. Mon Wea Rev 131:1741-1758
Houtekamer PL, Mitchell HL (2001) A sequential ensemble Kalman filter for atmospheric data assimilation. Mon Wea Rev 129:123-137
Hunt B, Kalnay E, Kostelich E, Ott E, Patil DJ, Sauer T, Szunyogh I, Yorke JA, Zimin AV (2004) Four-dimensional ensemble Kalman filtering. Tellus 56A:273-277
Jazwinski AH (1970) Stochastic processes and filtering theory. Academic Press, San Diego
Kalman RE (1960) A new approach to linear filtering and prediction problems. J Basic Eng 82:35-45
Kalnay E (2003) Atmospheric modeling, data assimilation and predictability. Cambridge University Press, Cambridge, p 341
Lermusiaux PFJ (2001) Evolving the subspace of the three-dimensional ocean variability: Massachusetts Bay. J Mar Syst 29:385-422
Lermusiaux PFJ, Robinson AR (1999) Data assimilation via error subspace statistical estimation, Part I: theory and schemes. Mon Wea Rev 127(7):1385-1407
Miller RN, Ehret L (2002) Ensemble generation for models of multimodal systems. Mon Wea Rev 130:2313-2333
Nerger L, Hiller W, Schröter J (2005) A comparison of error subspace Kalman filters. Tellus 57A:715-735
Oke PR, Brassington GB, Griffin DA, Schiller A (2008) The Bluelink ocean data assimilation system (BODAS). Ocean Model 21:46-70
Ourmieres Y, Brankart JM, Berline L, Brasseur P, Verron J (2006) Incremental analysis update implementation into a sequential ocean data assimilation system. J Atmos Ocean Technol 23(12):1729-1744
Pham DT (2001) Stochastic methods for sequential data assimilation in strongly non-linear systems. Mon Wea Rev 129:1194-1207
Pham DT, Verron J, Roubaud MC (1998) A singular evolutive extended Kalman filter for data assimilation in oceanography. J Mar Syst 16:323-340
Reichle RH, McLaughlin DB, Entekhabi D (2002) Hydrologic data assimilation with the ensemble Kalman filter. Mon Wea Rev 130:103-114
Sakov P, Evensen G, Bertino L (2010) Asynchronous data assimilation with the EnKF. Tellus 66A:24-29
Simon E, Bertino L (2009) Application of the Gaussian anamorphosis to assimilation in a 3-D coupled physical-ecosystem model of the North Atlantic with the EnKF: a twin experiment. Ocean Sci 5:495-510
Tippett MK, Anderson JL, Bishop CH, Hamill TM, Whitaker JS (2003) Ensemble square root filters. Mon Wea Rev 131:1485-1490
Todling R, Cohn SE (1994) Suboptimal schemes for atmospheric data assimilation based on the Kalman filter. Mon Wea Rev 122:2530-2557
Verlaan M, Heemink AW (1997) Tidal flow forecasting using reduced rank square root filters. Stoch Hydrol Hydraul 11:349-368
Weaver A, Vialard J, Anderson DLT (2003) Three- and four-dimensional variational assimilation with a general circulation model of the tropical Pacific Ocean. Part I: formulation, internal diagnostics, and consistency checks. Mon Wea Rev 131:1360-1378

