Development of Operational Ocean Forecasting System Evaluation

The second use of data assimilation with ocean modelling has been dedicated to short-term ocean prediction. The development of operational oceanographic centres is also tied to the availability of satellite data. In the late 1990s, several groups had already proposed multivariate assimilation schemes enhancing ocean model capabilities, based either on quasi-geostrophic or primitive-equation formulations (see Dombrowsky et al. 2009 for a brief historical introduction). In the framework of GODAE, the main development effort of these groups focused on OFS providing daily estimates of hindcast, nowcast and short-term forecast18 of the ocean dynamics at mesoscale. That is, a description, at length and time scales larger than 10 km and one day, of the density field and water-mass changes, of the currents (from surface Ekman currents to western boundary currents), and of their respective transient effects in terms of fronts, meanders, waves and eddy-like propagating features, from the surface to depth. The objectives and potential applications of such OFS have been discussed at length in the terms of reference of GODAE (see Bell et al. 2009 for more details and references). However, one can mention ocean circulation description for synoptic to interannual studies; short-term prediction for safety (e.g. oil-spill prediction, search and rescue activities); water quality (by coupling with biogeochemical models, e.g. algal bloom detection); defence applications (usually associated with acoustic modelling); or fish-stock assessment when coupled with efficient ecosystem and higher-trophic-level models.

18 Short-term ocean prediction: between 5 days and 2 weeks.

The evaluation methodology of OFS first followed the path proposed by the modelling community, but had to take into account constraints that do not normally arise when performing model validation in academic projects. First, by evaluating the assimilation scheme itself and its efficiency in providing accurate ocean analyses19. That is, the focus is on accuracy rather than overall quality. In other words, where a certain level of quality is sought in pure modelling research (e.g., is there deep convection and Labrador Sea Water formation? a Gulf Stream overshoot? an acceptable meridional heat transport and Meridional Overturning Circulation?), assimilation experiments are tested for "realistic representation", using reference datasets to directly quantify error levels. A comprehensive error budget is also required for data assimilation results to be properly assessed. Assimilation schemes are guided to varying degrees by background20 and observation errors, and the most sophisticated schemes provide robust analysis21 and forecast error estimates (Brasseur 2006; Cummings et al. 2009). It is then necessary to verify the model error assumptions against dedicated error validation procedures.
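One standard way to verify such error assumptions is to examine innovation statistics: if background and observation errors are unbiased and uncorrelated, the variance of the innovations (observation minus background) should be close to the sum of the assumed background and observation error variances. The sketch below illustrates this consistency check with purely hypothetical error levels and simulated data; it is not taken from any particular OFS.

```python
import random
import statistics

# Hypothetical error standard deviations assumed by an assimilation scheme
sigma_b = 0.5   # background (prior forecast) error, e.g. in degC for SST
sigma_o = 0.3   # observation error

# Simulate innovations d = y_obs - H(x_background). Under the usual
# assumptions, Var(d) should approach sigma_b**2 + sigma_o**2.
random.seed(0)
truth = 0.0
innovations = []
for _ in range(100_000):
    background = truth + random.gauss(0.0, sigma_b)
    obs = truth + random.gauss(0.0, sigma_o)
    innovations.append(obs - background)

expected_var = sigma_b**2 + sigma_o**2
sample_var = statistics.pvariance(innovations)
print(f"expected innovation variance: {expected_var:.3f}")
print(f"sample innovation variance:   {sample_var:.3f}")
```

A persistent mismatch between the two variances in routine monitoring would indicate misspecified background or observation errors, which is exactly the kind of dedicated error validation the text refers to.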

Second, by taking into account and measuring the impact of real-time constraints: the lack of data (observations not yet available within the assimilation time window) and/or the lower quality of these data, compared to the reanalysis framework, where data are usually complete, fully quality-controlled and corrected. Note also that in real-time operations, forcing fields provided by weather forecast or atmospheric models may be less precise.

And third, by focusing more specifically on the scientific assessment of forecast products, that is, the evaluation of the performance and predictability of the OFS. Performance is considered here not in its general sense, but more precisely as the benefit of using an ocean prediction model together with an assimilation methodology that corrects the ocean estimates produced by the OFS. Here performance is a measure of the usefulness of these different components for users' interests and applications: to predict ocean currents for the next week, why not just use a climatology? Why not apply a persistence approach, assuming that the ocean state next week is, to a good approximation, the same as the estimate computed today? In both cases, what is the added value of sophisticated tools like assimilation schemes and ocean models compared to climatology or persistence? In practice, the idea is to evaluate forecast errors against climatology or persistence errors, together with the accuracy of the analysis (i.e., the efficiency of the assimilation scheme).

19 See footnote 16.

20 See footnote 15.

21 See footnote 16.
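The comparison against persistence and climatology described above is commonly summarized by a skill score, here taken as one minus the ratio of forecast RMSE to reference RMSE. The sketch below uses invented sea-surface temperature values purely for illustration; the data, the 5-day window and the reference definitions are assumptions, not results from any actual system.

```python
import math

def rmse(estimate, truth):
    """Root-mean-square error between two equal-length series."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimate, truth)) / len(truth))

def skill_score(forecast_err, reference_err):
    """1 = perfect forecast, 0 = no better than the reference,
    negative = worse than the reference (persistence or climatology)."""
    return 1.0 - forecast_err / reference_err

# Hypothetical daily sea-surface temperature values (degC) over a 5-day forecast
obs         = [18.2, 18.4, 18.9, 19.3, 19.1]   # verifying observations
forecast    = [18.1, 18.5, 18.7, 19.2, 19.0]   # model forecast
persistence = [18.0] * 5                        # today's analysis, held constant
climatology = [18.6] * 5                        # long-term mean for the period

e_f = rmse(forecast, obs)
e_p = rmse(persistence, obs)
e_c = rmse(climatology, obs)
print(f"forecast RMSE = {e_f:.3f}")
print(f"skill vs persistence = {skill_score(e_f, e_p):.2f}")
print(f"skill vs climatology = {skill_score(e_f, e_c):.2f}")
```

A forecast system only adds value where both skill scores stay positive over the forecast range; when they drop to zero, the cheaper reference would serve the user just as well.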

Constraints also appear on the technical/engineering side. Assessments have to be performed in real time, under practical operational constraints such as computing resources, storage capacity, and the availability of reference values. This means that the dataflow has to be monitored, since a lack of input data for any technical reason will directly degrade the quality of the ocean estimates.

Also, outputs from operational systems may be used for user-oriented applications (e.g., water quality, maritime safety or other societal uses). Thus, the performance assessment methodology mentioned above has to rely on user requirements. Different applications may require different levels of accuracy. For instance, the accuracy of surface current forecasts intended to support search and rescue activities may not be achievable by the operational systems, while the same ocean model may be used satisfactorily for a more general ocean study, or a climatology may be sufficient for some applications (e.g., a tourist brochure).

Thus, for all these reasons, operational oceanography teams, using different model configurations and data assimilation methods, have tried to develop tools for assessing the quality of their outputs, in order to provide "error bars" to users. Thanks to GODAE, these initiatives could be shared at the international level. An overview of OFS validation is given in the lecture by Martin (2011) during the summer school.

In this context, a common interest in intercomparison and collaboration on validation methods soon emerged among the different groups developing OFS. In the framework of the MERSEA Strand-1 European Union (EU) project (2003-2004), a first attempt was made to intercompare eddy-permitting, basin-scale ocean data-assimilating systems. Hindcasts from the different systems were intercompared using climatology and historical high-quality ocean datasets, such as WOCE sections (Crosnier et al. 2006). This validation methodology was enhanced during the EU MERSEA Integrated Project (2004-2008, see http://) in several respects: (1) perform the validation routinely, and thereby stimulate data processing and archiving centres to provide observations in real time; (2) apply diagnostics that offer a robust scientific evaluation of each system, selecting the most suitable diagnostics among those applied in research mode; (3) evaluate both operational system performance and product quality, taking into account user requirements (usually for short-term to seasonal timescale applications); (4) push for consistency of assessment among the different forecasting centres by applying similar diagnostics to the different systems, thus strengthening the overall assessment activity through central team expertise; (5) use this consistency to allow intercomparison of the operational systems, and thus design and implement a technical architecture that allows robust exchanges, interconnections, and interoperability between these systems, which is a milestone for implementing, in a consistent way, interoperable activities such as ensemble forecasting.

In the framework of GODAE, and building on these advances in OFS scientific assessment, a special intercomparison exercise was decided upon, prepared, and carried out at the beginning of 2008 (Hernandez et al. 2009). Some of the results are highlighted below.
