Summary and Conclusions

An overview of methods that can be used to evaluate the accuracy and skill of ocean forecasts has been presented. The statistical measures used in such evaluations have been defined, together with some useful diagrams for summarising the related statistical information. The importance of knowing the accuracy and quality of the observations used in the evaluations has also been discussed.
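To make the standard measures concrete, the sketch below computes three of the most widely used verification statistics (bias, root-mean-square error and correlation) for a forecast series against matched observations. This is an illustrative implementation, not code from any particular GODAE system; the function name and inputs are assumptions.

```python
import numpy as np

def verification_stats(forecast, observed):
    """Basic verification statistics for a forecast/observation pair:
    mean error (bias), root-mean-square error (RMSE), and the Pearson
    correlation between the two series."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    error = forecast - observed
    bias = error.mean()                     # systematic error
    rmse = np.sqrt((error ** 2).mean())     # total error magnitude
    corr = np.corrcoef(forecast, observed)[0, 1]  # pattern agreement
    return {"bias": bias, "rmse": rmse, "corr": corr}
```

In practice these statistics would be accumulated over many forecast cycles and stratified by region, depth and lead time before being summarised on, for example, a Taylor diagram.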

Some examples of the application of the various statistical measures to GODAE ocean forecasting systems have been given. These were used to highlight the need to evaluate the ability of the model to reproduce the large-scale ocean circulation, the accuracy of the analyses, and the accuracy of the subsequent forecasts. The use of independent data in assessing analyses and forecasts has also been presented, as has an example of validation directed at a particular user need.

Various techniques that could be used to evaluate ocean forecasting systems have not been described in detail, for different reasons. For example, in variational data assimilation schemes a formal error estimate of the analysis can be derived from the Hessian of the cost function. However, this quantity is expensive to calculate, and the result depends on the input error covariance information, which is usually not well known. For these reasons, it is not usually provided as an analysis error estimate.
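For a quadratic (linear-Gaussian) 3D-Var cost function, J(x) = ½(x − x_b)ᵀB⁻¹(x − x_b) + ½(Hx − y)ᵀR⁻¹(Hx − y), the Hessian is B⁻¹ + HᵀR⁻¹H, and its inverse is the formal analysis error covariance. The toy example below illustrates this on a two-variable state; the matrices B, R and H are assumed values chosen purely for illustration.

```python
import numpy as np

# Toy 3D-Var illustration: for the quadratic cost function
#   J(x) = 1/2 (x - xb)^T B^-1 (x - xb) + 1/2 (Hx - y)^T R^-1 (Hx - y)
# the Hessian is B^-1 + H^T R^-1 H, and its inverse is the formal
# analysis error covariance A.  All matrices here are illustrative.

B = np.array([[1.0, 0.3],
              [0.3, 1.0]])   # background error covariance (assumed)
R = np.array([[0.5]])        # observation error covariance (assumed)
H = np.array([[1.0, 0.0]])   # observe the first state variable only

hessian = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
A = np.linalg.inv(hessian)   # formal analysis error covariance

# The analysis variance of the observed variable is reduced well below
# its background variance; the unobserved variable is reduced less,
# through the background error correlation.
```

Note that A inherits whatever is assumed about B and R, which is precisely why this formal estimate is only as trustworthy as the input error covariances; for realistic state dimensions the inverse Hessian also cannot be formed explicitly.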

Similarly, for systems which run an ensemble of forecasts, the spread of the forecasts can provide an estimate of the confidence that should be placed in them. By sampling the uncertainty in the initial conditions and in the modelled processes and parameterisations, the resulting spread gives statistical information on how much confidence should be placed in the forecast in different regions. However, the way in which the uncertainties in the system are sampled has a significant impact on the resulting forecast error estimates, and few operational ocean forecasting systems currently run an ensemble prediction system.
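The basic spread calculation can be sketched as follows. The ensemble here is synthetic (random perturbations about a fixed field, purely for illustration); in a real system each member would come from a separately perturbed model integration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: n_members forecasts of a field (e.g. SST)
# on a small grid, generated here as random perturbations about a
# mean state -- illustrative only, not a real model ensemble.
n_members, ny, nx = 20, 4, 5
ensemble = 15.0 + rng.normal(scale=0.5, size=(n_members, ny, nx))

ens_mean = ensemble.mean(axis=0)           # best-guess forecast
ens_spread = ensemble.std(axis=0, ddof=1)  # member-to-member spread

# Large spread flags regions where the forecast is uncertain; small
# spread indicates higher confidence.  For a well-calibrated ensemble,
# the spread should on average match the error of the ensemble mean.
```

The final comment is the crux of the sampling problem noted above: if the initial-condition and model uncertainties are under- or over-sampled, the spread will be systematically too small or too large relative to the actual forecast error.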

Inter-comparison with other ocean forecasting systems can also provide useful information about the skill of a particular ocean forecasting system and insight into weaknesses that can easily be corrected. For more information on this subject, the reader is directed to the separate paper on inter-comparison methods.

The evaluation of ocean forecast products is an important aspect of all the GODAE systems, and is continually being improved. It is hoped that common verification statistics will be produced routinely by all the systems over the coming years; these will drive improvements to the systems themselves, and will also provide further insight into the most appropriate methods for their evaluation.

Acknowledgements The author would like to thank Joe Metzger, Nicolas Ferry and Peter Oke for their permission to reproduce results here. The author also gratefully acknowledges the FOAM team for input and useful discussions. The FOAM system was developed for the Royal Navy, and under the MERSEA and MyOcean Projects—partial support of the European Commission under Contracts SIP3-CT-2003-502885 and FP7-SPACE-2007-1 is gratefully acknowledged.
