'''Forecast verification''' is a subfield of the climate, atmospheric and ocean sciences concerned with validating, verifying and determining the predictive power of prognostic model forecasts. Because of the complexity of these models, forecast verification goes well beyond simple measures of [[association (statistics)|statistical association]] or [[mean squared error|mean error]] calculations.
==Defining the problem==
To determine the value of a [[forecasting|forecast]], it must be measured against some baseline, or minimally accurate, forecast. Many types of forecast, while producing impressive-looking skill scores, are nonetheless naive. For example, a "persistence" forecast, which simply predicts that today's weather will be the same as yesterday's, can still rival even the most sophisticated models; it can be considered analogous to a [[scientific control|"control" experiment]]. Another example is a [[climatology|climatological]] forecast, which predicts that today's weather will match the average weather observed on this date over, say, the previous 75 years.
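For concreteness, here is a minimal sketch of the two baseline forecasts in Python (the temperature series and the climatological value are invented for illustration; nothing here comes from a cited source):

<syntaxhighlight lang="python">
import numpy as np

# Five days of invented observed temperatures (degrees C).
observed = np.array([12.0, 14.5, 13.0, 15.0, 16.5])

# Persistence forecast: tomorrow's weather will be the same as
# today's, so the forecast for days 2-5 is the observation from
# days 1-4.
persistence = observed[:-1]

# Climatological forecast: every day is forecast to be the
# long-term average for that calendar day (a single invented
# value here; in practice, one value per day of the year).
climatology = np.full(4, 14.0)

truth = observed[1:]
print("persistence absolute errors:", np.abs(persistence - truth))
print("climatology absolute errors:", np.abs(climatology - truth))
</syntaxhighlight>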
The second example suggests a useful way of normalizing a forecast before applying any skill measure. Most weather variables cycle, since the Earth is forced by a highly regular energy source. A numerical weather model must accurately reproduce both the seasonal cycle and (if finely enough resolved) the diurnal cycle, but reproducing these cycles adds no information, since they are easily predicted from climatological data. If the climatological cycles are removed from both the model output and the "truth" data, then the skill score applied afterward is more meaningful.
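A sketch of this normalization, assuming a mean-squared-error skill score and an invented daily temperature series (the score definition and the data are illustrative choices, not prescribed by the text above):

<syntaxhighlight lang="python">
import numpy as np

def skill_score(forecast, observed, baseline):
    """MSE-based skill score: 1 is a perfect forecast, 0 is no
    better than the baseline, negative is worse than the baseline."""
    mse_f = np.mean((forecast - observed) ** 2)
    mse_b = np.mean((baseline - observed) ** 2)
    return 1.0 - mse_f / mse_b

rng = np.random.default_rng(0)
days = np.arange(365)

# Invented "truth": a smooth seasonal cycle plus weather noise.
seasonal = 15.0 + 10.0 * np.sin(2 * np.pi * days / 365.0)
observed = seasonal + rng.normal(0.0, 3.0, size=days.size)

# A model that reproduces the cycle and half of each daily anomaly.
forecast = seasonal + 0.5 * (observed - seasonal)

# Scoring the raw fields rewards the model for reproducing the
# easily predicted seasonal cycle.
raw = skill_score(forecast, observed, np.full(days.size, observed.mean()))

# Removing the climatological cycle from both model output and
# truth, then scoring against the zero-anomaly (climatological)
# forecast, isolates the genuine skill.
anom = skill_score(forecast - seasonal, observed - seasonal,
                   np.zeros(days.size))

print(f"skill score, raw fields: {raw:.2f}")   # inflated, near 1
print(f"skill score, anomalies:  {anom:.2f}")  # more meaningful
</syntaxhighlight>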
Another way of framing the problem is to ask how much the forecast reduces our ''uncertainty'' about the outcome. Tang et al. (2005)<ref name="Tang_etal2005">{{cite journal | title=Reliability of ENSO Dynamical Predictions | author=Youmin Tang | author2=Richard Kleeman | author3=Andrew M. Moore | year=2005 | journal=Journal of the Atmospheric Sciences | volume=62 | pages=1770–1791 }}</ref> used the [[Kullback–Leibler divergence|relative entropy]] to characterize the uncertainty of [[ensemble forecasting|ensemble predictions]] of the [[El Niño–Southern Oscillation|El Niño/Southern Oscillation (ENSO)]]:
:<math>
R \approx \sum_i p_i \ln \frac{p_i}{q_i}
</math>
where ''p''<sub>''i''</sub> is the probability of outcome ''i'' under the ensemble distribution and ''q''<sub>''i''</sub> is its probability under the climatological distribution.
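A short sketch of this calculation (the three-category ENSO binning and the probabilities are hypothetical, not taken from Tang et al.):

<syntaxhighlight lang="python">
import numpy as np

def relative_entropy(p, q):
    """R = sum_i p_i ln(p_i / q_i): the relative entropy of the
    forecast distribution p with respect to climatology q.
    Terms with p_i = 0 contribute nothing, by convention."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Hypothetical example: the ENSO state binned into three
# categories (La Nina, neutral, El Nino).
q = np.array([0.25, 0.50, 0.25])  # climatological frequencies
p = np.array([0.05, 0.25, 0.70])  # ensemble strongly favouring El Nino

print(f"R = {relative_entropy(p, q):.3f} nats")
# R = 0 when p equals q: such a forecast adds no information
# beyond climatology.
print(f"R for p = q: {relative_entropy(q, q):.3f} nats")
</syntaxhighlight>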
==For more information==
The World Meteorological Organization maintains a useful web page on forecast verification.<ref>{{cite web | author = WMO Joint Working Group on Forecast Verification Research | title = Forecast Verification: Issues, Methods and FAQ | url = http://www.cawcr.gov.au/projects/verification/ | accessdate = July 30, 2013}}</ref>
For more in-depth information on how to verify forecasts, see the book by Jolliffe and Stephenson.<ref>{{cite book | title=Forecast Verification: A Practitioner's Guide in Atmospheric Science | author=Ian T. Jolliffe | author2=David B. Stephenson | publisher=Wiley | year=2011}}</ref>
==References==
{{reflist}}
[[Category:Weather forecasting]]