Forecast Procedure
APCC Science Division
1. General Procedure
Until 2007, APCC issued operational seasonal forecasts four times a year. Since January 2008, APCC has issued rolling 3-month forecasts every month. The data flow for the operational procedure is presented in Fig. 1.1.
Original dynamical model data, including forecasts and hindcasts, are first collected from the model holders among APEC member economies. These data are then standardized into a common format and stored in separate files, each containing a single variable, a single ensemble member, and a single month. Next, quality-check procedures are applied to the forecast data. Only data that pass the quality check are combined with observation data, and these composite data serve as the input to the APCC MME procedure.
APCC produces seasonal forecasts of precipitation, T850, and Z500 using five methods:
1. SPPM – step-wise pattern projection method (an improved variant of the earlier Coupled Pattern Projection Method);
2. Simple composite method – simple composite of bias-corrected model ensemble means;
3. Superensemble – multiple-regression-based blend of model ensemble means (MR);
4. Synthetic multi-model ensemble (SE) – multiple regression on leading PCs;
5. Probabilistic – position of the forecast PDF with respect to the historical PDF.
All the above forecast results, together with hindcast skill scores for deterministic and probabilistic hindcasts (in general at level 3 of WMO's SVS verification standards), are plotted as graphics.
These graphics are issued with the APCC Outlook for the next three months, along with a summary of current conditions, during the 23rd-25th of each month.
In the Outlook, the interpretation and description of the global/regional prediction are made based on the above forecasts. The forecast information, including the Outlook document and forecast figures, is then uploaded to the APCC website. Verification of the hindcasts through cross-validation is performed in the operational forecast month, and verification of the forecasts is made as soon as the observation data are obtained, typically with a lag of one month.
2. Time Schedule
The time schedule for the APCC operational procedure is summarized in Table 2.1. During the first 10 days of the month before the forecast season, all participating model data are collected. From the middle of the second week, these data are processed into basic data with a common format, and quality checks are then conducted on these basic data. From the middle of the second week to the middle of the third week, the APCC MME forecasts are produced using the four MME schemes. After that, two days are needed for the APCC Outlook.
Table 2.1 The time schedule for the APCC operational procedure

  Day in the month before the season | Mission
  21-23                              | Issue of request for MME data
  1-10                               | Data collection
  11-15                              | Standardization, quality check
  16-21                              | MME production, plotting of figures, and development of draft outlook
  23-25                              | Discussions with SAC/WG members; outlook and upload to website
Fig. 1.1 The data flow for the operational procedure. [The diagram shows original model data and hindcast data passing through quality check and standardization into basic data, which are merged with observation data into composite data. The MME schemes (MME, CPPM, SVD/MR, and Synthetic) and the probabilistic scheme (PROB) then produce forecast and hindcast products: anomalies, total fields, climatology, ACC and RMSE skill scores, and above-normal/near-normal/below-normal probabilities. The outputs feed the Outlook (interpretation and description of global/regional prediction), dissemination (web upload, backup of data and documents), and verification (of the previous prediction and of the hindcasts).]
3. Science basis of MME schemes at APCC
3.1 CPPM
The Coupled Pattern Projection Method (CPPM) MME is a statistical downscaling forecast in which an optimal forecast field (coupled pattern) is projected onto the observation at a target point. To ensure the success of the statistical downscaling scheme, two points are important:
(1) How is the coupled pattern found?
(2) How is the correct transfer function determined?
CPPM chooses the coupled pattern according to the correlation coefficient between the model prediction in the predictor field and the observation at the target point, and uses the linear regression function between them as the transfer function. In general, a coupled pattern may represent an important underlying dynamical relationship between the predictor field and the target point. CPPM is therefore a promising post-processing method for improving prediction skill.
Suppose the predictand and predictor fields are Y(t) and X(λ, φ, t), respectively. Here Y is the locally observed precipitation, X denotes the model-predicted variables, and λ and φ are longitude and latitude. The spatial pattern of the predictor field associated with the predictand can be expressed as

    C(λ, φ) = mean_t[ Y(t) X(λ, φ, t) ]  and  Ĉ_i = C W_i,        (3.1)

where

    W_i(λ, φ) = 1 inside the window, 0 outside the window,

and mean_t denotes the time mean over the hindcast period. The window W specifies the position of the spatial pattern of the predictor field. Once the patterns Ĉ_i are obtained, a local predictand (the corrected prediction) can be obtained by projecting the patterns onto the predictor variables of the model-predicted data:

    Y_C(t) = (1/k) Σ_{i=1..k} [ a_i Σ_{λ,φ} C(λ, φ) W_i(λ, φ) X(λ, φ, t) + b_i ].        (3.2)
The regression coefficients (a_i, b_i) are obtained by minimizing the error variance of Y using the hindcast prediction data. By applying this technique in a cross-validated manner, one can obtain an independently corrected forecast Y_i(t) for a particular i-th window. The most important part of the CPPM method is the selection of optimal windows. For this purpose, we generate a large number of corrected predictions corresponding to different windows by moving the window position and changing its size. The window sizes range from 30° longitude x 20° latitude (the minimum) to 120° longitude x 50° latitude (the maximum). The optimal windows are selected by comparing the temporal correlation skill of the corrected forecasts for the corresponding windows using a double cross-validation procedure (Kaas et al. 1996). The final corrected forecast is not determined by the single pattern with the highest cross-validated correlation skill, but by the ensemble mean of several corrections based on several different patterns. Only patterns significant at the 95% level are used. If five patterns are selected, the final correction is the composite of the five corrections based on those patterns; in this case, k = 5 in equation (3.2). For the correction of predicted precipitation, the predictor variables used here are precipitation and 850-hPa temperature.
It should be noted that correcting predictions toward observations with CPPM-type methods leads to a loss of variability in absolute magnitude; that is, the corrected field stays close to climatology. Thus, it may be necessary to apply some form of inflation to the adjusted field. The most common method of inflation is to multiply the adjusted values by the ratio between the standard deviation of the observations and that of the adjusted values. Here, the inflation factor is obtained by combining the common method of inflation with the weighting factor considered by Feddersen et al. (1999) and used by Kang et al. (2004).
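The projection, regression, and inflation steps above can be sketched for a single window as follows. This is a minimal illustrative sketch, not the operational code; the function name, array layout, and the use of a single window with the common (standard-deviation ratio) inflation are assumptions:

```python
import numpy as np

def cppm_correct(y_train, X_train, X_fcst, window_mask):
    """Single-window CPPM correction with inflation (a minimal sketch).

    y_train : (T,) observed anomalies at the target point
    X_train : (G, T) model-predicted anomalies on the predictor grid
    X_fcst  : (G,) model-predicted anomaly field at the forecast time
    window_mask : (G,) boolean, True inside the window W_i
    """
    # Coupled pattern: time mean of Y(t) X(g, t), masked by the window (Eq. 3.1)
    C = (X_train * y_train).mean(axis=1) * window_mask

    # Project the pattern onto the predictor field (inner sum in Eq. 3.2)
    p_train = C @ X_train        # (T,) projected series over the hindcasts
    p_fcst = C @ X_fcst          # scalar for the forecast time

    # Regression coefficients (a_i, b_i) minimizing the error variance
    a, b = np.polyfit(p_train, y_train, 1)

    # Inflation: rescale by the ratio of observed to corrected standard deviation
    fitted = a * p_train + b
    inflation = y_train.std() / fitted.std()
    return (a * p_fcst + b) * inflation
```

In the operational scheme this would be repeated over many windows and the corrections composited as in Eq. (3.2).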
3.2 MME
Multi-model ensemble (MME) technology is considered an efficient way to improve weather and climate forecasts. The basic idea of MME is to reduce model-inherent error by using a number of independent and skillful models, in the hope of better covering the whole space of possible climate states. Here, MME denotes a deterministic forecast scheme based on the simple arithmetic mean of the predictions of the individual member models. The underlying assumption is that each model is relatively independent and, to some extent, capable of forecasting the regional climate well, so a good forecast can be expected from a simple composite of the predictions from the different models. The scheme preserves the model dynamics, since it applies only simple spatial filtering to each variable at each grid point. In addition, this simple scheme retains the common advantages and limitations of the model predictions, and it is therefore a good benchmark for evaluating other MME schemes.
The multi-model ensemble (MME) forecast constructed with bias-corrected data is given by

    S_t = Ō + (1/N) Σ_{i=1..N} (F_{i,t} − F̄_i),        (3.3)
where F_{i,t} is the i-th model forecast at time t, F̄_i and Ō are the climatologies of the i-th forecast and of the observations, respectively, and N is the number of forecast models involved. The MME results are therefore generated by combining the bias-corrected model forecast anomalies. Skill improvements result from the bias removal and from the reduction of climate noise by ensemble averaging. In this scheme, the ensemble mean assigns the same weight of 1/N to each of the N member models everywhere, regardless of their relative performance.
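Equation (3.3) can be sketched in a few lines; the function name and array layout are illustrative assumptions, not the operational implementation:

```python
import numpy as np

def simple_composite_mme(forecasts, hindcasts, obs_hindcast):
    """Bias-corrected simple composite of Eq. (3.3); a minimal sketch.

    forecasts    : (N,) current forecasts F_{i,t}, one per model
    hindcasts    : (N, T) hindcasts giving each model climatology F-bar_i
    obs_hindcast : (T,) observations giving the observed climatology O-bar
    """
    model_clim = hindcasts.mean(axis=1)   # F-bar_i
    obs_clim = obs_hindcast.mean()        # O-bar
    # Equal 1/N weights everywhere, regardless of model performance
    return obs_clim + np.mean(forecasts - model_clim)
```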
3.3 Multiple-Regression (MR)
The conventional multi-model superensemble forecast (Krishnamurti et al., 2000) constructed
with bias-corrected data is given by
    S_t = Ō + Σ_{i=1..n} a_i (F_{i,t} − F̄_i),        (3.4)

where F_{i,t} is the i-th model forecast for time t, F̄_i is the appropriate monthly mean of the i-th forecast over the training period, Ō is the observed monthly mean over the training period, a_i are regression coefficients obtained by a minimization procedure during the training period, and n is the number of forecast models involved. The multi-model superensemble forecast in equation (3.4) is not directly influenced by the systematic errors of the forecast models involved, because the anomaly term (F_{i,t} − F̄_i) accounts for each model's own seasonal climatology.
At each grid point, the respective weights for each model of the multi-model superensemble are generated using a pointwise multiple regression technique over the training period.
a. Multimodel superensemble using standard linear regression
To obtain the weights, the covariance matrix is built from the seasonal-cycle-removed anomalies F′:

    C_{i,j} = Σ_{t=0..Train} F′_{i,t} F′_{j,t},        (3.5)

where Train denotes the training period, and i and j index the i-th and j-th forecast models, respectively.
The goal of regression is to express a set of data as a linear function of input data. For this, we
construct a set of linear algebraic equations,
    C · x = õ′,        (3.6)

where õ′_j = Σ_{t=0..Train} O′_t F′_{j,t} is the (n x 1) vector containing the covariances of the observations with the individual models for which we want to find a linear regression formula, O′ is the seasonal-mean-removed observation anomaly, C is the (n x n) covariance matrix, and x is the (n x 1) vector of regression coefficients (the unknowns). In the conventional superensemble approach, the regression coefficients are obtained using Gauss-Jordan elimination with pivoting. The covariance matrix C and õ′ are rearranged into a diagonal matrix C′ and a vector õ″, and the solution vector is obtained as

    x = ( õ″_1 / C′_11, ..., õ″_n / C′_nn )^T,        (3.7)

where the superscript T denotes the transpose.
The Gauss-Jordan elimination method for obtaining the regression coefficients between different model forecasts is not numerically robust. Problems arise if a zero pivot element is encountered on the diagonal, because the solution procedure involves division by the diagonal elements. Note that if there are fewer equations than unknowns, the regression equation defines an underdetermined system, with more regression coefficients than elements of {õ′_j}. In such a situation there is no unique solution, and the covariance matrix is said to be singular. In general, the Gauss-Jordan elimination method is not recommended for solving the regression problem, since singularity problems like the above are occasionally encountered. In practice, when a singularity is detected, the superensemble forecast is replaced by an ensemble forecast.
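As an illustration, the pointwise regression of Eqs. (3.4)-(3.6) can be sketched as follows, with a least-squares solver standing in for the elimination step so that the singular case is handled gracefully; the names and array layout are assumptions:

```python
import numpy as np

def superensemble_forecast(F_train, obs_train, F_fcst):
    """Pointwise multiple-regression superensemble (Eqs. 3.4-3.6); a sketch.

    F_train   : (n, T) model hindcasts at one grid point, training period
    obs_train : (T,)   observations over the training period
    F_fcst    : (n,)   model forecasts for the target time
    """
    Fbar = F_train.mean(axis=1)
    anom = F_train - Fbar[:, None]            # F'_{i,t}
    obs_anom = obs_train - obs_train.mean()   # O'_t

    # Normal equations C x = o' (Eqs. 3.5-3.6), solved by least squares
    C = anom @ anom.T
    o = anom @ obs_anom
    a, *_ = np.linalg.lstsq(C, o, rcond=None)

    # Eq. 3.4: observed climatology plus weighted bias-corrected anomalies
    return obs_train.mean() + a @ (F_fcst - Fbar)
```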
b. Multimodel superensemble using SVD
SVD is applied to the computation of the regression coefficients for a set of different model forecasts. The SVD of the covariance matrix C is its decomposition into a product of three matrices. The covariance matrix C can be rewritten as a sum of outer products of the columns of a matrix U and the rows of a transposed matrix V^T:

    C_{i,j} = (U W V^T)_{i,j} = Σ_{k=1..n} w_k U_{ik} V_{jk},        (3.8)

where U and V are (n x n) matrices that obey the orthogonality relations, and W is an (n x n) diagonal matrix containing the rank-k real positive singular values w_k arranged in decreasing magnitude. Because the covariance matrix C is square and symmetric, C^T = V W U^T = U W V^T = C, which shows that the left and right singular vectors U and V are equal. The method used can therefore also be called principal component analysis (PCA). The decomposition can be used to obtain the regression coefficients:

    x = V diag(1/w_j) (U^T · õ′).        (3.9)
The pointwise regression model using the SVD method removes the singular matrix problem that
cannot be entirely solved with the Gauss–Jordan elimination method.
Moreover, solving Eq. (3.9) with zeroing of the small singular values gives better regression coefficients than the SVD solution in which the small w_j are left nonzero. If the small w_j values are retained, the residual |C · x − õ′| usually becomes larger (Press et al. 1992). This means that if most of the singular values w_j of a matrix C are small, then C is better approximated by only the few large singular values in the sum of Eq. (3.8).
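The truncated-SVD solution of Eq. (3.9) with zeroing of small singular values can be sketched as follows; the tolerance and function name are illustrative assumptions:

```python
import numpy as np

def svd_regression_coeffs(C, o, rel_tol=1e-8):
    """Solve C x = o' via SVD with zeroing of small singular values (Eq. 3.9).

    A sketch of the remedy described above: singular values smaller than
    rel_tol times the largest are treated as zero instead of being inverted.
    """
    U, w, Vt = np.linalg.svd(C)                        # w in decreasing order
    w_safe = np.where(w > rel_tol * w[0], w, np.inf)   # small w_j contribute 0
    # x = V diag(1/w_j) (U^T o'), with 1/w_j = 0 for the zeroed values
    return Vt.T @ ((U.T @ o) / w_safe)
```

Unlike Gauss-Jordan elimination, this remains well defined when the covariance matrix is singular.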
3.4 Synthetic Ensemble (SE)
Despite the continuous improvement of both dynamical and empirical models, the predictive
skill of extended forecasts remains quite low. Multi-model ensemble predictions rely on
statistical relationships established from an analysis of past observations (Chang et al., 2000).
This means that the multi-model ensemble prediction depends strongly on the past performance
of individual member models.
In the context of seasonal climate forecasts, many studies (Krishnamurti et al., 1999, 2000a,b, 2001, 2003; Doblas-Reyes et al., 2000; Pavan and Doblas-Reyes 2000; Stephenson and Doblas-Reyes 2000; Kharin and Zwiers 2002; Peng et al., 2002; Stefanova and Krishnamurti, 2002; Yun et al., 2003; Palmer et al., 2004) have discussed various multi-model approaches for forecasting anomalies, such as the ensemble mean, the unbiased ensemble mean, and the superensemble forecast. These are defined as follows:
    E_b = (1/N) Σ_{i=1..N} (F_i − Ō)

    E_c = (1/N) Σ_{i=1..N} (F_i − F̄_i)

    S = Σ_{i=1..N} a_i (F_i − F̄_i)        (3.10)
Here, E_b is the ensemble mean, E_c is the unbiased ensemble mean, S is the superensemble, F_i is the i-th model forecast out of N models, F̄_i is the monthly or seasonal mean of the i-th forecast over the training period, Ō is the observed monthly or seasonal mean over the training period, and a_i is the regression coefficient of the i-th model. The differences between these approaches come from the treatment of the mean bias and the weights. Both the unbiased ensemble mean and the superensemble contain no mean bias, because the seasonal climatologies of the models have been accounted for. The difference between the unbiased ensemble mean and the superensemble comes from the differential weighting of the models in the latter case. A major aspect of the superensemble forecast is the training of the forecast data set. The prediction skill during the forecast phase can be improved when the input multi-model predictions are statistically corrected to reduce model errors.
Fig. 3.1. Schematic chart of the proposed superensemble prediction system. The new data set is generated from the original data set by minimizing the residual error variance E(ε²) for each model.
Figure 3.1 is a schematic chart illustrating the proposed algorithm. The new data set is generated
from the original data set by finding a consistent spatial pattern between the observed analysis
and each model. This procedure is a linear regression problem in EOF space. The newly
generated set of EOF-filtered data is then used as an input multi-model data set for
ensemble/superensemble forecast. The computational procedure for generating the new data set
is described below.
The observation data (O) and the multi-model forecast data set (Fi) can be written as linear
combinations of EOFs, which describe the spatial and temporal variability:
    Õ(x, t) = Σ_n O_n(t) φ_n(x)        (3.11)

    F̃_i(x, T) = Σ_n F_{i,n}(T) ψ_{i,n}(x)        (3.12)

Here, O_n(t), F_{i,n}(T) and φ_n(x), ψ_{i,n}(x) are the principal component (PC) time series and the corresponding EOFs of the n-th mode for the observations and the model forecasts, respectively. The index i indicates a particular member model. The PCs in eqs. (3.11) and (3.12) represent the time evolution of the spatial patterns during the training period (t) and the whole forecast period (T).
We can now estimate a consistent pattern between the observation and the forecast data, which
evolves according to the PC time series of the training observations. The regression relationship
between the observation PC time series and the number of PC time series of individual model
forecast data can be written as
    Õ(t) = Σ_n α_{i,n} F̃_{i,n}(t) + ε_i(t).        (3.13)
With eq. (3.13) we can express the observation time series as a linear combination of the predictor time series. To obtain the regression coefficients α_{i,n}, the regression is performed in the EOF domain: the covariance matrix is built from the seasonal-cycle-removed anomaly PC time series of each model, and the coefficients α_{i,n} are found such that the residual error is minimized. Once the regression coefficients α_{i,n} are found, the PC time series of the new data set is written as
    F̃_i^reg(T) = Σ_n α_{i,n} F̃_{i,n}(T)        (3.14)

The new data set is now generated by reconstruction with the corresponding EOFs and PCs:

    F̃_i^syn(x, T) = Σ_n F̃_{i,n}^reg(T) φ_n(x).        (3.15)
This EOF-filtered data set generated from the DEMETER coupled multi-model is used as an
input data set for both multi-model ensemble and superensemble prediction systems that produce
deterministic forecasts. What is unique about the new data set is that it minimizes the variance of
the residual error between the observations and each of the member models (Fig. 3.1). The
residual error variance is minimized using a least-squares error approach.
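The EOF regression and reconstruction of Eqs. (3.11)-(3.15) can be sketched for a single member model as follows. This is a simplified sketch: here each observed PC is regressed on the model PC of the same mode, whereas the full scheme performs the regression jointly in EOF space; the function name and truncation are assumptions:

```python
import numpy as np

def synthetic_model_data(obs, model, n_modes=3):
    """EOF-filtered synthetic data for one member model (Eqs. 3.11-3.15).

    obs, model : (T, G) anomaly fields over the training period
    Returns the regression-corrected model data reconstructed on the
    *observed* EOFs, truncated to n_modes.
    """
    # EOF decomposition via SVD: U*s are PC time series, rows of Vt are EOFs
    Uo, so, Vto = np.linalg.svd(obs, full_matrices=False)
    Um, sm, Vtm = np.linalg.svd(model, full_matrices=False)
    obs_pcs = (Uo * so)[:, :n_modes]     # O_n(t)
    mod_pcs = (Um * sm)[:, :n_modes]     # F_{i,n}(t)

    # Regression coefficients alpha_{i,n} minimizing the residual (Eq. 3.13)
    alpha = (mod_pcs * obs_pcs).sum(axis=0) / (mod_pcs ** 2).sum(axis=0)
    reg_pcs = mod_pcs * alpha            # F^reg_{i,n}(T), Eq. 3.14

    # Reconstruction on the observed EOFs (Eq. 3.15)
    return reg_pcs @ Vto[:n_modes]
```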
3.5 Probabilistic Multi-Model Ensemble Forecast
The APCC Probabilistic Multi-Model Ensemble Seasonal Climate Prediction System (PMME) was developed and implemented as an operational forecasting tool in May 2006 (Min and Kryjov 2006). A detailed description of the method, its scientific basis, and verification assessments are given in Min et al. (2009).
a. Probabilistic multi-model ensemble operational forecast
The APCC operational seasonal forecasts are issued in the form of tercile-based categorical
probabilities (hereafter, tercile probabilities), that is, the probability of the below-normal (BN),
near-normal (NN), and above-normal (AN) categories, with respect to climatology. In this study,
similar to many other studies (e.g., Kharin and Zwiers 2003; Boer 2005), a Gaussian
approximation was applied to estimate tercile probabilities. The APCC forecast procedure
consists of two stages. In the first stage the individual model probabilistic forecasts are estimated.
The lower (x_b) and upper (x_a) terciles are estimated as x_b = μ − 0.43σ and x_a = μ + 0.43σ, respectively, with μ and σ being the mean and standard deviation of the hindcast sample.
Forecast probability of each category is estimated as a portion of the cumulative probability of
the forecast sample associated with this category. In the second stage, individual model
probabilistic forecasts for each category are combined using Eq. 3.16 (see below).
An example of a probabilistic seasonal forecast for temperature issued by APCC is shown in Fig. 3.2. It shows the forecast probability of each category separately, and a combined map based on the three category probabilities. The combined map (Fig. 3.2d) shows the regions where one of the categories dominates, in the corresponding color; the regions where the forecast PDF does not significantly differ from the climatological one are left uncolored. For the combined map, at each grid point we test the forecast probability distribution against the climatological distribution using Pearson's chi-square (χ²) test. The statistic is estimated as
    χ*² = n Σ_{j=1..3} (P(E_j) − 0.333)² / 0.333,        (3.16)
where n is the sum of the ensemble sizes of the individual models, P(E_j) is the forecast probability of the j-th event, and 0.333 is the expected (climatological) probability of each of the three equiprobable categories. Under the null hypothesis, which corresponds to no significant difference from the climatological probability distribution, this statistic has a χ² distribution with two degrees of freedom. If the null hypothesis is rejected at the 5% significance level, the largest forecast categorical probability is marked on the combined map in the respective color. It is worth noting that the threshold probabilities associated with this test are very close to those estimated as the 95% confidence interval of climatological probability based on the binomial probability distribution.
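The two-stage procedure above (Gaussian tercile probabilities, then the chi-square test of Eq. (3.16)) can be sketched as follows; the function names are illustrative assumptions, and the 5% critical value for two degrees of freedom (5.991) is hard-coded:

```python
import numpy as np

CHI2_CRIT_2DF_5PCT = 5.991  # chi-square critical value, 2 d.o.f., 5% level

def tercile_probs(fcst_mean, fcst_std, clim_mean, clim_std):
    """Gaussian tercile probabilities [BN, NN, AN] for one model; a sketch.

    The tercile thresholds of the hindcast climatology are approximately
    x_b = mu - 0.43*sigma and x_a = mu + 0.43*sigma.
    """
    from math import erf, sqrt
    cdf = lambda x: 0.5 * (1.0 + erf((x - fcst_mean) / (fcst_std * sqrt(2.0))))
    x_b = clim_mean - 0.43 * clim_std
    x_a = clim_mean + 0.43 * clim_std
    p_bn = cdf(x_b)                      # mass of the forecast PDF below x_b
    p_an = 1.0 - cdf(x_a)                # mass above x_a
    return np.array([p_bn, 1.0 - p_bn - p_an, p_an])

def dominant_category(probs, n_total):
    """Eq. 3.16: return the index of the dominant category, or None when the
    forecast PDF does not significantly differ from climatology (5% level)."""
    chi2 = n_total * np.sum((probs - 1.0 / 3.0) ** 2 / (1.0 / 3.0))
    return int(np.argmax(probs)) if chi2 > CHI2_CRIT_2DF_5PCT else None
```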
Fig. 3.2. The APCC probabilistic multi-model forecast maps for temperature for summer 2007 (June-August). Shown are the probabilities (%) of the (a) AN, (b) NN, and (c) BN categories. (d) Combined map showing the regions where the dominant forecast category is significant according to Pearson's chi-square test.
b. Multi-model combination
Two approaches are possible for developing a probabilistic multi-model ensemble forecast from a set of model ensembles. The first approach is pooling (e.g., Barnston et al. 2003; Doblas-Reyes et al. 2005); the second is to compute a probabilistic forecast separately for each individual model and then combine them. Because of the inconsistency between individual model weights in the APCC hindcast and forecast datasets, only the second approach can be used for the operational APCC probabilistic forecast method. It implies that the forecast probabilities for each tercile category are estimated separately for each individual model, and these forecast probabilities are then combined by applying the total probability formula:
    P(E_j) = Σ_{i=1..M} P(mdl_i) P(E_j | mdl_i),        (3.17)
where P is a forecast probability, E_j is the j-th event (i.e., above normal (AN), near normal (NN), or below normal (BN)), mdl_i is the i-th model, and M is the number of models. In this equation, P(E_j | mdl_i) is the forecast probability of the event E_j conditioned on the i-th model (i.e., the i-th model's forecast of the j-th event), and P(mdl_i) is the unconditional probability of the model, which acts as a model weight in this context.
The choice of the model weights depends upon the ratio between (1) the standard errors of the
individual model ensemble means which represent the 68% confidence intervals of the sampling
errors (Särndal et al. 1992) of individual models associated with model ensemble spread, and (2)
the difference between individual model forecasts caused by both the difference in model
formulations and sampling errors. If the difference between the individual model forecasts is
comparable to or less than the model standard errors the optimal model weights are inversely
proportional to the squared standard error of each individual model forecast (Taylor 1997).
Alternatively, if the difference between the model forecasts is much larger than the model
standard errors, one can neglect them and combine the model forecasts with equal weights.
The analysis of the ratio between the model standard errors and the differences between the model ensemble means has been carried out on the 21-year cross-validated hindcasts. The standard error of the mean for each model for each year is defined as

    ε = σ n^(−1/2),        (3.18)

where σ is the standard deviation of the model spread, and n is the model ensemble size.
The supporting study has shown that for the globe as a whole neither the intermodel difference,
nor the standard error can be treated as prevailing, that is, neither of the above suggested weights
is appropriate for a method of global multimodel forecast. Such uncertainty suggests that for the
global forecast method it is reasonable to choose some compromise approach to the model
weights. We have suggested the geometric mean of the alternative weight values, that is, we
assign those model weights that are inversely proportional to the maximum error in forecast
probability associated with standard error.
The maximum random error in forecast probability, ΔP, is related to the standard error of the mean defined in Eq. (3.18) as

    ΔP = f(X) ε,        (3.19)

where f(X) is a Gaussian PDF,

    f(X) = (1 / (σ √(2π))) exp( −(X − μ)² / (2σ²) ),        (3.20)

and |X − μ| = ε/2. For the standard errors associated with ensemble sizes varying from 5 to 31, the exponential term in Eq. (3.20) ranges within the interval from 0.98 to 0.9999 and can be treated as a constant. Eq. (3.19) then becomes

    ΔP = (constant / σ) ε = constant · n^(−1/2).        (3.21)
Therefore, for each individual model forecast, we assign a weight proportional to the square root of the model ensemble size, n_i. Taking into account that the model weights must sum to one, the final forecast formula for each j-th event is

    P(E_j) = ( Σ_{i=1..M} n_i^(1/2) )^(−1) Σ_{i=1..M} n_i^(1/2) P(E_j | mdl_i).        (3.22)
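Equation (3.22) amounts to a weighted average of the per-model category probabilities with weights proportional to √n_i; a minimal sketch, with illustrative names:

```python
import numpy as np

def combine_probs(model_probs, ens_sizes):
    """Multi-model combination of Eq. (3.22); a minimal sketch.

    model_probs : (M, 3) per-model tercile probabilities [BN, NN, AN]
    ens_sizes   : (M,) ensemble sizes n_i; weights are proportional to
                  sqrt(n_i) and normalized to sum to one.
    """
    w = np.sqrt(np.asarray(ens_sizes, dtype=float))
    w /= w.sum()                 # P(mdl_i), summing to one
    return w @ np.asarray(model_probs)
```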
4. Example
4.1 Forecast for summer 2006
The following 12 GCMs participated in the MME forecasts for summer 2006:
CWB, GCPS, GDAPS_F, GDAPS_O, HMC, IRIF, IRI, JMA, METRI, MGO, NCC and NCEP.
The training period is the 21 years from 1983 to 2003. The anomaly forecasts of the 2006 summer seasonal mean from the four schemes are shown in Fig. 4.1 and Fig. 4.2.
Fig. 4.1 Precipitation anomaly of the seasonal-mean forecasts for summer 2006
Fig. 4.2 T850 anomaly of the seasonal-mean forecasts for summer 2006
Figure 4.3 shows the multi-model probability forecast for summer 2006. The forecast maps show the probabilities of each of the three categories (above-normal, near-normal, and below-normal) and the combined field.
Fig. 4.3 Multi-model probabilistic prediction for summer 2006: probability of each category and the most likely category for precipitation.
4.2 Hindcast skill for summer 2006
The individual GCMs participating in the MME hindcasts for summer 2006 are the same as those in the forecast case. The training period is also the 21 years from 1983 to 2003. After cross-validation, the ACC and RMSE of the precipitation hindcasts over the globe for the four MME schemes are shown in Fig. 4.4 and Fig. 4.5.
Fig. 4.4 ACC for precipitation hindcasts over the globe (1983-2003 and their average) for the 12 individual models and the four MME schemes (CPPM, MME, MR, SE)
Fig. 4.5 RMSE for precipitation hindcasts over the globe (1983-2003 and their average) for the 12 individual models and the four MME schemes (CPPM, MME, MR, SE)
The 20-year (1981 to 2000) hindcast verification was performed using the cross-validation method. We checked the following verification scores, which are recommended in the WMO's Standard Verification System (SVS): the Brier Score (BS, Fig. 4.6) and the reliability diagram (Fig. 4.7).
Fig. 4.6. Brier Score of the probabilistic precipitation hindcasts for the individual models and the multi-model, for spring, above-normal category.
Fig. 4.7 Reliability diagram of the probabilistic precipitation hindcasts for the individual models and the multi-model, for spring, above-normal category.
4.3 Outlook for summer 2006
The Outlook is based on the CPPM forecasts, with reference to the forecasts of the other MME schemes at APCC. The outlook for summer 2006 is presented below:
East Asia
Precipitation
Normal and slightly below normal precipitation prevails over the region.
Temperature
Normal temperature conditions prevail.
South Asia
Precipitation
Normal and below normal precipitation prevails over Indochina, and above normal over the Maritime Continent.
Temperature
Normal conditions prevail.
Russia
Precipitation
Normal and slightly above normal precipitation prevails over most of the region.
Temperature
Positive temperature anomalies over most of the region, except Western Siberia and Chukotka.
Australia
Precipitation
Normal and slightly below normal precipitation over the continent, and above normal over eastern ocean areas.
Temperature
Positive temperature anomaly prevails.
North America
Precipitation
Normal precipitation prevails.
Temperature
Positive temperature anomaly prevails over most of the region, except Alaska.
South America
Precipitation
Normal and slightly above normal precipitation.
Temperature
Positive temperature anomaly over Chile and negative over Peru.
Globe
Precipitation:
Transition from weak La Niña to normal conditions. Negative precipitation anomalies over the equatorial Pacific and Indian Oceans; normal and slightly above normal precipitation in other regions.
Temperature:
Transition from weak La Niña to normal conditions. Negative temperature anomalies are expected over the eastern Pacific and Indian Oceans. Normal conditions and positive temperature anomalies prevail in other regions.
References
Barnston, A. G., S. Mason, L. Goddard, D. G. DeWitt, and S. E. Zebiak, 2003: Increased
automation and use of multimodel ensembling in seasonal climate forecasting at the IRI. Bull.
Amer. Meteor. Soc., 84, 1783-1796.
Boer, G. J., 2005: An evolving seasonal forecasting system using Bayes’ theorem. Atmos.-Ocean,
43 (2), 129–143.
Chang, Y., Schubert, S. D. and Suarez, M. J. 2000. Boreal winter predictions with the GEOS-2
GCM: the role of boundary forcing and initial conditions. Q. J. R. Meteorol. Soc. 126, 2293–
2321.
Doblas-Reyes, F. J., R. Hagedorn, and T. N. Palmer, 2005: The rationale behind the success of multi-model ensembles in seasonal forecasting – II. Calibration and combination. Tellus, 57A, 234-252.
Doblas-Reyes, F. J., Déqué, M. and Piedelievre, J.-P. 2000. Multi-model spread and probabilistic
forecasts in PROVOST. Q. J. R. Meteorol. Soc. 126, 2069–2087.
Feddersen, H., A. Navarra, and M. N. Ward, 1999: Reduction of model systematic error by
statistical correction for dynamical seasonal prediction. J. Climate, 12, 1974-1989.
Kaas, E., T.-S. Li, and T. Schmith, 1996: Statistical hindcast of wind climatology in the North
Atlantic and northwestern European region. Climate Res., 7, 97-110.
Kang, I.-S., J.-Y. Lee, and C.-K. Park, 2004: Potential predictability of a dynamical seasonal prediction system with systematic error correction. J. Climate, 17, 834-844.
Kharin, V. V. and Zwiers, F. W. 2002. Notes and correspondence: Climate predictions with
multi-model ensembles. J. Climate 15, 793–799.
Kharin, V. V., and F. W. Zwiers, 2003: Improved seasonal probability forecasts. J. Climate, 16, 1684–1701.
Krishnamurti, T. N. and Sanjay, J. 2003. A new approach to the cumulus parametrization issue.
Tellus 55A, 275–300.
Krishnamurti, T. N., Kishtawal, C. M., LaRow, T. E., Bachiochi, D. R., Zhang, Z. and co-authors.
1999. Improved weather and seasonal climate forecasts from multi-model superensemble.
Science 285, 1548–1550.
Krishnamurti, T. N., Kishtawal, C. M., Shin, D. W. and Williford, C. E. 2000b. Multi-model
superensemble forecasts for weather and seasonal climate. J. Climate 13, 4196–4216.
Krishnamurti, T. N., Kishtawal, C. M., Zhang, Z., LaRow, T. E., Bachiochi, D. R. and co-authors.
2000a. Improving tropical precipitation forecasts from a multi-analysis superensemble. J.
Climate 13, 4217–4227.
Min, Y.-M., V. N. Kryjov, and C.-K. Park, 2009: Probabilistic multimodel ensemble approach to seasonal prediction. Wea. Forecasting, 24, 812–828.
Min, Y.-M., and V. N. Kryjov, 2006: Development of APCC multi-model probabilistic forecast system. APCC Technical Report No. 1, Vol. 4, APCC Annual Report on Research and Development, 21–35.
Palmer, T. N., A. Alessandri, U. Andersen, P. Cantelaube, M. Davey, P. Délécluse, M. Déqué, E. Díez, F. J. Doblas-Reyes, H. Feddersen, R. Graham, S. Gualdi, J.-F. Guérémy, R. Hagedorn, M. Hoshen, N. Keenlyside, M. Latif, A. Lazar, E. Maisonnave, V. Marletto, A. P. Morse, B. Orfila, P. Rogel, J.-M. Terres, and M. C. Thomson, 2004: Development of a European multimodel ensemble system for seasonal-to-interannual prediction (DEMETER). Bull. Amer. Meteor. Soc., 85, 853–872.
Pavan, V. and Doblas-Reyes, J. 2000. Multi-model seasonal hindcasts over the Euro-Atlantic:
skill scores and dynamic features. Climatic Dyn. 16, 611–625.
Peng, P., Kumar, A., Van den Dool, A. H. and Barnston, A. G. 2002. An analysis of multi-model
ensemble predictions for seasonal climate anomalies. J. Geophys. Res. 107, 4710
Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, 1992: Numerical Recipes in Fortran. 2d ed. Cambridge University Press, 963 pp.
Stefanova, L. and Krishnamurti, T. N. 2002. Interpretation of seasonal climate forecast using
Brier skill score, FSU superensemble, and the AMIP-1 data set. J. Climate 15, 537–544.
Stephenson, D. B. and Doblas-Reyes, F. J. 2000. Statistical methods for interpreting Monte Carlo
ensemble forecasts. Tellus 52A, 300–322.
Taylor, J. R., 1997: An Introduction to Error Analysis: The Study of Uncertainties in Physical
Measurements, 2nd ed. Univ. Science Books, 327 p.
Yun, W.-T., Stefanova, L. and Krishnamurti, T. N. 2003. Improvement of the superensemble
technique for seasonal forecasts. J. Climate 16, 3834–3840.