Panel # 1
Draft Summary
6 slides of panel member responses (RL, MW, AW, DS)
1 general themes slide (RG, AK)
(Not a summary of the 12 excellent talks)
Current state of knowledge, specifically in relation to the key gaps (1 of 3)
• What kind of statistical analysis is appropriate when it is the combination of multiple variables that is extreme, rather than any single variable?
• Detection and attribution methods that are valid for extremes are needed
– Current D&A algorithms contain underlying normality assumptions that are not valid for extremes. These methods, including pattern fingerprinting techniques, need to be generalized to allow for extreme value distributions. (A minimal illustration follows this slide.)
• Covariates (which ones, and how to include them)
– What are the important covariates that will aid in understanding and quantifying mechanisms of change?
– How large-scale should these covariates be? (Patterns? Indices?)
• Further improving the fidelity of climate models (e.g., through improving resolution and model physics)
• Improving the use of climate models: challenge the conventional approach of feeding climate model outputs (with or without bias correction) to application models (crop or hydrologic). Ask how insights, rather than numbers, can be used to better understand climate change impacts and adaptation.
• More research is needed into how uncertainty information can be derived and utilized in various application sectors.
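
A minimal sketch of the normality problem raised above: with synthetic annual maxima (all numbers below are illustrative assumptions, not panel data), a normal fit and a GEV fit can disagree by orders of magnitude in the far tail.

```python
# Minimal sketch: normal vs. GEV fits to synthetic annual-maximum data.
# scipy parameterizes the GEV shape as c = -xi (sign flipped vs. common usage).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 60 years of annual-maximum daily precipitation (mm/day)
annual_max = stats.genextreme.rvs(c=-0.2, loc=50, scale=12, size=60,
                                  random_state=rng)

mu, sigma = stats.norm.fit(annual_max)            # normal-theory assumption
c, loc, scale = stats.genextreme.fit(annual_max)  # extreme value theory

x = 120.0  # a rare level, mm/day
print(f"P(X > {x:.0f}) normal: {stats.norm.sf(x, mu, sigma):.2e}")
print(f"P(X > {x:.0f}) GEV:    {stats.genextreme.sf(x, c, loc, scale):.2e}")
# The normal fit typically understates the probability of the far tail.
```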
Current state of knowledge, specifically in relation to the key gaps (2 of 3)
• Downscaling (statistical, dynamical) coupled with hydrologic simulation is not wholly satisfactory for estimating future hydrologic extremes. RCMs yield valuable information but:
– still manifest prohibitive climatology errors;
– runs are too short and too few for some applications, and for full uncertainty estimation.
• Hydrologic models are very sensitive to forcing variation -> a challenge for downscaling. The need to reproduce realistic space-time patterns and magnitudes has led to a focus on resampling and analogue methods.
• Statistical downscaling techniques have typically worked with mean GCM projection outputs (e.g., monthly temperature) and tried to reconstruct variability (including extremes)
– the most common approaches typically involve resampling, rescaling, and analogues (a quantile-mapping sketch follows this slide)
– more sophisticated approaches (e.g., stochastic downscaling) have typically been too univariate (e.g., precipitation only, temperature only)
• Stakeholders demand 'proof' of successful climate model simulation of historical hydrology, a difficult standard to meet.
• Downscaling methods have relied too heavily on climate model precipitation and temperature, which are known to be less well simulated than other climate system variables.
• Observed hydro-climatology datasets are many but are poorly organized, poorly quality controlled, and of inadequate temporal and spatial extent.
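
One common concrete form of the 'rescaling' mentioned above is empirical quantile mapping. A minimal sketch under assumed synthetic gamma-distributed precipitation (arrays and parameters are illustrative, not from any study discussed here):

```python
# Minimal empirical quantile-mapping sketch (one common bias correction).
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Map future model values through the historical model -> obs quantile relation."""
    q = np.searchsorted(np.sort(model_hist), model_fut) / len(model_hist)
    return np.quantile(obs_hist, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(1)
obs_hist = rng.gamma(shape=0.8, scale=6.0, size=3650)    # hypothetical observed precip
model_hist = rng.gamma(shape=0.8, scale=4.5, size=3650)  # model too dry in hindcast
model_fut = rng.gamma(shape=0.8, scale=5.0, size=3650)   # future model output
corrected = quantile_map(model_hist, obs_hist, model_fut)
print(obs_hist.mean(), model_fut.mean(), corrected.mean())
```

Note the non-stationarity caveat raised elsewhere by this panel: such a mapping assumes the historical model-to-observation quantile relation still holds in the future.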
Current state of knowledge, specifically in relation to the key gaps (3 of 3)
• Insufficient engagement of statisticians with climate scientists.
• Climate scientists need more training in the proper use of statistical models.
• Statistical models need to be more physically motivated and related to key processes (e.g., how to include covariates in EVT models; a minimal sketch follows this slide).
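
A minimal sketch of one way to include a covariate in an EVT model, as suggested above: let the GEV location parameter depend linearly on a covariate (here a synthetic warming index; all data and starting values are illustrative assumptions) and fit by maximum likelihood.

```python
# Minimal nonstationary GEV sketch: location mu = b0 + b1 * covariate,
# fitted by maximizing the GEV log-likelihood. Data are synthetic.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
years = np.arange(1951, 2021)
covariate = 0.01 * (years - years[0])            # e.g., a warming index
annual_max = stats.genextreme.rvs(c=-0.1, loc=30 + 8 * covariate, scale=4,
                                  size=years.size, random_state=rng)

def neg_log_lik(params):
    b0, b1, log_scale, c = params
    mu = b0 + b1 * covariate
    return -stats.genextreme.logpdf(annual_max, c, loc=mu,
                                    scale=np.exp(log_scale)).sum()

res = optimize.minimize(neg_log_lik, x0=[30.0, 0.0, np.log(4.0), -0.1],
                        method="Nelder-Mead")
b0, b1, log_scale, c = res.x
print("fitted trend in GEV location per unit covariate:", round(b1, 2))
```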
Key capabilities & limitations in simulating or analyzing extreme events (1 of 2)
• What are the mechanisms of change for extreme precipitation?
– Clausius-Clapeyron scaling, or changes in convergence, or both? (A worked scaling example follows this slide.)
• Higher resolution can improve simulation of features that play a key role in several types of extremes (e.g., FCs).
• RCMs can include significant local forcings (topography, lakes) in simulating the mean.
• Regional model skill depends on boundary conditions provided by global models.
– Some extreme events have a large-scale context.
– RCM simulation of extreme events is limited by the large scale simulated by global models, which depends on both model resolution and model physics.
• Regional models require significant computational resources, so
– limitations in the number of regional scenarios generated by regional models;
– limitations in the length of the simulations (often time slices are produced rather than continuous century-long simulations).
• Hydrologic models are effective at representing variability in hydrologic systems, including reproducing and predicting extremes (droughts, floods), given good-quality meteorological forcings. But high resolution is wanted.
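
For context on the Clausius-Clapeyron mechanism flagged above: saturation vapor pressure rises roughly 6-7% per kelvin near typical surface temperatures. A quick check using the standard August-Roche-Magnus approximation (constants are the usual published ones):

```python
# Clausius-Clapeyron check: ~6-7% increase in saturation vapor pressure per K
# (August-Roche-Magnus approximation).
import numpy as np

def e_sat(t_c):
    """Saturation vapor pressure (hPa) at temperature t_c (deg C)."""
    return 6.112 * np.exp(17.62 * t_c / (243.12 + t_c))

for t in (0.0, 15.0, 30.0):
    pct_per_k = (e_sat(t + 1.0) / e_sat(t) - 1.0) * 100
    print(f"T = {t:4.1f} C: e_sat = {e_sat(t):6.2f} hPa, +{pct_per_k:.1f}% per K")
```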
Key capabilities & limitations in simulating or analyzing extreme events (2 of 2)
• No inferential framework has yet been developed for making probabilistic statements about real-world extremes from climate model simulations.
– Need to statistically model the errors climate models make in simulating extremes.
– The framework is needed to make probabilistic statements about future extremes or to make detection and attribution statements about changes in extremes. (Normal OLS regression is not well specified for extremes.)
• A well-specified probability model is needed to fit climate model data in order to make quantitative risk statements (a return-level sketch follows this slide).
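
One building block for such quantitative risk statements is the return level implied by a fitted GEV, with sampling uncertainty from a parametric bootstrap. A minimal sketch on synthetic annual maxima (all parameter values are illustrative assumptions):

```python
# Minimal return-level sketch: fit a GEV to synthetic annual maxima and report
# the 50-year return level with a parametric-bootstrap interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
annual_max = stats.genextreme.rvs(c=-0.15, loc=40, scale=6, size=80,
                                  random_state=rng)

c, loc, scale = stats.genextreme.fit(annual_max)
T = 50.0  # return period in years
level = stats.genextreme.isf(1.0 / T, c, loc, scale)

boot = []
for _ in range(500):
    resample = stats.genextreme.rvs(c, loc, scale, size=annual_max.size,
                                    random_state=rng)
    cb, lb, sb = stats.genextreme.fit(resample)
    boot.append(stats.genextreme.isf(1.0 / T, cb, lb, sb))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"50-year return level: {level:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```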
Other points? (1 of 2)
• Heat/cold waves may be easier than precipitation to simulate.
• Biases need to be considered carefully (e.g., too hot => soil moisture off, creating a positive feedback).
• Need more vigorous and systematic comparisons of statistical and dynamical downscaling approaches and bias correction methods
– to provide guidance on the advantages/limitations of different approaches (particularly their impacts on extreme events), and
– implications for various applications.
• Better documentation of model skill in simulating different types of extreme events
• Process-based analyses of how and why extreme events may change are needed
– will heat waves (say) increase because of increased frequency of stagnation, or simply because the whole PDF is shifted by a few degrees?
– are there any new mechanisms of extreme events under climate change, or does the same mechanism simply occur more frequently or with higher intensity?
– will any region experience new types of extremes not seen in the past?
• Extreme value theory (statistical modeling of changes in hydrologic events) may offer a more direct alternative for estimating extreme risk in many cases (a peaks-over-threshold sketch follows this slide).
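
A minimal sketch of the kind of direct EVT risk estimate meant above, using a peaks-over-threshold fit of a generalized Pareto distribution to synthetic daily streamflow (the threshold choice and all numbers are illustrative assumptions):

```python
# Minimal peaks-over-threshold sketch: fit a generalized Pareto distribution
# (GPD) to exceedances of a high threshold in synthetic daily streamflow.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
flow = rng.gamma(shape=2.0, scale=50.0, size=365 * 30)  # hypothetical m^3/s

u = np.quantile(flow, 0.98)              # high threshold (98th percentile)
excess = flow[flow > u] - u              # threshold exceedances
c, _, scale = stats.genpareto.fit(excess, floc=0)

x = 500.0                                # a rare flow level
p_exceed_u = (flow > u).mean()
p = p_exceed_u * stats.genpareto.sf(x - u, c, loc=0, scale=scale)
print(f"Estimated P(flow > {x:.0f}) on a given day: {p:.2e}")
```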
Other points? (2 of 2)
• EVT has been underexploited in climate science; few studies use EVT to model extreme storms.
• Collective risk due to multiple extreme events is an area needing more attention; societal losses aggregate over events (a compound-risk sketch follows this slide).
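
One simple way to frame collective risk is as an aggregate annual loss: a random number of events per year, each with a random loss, summed. A Monte Carlo sketch under purely hypothetical parameters (event rate and loss distribution are assumptions):

```python
# Minimal collective-risk sketch: annual aggregate loss as a compound Poisson
# sum (random event count, heavy-tailed loss per event). Parameters are
# hypothetical.
import numpy as np

rng = np.random.default_rng(5)
lam = 3.0           # assumed mean number of damaging events per year
n_years = 50_000    # Monte Carlo sample size

counts = rng.poisson(lam, size=n_years)
annual_loss = np.array([rng.lognormal(mean=2.0, sigma=1.0, size=n).sum()
                        for n in counts])

print("mean annual aggregate loss:", round(annual_loss.mean(), 1))
print("1-in-100-year aggregate loss:", round(np.quantile(annual_loss, 0.99), 1))
```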
General themes (models & stats)
1. Both modelers and statisticians would benefit from greater interaction
   1. Need better training of modelers in the proper use of extreme-value statistical techniques (AR5, type of tail, etc.)
   2. EVT models need a better physical basis (covariates, resolving key thresholds, spatial dependence, nonstationarity)
2. Dealing with model biases
   1. How to adjust? Using reanalysis data, but do those adjustments apply in the future? (Non-stationarity)
   2. How to validate against extremes? (Check patterns associated with extreme events?)
   3. What size of correction is 'too big'? How does it propagate: GCM -> RCM -> application?
3. Multivariate extreme statistics are needed (consider variables in rare combination)
   1. Use key values and patterns from multiple sources (TCs, atmospheric rivers, heat waves, etc.) => new indices
   2. Does time or length scale matter? Does nonstationarity? Separating the natural signal from the climate change signal. (A joint-exceedance illustration follows this list.)
4. How to do detection and attribution of extremes
   1. Need detection and attribution algorithms (non-normal) for extremes
   2. Is the data (including observations) good enough? (Length of record, instrument problems?)
   3. Internal atmospheric and intra-model variability and broad uncertainty ranges -> need tools to separate 'downscaling climate noise' from the signal
5. Gap between model products and user needs
   1. Uncertainty vs. confidence is unclear; probabilities instead of categories; using cost information
   2. Model products are intended for gaining insights but are used as drivers for application models; some uses may be inappropriate (e.g., poor interpolation from large scale to the small scale of the application)
   3. Climate models don't produce probabilities; statistical frameworks need to be developed for that
   4. Collective risk from multiple extreme events may be more relevant
6. Need a framework for vetting models
   1. Which models do what well? Relative weighting when aggregating for a specific extreme event type
   2. Set up independent tests and specific case studies; verify land surface outputs
   3. Handling grid 'boxes' versus point station values
   4. Are differences (NARCCAP) significant? How to assess credibility in simulating specific extremes?
7. Modelers are doing well at getting means right, but
   1. not equally well at all scales; variance and extreme values may be less well handled
   2. key products for users are often the least well simulated (surface values, soil moisture, precipitation)
   3. RCMs need better boundary conditions from the global models (seasonal/diurnal cycles, TC forcing, etc.)
   4. some extremes are handled better than others; do those first? (Target a model to a type of extreme?)
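
A minimal empirical illustration of theme 3 (variables in rare combination): under dependence, the probability of joint exceedance can be many times what independent marginals would imply. The correlated indices below are synthetic assumptions.

```python
# Minimal multivariate-extremes illustration: joint exceedances under
# dependence vs. the independence assumption. Data are synthetic.
import numpy as np

rng = np.random.default_rng(6)
cov = [[1.0, 0.6], [0.6, 1.0]]  # assumed correlation between the two indices
heat, drought = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T

t = 2.0  # both indices simultaneously above two standard deviations
p_joint = np.mean((heat > t) & (drought > t))
p_indep = np.mean(heat > t) * np.mean(drought > t)
print(f"joint: {p_joint:.2e}  independent: {p_indep:.2e}  "
      f"ratio: {p_joint / p_indep:.1f}x")
```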
Panel 1 – Summary
• Models can provide information about climate extremes under a changing climate, but how can the credibility of this information be validated? Is the observational data good enough? And for what variables? How should the model credibility issue be approached?
• Validating models: Need to develop techniques to validate
grid scales (for climate models) vs. point observations.
• Is validating large scale conditions (simulated by climate
models) that lead to climate extremes a viable approach?
• Use of information from multi-models – How to best combine
the information and convey to users? There is a need to
develop a statistical framework for doing this.
• There is a need to develop covariate analysis of climate extremes. How large-scale? (Patterns? Indices? Variables?)
• Model calibrations – different downscaling techniques can lead to different outcomes. Further research into which approach to follow is required.
Panel 1 – Summary
• Downscaling (dynamical and statistical) – what is the value-added information? Is there a consensus about it?
• A basic requirement is knowing the observed PDF of basic quantities, but the observational database is not long enough. How can this issue be overcome?
• There is a basic need to connect modelers/climate
experts with statisticians
• Attribution analysis of climate extremes needs to be pursued; focus on some specific past case studies.
• Information about which climate extremes are
important from the POV of adaptation/application
needs to be developed.