PREDICTABILITY OF CLIMATE CHANGE

by SUSANNA CORTI*

* Istituto di Scienze dell'Atmosfera e del Clima (ISAC), Consiglio Nazionale delle Ricerche (CNR) - [email protected]

1. Introduction

"One degree and we're done for." This headline, followed by an alarming text based on brand-new scientific findings, appeared in New Scientist in September 2006. Scientists at NASA's Goddard Institute for Space Studies had analysed global temperature records and found that surface temperatures have been increasing by an average of 0.2°C per decade for the past 30 years. As shown in Figure 1, warming is greatest at high latitudes of the Northern Hemisphere, particularly in the sub-Arctic boreal forests of Siberia and North America.

Is this a cause for concern? After all, newspapers and magazines always carry plenty of sensationalist headlines about (real and fictitious) climate changes, including the global cooling and incoming ice age envisaged in the 1970s (see for example the "Cooling world" article published in Newsweek in September 1975) and the Pentagon's secret report leaked to the press in February 2004, which warned that Britain would be plunged into a "Siberian" climate by 2020 as a (paradoxical) consequence of rising global temperatures. Now we know that the cooling envisaged in the 1970s not only turned out to be a false alarm, but that it was precisely in those years that the global average temperature started to climb faster than in any other period of the last 150 years (see Fig. 2). However, we will have to wait about a decade to verify the apocalyptic prediction made by the Pentagon's experts, and a little longer if we would like to verify the predictions made within the IPCC framework for the 2050s and 2100s.

In the light of these considerations, it may seem quite challenging to detect the real truth buried under all these contradictory claims. One might be tempted to give up, considering it more advisable not to believe any of the scary claims of science and/or the press and simply to carry on with life "business as usual". And this is indeed what most people do. Could they do better? Is there a "not too complicated" way to solve this signal-to-noise-like problem? How can we check the feasibility and reliability of climate predictions? Do climate predictions have any skill at all?

In the following sections I address the matter of climate (and weather) predictability, trying to highlight what we should (and should not) expect from climate predictions. Simulations of the mean global temperature trend during the 20th century are presented in section 2. Section 3 discusses the difference between climate and weather predictions. In section 4 we consider climate change predictions and model developments. The roles of deterministic chaos and of flow regimes in weather and climate predictability are presented in sections 5 and 6 respectively. Concluding remarks are made in section 7.

2. Could we have predicted that curve?

Figure 2 shows the instrumental record of global average temperatures from 1850 to 2006. Eleven of the last twelve years (1995-2006) rank among the twelve warmest years in the instrumental record. The linear warming trend over the last 50 years (about 0.13°C per decade) is nearly twice that of the last 100 years, and the total temperature increase from 1850-1899 to 2001-2005 is about 0.76°C.
Warming of the climate system appears unequivocal, as is now evident from observations of increases in global average air and ocean temperatures. However, the signal was more uncertain ten years ago, and quite indistinguishable from noise until the late 1980s. So much for the observational point of view; but what about prediction? Could we have been able, say 150 years ago, to predict the global mean surface temperature of the 20th century?

Let us suppose that a team of scientists in 1850 had decided to carry out a bunch (the fancy word "ensemble" was not yet in vogue) of climate predictions for the next 150 years. Let us suppose also that they knew the physical laws which determine the evolution of the climate system, and how to formulate a comprehensive mathematical model based on these laws using finite truncations of the partial differential equations. Let us suppose, finally, that in this hypothetical past world high-performance computers were available to carry out future climate simulations. The question is: would those predictions have been successful?

Figure 3 provides a hint towards the correct answer. Here the results of four different 150-year simulations of global mean surface temperature starting in 1850 (black lines), carried out with a coupled ocean-atmosphere general circulation model, are compared with the observed record (red line). In panel (a) the model was forced with natural forcings (solar variability and volcanic eruptions) only; conversely, only anthropogenic forcings (greenhouse gases, tropospheric and stratospheric ozone, and the direct and indirect effects of sulphate aerosols) were taken into account in the integrations shown in panel (b). Panel (c) shows the result when both natural and anthropogenic forcings are included.

Fig. 1: Difference in instrumentally determined surface temperatures between the period January 1995 through December 2004 and "normal" temperatures at the same locations, defined as the average over the interval January 1940 to December 1980. The average increase on this graph is 0.42°C. The plot is based on the NASA GISS Surface Temperature Analysis (GISTEMP), which combines the 2001 GISS land-station analysis data set (Hansen et al. 2001) with the Rayner/Reynolds oceanic sea surface temperature data set (Reynolds et al. 2002).

Fig. 2: Instrumental record of global average temperatures as compiled by the Climatic Research Unit of the University of East Anglia and the Hadley Centre of the UK Meteorological Office. The HadCRUT3 data set was used; the most recent documentation for this data set is Brohan et al. (2006). Following the common practice of the IPCC, the zero on this figure is the mean temperature over 1961-1990.

Fig. 3: Global mean surface temperature anomalies relative to the 1880-1920 mean from the instrumental record, compared with ensembles of four simulations with a coupled ocean-atmosphere climate model forced (a) with solar and volcanic forcing only, (b) with anthropogenic forcing including well-mixed greenhouse gases, changes in stratospheric and tropospheric ozone and the direct and indirect effects of sulphate aerosols, and (c) with all forcings, both natural and anthropogenic. The thick red line shows the instrumental data, while the thin black lines show the individual model simulations of the four-member ensemble. Note that the data are annual mean values. Copyright: figure from Intergovernmental Panel on Climate Change, Third Assessment Report, Technical Summary of Working Group I Report, 2001.
It appears evident that only with the inclusion of both natural and anthropogenic forcings is the model able to reproduce much of the observed decadal-scale variation in global mean temperature over the entire 20th century. Natural forcing alone cannot account for the warming of recent decades; similarly, anthropogenic forcing alone is insufficient to explain the warming from 1910 to 1945, but it is necessary to reproduce the warming since 1976. This result, highlighted in the 2001 IPCC Third Assessment Report, is of great worth for at least two reasons. First, it indicates that the mean temperature trend (and therefore current conditions) cannot be explained without including greenhouse-gas forcing; this makes a solid case that what is happening is in large part anthropogenic. Second, it shows that climate predictions (at least predictions of global mean temperature) can be successful, provided that the time evolution of all the external forcings affecting the climate system is known with acceptable accuracy.

In the light of these arguments we can now answer the previous question: yes, a hypothetical climate modeller living in the 1850s could in principle have predicted global warming, had he or she known in advance the rate of increase of greenhouse gases and aerosols, together with the timing of volcanic eruptions and the variations in solar activity.
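To make the link between prescribed forcing and predicted temperature concrete, the sketch below integrates a zero-dimensional energy balance model driven by a prescribed forcing history. It is only a minimal illustration of the principle, not the coupled ocean-atmosphere model used for Figure 3, and all numerical values (the heat capacity C, the feedback parameter lambda_, and the invented forcing series with its two volcanic spikes) are hypothetical placeholders chosen for readability.

```python
import numpy as np

# Zero-dimensional energy balance model:
#   C * dT/dt = F(t) - lambda_ * T
# where T is the global mean temperature anomaly (K), F(t) the prescribed
# radiative forcing (W m^-2), C an effective heat capacity and lambda_ the
# climate feedback parameter. All numbers below are illustrative only.

C = 8.36e8               # effective heat capacity (J m^-2 K^-1), ~200 m of ocean
lambda_ = 1.2            # feedback parameter (W m^-2 K^-1)
dt = 86400.0 * 365.25    # one-year time step (s)

years = np.arange(1850, 2001)

# Hypothetical forcing history: a slow anthropogenic ramp plus two
# short-lived negative spikes standing in for volcanic eruptions.
F_anthro = 2.5 * ((years - 1850) / 150.0) ** 2
F_volcanic = np.zeros_like(F_anthro)
F_volcanic[years == 1883] = -3.0     # "Krakatoa-like" eruption
F_volcanic[years == 1991] = -3.0     # "Pinatubo-like" eruption
F_total = F_anthro + F_volcanic

def integrate(forcing):
    """Step the energy balance model forward with a simple Euler scheme."""
    T = np.zeros(forcing.size)
    for i in range(1, forcing.size):
        dTdt = (forcing[i - 1] - lambda_ * T[i - 1]) / C
        T[i] = T[i - 1] + dTdt * dt
    return T

T_natural = integrate(F_volcanic)    # natural forcing only
T_anthro = integrate(F_anthro)       # anthropogenic forcing only
T_all = integrate(F_total)           # both together, as in Fig. 3c

for label, T in [("natural only", T_natural),
                 ("anthropogenic only", T_anthro),
                 ("all forcings", T_all)]:
    print(f"{label:20s} warming 1850-2000: {T[-1]:+.2f} K")
```

Running it with the natural-only, anthropogenic-only and combined forcing series mimics, in spirit, the three panels of Fig. 3: the simulated warming depends entirely on which forcings are supplied.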
This positive answer raises another (tricky) question, though: how can we predict the climate of the next 50 or 100 years when weather forecasts become inaccurate after just a few days? To tackle the problem, we first need some useful definitions, which are given in the next section.

3. Weather, Climate, Prediction and Predictability

Weather is identified with the complete state of the atmosphere at a particular instant. Weather prediction is then the process of determining how the weather will change as time advances, and weather predictability assesses whether and how (i.e. how long in advance and with what kind of skill) such predictions are feasible. Climate may be identified with the set of statistics of an ensemble of many different states of the atmosphere over a long time span. Climate prediction then becomes the process of determining how these statistics will change as the beginning and the end of the time span advance, and climate predictability is concerned with whether such a prediction is possible.

Following the definition given by Lorenz (1975), we shall refer to the weather and climate prediction (and predictability) just introduced as prediction (and predictability) of the first kind. Predictions of the first kind are essentially initial-value problems, and predictability of the first kind is therefore concerned with how uncertainties in the initial state evolve during the forecast and limit its skill. By contrast, forecasts which do not depend on the initial conditions, for example predictions of changes in the statistics of climate as a result of some prescribed imposed perturbation, constitute predictions of the second kind.

In a prediction of the second kind, we estimate how (the attractor of) a given dynamical system, for example the climate system, responds to a change in some prescribed parameter or variable. Uncertainties in such predictions may arise from the accuracy of the prescribed change itself, or from uncertainties in the model formulation. A weather forecast is clearly a prediction of the first kind; so is a forecast of El Niño. By contrast, estimating the effects on climate of a prescribed volcanic emission, or of prescribed anthropogenic changes in atmospheric composition, constitutes a climate prediction of the second kind.

This distinction between predictability of the first and second kind is useful but, in practice, many forecasts do not fall exclusively into one of the two categories. The predictions shown in Figure 3 are a good example. On the one hand, they depend on the atmospheric and oceanic initial values and are concerned with the chronological order in which climate states occur (Figure 3 shows the evolution of the annual global mean temperature). On the other hand, the forcing provided by natural and anthropogenic causes is sufficiently strong to overcome the possible sensitive dependence on initial conditions. Because they start from slightly different initial conditions, the four integrations do follow different trajectories; however, these trajectories are close to each other and, when the correct forcing is applied (i.e. in panel (c)), they evolve consistently with the observed record. In other words, here the forcing is strong enough to wipe out any significant uncertainty due to the initial conditions. Therefore, in this hybrid case, as in predictions of the second kind, predictability arises mostly from the accuracy of the prescribed changes in the external forcing (even though the initial conditions cannot be totally neglected).

4. Climate Change Predictions

In order to make quantitative projections of future climate change, it is necessary to use climate models that simulate all the important processes governing the future evolution of the climate. For climate simulation, the major components of the climate system must be represented in sub-models (atmosphere, ocean, land surface, cryosphere and biosphere), along with the processes that go on within and between them. Comprehensive climate models are based on physical laws represented by mathematical equations that are solved on a three-dimensional grid over the globe. In the atmospheric module, for example, equations are solved that describe the large-scale evolution of mass, momentum, heat and moisture; similar equations are solved for the ocean. Within the IPCC framework these models are used with future scenarios of forcing agents (e.g. greenhouse gases and aerosols) as input to produce a suite of projected future climate changes that illustrates the possibilities that could lie ahead.

Figure 4 shows the projected temperature changes for the early and late 21st century for the B1, A1B and A2 scenarios¹. The central and right panels show the Atmosphere-Ocean General Circulation multi-model average projections for the B1 (top), A1B (middle) and A2 (bottom) SRES scenarios, averaged over the decades 2020-2029 (centre) and 2090-2099 (right). In each case greater warming over most land areas is evident. Over the ocean, warming is relatively large in the Arctic and along the equator in the eastern Pacific.
The left panel shows the corresponding probability density functions (PDFs), which give a measure of the uncertainty associated with the global average temperature change. Two key points emerge from these probability estimates. For the projected 2020-2029 warming (i) there is more agreement among models and methods (narrow PDFs) than later in the century (wider PDFs), and (ii) the warming is similar across the different scenarios, whereas later in the century the choice of scenario significantly affects the projections.

¹ The B1 storyline and scenario family describes a world with a global population that peaks in mid-century and declines thereafter, and with rapid changes in economic structures toward a service and information economy, with reductions in material intensity and the introduction of clean and resource-efficient technologies. The A1B scenario describes a future world of very rapid economic growth, a global population that peaks in mid-century and declines thereafter, and the rapid introduction of new and more efficient technologies with a balance across all energy sources. The A2 scenario describes a very heterogeneous world; the underlying theme is self-reliance and preservation of local identities. Fertility patterns across regions converge very slowly, resulting in a continuously increasing population, while economic development is primarily regionally oriented and per capita economic growth and technological change are more fragmented and slower than in the other storylines.

Fig. 4: Projected surface temperature changes for the early and late 21st century relative to the period 1980-1999. The central and right panels show the Atmosphere-Ocean General Circulation multi-model average projections for the B1 (top), A1B (middle) and A2 (bottom) SRES scenarios averaged over the decades 2020-2029 (centre) and 2090-2099 (right). The left panel shows the corresponding uncertainties as the relative probabilities of estimated global average warming from several different AOGCM and EMIC studies for the same periods. Copyright: figure from IPCC, 2007, Fourth Assessment Report, Summary for Policymakers.

The surface temperature changes depicted in Fig. 4 are essentially predictions of the second kind: they predict how a statistical property of the climate system (here the global mean temperature) changes as the composition of the atmosphere is altered in a given way (each scenario representing a different possibility). These predictions have been carried out using a number of comprehensive, state-of-the-art Atmosphere-Ocean General Circulation Models (AOGCMs). Climate models have developed over the past few decades as computing power has increased. During that time, models of the main components (atmosphere, land, ocean and sea ice) have been developed separately and then gradually integrated. Figure 5 shows the past, present and near-future evolution of climate models.

Fig. 5: The development of climate models over the last 25 years, showing how the different components were first developed separately and later coupled into comprehensive climate models. Copyright: figure from Intergovernmental Panel on Climate Change, Third Assessment Report, Technical Summary of Working Group I Report, 2001.
Currently, the resolution of the atmospheric part of a typical model is about 100-200 km in the horizontal and about 1 km in the vertical above the boundary layer. The resolution of a typical ocean model is about 200 to 400 m in the vertical, with a horizontal resolution of about 100 to 250 km. Many physical processes, such as those related to clouds or to ocean convection, take place on spatial scales much smaller than the model grid and therefore cannot be resolved explicitly. Their average effects are included approximately, in a simple way, by exploiting physically based relationships with the larger-scale variables; this technique is known as parameterization.

In the future, as more computing power becomes available, climate models will incorporate further interactive components (or modules), and their spatial resolution will increase so as to resolve explicitly physical processes which are now parameterised. The ultimate aim is, of course, to model as much as possible of the whole of the Earth's climate system, so that all the components can interact and the predictions of climate change continuously take into account the effects of the feedbacks among components. These all-encompassing models are called Earth System models. Because they provide a better representation of the climate system, Earth System models are expected to provide more accurate climate forecasts. How accurate? Would they be able to produce a deterministic, detailed (and reliable) long-range prediction of the first kind? Could they, for example, forecast what the weather will be like on 1 March 2020? Or will one, even with the best Earth System model, have to be content with predicting changes in the statistics, as in Figure 4? The answer is to be found in the next section.

5. "One flap of a sea-gull's wing may forever change the future course of the weather" (Edward Lorenz)

We introduce the notion of "determinism à la Laplace" starting from Karl Popper's 1965 essay "Of Clouds and Clocks". In this essay clouds represent physical systems "which are highly irregular, disorderly, and more or less unpredictable", whereas clocks represent systems "which are regular, orderly and highly predictable in their behavior". The central thesis of the proponents of determinism is that all clouds are clocks. In other words, the distinction between clouds and clocks is not based on their intrinsic nature but on our lack of knowledge: if only we knew as much about clouds as we do about clocks, clouds would be just as predictable as clocks. Or, in climatological terms, a perfect model of all the components of the climate system, initialised and forced with perfect data at infinite resolution and run on an infinitely powerful computer, should in principle produce a perfect forecast with an unlimited range of validity. Popper's aim was to demolish this extreme deterministic proposition. Overall his reasoning is quite convincing; however, it would have been much easier for him to dismantle the determinists' "staggering proposition" had he known Edward Lorenz's 1963 paper on deterministic, nonperiodic flow. Lorenz discovered that, despite determinism, predictions of the first kind cannot be extended indefinitely into the future.
For suitable parameter values, his model equations (shown next to Fig. 6), which contain two essential ingredients, instability and nonlinearity, give rise to the phenomenon referred to as "sensitivity to initial conditions": small variations in the initial condition produce large variations in the long-term behaviour of the system. The effective forecast range of such a system is therefore finite. Some years later (Lorenz 1969) it became clear that the predictability of dynamical systems which possess many scales of motion (like the atmosphere) is limited by the typical life span of their most energetic phenomena. The "prediction horizon" of midlatitude weather is comparable to the average life span of extratropical cyclones: one to two weeks.

The 1963 Lorenz model can be considered a drastically simplified version of the full fluid-dynamical equations which retains their nonlinearity and instability. The model consists of a system of three differential equations in three variables. A state of instantaneous weather can therefore be represented by a point in a three-dimensional phase space, and the evolution of the weather with time can be represented by a line (or by a succession of nearby points) in this space. The climate of the model, the set of all possible model weather states, is known as the Lorenz attractor (see Fig. 6).

Fig. 6: Lorenz attractor coloured with the rate of error growth. Blue: decay; green: low growth; yellow and purple: average growth; red: high growth. Copyright: figure from Carrassi 2002 (tesi di laurea available at the University of Ferrara).

Fig. 7: Phase-space evolution of an ensemble of initial points on the Lorenz attractor, for three sets of initial conditions. The attractor itself is shown for reference. Copyright: figure from Palmer (1993).

This attractor has no volume in the three-dimensional phase space, yet it is neither a simple one-dimensional line nor a smooth two-dimensional surface. The attractor has a fractional dimension (about 2.06) and therefore, not surprisingly, carries the epithet "strange". It is one of a generic class of strange attractors whose topology characterizes the chaotic, unpredictable properties of the underlying equations. The Lorenz model shares many qualitative similarities with the large-scale atmosphere. One of these is the existence of a regime structure; another is the variation of predictability around the attractor. To illustrate the second property, Fig. 7 shows the attractor with three ensemble predictions (of the first kind) superimposed, started from different parts of the attractor. Each ensemble of initial values is shown as a small black ring of points, representing an uncertainty in the initial conditions of the forecast. In the top panel of Fig. 7 all members of the ensemble integration make the transition from the left to the right regime (the inhomogeneous regime structure is represented by the two "butterfly wings"); the regime transition is therefore very predictable. In the bottom-left panel the forecast ensemble diverges more rapidly: there is about a 60% chance that no regime transition will occur, and about a 40% chance that one will. In the final example (bottom-right panel) the forecast dispersion is large, and the forecast evolution is essentially unpredictable. This property of variable predictability around the attractor is also illustrated in Fig. 6, where different rates of error growth are shown by colours.
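The flow-dependent predictability illustrated in Figs. 6 and 7 is easy to reproduce numerically. The sketch below (a minimal Python illustration, not the code used to produce those figures) integrates the Lorenz (1963) equations with the classical parameter values for a small ensemble of slightly perturbed initial conditions and measures how quickly the ensemble spreads when it is started from two different regions of the attractor; the particular starting points, ensemble size and perturbation amplitude are arbitrary choices made for the illustration.

```python
import numpy as np

# Lorenz (1963) system with the classical parameters:
#   dX/dt = sigma * (Y - X)
#   dY/dt = X * (rho - Z) - Y
#   dZ/dt = X * Y - beta * Z
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0


def lorenz_rhs(state):
    x, y, z = state
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])


def rk4_step(state, dt=0.01):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(state)
    k2 = lorenz_rhs(state + 0.5 * dt * k1)
    k3 = lorenz_rhs(state + 0.5 * dt * k2)
    k4 = lorenz_rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)


def spin_up(state, n_steps=5000):
    """Integrate long enough that the trajectory settles onto the attractor."""
    for _ in range(n_steps):
        state = rk4_step(state)
    return state


def ensemble_spread(centre, n_members=20, n_steps=500, eps=1e-3, seed=0):
    """RMS distance of the ensemble members from the ensemble mean over time."""
    rng = np.random.default_rng(seed)
    members = centre + eps * rng.standard_normal((n_members, 3))
    spreads = []
    for _ in range(n_steps):
        members = np.array([rk4_step(m) for m in members])
        deviations = members - members.mean(axis=0)
        spreads.append(np.sqrt((deviations ** 2).sum(axis=1).mean()))
    return np.array(spreads)


# Two starting points on the attractor, obtained by spinning up from
# arbitrary initial states; predictability generally differs between them.
start_a = spin_up(np.array([1.0, 1.0, 20.0]))
start_b = spin_up(np.array([-5.0, -6.0, 22.0]))

for name, start in [("start A", start_a), ("start B", start_b)]:
    spread = ensemble_spread(start)
    print(f"{name}: initial spread {spread[0]:.4f}, "
          f"spread after 500 steps {spread[-1]:.2f}")
```

Repeating the experiment from different spin-up states gives different growth rates of the ensemble spread, which is the toy-model analogue of the flow-dependent predictability seen in Fig. 7.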
Predictability in the atmosphere, as in the Lorenz model, is associated with the local instabilities of the flow: it varies around the (unknown) atmospheric attractor. If we knew the structure of the atmospheric attractor and its error-growth properties as we know those of the Lorenz model, we could forecast the forecast skill, that is, give an a priori estimate of confidence in a forecast of the first kind. In practice, the problem of forecasting the uncertainty of weather and climate predictions is addressed using ensemble techniques: multiple integrations of the governing equations from perturbed initial conditions, using multiple models and/or stochastic parameterizations to represent model uncertainty. This technique, known as "ensemble prediction", is applied to predictions of both the first and the second kind. The climate hindcasts and forecasts shown in Figs. 3 and 4 are examples of ensemble predictions.

6. Flow regimes and climate change

The atmosphere can be regarded as a dynamical system with an infinite number of degrees of freedom. If one considers the whole spectrum of atmospheric phenomena, covering different spatial and temporal scales, the number of states that can be assumed by the atmospheric variables is indeed infinitely large. However, restricting attention to the large-scale features of the flow, many statistical analyses of the observed record suggest the existence of preferred circulation patterns that appear to be particularly recurrent and/or persistent. From a popular perspective, the atmosphere (especially in the extratropics during the cold season) exhibits "spells of weather" characterised either by a run of similar weather systems (i.e. baroclinic disturbances and their associated weather) or by an extended period marked by the absence of weather systems. More precisely, there is evidence of flow regimes characterised by persistence on timescales much longer than that of an individual weather system, but with transitions between regimes occurring on the faster timescale of baroclinic instability.

Fig. 8: Left panel: atmospheric state-vector PDF based on monthly mean 500-hPa geopotential height in a reduced two-dimensional phase space, using data from the period 1949-1994. There are four maxima, labelled A, B, C and D. Right panel: geographical patterns of the four atmospheric regimes, shown as the distribution of the 500-hPa geopotential height anomaly associated with clusters A (the "cold ocean-warm land" regime), B, C and D (the "Arctic Oscillation" regime). Contour interval: 10 m. Copyright: figure from Corti et al. 1999.

Examples of these recurrent flow patterns are presented in Fig. 8, where (on the right) the four Northern Hemisphere extratropical regimes computed by Corti et al. (1999) are shown. They correspond to the four maxima of a PDF (on the left) based on monthly mean 500-hPa geopotential height in a reduced phase space spanned by the two dominant eigenvectors of the covariance matrix (EOFs). The cluster A pattern is the manifestation, in 500-hPa geopotential height, of the "cold ocean-warm land" (COWL) pattern, which to a first approximation describes much of the recent climate change in Northern Hemisphere surface air temperature. The height anomalies associated with clusters B and C project onto the negative Pacific-North American (PNA) pattern. Cluster B also projects onto the positive North Atlantic Oscillation, whilst cluster D is correlated with the 500-hPa height component of the Arctic Oscillation.
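The kind of analysis behind Fig. 8 can be sketched schematically. The Python fragment below is an illustration of the generic steps only, and it uses randomly generated placeholder fields rather than the observed 500-hPa geopotential height data of Corti et al. (1999): compute anomalies, project the monthly fields onto the two leading EOFs of the covariance matrix, and estimate a PDF in the reduced two-dimensional phase space, whose maxima would then be interpreted as circulation regimes. The array sizes and bin choices are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder "monthly mean 500-hPa height" fields: n_months maps of
# n_points grid values. A real analysis would use observed anomalies on a
# hemispheric grid instead of random numbers.
n_months, n_points = 540, 800
fields = rng.standard_normal((n_months, n_points))

# 1. Remove the time mean to obtain anomalies.
anomalies = fields - fields.mean(axis=0)

# 2. EOFs = eigenvectors of the spatial covariance matrix, obtained here
#    via an SVD of the anomaly matrix (rows of vt are the EOF patterns).
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
eofs = vt[:2]                          # two leading EOF patterns
pcs = anomalies @ eofs.T               # principal components: (n_months, 2)
pcs /= pcs.std(axis=0)                 # normalise the reduced phase space

# 3. Estimate the PDF of the state vector in the reduced 2-D phase space.
#    Maxima of this PDF are the candidate circulation regimes (A, B, C, D).
hist, xedges, yedges = np.histogram2d(pcs[:, 0], pcs[:, 1],
                                      bins=20, range=[[-3, 3], [-3, 3]],
                                      density=True)
imax, jmax = np.unravel_index(hist.argmax(), hist.shape)

print("explained variance of EOF1, EOF2:",
      np.round(s[:2] ** 2 / (s ** 2).sum(), 3))
print("most populated region of the reduced phase space centred near "
      f"PC1 = {0.5 * (xedges[imax] + xedges[imax + 1]):+.2f}, "
      f"PC2 = {0.5 * (yedges[jmax] + yedges[jmax + 1]):+.2f}")
```

In the real analysis the several maxima of this PDF, rather than the single most populated bin reported here, are identified and associated with the regime patterns shown in Fig. 8.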
As mentioned in the previous section, the Lorenz attractor has, like the atmosphere, a regime structure, and motions around the attractor are characterised essentially by two timescales: the regime-residence timescale and the transition timescale, the former typically being longer than the latter. In this context the Lorenz model is a paradigmatic "toy model" of atmospheric circulation-regime behaviour and can be used for idealised experiments. Suppose the question we want to answer is: how would the climate of the Lorenz model (i.e. the probability density function of all its states) change if an external forcing were applied? To answer it, we apply a forcing term F (constant, for simplicity) to the first two model equations.

Fig. 9: PDF of the Lorenz model in the X-Y plane, (a) from the unforced model, (b) with a constant forcing F = 2. Copyright: figure from Palmer 1993.

The result of such a forcing is shown in Fig. 9. Figure 9a gives the PDF of the model when F = 0, showing clearly the two regimes. The PDF is symmetric, so that the probability of the state vector being found in one regime equals the probability of its being found in the other. Figure 9b shows the PDF when F = 2 (the forcing points from one regime towards the other in the X-Y plane). Now the PDF is no longer symmetric: the state vector is more likely to be found in the regime towards which the forcing points. However, the phase-space coordinates of the PDF maxima are virtually identical to those of the unforced model. In other words, the structure of the regime centroids is unchanged between the original Lorenz model and the forced model.

This result can be understood qualitatively by returning to the ensemble-prediction results of Fig. 7. It has been shown that the instability properties of the Lorenz model are not uniform around the attractor; indeed, Fig. 6 shows that most of the instability arises in a neighbourhood of the origin (X = Y = Z = 0) of phase space. Just as the ensembles are sensitive to initial perturbations in this region, so the influence of the forcing F on the climate of the model is felt most keenly in the neighbourhood of the origin. Within a regime, on the other hand, the nonlinear balance of terms in the original Lorenz model dominates over the forcing (provided F is not too large). However, every time a phase-space trajectory enters the neighbourhood of the origin, the probability of its leaving bound for one regime or the other is affected significantly by the presence of F.
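The qualitative behaviour of Fig. 9 can be reproduced with a few lines of code. The sketch below is again a minimal Python illustration, not Palmer's original computation: following the description in the text, it adds a constant forcing F to the first two Lorenz equations (the exact formulation used by Palmer 1993 may differ in detail), integrates a long trajectory, and compares the fraction of time spent in each "wing" of the attractor, using the sign of X as a crude regime indicator.

```python
import numpy as np

# Lorenz (1963) system with a constant forcing F added to the first two
# equations, as described in the text:
#   dX/dt = sigma * (Y - X) + F
#   dY/dt = X * (rho - Z) - Y + F
#   dZ/dt = X * Y - beta * Z
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0


def rhs(state, forcing):
    x, y, z = state
    return np.array([SIGMA * (y - x) + forcing,
                     x * (RHO - z) - y + forcing,
                     x * y - BETA * z])


def run(forcing, n_steps=100_000, dt=0.01, discard=5_000):
    """Long RK4 integration; returns the X time series after spin-up."""
    state = np.array([1.0, 1.0, 20.0])
    xs = np.empty(n_steps)
    for i in range(n_steps):
        k1 = rhs(state, forcing)
        k2 = rhs(state + 0.5 * dt * k1, forcing)
        k3 = rhs(state + 0.5 * dt * k2, forcing)
        k4 = rhs(state + dt * k3, forcing)
        state = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[i] = state[0]
    return xs[discard:]


# Use the sign of X as a crude indicator of which regime ("wing") the
# trajectory occupies, and compare occupation statistics with and without F.
for forcing in (0.0, 2.0):
    x = run(forcing)
    frac_right = np.mean(x > 0)
    print(f"F = {forcing:3.1f}: fraction of time with X > 0 = {frac_right:.2f}, "
          f"mean |X| within that regime = {np.abs(x[x > 0]).mean():.1f}")
```

If the toy experiment behaves as in Palmer (1993), the occupation fractions become asymmetric under F = 2 while the typical |X| within each wing changes little, echoing the unchanged PDF maxima discussed above.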
What do the results of these simple nonlinear model experiments have to do with predictions of the second kind in the real atmosphere? If the picture outlined for the Lorenz model were applicable to the real climate system, it would imply that forced changes in climate should project primarily onto the principal patterns of natural variability. Based on analyses of mid-tropospheric geopotential data, evidence has indeed been presented (Corti et al. 1999) that trends in Northern Hemisphere climate over recent decades can be interpreted in terms of a change in the relative probability of naturally occurring atmospheric circulation regimes (such as the "cold ocean-warm land" and "Arctic Oscillation" patterns), rather than as a simple linear shift of the mean climate with superimposed noise. Figure 10 shows the atmospheric state-vector PDF based on monthly mean 500-hPa geopotential height using data from 1971-94, i.e. the second half of the time span considered in Fig. 8. In this second half-period the PDF maximum associated with cluster A is strongly enhanced, whilst the maxima associated with all the other clusters are reduced. Comparing Fig. 10 with Fig. 8, it can also be seen that the phase-space locations of the regimes remain relatively stable despite these large changes in the PDF. This result indicates that much of the recent tropospheric climate change in the Northern Hemisphere can be understood in terms of changes in the frequency of residence of the dominant, naturally occurring regimes of atmospheric variability, consistent with the simple picture outlined using the Lorenz attractor.

Fig. 10: Atmospheric state-vector PDF computed as in Figure 8, but using data from the 1971-94 period. Copyright: figure from Corti et al. 1999.

7. Concluding remarks

Climate prediction is, in principle, possible. In particular, the chaotic nature of the climate system rules out detailed long-range predictions of the first kind, but it does not prevent climate predictions of the second kind. So, whilst models may not be able to forecast what the weather will be like on 1 March 2020 in central Europe, they may be able to tell whether the probability of having (for example) temperatures higher than 15°C on 1 March 2020 in central Europe is significantly different from today's. However, it appears essential to model correctly the nonlinear structure of the climate attractor in order to forecast possible climate change, in particular by ensuring that the regime structure is correctly simulated. To date, GCMs have been tested in this way with controversial but promising results. It is likely that ensemble techniques (using models with different physical parameterisations) will be essential to determine the basic uncertainty associated with a climate prediction.

Summary

Predictions are, if you will, the lifeblood of meteorology and climate science. Such predictions are carried out every day at operational weather forecasting centres (such as the European Centre for Medium-Range Weather Forecasts, ECMWF, http://www.ecmwf.int), using large computational models of the atmosphere which integrate the Navier-Stokes equations for a rotating, three-dimensional, multi-phase, multi-constituent fluid and couple them to representations of the land surface. The same models, coupled to similar mathematical representations of the oceans, are used to predict the development of phenomena such as El Niño, which influence seasonal rainfall and the distribution of temperature in many regions of the world.
General circulation models of the atmosphere and ocean, coupled to models representing the land surface, sea ice, vegetation, atmospheric chemistry, the carbon cycle and aerosols (the so-called Earth System Models), are also widely used to provide predictions of possible future climate changes caused by anthropogenic variations in atmospheric composition (see, for example, the reports of the Intergovernmental Panel on Climate Change, IPCC; www.ipcc.ch). Climate predictions carried out with such models, relating to the "past" and to the "possible future", are presented in the course of this article. However, it makes little sense to issue predictions without first having some idea of their accuracy: the quantification of error is a basic concept of experimental physics. In other words, a calculation that does not also include an assessment of its own predictive capability is not a legitimate scientific product. It is therefore necessary to determine the intrinsic "predictability" of a given phenomenon: that is, supposing we wish to carry out a prediction, i.e. to predict the state of a system at some future instant from information about its present state, we must establish whether and how, namely how far in advance and with what potential probability of success, such a prediction is possible. In this article we revisit some of the fundamental concepts concerning predictions and their potential for success in the meteorological and climatological context. We address two types of prediction: initial-value problems, i.e. problems of the first kind according to the definition given by Lorenz (1975), and "parameter" problems, i.e. problems of the second kind. More precisely, given an atmospheric (and/or oceanic) state at a fixed instant and some deterministic law of motion, one speaks of predictions of the first kind when one is interested in predicting the time evolution of the individual trajectories of the system. Conversely, given a system subject to variations of the external forcing, one speaks of predictions of the second kind when one wants to predict how the statistical properties of the system change as some external parameter varies; this type of prediction does not depend on the initial values. Weather forecasts are clearly of the first kind, and the prediction of El Niño is likewise a climate prediction of the first kind. If, on the contrary, we wish to estimate the effects on climate of a variation in the Earth's orbit or of given variations in atmospheric composition, then these constitute predictions of the second kind. In the remainder of the article we clarify what the existence of a finite "prediction horizon", and hence of a theoretical limit to predictability of the first kind, means and what its practical implications are for nonlinear, unstable physical systems such as the atmosphere. These concepts are introduced by exploiting the many qualitative analogies between the large-scale dynamics of atmospheric flows and the motions that characterise Lorenz's three-variable model.
Finally, we introduce the notion of flow regimes and try to explain how their presence in the atmospheric circulation, which in some sense breaks the "normality" of the distribution of atmospheric states, can prove to be of fundamental importance for the detection of the climate change signal and for its prediction.

References

Brohan, P., J.J. Kennedy, I. Harris, S.F.B. Tett and P.D. Jones, 2006: Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850. J. Geophys. Res., 111, D12106, doi:10.1029/2005JD006548.

Carrassi, A., 2002: Esponenti di Lyapunov e dimensionalità locale nell'attrattore di Lorenz. Tesi di laurea in fisica, University of Ferrara.

Corti, S., F. Molteni and T.N. Palmer, 1999: Signature of recent climate change in frequencies of natural atmospheric circulation regimes. Nature, 398, 799-802.

Hansen, J., R. Ruedy, M. Sato, M. Imhoff, W. Lawrence, D. Easterling, T. Peterson and T. Karl, 2001: A closer look at United States and global surface temperature change. J. Geophys. Res., 106, 23947-23963.

IPCC, 2001: Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, 881 pp.

IPCC, 2007: Fourth Assessment Report, Summary for Policymakers. Intergovernmental Panel on Climate Change.

Lorenz, E.N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130-141.

Lorenz, E.N., 1969: The predictability of a flow which possesses many scales of motion. Tellus, 21, 289-307.

Lorenz, E.N., 1975: Climate predictability. In: The Physical Basis of Climate Modelling, GARP Publication Series vol. 16, World Meteorological Organisation, Geneva, pp. 132-136.

Palmer, T.N., 1993: A nonlinear dynamical perspective on climate change. Weather, 48, 313-348.

Popper, K.R., 1972: Of clouds and clocks. In: Objective Knowledge, Clarendon Press, Oxford.

Reynolds, R.W., N.A. Rayner, T.M. Smith, D.C. Stokes and W. Wang, 2002: An improved in situ and satellite SST analysis for climate. J. Climate, 15, 1609-1625.