A Surrogate Ensemble Study of Climate Reconstruction Methods: Stochasticity and Robustness


Journal of Climate, Volume 22 (4) – Oct 29, 2007



Publisher
American Meteorological Society
Copyright
Copyright © 2007 American Meteorological Society
ISSN
1520-0442
DOI
10.1175/2008JCLI2301.1
Publisher site
See Article on Publisher Site

Abstract

Reconstruction of the earth’s surface temperature from proxy data is an important task because of the need to compare recent changes with past variability. However, the statistical properties and robustness of climate reconstruction methods are not well known, which has led to a heated discussion about the quality of published reconstructions. In this paper a systematic study of the properties of reconstruction methods is presented. The methods include both direct hemispheric-mean reconstructions and field reconstructions, among them reconstructions based on canonical regression and regularized expectation maximization algorithms. The study is based on temperature fields where the target of the reconstructions is known; in particular, the focus is on how well the reconstructions reproduce low-frequency variability, biases, and trends. A climate simulation from an ocean–atmosphere general circulation model of the period A.D. 1500–1999, including both natural and anthropogenic forcings, is used. Reconstructions include a large element of stochasticity, however, and to draw robust statistical inferences, reconstructions of a large ensemble of realistic temperature fields are needed. To this end a novel technique has been developed to generate surrogate fields with the same temporal and spatial characteristics as the original surface temperature field from the climate model. Pseudoproxies are generated by degrading a number of gridbox time series. The number of pseudoproxies and the relation between the pseudoproxies and the underlying temperature field are determined realistically from Mann et al. It is found that all reconstruction methods contain a large element of stochasticity, and it is not possible to compare the methods and draw conclusions from a single or a few realizations. This means that very different results can be obtained using the same reconstruction method on different surrogate fields, which might explain some of the recently published divergent results. All methods systematically give large biases and underestimate both trends and the amplitude of the low-frequency variability; the underestimation is typically 20%–50%. The shape of the low-frequency variability, however, is well reconstructed in general. Some potential in validating the methods on independent data is found. However, to gain information about the reconstructions’ ability to capture the preindustrial level, it is necessary to consider the average level in the validation period rather than the year-to-year correlations. The influence on the reconstructions of the number of proxies, the type of noise used to generate the proxies, and the strength of the variability, as well as the effect of detrending the data prior to calibration, is also reported.
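The surrogate and pseudoproxy steps described in the abstract can be illustrated with a minimal one-dimensional sketch. This is not the authors' exact algorithm (which preserves both temporal and spatial covariance across a full field); it uses standard Fourier phase randomization to produce a surrogate series with the same power spectrum as a toy "temperature" series, and degrades that series with white noise at an assumed signal-to-noise ratio to mimic a pseudoproxy. The function names and the SNR value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same amplitude spectrum (hence the same
    autocovariance) as x, but with randomized Fourier phases."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    new = np.abs(spec) * np.exp(1j * phases)
    new[0] = spec[0]           # keep the mean unchanged
    if n % 2 == 0:
        new[-1] = spec[-1]     # keep the Nyquist component real
    return np.fft.irfft(new, n)

def make_pseudoproxy(temp, snr, rng):
    """Degrade a grid-box series with white noise at a given
    signal-to-noise ratio (ratio of standard deviations)."""
    noise = rng.normal(0.0, temp.std() / snr, len(temp))
    return temp + noise

# Toy 500-year red-noise series standing in for a model grid box
t = np.zeros(500)
for i in range(1, 500):
    t[i] = 0.7 * t[i - 1] + rng.normal()

s = phase_randomized_surrogate(t, rng)   # one member of a surrogate ensemble
p = make_pseudoproxy(t, snr=0.5, rng=rng)  # noisy pseudoproxy of the same box
```

Repeating the surrogate step many times yields the ensemble over which reconstruction skill can be evaluated statistically, which is the point the abstract makes about single realizations being uninformative.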

Journal

Journal of Climate, American Meteorological Society

Published: Oct 29, 2007
