The concepts and data used to make temperature and climate reconstructions, or estimates, are constantly evolving. Currently, there are over 100,000 weather stations on land and over 4,500 Argo floats and weather buoys at sea, in addition to regular measurements by satellites and ships. The measurement locations are known accurately, the date and time of each measurement are known, and the instruments are mostly accurate to ±0.5°C or better. Thus, we can calculate a reasonable global average surface temperature. However, the farther we go into the past, the fewer measurements we have. Before 2005, sea-surface coverage deteriorates quickly, and before 1950 the land-based weather station network is quite poor, especially in the Southern Hemisphere. Before 1850, the coverage is so poor as to be unusable for estimating a global average temperature. Prior to 1714, the calibrated thermometer had not even been invented; the world had to wait for Gabriel Fahrenheit.
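Computing a global average from gridded station and buoy data requires weighting each grid cell by its area, since cells shrink toward the poles. Here is a minimal sketch of that calculation; the 2° grid and the temperature values are hypothetical, not any particular dataset, and real grids have many empty cells:

```python
import numpy as np

# Hypothetical 2° x 2° grid of surface temperatures (°C); in a real
# dataset many cells would be empty, especially before 1950.
rng = np.random.default_rng(0)
lats = np.arange(-89.0, 90.0, 2.0)    # cell-center latitudes
lons = np.arange(-179.0, 180.0, 2.0)  # cell-center longitudes
temps = 15.0 + rng.normal(0.0, 1.0, size=(lats.size, lons.size))

# Grid cells shrink toward the poles, so each cell is weighted by
# the cosine of its latitude before averaging.
weights = np.cos(np.radians(lats))[:, np.newaxis] * np.ones(lons.size)
global_mean = np.average(temps, weights=weights)
print(round(global_mean, 2))
```

An unweighted mean would over-count the polar regions, where there are many small cells, which is one reason different groups can get different "global averages" from similar raw data.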
Is the global average temperature a useful climate metric? How do we compare today's global or hemispheric temperature to the past? Modern instruments covering the globe have only been in place since about 2005. If we have accurate measurements since 2005, how do we compare them to global temperatures hundreds or thousands of years ago? The world has warmed 0.8°C since 1950 and 0.3°C since 2005; that doesn't sound very scary. This two-part series will investigate these questions. We will propose a solution to the problem in a second post that should be up in a day or two.
This is a re-post from the CO2 Coalition, here. Re-posted with permission.
[Ed. Note] The number of opinion pieces disguised as “Fact Checks,” and the use of them to censor one side of scientific debates, has reached epidemic proportions. We need to fight back against this abhorrent trend. Science is all about debate. The debate must be conducted with well-founded evidence, with objective reason, without personal attacks, and with both sides represented. The following post is long, but it addresses many climate-alarmist-generated media myths quite well and authoritatively. AM
In 2017 I wrote a post analyzing the data and literature surrounding the human impact on Earth’s environment. The analysis uses the U.N. IPCC’s definition of environmental harm. The sources are peer-reviewed articles or government and U.N. agencies. It is a very straightforward reporting of the facts and data.
That LinkedIn banned David’s recommendation of the post shows we are becoming a totalitarian state, with thought and opinions regulated by corporate overlords. Honest, free discussion based on observations and straightforward statistical analysis is banned.
Obviously, LinkedIn did not like the conclusion of the post:
We are living in a time of nearly boundless prosperity. The rate of poverty has plunged to unimaginable lows. This is a time when the definition of poverty in the United States is set so high, a poor person in the U.S. would be the envy of any wealthy person prior to World War II. Inequality in the world is at its lowest level ever and decreasing at a rapid rate. People who were born in abject poverty can now become doctors and lawyers. Why we still have doomsayers predicting the end of the world is beyond my understanding.
I guess if you hate humanity and yourself, that would be offensive.
This is an update to a 2016 post; the original post is here.
We often hear that the planet is warming faster than ever before, or at the fastest rate since the beginning of the industrial era! Is it true? We haven’t had thermometers for very long. How do thermometer readings compare to temperature proxies like ice cores and tree rings? Greenland is a good place to start; we see the high-resolution Greenland ice core temperatures all the time. How accurate are they? How do Greenland temperatures compare to temperatures elsewhere?
In previous posts (here and here), I’ve compared historical events to the Alley and Kobashi GISP2 Central Greenland Temperature reconstructions for the past 4,000 years. Unfortunately, these two reconstructions are very different. Steve McIntyre suggested I consider a third reconstruction by Bo Vinther. Vinther’s data can be found here. Unfortunately, Vinther is significantly different from the other two. Nothing agrees very well.
The D.C. Superior Court definitively dismissed Michael Mann’s lawsuit against the National Review today. The National Review was sued by Mann over a blog post that Mark Steyn wrote in 2012 criticizing Mann’s work. Mark Steyn was not a National Review employee, and no one at the magazine had reviewed the post before he put it up.
Rich Lowry, the NR editor in chief, said: “It’s completely ridiculous that it took us more than eight years to get relief from the courts from this utterly meritless suit.”
There has been a lot of discussion about the details of the lawsuit. Mann’s original complaint against the National Review and Mark Steyn can be downloaded here.
Mark Steyn updates us on the progress of the lawsuit here. One of the previous judges in the case had this to say:
The main idea of Defendant [Steyn]’s article is the inadequate and ineffective investigations conducted by Pennsylvania State University into their employees, including Jerry Sandusky and Plaintiff [Michael E Mann].
This post examines CO2 data collected in Antarctic firn and its journey as the firn transitions to ice, where CO2 is eventually trapped in bubbles. Atmospheric gases within the firn and trapped in bubbles are smoothed by gas mixing processes with depth and time. The bubble trapping zone, also known as the Lock-in-Zone (LIZ), is a mysterious thin interval where CO2 concentrations decrease significantly with depth, creating a kink in CO2 concentrations.
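The effect of this smoothing can be illustrated conceptually. The sketch below runs a synthetic annual CO2 history through a simple Gaussian filter; the Gaussian kernel, the 15-year width, and the CO2 series are all assumptions standing in for the real firn gas-age distribution, which is skewed and site-dependent:

```python
import numpy as np

# Synthetic annual CO2 history (ppm): a slow linear rise plus a
# short 3-year spike (hypothetical numbers, for illustration only).
years = np.arange(1800, 2001)
co2 = 280.0 + 0.3 * (years - 1800)
co2[100:103] += 10.0  # a brief 10 ppm excursion

# Gaussian kernel as a crude stand-in for the firn gas-age
# distribution; sigma = 15 years is an assumed smoothing width.
sigma = 15.0
k = np.arange(-60, 61)
kernel = np.exp(-0.5 * (k / sigma) ** 2)
kernel /= kernel.sum()

smoothed = np.convolve(co2, kernel, mode="same")

# The slow trend survives the smoothing, but the short spike is
# strongly attenuated: brief atmospheric events are largely
# invisible in bubble-trapped CO2.
print(round(smoothed[101] - (280.0 + 0.3 * 101), 2))
```

The point of the exercise: a decade-scale or shorter CO2 excursion in the atmosphere would be flattened to a fraction of its amplitude by the time it is locked into bubbles, which matters when comparing ice-core CO2 to modern annual measurements.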
In the comments to my last post, it was suggested that Michael Mann’s 2008 reconstruction (Mann, et al., 2008) is similar to Moberg’s 2005 (Moberg, Sonechkin, Holmgren, Datsenko, & Karlen, 2005) and Christiansen’s 2011/2012 reconstructions. The claim was made by a commenter who calls himself “nyolci.” He presents a quote, in this comment, from Christiansen’s co-author, Fredrik Charpentier Ljungqvist:
“Our temperature reconstruction agrees well with the reconstructions by Moberg et al. (2005) and Mann et al. (2008) with regard to the amplitude of the variability as well as the timing of warm and cold periods, except for the period c. AD 300–800, despite significant differences in both data coverage and methodology.” (Ljungqvist, 2010).
My previous post on sea-surface temperature (SST) differences between HadSST and ERSST generated a lively discussion. Some, this author included, asserted that the Hadley Centre HadSST record and NOAA’s ERSST (the Extended Reconstructed Sea Surface Temperature) record could be used as is, and did not need to be turned into anomalies from the mean. Anomalies are constructed by taking a mean value over a specified reference period, for a specific location, and then subtracting this mean from each measurement at that location. For the HadSST dataset, the reference period is 1961-1990. For the ERSST dataset, the reference period is 1971-2000.
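The anomaly construction described above is simple to state in code. Here is a minimal sketch for a single grid cell, using the HadSST 1961-1990 reference period mentioned above; the SST values themselves are hypothetical:

```python
import numpy as np

# Toy annual SST series for one grid cell, 1950-2020 (hypothetical
# values: a weak warming trend plus noise).
rng = np.random.default_rng(1)
years = np.arange(1950, 2021)
sst = 18.0 + 0.01 * (years - 1950) + rng.normal(0.0, 0.2, years.size)

# Anomalies: take the mean over the reference period (1961-1990,
# as HadSST does) at this location, then subtract it from every
# measurement at the same location.
ref = (years >= 1961) & (years <= 1990)
baseline = sst[ref].mean()
anomaly = sst - baseline
print(round(anomaly[ref].mean(), 10))
```

By construction the anomaly series averages to zero over the reference period, and the absolute temperature level of the cell is discarded, which is precisely the point of contention: anomalies preserve changes but throw away the actual temperatures.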
My last post compared actual sea-surface temperature (SST) estimates to one another to see how well they agreed. It was not a pretty sight; the various estimates covered a range of global average SSTs from ~14°C to almost 20°C. In addition, some SSTs were declining with time and others were increasing. While I did check the latitude range of each of the grids I averaged, John Kennedy (a HadSST climate scientist at the UK Met Office Hadley Centre) pointed out that I did not check the cell-by-cell areal coverage of the HadSST grid relative to the NOAA ERSST grid. He suspected that the results I presented were mostly due to null grid cells in HadSST that were populated by interpolation and extrapolation in the ERSST dataset. The original results were presented in Figure 6 of my previous post, which is Figure 1 here.
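Kennedy's point can be sketched in a few lines: before comparing two gridded datasets, mask the fully-populated grid down to the other grid's coverage and average only over the shared cells. Everything below is a hypothetical illustration, not the actual HadSST or ERSST data:

```python
import numpy as np

# Two hypothetical gridded SST fields on the same 5° grid. Grid A
# has null (NaN) cells; grid B is fully populated, as an
# interpolated/extrapolated product would be.
rng = np.random.default_rng(2)
lats = np.arange(-87.5, 90.0, 5.0)
shape = (lats.size, 72)
grid_a = 18.0 + rng.normal(0.0, 0.5, shape)
grid_b = grid_a + 0.2                      # B runs 0.2°C warmer everywhere
grid_a[rng.random(shape) < 0.3] = np.nan   # ~30% of A's cells are null

w = np.cos(np.radians(lats))[:, None] * np.ones(shape[1])

# Fair comparison: restrict B to A's coverage, then take
# area-weighted means over the shared cells only.
common = ~np.isnan(grid_a)
mean_a = np.average(grid_a[common], weights=w[common])
mean_b = np.average(grid_b[common], weights=w[common])
print(round(mean_b - mean_a, 3))
```

With matched masks the difference between the two means reflects the data, not the coverage; averaging B over all of its cells while A skips its null cells would mix a real difference with a sampling difference.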