IPCC Politics and Solar Variability

By Andy May

This post is about an important new paper by Nicola Scafetta, Richard Willson, Jae Lee and Dong Wu (Scafetta, Willson and Lee, et al. 2019) on the ACRIM versus PMOD total solar irradiance (TSI) composite debate that has been raging for over 20 years. ACRIM stands for Active Cavity Radiometer Irradiance Monitor; these instruments recorded solar irradiance from space for many years. Richard Willson is the principal investigator in the laboratory that studied the results, and Nicola Scafetta worked in the laboratory until he accepted a position as a professor at the University of Naples Federico II.

The paper casts a spotlight on the political problems at the IPCC. In order to properly put the ACRIM vs PMOD debate into context and to show why this obscure and complicated scientific and engineering debate is important, we need to also discuss the messy politics within and between the IPCC and the UNFCCC.

The ACRIM composite TSI record shows an increase in solar activity from the 1980s until about the year 2000, when it flattens and then begins a decline. The ACRIM composite was introduced by Richard Willson in an article in Science in 1997 (Willson 1997). This composite was updated in 2014 by Scafetta and Willson (Scafetta and Willson 2014).

The next year, 1998, a rival composite was published by Claus Fröhlich and Judith Lean (Fröhlich and Lean 1998); it uses the same data but shows a declining solar irradiance trend from 1986 to 1997. The late Claus Fröhlich worked for the Physikalisch-Meteorologisches Observatorium Davos and World Radiation Center, abbreviated “PMOD,” which is where the composite gets its name. The two composites are compared in Figure 1.

Figure 1. A comparison of the ACRIM and PMOD total solar irradiance (TSI) composites. The upper graph shows the ACRIM composite increasing from 1986 to 1997; contrast this with the slightly decreasing trend shown in the lower PMOD graph. Most of the difference between the two is how they handle the “ACRIM gap” from 1989.5 to 1991.8. Source: Modified from (Scafetta, Willson and Lee, et al. 2019), their figure 3.

There were three ACRIM instruments and their measurements are accurate and generally undisputed, except by the PMOD group. The dispute between PMOD and ACRIM revolves around how to handle the “ACRIM gap” from 1989.5 to 1991.8. This gap was created when the Challenger disaster delayed the launch of the ACRIM2 instrument. Another significant difference is that the PMOD team chose to use the VIRGO results in place of ACRIM3. To emphasize why this dispute matters, let us look at what the authors say. The first quote is from the abstract of Richard Willson’s first ACRIM composite paper in Science:

“The trend follows the increasing solar activity of recent decades and, if sustained, could raise global temperatures. Trends of total solar irradiance near this rate have been implicated as causal factors in climate change on century to millennial time scales.” (Willson 1997)

Judith Lean, who was the lead author in charge of the relevant section of the IPCC AR4 report (Chapter 2.7, p. 188, “Natural Forcings”) and a Senior Scientist for Sun-Earth System Research at the U.S. Naval Research Laboratory, and the late Claus Fröhlich, wrote the following in the conclusion of their 1998 PMOD introductory paper (Fröhlich and Lean 1998):

“these results indicate that direct solar total irradiance forcing is unlikely to be the cause of global warming in the past decade, the acquisition of a much longer composite solar irradiance record is essential for reliably specifying the role of the Sun in global climate change. Detection of long-term solar irradiance trends and validation of historical irradiance reconstructions rely on the acquisition of a much longer irradiance time series than is presently available.” (Fröhlich and Lean 1998)

Judith Lean later told a NASA reporter, Rebecca Lindsey, one of the reasons she decided to help create an alternative TSI composite:

“The fact that some people could use [the ACRIM group’s] results as an excuse to do nothing about greenhouse gas emissions is one reason, we felt we needed to look at the data ourselves. Since so much is riding on whether current climate change is natural or human-driven, it’s important that people hear that many in the scientific community don’t believe there is any significant long-term increase in solar output during the last 20 years.” (Lindsey 2003)

It seems that Judith Lean had some political motivation to challenge the ACRIM composite. But there is more to the story, and to tell it properly we need to briefly review the IPCC climate change reports. As we will see, the only way the IPCC can compute how much of climate change is human-caused and how much is natural is to model the natural component and subtract it from the observations to derive the human component. The natural component is very complex and works on multiple time frames; at its root it is driven by solar variability and ocean oscillations. These are poorly understood, and the IPCC and CMIP models may not be modeling them accurately. The IPCC does a lot of research on many topics, but we will focus only on the most important question: how much do humans influence climate change?

The First Report, FAR

The IPCC (Intergovernmental Panel on Climate Change) is an independent body founded under the auspices of the World Meteorological Organization and the United Nations Environment Programme. The IPCC describes its role as follows:

“The [IPCC] is the international body for assessing the science related to climate change. The IPCC was set up in 1988 by the World Meteorological Organization (WMO) and United Nations Environment Programme (UNEP) to provide policymakers with regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation.” (IPCC 2020)

The UNFCCC (United Nations Framework Convention on Climate Change), which is not directly connected to the IPCC, states that the mission of the IPCC is:

“The Intergovernmental Panel on Climate Change (IPCC) assesses the scientific, technical and socioeconomic information relevant for the understanding of the risk of human-induced climate change.” (UNFCCC 2020)

According to the IPCC, it assesses the risks of climate change without any mention of the cause, and it advises on both mitigation (presumably of fossil fuel use) and adaptation, such as sea walls, levees, air conditioning, heating, etc. According to the UNFCCC, the IPCC is to investigate human-induced climate change. These statements are different. In a similar fashion, the two bodies define “climate change” differently. The politically oriented UNFCCC defines it as:

“[A] change in climate which is attributed directly or indirectly to human activity.” (United Nations 1992)

This contrasts with the IPCC definition of climate change, which is less political and more scientific:

“A change in the state of the climate that … persists for an extended period, typically decades or longer. Climate change may be due to natural internal processes or external forcings, or to persistent anthropogenic changes in the composition of the atmosphere or in land use” (IPCC 2012)

We can easily see that the UNFCCC and the IPCC have a potential conflict. In fact, if the IPCC does not find that humans have a significant impact on climate, the UNFCCC has no reason to exist. In the first IPCC report, published in 1990 and usually called “FAR” for “first assessment report,” the authors were unsure whether global warming was human-caused or natural. Their conclusion was:

“global-mean surface air temperature has increased by 0.3°C to 0.6°C over the last 100 years … The size of this warming is broadly consistent with predictions of climate models, but it is also of the same magnitude as natural climate variability. … The unequivocal detection of the enhanced greenhouse effect from observations is not likely for a decade or more.” (IPCC 1992, p. 6)

Given the wide range of opinions in the scientific community and the lack of any solid evidence of human influence on climate, this was a logical conclusion. But this statement caused political problems for the UNFCCC. Its entire reason for existence was human-caused climate change. If the IPCC could not determine that climate change was human-caused, the UNFCCC was in trouble. Enormous pressure was put on the scientists working on subsequent reports to attribute climate change to human activities.

The political state of mind at the time can be seen in this quote from Senator Tim Wirth at the 1992 U.N. Earth Climate Summit in Rio de Janeiro:

“We have got to ride the global warming issue. Even if the theory of global warming is wrong, we will be doing the right thing in terms of economic policy and environmental policy.” (National Review Editors 2010)

The Second Report, SAR

All subsequent reports did attribute most climate change and global warming to humans. The second report (“SAR”) barely stepped over the threshold with the following conclusion:

“The balance of evidence suggests a discernible human influence on global climate.” (IPCC 1996, p. 4)

Ronan and Michael Connolly (Connolly 2019) explain that this statement was included in SAR because Benjamin Santer, the lead author of the SAR chapter on the attribution of climate change, presented some unpublished and non-peer-reviewed work he had done that he claimed identified a “fingerprint” of the human influence on global warming. His evidence consisted of measurements that showed lower atmosphere (tropospheric) warming and upper atmosphere (stratospheric) cooling from 1963 to 1988. This matched a prediction made by the climate models used for SAR: supposedly, additional CO2 in the atmosphere would warm the troposphere and cool the stratosphere. He did not connect these measurements to human emissions of CO2, or to CO2 at all; he simply said that they showed something like what the models predicted.

It was weak evidence, and evidence that had not been peer-reviewed or even submitted for publication, but it was accepted. Further, the paper admits that the authors did not quantify the relative magnitudes of natural and human influences on climate. They had simply shown a statistically significant similarity between some observations and their model’s predictions.

The political pressure from the UNFCCC to blame humans was unrelenting, and the IPCC had to do something. There were persistent rumors that someone was secretly changing the text of SAR after the authors had approved it, in a way that supported the conclusion above and removed dissenting statements. These allegations may or may not be true.

Unfortunately, when Santer’s paper (Santer, et al. 1996) was finally published, it ran into a firestorm of criticism. In particular, Patrick Michaels and Paul Knappenberger (Michaels and Knappenberger 1996) pointed out that the tropospheric “hot spot” that comprised Santer et al.’s “fingerprint” of human influence disappeared if the 1963-1987 range was expanded to the full range of available data, 1958-1995. In other words, it appeared Santer, et al. had cherry-picked their “fingerprint.”

Besides cherry-picking a portion of the data, there were other problems with Santer et al.’s interpretation. The warming and cooling trends that they identified may have been natural, as explained by Dr. Gerd R. Weber. The beginning of Santer, et al.’s selected period was characterized by volcanism and the end of the period by strong El Niños.

The Third Report, TAR

The IPCC had been embarrassed by the revelation that Santer et al. had fudged the data in SAR, but they still needed some way to blame humans for climate change. They found another study that seemed to make the case and highlighted it in the third report, called TAR (IPCC 2001). In 1998, Michael Mann, Raymond Bradley, and Malcolm Hughes published a Northern Hemisphere temperature reconstruction of the past 600 years (Mann, Bradley and Hughes 1998), based primarily on tree rings. This paper is often abbreviated as MBH98. This reconstruction appeared to show that the recent warming period was unusual, so it was easy to assume humans did it.

A simplified version of the MBH98 graph, often called the “Hockey Stick” because of its shape, extended to 1000 AD (Mann and Bradley 1999, MB99), was featured prominently on page 3 of the TAR Summary for Policymakers; it is reproduced here as Figure 2.

Figure 2. The infamous “Hockey Stick” from (Mann and Bradley 1999). It purports to show that the recent warming is unusual. Source (IPCC 2001, p. 3).

The reconstruction in Figure 2 generated a firestorm of criticism that made the Santer et al. debacle look like a campfire. But the graph was used to increase the certainty that human greenhouse emissions caused recent warming. The conclusion of TAR:

“In the light of new evidence and taking into account the remaining uncertainties, most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations.” (IPCC 2001, p. 699)

The criticisms of the MBH98, MB99 and TAR temperature reconstructions are too numerous to list here, but they were devastating. An entire 320-page book by Mark Steyn, A Disgrace to the Profession, was written to list them (Steyn 2015). The fraudulent Hockey Stick and Michael Mann were even memorialized in a song.

The Hockey Stick not only appeared as Figure 1 of the TAR Summary for Policymakers but was prominently displayed in Al Gore’s movie “An Inconvenient Truth.” Steyn’s book clearly shows that the graph and the movie have been thoroughly discredited by hundreds of scientists who attempted and failed to reproduce Michael Mann’s hockey stick. Further, MB99 attempts to overturn hundreds of papers that describe a world-wide Medieval Warm Period from around 900 AD to 1300 AD.

When Michael Mann’s hockey stick was chosen to be Figure 1 of the TAR Summary for Policymakers, Mann had just received his PhD. As many have noted, the ink was not yet dry on his diploma. Yet he was also made one of the lead authors of the very section of TAR that presented his hockey stick (see the TAR Chapter 2 author list on page 99 and figure 2.20 on page 134). As a result, it was up to him to validate his own work.

The MBH98 paper was deeply flawed. In 2003, Soon, et al., presented evidence that the Little Ice Age and Medieval Warm Period were global events (Soon, Idso, et al. 2003). This meant the flat hockey stick handle was incorrect.

Two years later, in 2005, it was thoroughly debunked by Steve McIntyre and Ross McKitrick (McIntyre and McKitrick 2005). They showed that, using the statistical technique invented by Michael Mann, even random number series (persistent, trendless red noise) will generate a hockey stick. Basically, Mann had mined many series of numbers looking for hockey stick shapes and gave each series that had the shape he wanted a much higher weight, up to a weighting factor of 392! This was truly a case of selecting a desired conclusion and then molding the data to fit it. Prominent statisticians Peter Bloomfield, Edward Wegman and Professor David Hand said Michael Mann’s method of using principal components analysis was inappropriate and misleading and exaggerated the effect of recent global warming.
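The core of McIntyre and McKitrick’s point is easy to demonstrate. Below is a minimal sketch, in Python, of the idea: “short-centered” PCA, which subtracts the mean of only the recent calibration period rather than of the whole record, tends to promote hockey-stick shapes into the leading principal component even when fed nothing but trendless red noise. This is an illustration of the concept only, not a reproduction of MM05’s code or of MBH98’s full method; the series lengths and AR(1) persistence below are arbitrary choices.

```python
# Sketch: "short-centered" PCA applied to trendless red noise tends to
# produce a hockey-stick-shaped leading principal component.
import numpy as np

rng = np.random.default_rng(42)

n_years, n_series = 581, 70          # e.g. 1400-1980, 70 pseudo-proxies
phi = 0.9                            # AR(1) persistence ("red" noise)

# Generate trendless AR(1) red-noise pseudo-proxies.
noise = rng.standard_normal((n_years, n_series))
proxies = np.zeros_like(noise)
for t in range(1, n_years):
    proxies[t] = phi * proxies[t - 1] + noise[t]

# "Short centering": subtract the mean of only the last 79 "calibration"
# years (MBH98 centered on 1902-1980) instead of the full-record mean.
centered = proxies - proxies[-79:, :].mean(axis=0)

# Leading principal component via SVD. Series whose calibration-period mean
# happens to differ from their long-term mean receive heavy weight, so PC1
# usually ends with a pronounced "blade".
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = u[:, 0] * s[0]

blade = abs(pc1[-79:].mean() - pc1[:-79].mean()) / pc1.std()
print(f"PC1 'blade' offset (in PC1 standard deviations): {blade:.2f}")
```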

The Fourth Report, AR4

By the time the fourth report was written, the MBH98 hockey stick was thoroughly discredited, and in light of this the lead author of the relevant chapter (Chapter 6), Keith Briffa, admitted that the recent warming was not unusual. He wrote:

“Some of the studies conducted since the Third Assessment Report (TAR) indicate greater multi-centennial Northern Hemisphere temperature variability over the last 1 kyr than was shown in the TAR” (IPCC 2007, Ch. 6, p. 1)

It is a weak admission of failure, as we might expect, but he acknowledges that the hockey stick handle was too flat and that temperatures during the Medieval Warm Period might have been higher than today. This admission certainly took the wind out of the sails of TAR, so what could they do now? There seemed to be no data to support the idea that humans were causing global warming.

The IPCC decided to emphasize their climate models in AR4, rather than paleo-temperature reconstructions, atmospheric “fingerprints,” or any other observational data. Human causation was the goal, and they needed to shape the evidence to support it. For twenty years they had looked for evidence that humans were the major cause of recent warming and failed to find any. But they “discovered” that if their climate models were run without any human climate forcings, the resulting computed global temperatures were flat. You can see this in Figure 3b.

Figure 3. The IPCC model calculation of human influence on climate change. There are two model averages shown in both graphs. The blue one is from AR4 (CMIP3) and the red one is from AR5 (CMIP5). Both are compared to observations, in black. The upper graph shows models that include both human and natural climate forcings, the lower shows natural forcings only. Source: (IPCC 2013, Ch. 10, page 879).

Then they can rerun the model with human plus natural climate forcings (Figure 3a) and the model temperatures will rise. Voila! We have shown human-caused global warming and did not need a shred of observational data! With this proof they triumphantly write:

“Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” (IPCC 2007, p. 10)

We have already shown the comparison between models and observations from AR5 in Figure 3. Figure 4 shows the corresponding comparison from AR4.

Figure 4. A comparison of a natural warming model (b) to human plus natural (a) and observations in black. Source: (IPCC 2007, p. 684).

In these climate simulations, the only natural forcings that make any difference are volcanic eruptions. Solar variations and ocean oscillations are assumed to net to zero over the period studied. The volcanic eruptions are labeled in Figure 4. The lack of a robust model of natural climate change can be seen in the poor model-observation match from 1910 to 1944 in both Figures 3 and 4. Given the abundant literature supporting significant solar variability (Soon, Connolly and Connolly 2015) and natural ocean oscillations (Wyatt and Curry 2014), it is easy to doubt any calculation of human forcing made from the models shown in Figures 3 and 4. Thus, the conclusion given in AR4 and the similar conclusion reached with similar logic in AR5 are suspect.
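To make the attribution logic concrete, here is a toy sketch, in Python, of the subtraction described above: whatever warming the “natural forcings only” runs fail to reproduce is, by construction, assigned to human forcings. All of the numbers below are invented for illustration; this is not CMIP output, and the point of the sketch is simply that the inferred human contribution depends entirely on how much natural variability the modelers assume.

```python
# Toy illustration of model-based attribution: the "human" contribution is
# inferred as the difference between a run with all forcings and a run with
# natural forcings only. All numbers are invented; this is not CMIP output.
import numpy as np

years = np.arange(1900, 2006)

# Hypothetical ensemble-mean temperature anomalies (deg C).
natural_only = 0.05 * np.sin(2 * np.pi * (years - 1900) / 60)         # nearly flat
all_forcings = natural_only + 0.008 * np.clip(years - 1950, 0, None)  # rises after 1950

# Attribution by subtraction: anything the natural-only runs miss is assigned
# to human forcings. If the assumed natural forcing were more variable (for
# example, a more active TSI reconstruction), this number would shrink.
inferred_human = all_forcings - natural_only
change = inferred_human[years == 2005][0] - inferred_human[years == 1951][0]
print(f"Inferred human warming, 1951-2005: {change:+.2f} C")
```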

The Fifth Report, AR5

AR5 is essentially a redo of AR4; they do the same thing and take the same approach. No new data supporting human involvement in climate change are presented; the same models are rerun with a few tweaks here and there, and they reach essentially the same conclusion, for the same reasons, as in AR4:

“More than half of the observed increase in global mean surface temperature (GMST) from 1951 to 2010 is very likely due to the observed anthropogenic increase in greenhouse gas (GHG) concentrations.” (IPCC 2013, p. 869)

TSI and the IPCC

Soon, Connolly and Connolly (Soon, Connolly and Connolly 2015) identified several valid, peer-reviewed solar activity reconstructions that could explain much of the warming since 1951 and earlier. These reconstructions were ignored by the IPCC.

At the time AR4 was being written, the accepted solar activity (TSI or Total Solar Irradiance) composite from satellite solar radiation measurements was the ACRIM composite shown in Figure 1 (Willson 1997). It showed an increasing trend of solar activity from the 1980s to the 1990s. This supported the idea that at least some of the warming seen then was due to increasing solar activity. Scafetta and Willson in 2014 reported:

“Our analysis provides a first order validation of the ACRIM TSI composite approach and its 0.037 %/decade upward trend during solar cycles 21–22 [1986-1997]. The implications of increasing TSI during the global warming of the last two decades of the 20th century are that solar forcing of climate change may be a significantly larger factor than represented in the CMIP5 general circulation climate models.” (Scafetta and Willson 2014)

As we saw above, Judith Lean led the development of the rival PMOD TSI composite and admitted that part of the reason was political. Fröhlich and Lean conclude that TSI is unlikely to have caused any global warming, then say they do not have enough data to be sure. While the ACRIM composite was more accepted at the time, the PMOD composite also had a lot of support.

The PMOD and ACRIM composites are complex because the satellite measurements must be scaled properly so they fit together end-to-end. The process is discussed in some detail in Scafetta and Willson’s 2014 paper and in (Scafetta, Willson and Lee, et al. 2019). The process used by Fröhlich and Lean is different; they make changes to the raw data that are not supported by the satellite teams (Scafetta, Willson and Lee, et al. 2019). This is an important controversy, and it directly affects the calculation of human influence on climate. Reconstructions of solar activity depend heavily upon proxies, and how the proxies are converted to TSI in Watts per square meter (W/m2) depends heavily on the modern TSI composite used. It appears the IPCC and CMIP decision to ignore the ACRIM composite and the more active TSI reconstructions was a political decision. As we saw above, Judith Lean admitted as much to Rebecca Lindsey of NASA. The Hoyt and Schatten (Hoyt and Schatten 1993) “active” reconstruction, calibrated to ACRIM, is compared to the “quiet” reconstruction by (Wang, Lean and Sheeley 2005) and (Kopp and Lean 2011) in Figure 5. The latter, quiet reconstruction, is the one the IPCC encourages the climate modelers to use.

Figure 5. Two example TSI reconstructions extended to 1700 AD using proxy data tied to satellite measurements. The green curve is from (Wang, Lean and Sheeley 2005), but rescaled to the TSI base value given in (Kopp and Lean 2011). The red curve is from (Scafetta and Willson 2014, their Figure 16). The green TSI curve is the curve the CMIP5 organizers strongly recommended that the climate modelers use for AR5 (Scafetta and Willson 2014). Notice how short the period of actual satellite measurements is relative to the reconstructions (blue line).
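The end-to-end scaling step mentioned above can be illustrated with a simple sketch. Below is a minimal Python example of cross-calibrating two overlapping satellite records by matching their means over the overlap period and then splicing them. The data are invented, and the real ACRIM and PMOD procedures involve much more than this (degradation corrections, pointing issues, and the ACRIM-gap bridging discussed below).

```python
# Minimal sketch of cross-calibrating two overlapping satellite TSI records
# by matching their means over the overlap interval. Arrays are invented;
# the actual ACRIM and PMOD procedures are far more involved.
import numpy as np

def splice(t1, f1, t2, f2):
    """Shift record 2 to match record 1 over their time overlap, then return
    the concatenated composite (record 1 is kept where the two overlap)."""
    lo, hi = max(t1.min(), t2.min()), min(t1.max(), t2.max())
    if lo >= hi:
        raise ValueError("records do not overlap; a proxy model would be "
                         "needed to bridge the gap (the ACRIM-gap problem)")
    offset = f1[(t1 >= lo) & (t1 <= hi)].mean() - f2[(t2 >= lo) & (t2 <= hi)].mean()
    keep = t2 > t1.max()
    return np.concatenate([t1, t2[keep]]), np.concatenate([f1, (f2 + offset)[keep]])

# Hypothetical daily records with different absolute scales (W/m^2).
t1 = np.arange(1985.0, 1990.0, 1 / 365)
t2 = np.arange(1989.0, 1994.0, 1 / 365)
f1 = 1367.0 + 0.5 * np.sin(2 * np.pi * t1 / 11)
f2 = 1365.5 + 0.5 * np.sin(2 * np.pi * t2 / 11)

t, composite = splice(t1, f1, t2, f2)
print(f"composite spans {t.min():.1f}-{t.max():.1f}; record 2 shifted to record 1's scale")
```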

As explained by Ronan and Michael Connolly (Connolly 2019), of the five models that contributed to the “natural forcings only” AR4 dataset illustrated in Figure 4(b), four used low variability solar reconstructions recommended by Lean. The quieter, or low variability, reconstructions tend to rely heavily on sunspot numbers and other measurements that are representative of the active regions of the Sun (Soon, Connolly and Connolly 2015) and (Scafetta, Willson and Lee, et al. 2019). This creates a problem: a sunspot number of zero implies no solar variation, yet during periods with no sunspots other indicators of solar activity show that the Sun still varies, see Figures 6, 7, and 8. The more active reconstructions use proxies that are sensitive to the less active portions of the Sun and are less reliant on sunspot number (Scafetta, Willson and Lee, et al. 2019).

The next three figures show recent TSI measurements by the SORCE TSI instrument, which measured TSI continuously from 2003 until February of 2020, with one notable gap in 2013. The first figure shows an overview of the data, the next shows the period of zero sunspots before Solar Cycle 24, and the last figure shows the recent period of zero sunspots. Notice that the TSI varies quite a lot even when there are no sunspots; a simple way to check this against the public data is sketched after the figures.

Figure 6. Overview of Solar Cycle 24. The gray line is TSI from SORCE and the blue line is the sunspot record from SILSO.

Figure 7. The solar minimum before Solar Cycle 24. Notice the activity in TSI when there are no sunspots.

Figure 8. The solar minimum at the end of Solar Cycle 24; notice the TSI activity when there are no sunspots.
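As promised above, here is a rough sketch, in Python, of how the point in Figures 6, 7, and 8 could be checked against the public data: merge the daily SORCE TSI record with the SILSO daily sunspot number and look at how much TSI varies on spotless days. The file names and column labels below are assumptions for illustration only and would need to be adapted to the actual SORCE and SILSO download formats.

```python
# Rough check: how much does TSI vary on days with a sunspot number of zero?
# File names and column labels are assumptions; adapt them to the actual
# SORCE and SILSO export formats.
import pandas as pd

tsi = pd.read_csv("sorce_tsi_daily.csv")        # columns assumed: date, tsi
ssn = pd.read_csv("silso_sunspots_daily.csv")   # columns assumed: date, ssn

merged = tsi.merge(ssn, on="date", how="inner")
spotless = merged[merged["ssn"] == 0]

print(f"{len(spotless)} spotless days")
print(f"TSI range on spotless days: {spotless['tsi'].max() - spotless['tsi'].min():.3f} W/m^2")
print(f"TSI std  on spotless days: {spotless['tsi'].std():.3f} W/m^2")
```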

By ignoring the more active TSI reconstructions, the IPCC and CMIP have not considered a major source of uncertainty. Both the ACRIM- and PMOD-based reconstructions should have been used, or the reason for rejecting the ACRIM composite altogether should have been explained to everyone’s satisfaction.

The use of models to “show” that humans are causing climate change is perfect for the politically motivated IPCC and UNFCCC, since you can get a model to do anything you want if you feed it the appropriate data and tweak the adjustable parameters properly. One of the key elements to adjust in the IPCC models was solar variability. If solar output is assumed to be nearly invariant, which is implausible for a variable star like the Sun, most of the warming can be attributed to humans.

ACRIM v. PMOD

Whether the ACRIM or the PMOD composite is used to calibrate the solar proxies makes a difference (Fröhlich and Lean 1998). It is not the sole reason for the difference between the two representative solar reconstructions shown in Figure 5, but it is a big part of it. Scafetta et al. (Scafetta, Willson and Lee, et al. 2019) examine the differences between the two composites and provide evidence that the ACRIM composite is preferable.

The most significant difference between the two composites is the overall TSI trend from 1986 to 1997, the minima before and after Solar Cycle 22 (see Figure 1). The reason they are so different is that they handle the so-called “ACRIM gap” differently. During the ACRIM gap, from mid-1989 to late 1991, there was no functioning high-quality TSI-measuring satellite. Only the Nimbus7/ERB and the ERBS/ERBE instruments were functioning, and they had opposite trends. The Nimbus7/ERB measurements trended up 0.26 W/m2 per year and the ERBS/ERBE measurements trended down 0.26 W/m2 per year (Scafetta, Willson and Lee, et al. 2019). This difference was large enough that at least one of the instruments had to be wrong.
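A quick back-of-envelope calculation, using only the numbers quoted above, shows why this choice dominates the composite. Over the roughly 2.3-year gap, two records diverging at plus and minus 0.26 W/m2 per year end up about 1.2 W/m2 apart, much larger than the roughly 0.4 W/m2 increase from 1986 to 1996 that is in dispute:

```python
# Back-of-envelope arithmetic using the figures quoted above. Which bridging
# record is trusted across the ACRIM gap shifts the spliced composite by
# roughly a watt per square meter, which is why the 1986-1997 trend changes
# sign between the ACRIM and PMOD composites.
gap_years = 1991.8 - 1989.5          # length of the ACRIM gap, ~2.3 years
trend_nimbus7 = +0.26                # Nimbus7/ERB trend, W/m^2 per year
trend_erbe = -0.26                   # ERBS/ERBE trend, W/m^2 per year

divergence = (trend_nimbus7 - trend_erbe) * gap_years
print(f"Divergence between the two bridging records by the end of the gap: "
      f"{divergence:.2f} W/m^2")
```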

The PMOD group used solar proxies and a proxy model to attempt to show that the Nimbus7/ERB instrument had problems. The PMOD group then significantly changed this instrument’s TSI measurements during the gap, reversing the slope of the readings from positive to negative (Fröhlich and Lean 1998). Then they further modified measurements from the very accurate ACRIM1 and ACRIM2 instruments, claiming they had sensor problems. The PMOD modifications were made without consulting the original satellite experiment science teams or examining the raw data. Their idea was that their solar proxy models were superior to the data and could be used to “fine-tune” the observations (Scafetta, Willson and Lee, et al. 2019) and (Fröhlich and Lean 1998). Regarding the “corrections” the PMOD team made to the Nimbus7/ERB satellite data, the leader of the Nimbus7 team, Douglas Hoyt, wrote:

“[The NASA Nimbus7/ERB team] concluded there was no internal evidence in the [Nimbus7/ERB] records to warrant the correction that [PMOD] was proposing. Since the result was a null one, no publication was thought necessary. Thus, Fröhlich’s PMOD TSI composite is not consistent with the internal data or physics of the [Nimbus7/ERB] cavity radiometer.” (Scafetta and Willson 2014, Appendix A)

In Lean, Beer and Bradley (1995):

“Deviations of the SMM and UARS data from the reconstructed irradiances in 1980 and 1992, respectively, may reflect instrumental effects in the ACRIM data, since space-based radiometers are most susceptible to sensitivity changes during their first year of operation.” (Lean, Beer and Bradley 1995)

Yes, Judith Lean is saying that her models “may reflect” that the instruments are wrong. Modifying the measurements to match an unvalidated model is not an accepted practice. Besides the original “corrections” to the satellite measurements made by Fröhlich and Lean, there are new “corrections” suggested by Fröhlich (Fröhlich 2003). Which set should we use? Scafetta, et al. comment on the “corrections:”

“a proxy model study that highlights a discrepancy between data and predictions can only suggest the need to investigate a specific case. However, the necessity of adjusting the data and how to do it must still be experimentally justified. By not doing so, the risk is to manipulate the experimental data to support a particular solar model or other bias.” (Scafetta, Willson and Lee, et al. 2019)

The ACRIM group and Douglas Hoyt believe that the upward trend in the Nimbus7/ERB data is more likely correct than the modeled downward trend created by the PMOD group. Further, the Nimbus7/ERB trend is supported by the more accurate ACRIM1 instrument. The downward trend of the ERBE instrument is in the opposite direction of the ACRIM trend and was caused by well-documented degradation of its sensors. The ACRIM team also investigated the PMOD “corrections” to the ACRIM1 and ACRIM2 data and found that they were not justified.

Conclusions

The IPCC appears to have hit a dead end. They have been unable to find any observational evidence that humans contribute to climate change, much less measure the human impact on climate. They are reduced to creating models of climate and measuring the difference between models that include human forcings and those that do not. This was the approach taken in both AR4 and AR5; the approaches were similar and so were the results. AR5 was simply a redo of AR4, without any significant improvement or additional evidence.

It appears that a significant problem with the AR4 and AR5 results was that they used low variability solar reconstructions and ignored the equally well supported high variability reconstructions. This reduces the computed natural component of climate change and enlarges the computed human component. Part of the problem with the low variability reconstructions is that they are “tuned” to the PMOD TSI composite, which is itself based upon a proxy model. Thus, a model was used to alter satellite measurements, the altered measurements were used to calibrate a proxy model, and the proxy model was then projected back to 1700 AD. Not very convincing.

What Scafetta, et al. did in their paper was reverse the PMOD process. They used the uncontroversial TSI observations before and after the ACRIM gap period to empirically adjust the low-frequency component of the TSI proxy models to fill in the gap. Their process explicitly allows for the models to be missing a slowly varying component in the quiet Sun regions, allowing variation from solar minimum to solar minimum. They tackled the ACRIM gap problem without using the lower-quality Nimbus7/ERB or ERBS/ERBE TSI records. They simply evaluated how the proxy models reconstruct the ACRIM gap. The proxy models underestimated the TSI increase to the Solar Cycle 22 peak and overestimated the decline. There were also problems properly reconstructing Solar Cycles 23 and 24.

Scafetta, et al. then adjusted the models to correct the mismatch (rather than changing the data!) and produced a TSI composite that agreed well with the ACRIM composite and another composite created by Thierry Dudok de Wit (Dudok de Wit, et al. 2017). Both the new composite and the Dudok de Wit composite show an increasing trend from 1986 to 1997, like the ACRIM composite and unlike PMOD.
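The general idea of adjusting the model rather than the data can be sketched as follows, in Python: a slowly varying correction is fitted to the model-minus-observation residual using only the well-observed windows flanking the gap, and the adjusted model is then read across the gap. This is only a schematic of the concept, not Scafetta, et al. (2019)’s actual algorithm, and all of the arrays below are invented.

```python
# Schematic of adjusting a proxy model, rather than the data, to bridge the
# ACRIM gap. A slow (here, linear) correction is fitted to the model's
# mismatch with the well-observed flanking periods, then applied everywhere,
# including inside the gap, where no high-quality observations exist.
import numpy as np

t = np.arange(1985.0, 1996.0, 1 / 36.5)                                # decimal years
proxy_model = 1365.5 + 0.6 * np.sin(2 * np.pi * (t - 1986) / 11)       # hypothetical model
observed = (proxy_model + 0.03 * (t - 1985)
            + 0.05 * np.random.default_rng(0).standard_normal(t.size))  # hypothetical TSI

gap = (t >= 1989.5) & (t <= 1991.8)   # ACRIM gap
flank = ~gap                          # well-observed flanks

# Fit the slow correction on the flanks only, then apply it across the gap.
coef = np.polyfit(t[flank], (observed - proxy_model)[flank], deg=1)
adjusted_model = proxy_model + np.polyval(coef, t)

print(f"Mean adjustment inside the gap: {np.polyval(coef, t[gap]).mean():+.3f} W/m^2")
```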

The new composite shows an increase in TSI of 0.4 W/m2 from 1986 to 1996 and twice that much from 1980 to 2002. It decreases after 2002. This is like the ACRIM composite. The PMOD composite goes down from 1986 to 1996. PMOD appears to have been discredited by this paper; it will be interesting to follow the discussion over the next year or so.

The Bibliography for this post can be downloaded here.

This post was improved by many helpful suggestions from Dr. Willie Soon and Dr. Ronan Connolly.

