Andy May Petrophysicist

The anthropogenic greenhouse effect – a spectroscopic triviality

By Dr. Heinz Hug | Aug. 20, 2012

Translated and lightly edited by Andy May with Google Translate; the original German post is here.

Update, September 25, 2012

Due to the extremely active discussion, Dr. Hug has revised his article again. Passages that were difficult to understand have been rewritten more clearly, and missing information has been added.

Summary: 

Quite obviously, the effect of the anthropogenic greenhouse effect is greatly overestimated, because the CO2 greenhouse effect was already exhausted in Goethe’s time, except for insignificant, spectroscopically justifiable residual amounts (“peak flanks”). Because of the extensive “saturation effect,” the anthropogenic share of greenhouse gases is of minor importance in current climate change. Rather, the variance of Earth’s surface temperature is due to cloud cover, which in turn depends on fluctuations of the solar magnetic field. A larger cloud cover of the globe has about the same effect as volcanic ash after an eruption. The influence of cloud cover can only be depicted extremely inadequately with the IPCC climate models. Climate models are also a spectroscopic artifact because they must work with arbitrary flux corrections, the amounts of which exceed those of the anthropogenic greenhouse effect many times over.

1. What is meant by the term “saturation” of the greenhouse effect?

Climate research is a natural science cast in politics. Since the two fields of action cannot be separated from each other, in the run-up to Rio 1992 there was a nationwide bombardment of opinion about the coming climate catastrophe in the media, which was under the influence of politics.

While under the media bombardment, scientists in the 1990s tried to learn something about the spectroscopic basis for the proposed climate Armageddon in the generally read periodicals – and found little to nothing. Neither Nature, Science, Scientific American, nor Spektrum der Wissenschaft contained informative articles about the basic spectroscopic mechanism of the “greenhouse effect” or its limits. Roger Revelle, who introduced the later US Vice President Al Gore to the greenhouse gas theory at college, also published no useful information on this subject.[2] It was almost as if they were afraid to reveal their cards.

Having been let down by the widely read scientific journals and by the propaganda cascade in the media, one would like to know, as a chemist, at what CO2 concentration the greenhouse effect is exhausted for spectroscopic reasons under realistic assumptions. This is particularly true if you work with the quantification of analytes using spectroscopic methods and know that there is always a concentration range above which the measuring radiation is almost completely absorbed. It behaves like dripping black ink into a glass of water: the more you add, the more of the incident light is absorbed, until the absorption is finally complete (transmission t = 0).

An essay from 1984 in the member journal of the Society of German Chemists, the so-called “Blue Leaves,”[5] was completely misleading. In this publication, which I will return to below, the “role of trace greenhouse gases” is explained with the help of transmission spectra. That the driving effect is actually the atmospheric spectral radiance (emission) of atmospheric trace gases is indicated only subliminally. Inspired by this literature,[5] I took measurements of transmission (or absorption or extinction) in a gas cuvette to determine the extent to which the greenhouse effect of CO2 has been “exhausted.”

If, for example, 35% of the radiation emanating from the ground is absorbed by greenhouse gases at a certain wavelength at a certain altitude, then the absorption (formerly A) is a = 0.35 (or a = 35%), and the transmission (formerly T) is t = 0.65 (or t = 65%).

Since the absorption of IR radiation (“thermal radiation”) quantitatively follows the Bouguer-Lambert-Beer law (an e-function), there should be a “fluid limit” beyond which there is no significant increase in absorption “a,” or transmission “t” approaches zero as the concentration of the absorber increases (Figure 1).

Figure 1. Transmission of radiation as a function of CO2 concentration.

The consideration of transmission “t” for estimating the anthropogenic greenhouse effect is not completely wrong, because the “emissivity” of the greenhouse gas molecules correlates with the absorption capacity according to Kirchhoff’s radiation law (see the discussion of Kirchhoff’s law at the end of this post). Therefore, there should be a (“flowing”) concentration range above which the atmospheric greenhouse effect of CO2 no longer increases significantly. I call this “saturation,” although strictly speaking it never fully occurs if you chase ever smaller details. Not quite correct, but descriptive: the “saturation” is similar to the limit value of a geometric series. Example: S = 1 + 1/2 + 1/4 + 1/8 + 1/16 + …

For such a geometric series, the following applies:

S_n = a1 · (1 − q^n) / (1 − q)

Here, a1 is the initial term and q is the quotient of two consecutive terms (q ≠ 1). In the geometric series above, a1 = 1 and q = 0.5 (e.g. (1/4) : (1/2) = 0.5). For example, if you now set n = 11, the result is

S11 = (1 − 0.5^11) / (1 − 0.5) = 1.9990234

and for n = 12 you get:

S12 = (1 − 0.5^12) / (1 − 0.5) = 1.9995117

The increase between S11 and S12 is ΔS = 1.9995117 − 1.9990234 = 4.883·10^-4, which corresponds to about 0.024%. If the limit value of this geometric series is called “complete saturation” (S∞ = 2), then S12 falls short of this complete “saturation” by only about 0.024%.
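This convergence toward the limit value is easy to check numerically. A minimal Python sketch of the partial sums, using the values given above (a1 = 1, q = 0.5):

```python
# Partial sums of the geometric series 1 + 1/2 + 1/4 + ...,
# illustrating how quickly the series "saturates" toward its limit S_inf = 2.
a1, q = 1.0, 0.5

def partial_sum(n):
    """S_n = a1 * (1 - q**n) / (1 - q), the sum of the first n terms."""
    return a1 * (1 - q**n) / (1 - q)

s11, s12 = partial_sum(11), partial_sum(12)
print(s11, s12, s12 - s11)       # increase between S11 and S12: ~4.88e-4
print((2 - s12) / 2 * 100)       # S12 falls short of S_inf = 2 by ~0.024 %
```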

Of course, the greenhouse effect is based on the emission of IR radiation by atmospheric trace gases and not on the absorption or transmission of IR radiation by them. But because

  1. IR-active trace gases emit, in accordance with Kirchhoff’s radiation law, exactly as they absorb, and
  2. absorption follows the Bouguer-Lambert-Beer law,

the “saturation character” of the greenhouse effect can be estimated in a simple approximation with the help of this law. Now, this law is not based on a geometric series as above, but on an e-function whose transmission converges towards zero as the exponent grows (see Figure 1).

Suppose, for an absorption band of CO2, the numerical value x = 8 is arbitrarily used in the exponent for a certain CO2 concentration at a certain height. Then the result is

t = e^-8 ≈ 3.4·10^-4

If the conditions are kept the same and only the CO2 concentration is increased tenfold, you get

t = e^-80 ≈ 1.8·10^-35

You can continue the game as you like and quickly reach the limits of a calculator, and later also those of a supercomputer. Clearly speaking, this means that true “saturation” is only achieved in a pure CO2 atmosphere with infinite layer thickness. Only then is the transmission formally t = 0 and the absorption a = ∞. Of course, this is nonsense, but orthodox supporters of the greenhouse gas theory can always argue that even with a steadily increasing CO2 concentration there is no true saturation of the greenhouse effect, which results in an endless discussion, a perpetual motion machine of debate.
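The rapid approach of the transmission toward zero can be illustrated with a short Python sketch; the exponent values are the arbitrary ones used above:

```python
import math

# Bouguer-Lambert-Beer: transmission t = e^-x, where the exponent x is
# proportional to absorber concentration and layer thickness.
for x in (8, 80, 800):
    # e^-8 is already ~3.4e-4; e^-80 is ~1.8e-35; e^-800 silently
    # underflows to 0.0 in double precision -- the "calculator limit".
    print(x, math.exp(-x))
```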

By the way, in 1995 the English chemist Jack Barrett pointed out in Spectrochimica Acta the “saturation” of the greenhouse effect, which had already been known for a long time, and he triggered a fierce controversy.[3, 8, 9, 10, 11] The then chairman and later co-chairman of the IPCC, Sir John Houghton, called Barrett after the publication and demanded that he withdraw it. Sir John also demanded the same from the management of Imperial College, where Barrett was working at the time. Imperial College immediately prohibited him from making further publications critical of the greenhouse effect, under threat of consequences.[35]

2. The greenhouse effect

It is often mistakenly assumed that the greenhouse effect is identical with the absorption of infrared radiation (IR radiation) by atmospheric trace gases (CO2, CH4, water vapour, etc.). The case could then be made that the air heats itself by radiation-free deactivation (‘thermalization’) of the excited molecules and transfers the resulting sensible heat energy to Earth’s surface. This is like a hot liquid heating the wall of a container through heat transfer. But, according to the greenhouse hypothesis, the situation is exactly the opposite: Earth’s surface is heated almost exclusively by irradiation and the atmosphere absorbs its heat energy through direct contact with the ground.

However, the irradiation of the earth’s surface is made up of two parts:

  1. Direct solar irradiance (depending on cloud cover)
  2. Radiation from the atmospheric greenhouse effect (which is essentially maxed out)

The existence of atmospheric back-radiation that characterizes the greenhouse effect can be easily proven, but the magnitude of the natural greenhouse effect can only be calculated and the effect it has on the climate completely eludes verification.[20] To the same extent, climate computer models completely elude falsification. You can use them to prophesy any development without fear of being refuted.

In order to understand the CO2 greenhouse effect, the “normal” infrared absorption of atmospheric trace gases must first be considered.[2] Since IR-active compounds with more than two atoms usually have different absorption bands or emission bands, it must be clarified which CO2 absorption/emission bands are important. Satellite spectra provide information about this.[4]

Figure 2. Satellite spectra, above the Sahara and over the Antarctic

The dotted lines in figure 2 are the ideal Planck radiation curves of the Earth’s surface, calculated at different temperatures in K. In addition to the unhindered emission from Earth’s surface, the “jagged lines” represent the emission graphs of atmospheric greenhouse gases. The radiation windows that are always open are 800 – 1000 cm-1 and 1050 – 1300 cm-1. The red arrows in figure 2 mark the ν2 bands at 15 μm (667 cm-1). Therefore, only this relatively weak band is important and not the much stronger ν3 band around 4.2 μm (2349 cm-1). In addition, you can see that the atmosphere actually emits IR radiation (“greenhouse effect”). This can be seen very clearly in the satellite spectrum over Antarctica (lower spectrum in Figure 2). The ground temperature there is about 200 K (-73 °C), while the atmosphere at an altitude of about 10 km has a higher temperature of about 210 K (-63 °C). This is a first indication that the radiative transfer on which the IPCC computer models are based does not describe the conditions correctly. Rather, it is a matter of energy transport and a result of natural convection. In this process, warmer air rises and cools and releases its energy into space above an altitude of 10 km as “undisturbed emission”. [Ed. note: Some greenhouse gas emissions to space occur as low as 2 km, the average emission height is ~5 km, but by 10 km emissions are mostly “undisturbed”]

The radiation windows clearly visible in Figure 2, which are always open when the sky is cloudless, cause some skeptics to argue that even if there were a greenhouse effect, it could not cause the claimed warming of Earth’s surface, because the cooling caused by the radiation windows is at least as great as the effect of the greenhouse effect.

Other skeptics say that because “climate-effective” trace gases such as CO2 etc. not only absorb IR radiation but also emit it to the same extent (Kirchhoff’s radiation law), “greenhouse gases” have a cooling and not a warming effect. This is proven, among other things, by the maxima of the CO2 emission spectrum, marked with a red arrow in Figure 2, which presents the emission of radiant heat into space. The same skeptics say that if there really were a greenhouse effect, the surface of the Antarctic (200 K) would have to be warmed to the temperature that prevails at an altitude of about 10 km (210 K). In addition, the greenhouse gas effect violates the 2nd law of thermodynamics, because colder, higher air layers cannot heat the warmer ground with back radiation (see temperature gradient in Figure 10).

3. Own measurements

Since, as explained at the beginning, no even remotely useful information on the spectroscopy of the greenhouse effect was published in the generally read scientific journals, I once measured the absorption of carbon dioxide in an industrial laboratory and a university laboratory and extrapolated the transmission of individual bands, as far as they could be detected, to the atmosphere.

Some of the measurements (see figure 4) were carried out using a 10 cm cuvette with an IR-permeable window, which was filled with synthetic CO2-free and anhydrous air. Then CO2 was added with a microliter syringe, so that 357 ppm CO2 was present (CO2 concentration of 1993). Furthermore, water was added until 2.6 % water vapor was present. The IR radiation source was a globar, a silicon carbide rod electrically heated to 1000-1200 ºC with a downstream variable interference filter. After recording this spectrum, CO2 was added, so that 714 ppm were contained. The measurement was carried out using an FT-IR spectrometer “Bruker IFS 48.”

To check the influence of extraneous gases, pure CO2 and CO2 in the presence of He and N2 were measured at 1020 mbar with a Perkin Elmer System 2000 FT-IR spectrometer. Figure 3 shows the result. The spectrometer’s beam path contained the same number of CO2 molecules each time. Pure CO2, which has the lowest absorption, was measured at a layer thickness of 0.35 mm. The others in a 10 cm cuvette in the presence of He (mean absorption) and nitrogen (greatest absorption).

Figure 3. Spectrum of CO2 (pure) and in the presence of He and N2. “mit” is German for with. “Bereich des anthropogenen Treibhauseffekts lt. IPCC” means range of the anthropogenic greenhouse effect according to the IPCC.

It is clear that CO2 molecules excited in layers close to the ground return to the ground state by radiation-free deactivation (thermalization), much as Jack Barrett stated in his 1995 Spectrochimica Acta publication.

The area that, according to the IPCC, presents the additional – anthropogenic – greenhouse effect is indicated by black arrows in Figure 3. Now you can argue that if you choose a cuvette with a length of 1,000 m, you will find more “climate-effective” absorption/emission bands at the edges. This is especially true if you choose a path from the ground to the tropopause (approx. 10,000 m). As I said, the perpetual-motion discussion can be kept going endlessly.

3.1 Evaluation of my own measurements

Figure 4 shows the unprocessed spectrum of the 15 μm band for 357 ppm CO2 and 2.6% H2O.

Figure 4. Unprocessed spectrum of the 15 μm bands (ν2 bands). Note, the P branch has a lower frequency (longer wavelength) than the fundamental Q-branch frequency, and the R branch the opposite.

As in figure 3, the R (ΔJ = +1) and P (ΔJ = −1) branches, as well as the Q branch (ΔJ = 0), of the ν2 band can be clearly seen. The molar extinction coefficient at the maximum (ΔJ = 0) was:

ε = 20.2 m2 mol-1 (ν2 at 667 cm-1)

To calculate the absorption, the average CO2 content of the atmosphere was assumed to be c = 1.03·10^-3 mol/m3. If one inserts the molar extinction coefficient measured above, together with the concentration and the layer thickness of the troposphere (h = 10 km = 10^4 m), into the Lambert-Beer law, one obtains an extinction (formula symbol according to DIN: A) of:

A(ν2) = 20.2 m2 mol-1 × 1.03·10^-3 mol/m3 × 10^4 m = 208

This means that the transmission in the middle of the absorption band at 357 ppm CO2 (the 1993 value) is: T(ν2) = 10^-208 (Fig. 5).

Figure 5. Spectral evaluation scheme. “Bereich des anthropogenen Treibhauseffekts bei CO2 – Verdopplung” means Range of the anthropogenic greenhouse effect at CO2 doubling. Wellenlänge means wavelength.

This is an extremely low transmission value, which completely excludes an increase in the greenhouse effect if the climate-affecting trace gas in this area is doubled.

If the molar extinction coefficient ε for the ν2 band [bending mode at 667 cm-1] and the volume concentration in mol/m3 (357 ppm CO2 near the ground) are inserted into the Lambert-Beer law and a layer thickness of only 10 m is assumed, the result is an extinction of:

A = 20.2 m2 mol-1 × 0.0159 mol/m3 × 10 m = 3.21

This corresponds to a transmission of T = 10^-3.21 ≈ 0.6 per thousand. In other words, after only 10 m the Q branch absorbs 1 − T = 99.94% of the IR radiation.
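Both extinction values can be reproduced with a few lines of Python, using the extinction coefficient and concentrations given above:

```python
# Lambert-Beer estimate for the CO2 nu2 Q branch at 667 cm^-1.
EPS = 20.2  # molar extinction coefficient, m^2/mol (measured value above)

def extinction(c_mol_m3, path_m):
    """Return extinction A = eps * c * l and transmission T = 10^-A."""
    A = EPS * c_mol_m3 * path_m
    return A, 10.0 ** -A

A_tropo, _ = extinction(1.03e-3, 1e4)   # whole troposphere: A ~ 208
A_10m, T_10m = extinction(0.0159, 10)   # 10 m of ground-level air: A ~ 3.21
print(A_tropo, A_10m, (1 - T_10m) * 100)  # ~99.94 % absorbed after only 10 m
```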

In the case of absorption at the peak flanks, the absorbance is naturally smaller. For this reason, the IPCC stated in 1990 that the vibrational-rotational bands in the middle range of the 15 μm band are as good as saturated, which is why they hardly contribute to the “greenhouse effect” with further increased concentrations. But: “The effect of added carbon dioxide molecules is, however, significant at the edges of the 15 μm band, and in particular around 13.7 and 16.3 μm.[14]” The anthropogenic greenhouse effect, which is classified as dangerous by the IPCC, is said to be based primarily on IR bands of carbon dioxide below 625 cm-1 or above 729 cm-1. The best way to see what this statement means is to look at figure 3 again. Apart from that, IPCC researchers rely on a completely insignificant harmonic of CO2 at 9.6 μm, which, as is usual with harmonics, is 15 to 30 times weaker than the corresponding fundamental vibration. In addition, there are other “greenhouse gases” such as N2O, CH4, etc., which I cannot go into here. Even sulfur hexafluoride (SF6), which was used, among other things, to fill car tires so that vehicles would roll more quietly, had to give way to the feared climate catastrophe years ago.

Of course, these above-mentioned “edges” exist around 15 μm, because the rotational quantum number J runs from J = +1 toward J = +∞ and from J = −1 toward J = −∞. Unfortunately, however, the “unsaturated areas” of the 15 μm CO2 spectrum become weaker and weaker at the edges.

Since the occupation (NJ/NJ=0) of the rotational vibration levels obeys the Boltzmann distribution (Equation 4),

NJ/NJ=0 = (2J + 1) · e^(−B·h·c·J(J+1)/kT)                (Equation 4)

(B = rotation constant, J = rotational quantum number, h = Planck constant, c = speed of light, k = Boltzmann constant, T = thermodynamic temperature)

at high rotational quantum numbers (J), at a given fixed point in time (“snapshot”), only very few CO2 molecules are present that absorb in this region and emit according to Kirchhoff’s radiation law (“greenhouse effect”). How weak these spectral bands are can be checked using the following HITRAN data:[6]

| Wavelength | Absorption coefficient per molecule, in 10^-22 cm-1/(molec·cm-2) |
| --- | --- |
| 13.5 μm (= 741.7 cm-1) | 76 |
| 15.0 μm (= 666.7 cm-1) | 79,452 (middle of the band) |
| 12.5 μm (= 797.3 cm-1) | 46 |
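The steep fall-off of the level populations at high J described by Equation 4 can be sketched numerically. In this illustrative Python sketch, the rotational constant B ≈ 0.39 cm-1 for CO2 and the second radiation constant hc/k ≈ 1.4388 cm·K are assumed literature values, not quantities from the measurements above:

```python
import math

# Relative population N_J / N_0 of CO2 rotational levels per Equation 4.
B = 0.39            # rotational constant of CO2, cm^-1 (assumed literature value)
HC_OVER_K = 1.4388  # h*c/k in cm*K (second radiation constant)
T = 288.0           # temperature, K

def population(J):
    """(2J+1) degeneracy times the Boltzmann factor exp(-B*hc*J(J+1)/kT)."""
    return (2 * J + 1) * math.exp(-B * HC_OVER_K * J * (J + 1) / T)

# The population peaks near J ~ 15 and falls steeply toward high J,
# which is why the far band "edges" are so weak.
print(population(16), population(60))
```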

In order to estimate the absorption at the peak flanks, the working hypothesis assumed in the measurements was that the absorbance should increase by the order of magnitude A = 3 (transmission 10^-3) when the CO2 content doubles. For this purpose, the total integral of the bands up to the ends of the R and P branches was determined at A = 0 (see Figure 5). Subsequently, the digitally stored spectra were integrated from an extinction corresponding to the value A = 3 (total path through Earth’s troposphere) to the ends (A = 0) of the R and P branches.

Since the measured values were difficult to reproduce, all measurements were repeated 30 times so that the “edges” could be approximately recorded. These “edges” started at 15.80 μm for the P branch and at 14.00 μm for the R branch and ran to the baseline A = 0. The IPCC has the bands start at the edges at 13.7 and 16 μm and end at the “HITRAN detection limit.”[14] According to my own measurements, the ν2-band yielded:

Table 1. 15 μm band (total integral and flank integrals, A = 0 to A = 3)

| 15 μm band | 357 ppm | 714 ppm |
| --- | --- | --- |
| Total integral, 624.04 cm-1 to 703.84 cm-1 | 0.5171/cm | 1.4678/cm |
| Sum of flank integrals | 1.11·10^-4/cm | 9.79·10^-4/cm |
Of course, extinctions cannot be combined with Planck’s law of radiation. That is not the intention at all. But the relative increase that can be estimated from this when the CO2 content doubles is decisive. It corresponds to the difference of the flank integrals at 714 ppm and 357 ppm in relation to the total integral at 357 ppm.

(9.79·10^-4/cm − 1.11·10^-4/cm) / 0.5171/cm = 0.17%

As already mentioned, this is only an estimate and not an exact measured value. Nevertheless, it makes clear on what minor parameters the “climate catastrophe” prophesied at the time in Der Spiegel (33/1986) is based.
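The 0.17% estimate follows directly from the numbers in Table 1:

```python
# Relative increase of the 15 um band integral when CO2 doubles,
# from the flank and total integrals in Table 1 (all values in 1/cm).
flank_357, flank_714 = 1.11e-4, 9.79e-4
total_357 = 0.5171
increase_percent = (flank_714 - flank_357) / total_357 * 100
print(round(increase_percent, 2))  # -> 0.17
```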

3.2 Criticism of my own measurements

Although the above readings are relatively inaccurate, I published them at the time because

  1. Every chemist and physicist not directly involved in climate research, well aware of the properties of IR-active trace gases, had to assume, under the impression of the media hype, that a doubling of atmospheric CO2 would lead to a considerable increase (possibly even doubling) of the greenhouse effect. In any case, I and about 30 chemists from industry and academia had expected about a doubling of the greenhouse effect with 100% more CO2 according to my own survey about 20 years ago.
  2. The insignificance of the anthropogenic greenhouse effect was notably concealed in the information from the media quoting the “climate modelers” directly involved in it. But something like that deserves to be laid openly on the table.

Interestingly, my measurement results are also supported by the testimony of Nobel Prize winner Paul Crutzen. In 1993, he wrote in a textbook:[21] “There is already so much CO2 in the atmosphere that in many spectral ranges the absorption by CO2 is almost complete, and additional CO2 no longer plays a major role.” According to this Nobel Prize winner, then, the greenhouse effect is almost “saturated.” You can also put it this way: a greenhouse heats up at best slightly (a trace effect!) more if you replace the normal window glass with ten-centimeter-thick bulletproof glass!

The FT-IR spectrometers used for measurement are used with great success in the chemical and pharmaceutical sectors, both in industry and at universities (see Figures 3 and 4). However, they only have a mirror path of 5 – 15 cm. This results in IR band resolutions of 0.2 – 0.07 cm-1. Since the “unsaturated” spectral ranges concern unusually weak IR bands, spectrometers with a resolution of 0.0004 cm-1 are needed.[6] To do this, you have to build an FT-IR spectrometer, which theoretically has a mirror path of 2,500 cm (25 m!). Only then can the extremely weak IR bands on which the IPCC’s climate modelers rely be measured. They are bands with an “absorption strength” of only 0.05% of the 15 μm CO2 main band!

The lower-resolution FT-IR spectrometers commonly used in industry and university research cost between 4,000 and 10,000 euros, depending on the equipment. An FT-IR spectrometer, such as the Bruker IFS 125 HR, which can be used to achieve the accuracy of the HITRAN database, costs at least 125,000 euros. An industrial chemist who purchased such a device on his own authority, for example to elucidate the structure of organic molecules, would have to reckon with his immediate dismissal. And rightly so! Devices of this kind are only useful if you want to measure trace effects. For this reason, the devices are mainly found in institutes that deal with atmospheric ultra-traces on behalf of politicians. The money for this comes from the taxpayer.

4. Comparison with the official data of the IPCC

If you take the official IPCC figures, then the “natural” greenhouse effect is 324 W/m2.[22, 23] If the CO2 is doubled (a 100% increase!), it is assumed by consensus (“best guess”) that the radiative forcing increases by 3.7 W/m2.[23, 24] Figure 6 shows the conditions.

Figure 6. Percentage increase in the greenhouse effect with doubling of atmospheric CO2 according to the official data of the IPCC. “Mehr” means more. “führt zur Erhöhung des CO2-Treibhauseffekts um” means leads to an increase in the CO2 greenhouse effect by.

Figure 6 also demonstrates the extensive saturation described above, because the increase in the greenhouse effect with CO2 doubling is only a marginal 1.1% (3.7 W/m2 relative to 324 W/m2). This is well known in climate research. Therefore, attempts are made to refute the “saturation character” with the argument that the climate is such a sensitive system that it can be thrown out of balance by even the smallest changes in radiative forcing. It is claimed that the cooling between 1930 and 1970 was caused by the dust pollution of industrial society. This is incorrect. The cooling was caused by the changed magnetic field of the Sun, as can be seen from Figure 15 (circled in red).

In addition to the HITRAN database, the MODTRAN program developed by the US Air Force and Spectral Sciences Incorporated, which is also used by the IPCC researchers, can be used to calculate the general and anthropogenic greenhouse effect. David Archibald has calculated the dependence of the Earth’s surface temperature on atmospheric CO2 content with the help of the MODTRAN program of the University of Chicago.[37] The astonishing result is shown in Figure 7.

Figure 7. Effect of CO2 and increase in mean earth temperature.

The highest increase in the mean surface temperature of the earth is caused by the first 20 ppm CO2. After that, the influence of CO2 drops rapidly. This is not surprising, because the influence of the “edges” becomes smaller and smaller with higher CO2 content. This is one of the reasons why the CO2 greenhouse effect was already largely exhausted in Goethe’s time.

5. Earth surface temperature, greenhouse effect and CO2 concentration

To calculate the Earth’s surface temperature without a greenhouse gas atmosphere, a simple equation based on the Stefan-Boltzmann law is used in general, and not only by the IPCC researchers:

T = [(1 − A) · Fs / (4 · σ)]^(1/4)

A is the albedo – the average “reflectivity” of the earth (not to be confused with the extinction A!). It is assumed to be A = 0.3. In fact, other values have been used in the past. The solar constant, which is not so constant in reality, has the value Fs = 1368 W/m2. Furthermore, the equation contains the Stefan-Boltzmann constant σ = 5.67·10^-8 W·m-2·K-4.

If one calculates with these data, the result for the surface temperature of the Earth is:

T = [(1 − 0.3) × 1368 W/m2 / (4 × 5.67·10^-8 W·m-2·K-4)]^(1/4) ≈ 255 K (−18 °C)

The result is questionable. Earth is not a waterless pile of rocks in space. It is very likely that Earth’s average temperature would be much higher even without greenhouse gases! But if you stick with it for now, then the specific radiation of Earth’s surface at this temperature (formula symbol M according to DIN 5031, Part 1) is:

M1 = (1 − 0.3) × 0.25 × 1368 W/m2 ≈ 239 W/m2

For the climatic normal period, an average temperature of +15 °C (T = 288 K) was agreed upon years ago and a consensus was formed. If you now use the “unchanged” Stefan-Boltzmann law,

M = σ · T^4

and calculate the specific radiation of the surface again, the result is:

M2 = 5.67·10^-8 W·m-2·K-4 × (288 K)^4 ≈ 390 W/m2

Consequently, the “natural” greenhouse effect, with a hypothetical warming of ΔT = 288 K − 255 K = 33 K (33 °C), increases the specific radiation of the Earth’s surface by

ΔM = M2 − M1 = 390.0 W/m2 − 239.0 W/m2 ≈ 151 W/m2

As already explained above, if the CO2 is doubled (100 % increase), an additional radiative forcing of 3.7 W·m-2 is assumed (initially IPCC stated 4.2 W·m-2, the order of magnitude eludes falsification and is based on a consensus “best guess”!).

If you take the aforementioned 3.7 W·m-2, then the specific radiation of the surface rises from 390.0 W·m-2 to 393.7 W·m-2.

According to this, the temperature increases from 288.0 K to 288.7 K when CO2 is doubled (100% more CO2!). This corresponds to just ΔT = 288.7 K − 288.0 K = 0.7 K (0.7 °C) and no more. Climate modelling would not have received any political attention if the hypothesis of water vapour amplification had not been introduced into the discussion. Fortunately, however, this can be falsified, as will be explained in the next section.
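The whole chain of numbers in this section can be verified with a short Python sketch, using the IPCC values quoted above:

```python
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
ALBEDO, FS = 0.3, 1368.0

# Effective temperature without greenhouse gases: T = [(1-A)*Fs/(4*sigma)]^(1/4)
T_eff = ((1 - ALBEDO) * FS / 4 / SIGMA) ** 0.25   # ~255 K
M1 = (1 - ALBEDO) * 0.25 * FS                     # ~239 W/m^2
M2 = SIGMA * 288.0 ** 4                           # ~390 W/m^2 at +15 C
T_doubled = ((M2 + 3.7) / SIGMA) ** 0.25          # add the 3.7 W/m^2 forcing
print(T_eff, M2 - M1, T_doubled - 288.0)          # ~255 K, ~151 W/m^2, ~0.7 K
```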

6. The water vapor amplification

Since a warming of only 0.7 °C with 100% more CO2 seems too little, it was agreed years ago that this slight increase in temperature would cause significantly more water to evaporate from the oceans, in accordance with the well-known Clausius-Clapeyron equation. Because water vapor is itself a greenhouse gas, the temperature increase caused by CO2 would therefore be much higher. IPCC:[25] “The ‘water vapor feedback’ remains consistently the most important feedback effect that causes the global warming predicted by the general circulation models in response to a doubling of CO2.” If this were correct, then even in a colder period, during which direct solar radiation does not evaporate as much water, the water vapor content (bars in Figure 8) over the oceans would have to increase in parallel with the atmospheric CO2 content (red dash-dot line in Figure 8). This is clearly not the case, as figure 8 shows.[32] Therefore, the climate modelers can in no case refer to the quite plausible but hypothetical water vapor amplification mechanism, which predicts a far too large temperature increase. [For a modern comparison of Clausius-Clapeyron to measurements see here. Ed.]

Figure 8. Percentage deviation of the water vapour content over the Atlantic [see also 34].

To prevent misunderstandings: The percentage in Figure 8 does not, of course, refer to the relative humidity, which can never be higher than 100%, but rather to the deviation of the water vapour content upwards and downwards relative to the measured value of 1950. In 1956, the absolute water vapour content was 25% higher than in 1950. In 1968, the water vapour content was about 45% lower, although the CO2 content continued to rise! Even though the period is relatively short, it clearly proves that water evaporation over the oceans does not correlate with rising CO2 levels. The water vapour amplification on which all predictions of the climate models are based does not exist.

7. The radiative transfer equation

The calculation of the greenhouse effect is based on a “layer or cascade model”, according to which constant absorption (I) and emission (L) take place within the atmosphere. This fictitious radiative transfer is based on the Schwarzschild equation, which was originally developed to describe the behavior of atoms in a stellar atmosphere.[12] For an infinitesimal path dz, the absorption coefficient σa and the number of particles n, the following applies in local thermodynamic radiative equilibrium (LTE):

dI = n · σa · (L − I) · dz

The quantity L is the radiance (cf. DIN 5031, Part 1), which indicates the emission according to the temperature-dependent Planck radiation equation.
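As an illustration only (not the author’s calculation), the behavior of the Schwarzschild equation can be integrated numerically: the transported radiance relaxes toward the local Planck radiance. All values in this Python sketch are arbitrary:

```python
# Forward-Euler integration of dI = n*sigma_a*(L - I)*dz for constant L:
# the transported radiance I relaxes toward the local Planck radiance L.
n_sigma = 0.5       # n * sigma_a, per meter (arbitrary)
L = 100.0           # Planck radiance of the layer (arbitrary units)
I, dz = 0.0, 0.01   # incoming radiance and integration step, m

for _ in range(2000):            # total optical depth tau = 0.5/m * 20 m = 10
    I += n_sigma * (L - I) * dz
print(I)  # approaches L: an optically thick layer radiates like a blackbody
```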

The fact that radiative transport takes place within the atmosphere is thought to be proven by comparing the measured satellite spectra with the calculated ones. The result is amazing, as Figure 9 shows.

Figure 9. On the left, the measured emission spectrum of the Earth (satellite spectrum); on the right, the emission spectrum calculated using the radiative transfer equation.

The local thermodynamic radiative equilibrium (LTE) is based on Kirchhoff’s law of thermal radiation. According to this, the emissivity of a body is exactly as great as its absorption capacity. There is only one catch: there is no “law of conservation of radiant energy,” as is implicitly assumed in this mechanism [See the discussion at the end of this post. Ed.]. In fact, in the “comprehensible” calculation of the satellite spectra, the measured atmospheric temperature profile is inserted into the Planck radiation equation, as shown in Figure 10.

Figure 10. Radiative transfer, Planck equation and measured temperature gradient. The temperature profile is assumed!

As a reminder, the greenhouse effect is also about the temperature gradient profile that the atmosphere assumes in contact with the ground under adiabatic expansion. If you think about this, then the result – the temperature or the temperature gradient – is inserted into the Planck equation (see figure 10) in order to obtain the temperature-dependent emission spectrum of the atmosphere and the Earth’s body (see also figure 2). Therefore, the agreement of the spectra shown in Figure 9 is no proof of radiative transfer within the atmosphere. Rather, it is proof that one reasons in circles and then rejoices in the mathematically “proven” greenhouse effect in the sense of a law of conservation of radiant energy. Of course, you can also insert the measured emission within a layer (radiance L) into the Planck equation to then obtain the temperature. Even then, the calculations are circular.

Experimental studies and generally accepted theory clearly show that molecular fluorescence works differently from atomic fluorescence, in which 100% of the absorbed radiation is re-emitted. The latter is exploited in an atomic absorption spectrometer (AAS), which is used in instrumental analysis to quantify metals in samples (e.g. cadmium in a soil sample).

LTE requires "100% molecular fluorescence," which does not exist – not even in the IR range (see also Figure 3).[26, 27] While excited atoms can only return to the ground state by emitting radiation, relaxation in molecules usually takes place without radiation, following the principles of the Jablonski diagram,[28] because of their rotational and vibrational degrees of freedom. Therefore, it would actually be sufficient to describe the "theoretical" greenhouse effect, without any absorption, using only the temperature-dependent Planck equation multiplied by the respective band strengths of the "climate-impacting" trace gases and the number of IR-active molecules in an air volume. If the solid angle is taken into account, the thermal emission of the atmosphere into a half-space is obtained.

Result:

It goes without saying that atmospheric thermal radiation follows Planck's law. However, as long as a tropospheric temperature profile exists (colder at the top, warmer at the bottom) and convection contributes significantly to energy transport, the hypothesis that, in the open system of the atmosphere, the energy of IR radiation absorbed near the ground is passed upward by radiative transport is wrong. There is no "law of conservation of radiant energy." Rather, excited "ground-level greenhouse gas molecules" transfer their energy essentially to the non-IR-active main components of the atmosphere (N2 and O2).

Interestingly, the radiative transfer equation is used when the cooling behavior of large glass bodies must be controlled by modelling, for example in the production of telescope mirrors. This works very well, because there is no convection in a solidified glass melt at 600 K!

8. What climate models can’t do

Climate models are computer algorithms (read: "calculation rules" that reflect the opinion-dependent specifications of their programmers), not reality. Because the complexity of climate events cannot be captured realistically by any computer at present or in the foreseeable future, they are political instruments more than exact natural science.

8.1 The Flux Corrections

According to the hypothesis, the "natural" greenhouse effect is supposed to heat the globe by 33°C. It is erroneously assumed that the Earth, which is 70% covered with water, would behave like the completely waterless Moon. If one assumes that only the first 10 m of ocean depth thermostatically regulate the average Earth temperature, the calculation shows that the oceans store an amount of energy of 1.57 × 10^18 MJ [megajoules] in the temperature range from -18°C to +15°C. Here is the counter-calculation: in 24 hours, 1.43 × 10^16 MJ are turned over by the terrestrial greenhouse effect.[23] The entire natural greenhouse effect thus amounts to only 0.9% of the energy stored in the top 10 m of the oceans. This results in considerable difficulties in coupling atmospheric general circulation models with oceanic general circulation models. These problems can only be overcome with the help of so-called "flux corrections," the amounts of which, as Figure 11 shows, are many times greater than the anthropogenic greenhouse effect. The right-hand column in Figure 11 shows the radiative forcing of CO2 when it doubles. The 100 W/m2 flux correction for the ocean-surface-atmosphere coupling alone is about 27 times (!) greater than the anthropogenic greenhouse effect at CO2 doubling (3.7 W/m2).
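The two quoted figures can be checked with a few lines of arithmetic. This sketch assumes that the daily "turnover" is the 324 W/m2 back-radiation flux of Kiehl & Trenberth [23] integrated over the whole Earth surface for 24 hours; the ocean storage figure is taken as quoted above.

```python
# Back radiation per Kiehl & Trenberth (1997): 324 W/m^2, integrated
# over the Earth's surface (~5.1e14 m^2) for one day.
EARTH_SURFACE_M2 = 5.1e14
BACK_RADIATION_W_M2 = 324.0   # downward "greenhouse" flux
SECONDS_PER_DAY = 86_400

daily_turnover_mj = BACK_RADIATION_W_M2 * EARTH_SURFACE_M2 * SECONDS_PER_DAY / 1e6
ocean_storage_mj = 1.57e18    # quoted: top 10 m of ocean, -18 to +15 C

print(f"daily turnover: {daily_turnover_mj:.2e} MJ")         # ~1.43e16 MJ
print(f"share of storage: {daily_turnover_mj / ocean_storage_mj:.1%}")  # ~0.9%
```

The integration reproduces the quoted 1.43 × 10^16 MJ per day, and the ratio of the two figures reproduces the 0.9% stated in the text.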

Figure 11. Anthropogenic greenhouse effect (small bar on the right) in relation to the flux corrections applied in climate modelling. "Notwendige Flußkorrekturen zur Überbrückung der Unsicherheiten bei der Computermodellierung" means "necessary flux corrections to bridge the uncertainties in computer modelling."

It should not go unmentioned that relevant institutes have recently announced that flux corrections are no longer necessary. In fact, climate models still cannot do without them.

8. 2 The density of clouds

When estimating the radiative forcing of clouds, climate models come to extremely different results. For example, the Bureau of Meteorology Research Centre (BMRC) of Australia finds that clouds provide a cooling of about 1 W/m2, while the Laboratoire de Météorologie Dynamique (LMD) in France finds that clouds cause a warming of about 1.7 W/m2 (Figure 12). This, too, is worth noting!

Figure 12. Radiative forcing of clouds calculated with different climate models.

8. 3 Model-dependent parameters

In the publication by Hermann Flohn,[5] already cited several times, he presents the global temperature increase in a graph according to the relevant computer modelling (Figure 13). In the essay, Flohn refers to the state of climate modelling in 1984, on which the Rio92 Earth Summit was based a little later, in slightly improved form.

Figure 13. Mean surface temperature and the model-dependent parameter. The caption reads: "Change in the mean temperature of the Earth's surface (Ts) depending on the CO2 content of the atmosphere and the (model-dependent) parameter D = n·C·B⁻¹."

Of interest in the simple model that Flohn refers to is the model-dependent parameter D, which indicates the slope of the respective straight line.

The larger D is, the stronger the temperature increase. The quantity n is the ratio of the (uncertain) radiative forcing of all trace gases to the radiative forcing of CO2 alone:

Flohn nobly refers to B and C as "sensitivity parameters," which of course take into account the water vapour amplification (B = 1.8 W·m⁻²·K⁻¹, C = 6.5 W·m⁻²·K⁻¹). Flohn gives an uncertainty of about 20% for these parameters, i.e., depending on how n, B, and C are chosen, the computer produces more dramatic or less dramatic climate changes (!). I have no doubt that there are more sophisticated models today, but as long as the water vapour amplification mechanism is included in them, the greatest doubts are warranted.
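Taking the caption's D = n·C·B⁻¹ to mean D = n·C/B, the leverage of the quoted ~20% parameter uncertainty can be sketched in a few lines. The value n = 2 used here is purely a hypothetical placeholder; B and C are the values quoted above.

```python
# Sensitivity of Flohn's slope parameter D = n*C/B to a 20% uncertainty
# in the sensitivity parameters. n = 2 is a hypothetical placeholder.
B = 1.8   # sensitivity parameter, W m^-2 K^-1 (as quoted)
C = 6.5   # sensitivity parameter, W m^-2 K^-1 (as quoted)
N = 2.0   # hypothetical trace-gas/CO2 forcing ratio

def slope(n: float, c: float, b: float) -> float:
    """Slope D of the temperature-vs-CO2 line, D = n*C/B."""
    return n * c / b

d_nominal = slope(N, C, B)
d_low = slope(N, C * 0.8, B * 1.2)   # both parameters shifted against warming
d_high = slope(N, C * 1.2, B * 0.8)  # both parameters shifted toward warming
print(f"D ranges from {d_low:.1f} to {d_high:.1f} (nominal {d_nominal:.1f})")
```

With both parameters shifted by their quoted 20% in opposite directions, the slope – and hence the projected warming – varies by more than a factor of two, which is the point being made about parameter choice.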

8. 4 The Past

Furthermore, climate models fail to reproduce the past. To this day, they cannot reproduce the cooling between 1930 and 1975 shown in Figure 14 without artifices (atmospheric dust input by industrial society!). (The actual cause of this unusual course is shown in Figure 15.)

Figure 14. CO2 content and temperature profile

In the IPCC's 1990 assessment report, a curve appeared that showed the Roman and Medieval climate optima, times that were at least as warm as today. Greenland was settled in 875 by the Viking Erik "the Red" Thorvaldsson, the Bishop of Trondheim was able to harvest his own wine at that time, and there were high-altitude mountain villages that were abandoned during the Little Ice Age and are reappearing today as a result of the retreat of the glaciers. That is disturbing. That is why the formulation "We have to get rid of the Medieval Warm Period" circulated in IPCC circles. This was complied with. In 1999, Michael E. Mann et al. published the famous hockey stick curve,[1] in which all pre-industrial warm periods were "ironed out" and the present was portrayed as the warmest period in history. Stephen McIntyre and Ross McKitrick have since thoroughly refuted the study.[38] Among other things, almost any numbers could be fed into the program used by Mann, and the result was always a "hockey stick."

9. The Alternative

Since greenhouse gas-fixated computer climate models not only fail for the period between 1930 and 1970 but also cannot replicate the Little Ice Age (14th to 18th centuries) or the Medieval Climate Optimum (11th to 13th centuries), there must be another crucial mechanism.

There is much to suggest that this mechanism is cloud density, which is influenced by cosmic rays, as Henrik Svensmark, director of the Centre for Sun-Climate Research at the Danish National Space Center, has been emphasizing for years.[39] Cosmic rays consist mainly of protons, which penetrate our solar system as an echo of the Big Bang. When these positively charged nuclear building blocks enter the atmosphere, they promote the condensation of water vapor via a mechanism that is not yet fully understood, forming clouds. If the sun's magnetic field strengthens with higher solar activity, the protons are shielded more strongly. As a result, fewer clouds form, and the incident solar radiation can warm Earth's surface (the ocean surfaces!) more. The global temperature curve therefore follows the fluctuations of the solar magnetic field (Figure 15).

Figure 15. Solar magnetic field and global temperature [29, modified]. The German reads: “cooling period.”

Incidentally, measurements show that solar activity since the year 850 has never been as high as it was after 1940.[30] The cloud-cover-dependent fluctuation in solar radiation is particularly noticeable in the heat balance of the world's oceans. The anthropogenic greenhouse effect is thus likely to be a small quantity superimposed on a natural climate fluctuation.

Apart from that, the temperature of -18°C calculated under point 5 (equation 5), which Earth supposedly would have without greenhouse gases, seems to be set far too low. But it corresponds to the official doctrine, which – and this must be emphasized – is a hypothesis. Because, as already pointed out, Earth is not a waterless pile of rock in space but is 70% covered with water, direct absorption in the near-infrared (NIR) region and the strongly delayed emission of radiation by ocean water must be given more consideration.

Figure 16. Cloud cover (cloud density) and global temperature

Figure 16 shows that global cloud cover decreased from 69% to 65% between 1986 and 2000 (left ordinate, plotted “falling”). At the same time, the global mean temperature rose (right ordinate, plotted “rising”).

While daytime temperature fluctuations in the Sahara can easily reach ΔT = 50°C, oceans behave much more sluggishly. The total heat turnover (heat energy, not heat output!) of an ocean is the sum of many variables:

Qges = (QS – QA) – QK – QV – QT + QC + QE + QF + QR

QS = solar and sky radiation absorbed in the sea (= "greenhouse effect")

QA = effective outgoing radiation

QK = "sensible" heat transfer air-water

QV = latent heat transfer air-water (evaporation, condensation)

QT = heat transport by currents

QC = chemical-biological processes

QE = heat supply from the Earth's interior

QF = frictional heat

QR = radioactive decay
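The budget above can be written as a small function. All component magnitudes here are hypothetical placeholders (in petawatts), inserted only to show how the signs in the sum combine; none of them come from the article.

```python
# Hypothetical component values in PW (10^15 W); the names follow the
# budget above, the magnitudes are placeholders for illustration only.
budget_pw = {
    "Q_S": 60.0,   # absorbed solar and sky radiation
    "Q_A": 20.0,   # effective outgoing radiation
    "Q_K": 5.0,    # sensible heat transfer air-water
    "Q_V": 25.0,   # latent heat (evaporation/condensation)
    "Q_T": 3.0,    # heat transport by currents
    "Q_C": 0.1,    # chemical-biological processes
    "Q_E": 0.03,   # heat supply from the Earth's interior
    "Q_F": 0.01,   # frictional heat
    "Q_R": 0.001,  # radioactive decay
}

def total_heat_flux(q: dict[str, float]) -> float:
    """Qges = (QS - QA) - QK - QV - QT + QC + QE + QF + QR"""
    return ((q["Q_S"] - q["Q_A"]) - q["Q_K"] - q["Q_V"] - q["Q_T"]
            + q["Q_C"] + q["Q_E"] + q["Q_F"] + q["Q_R"])

print(total_heat_flux(budget_pw))  # net heat flux into the ocean, PW
```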

Since the heat storage capacity of water is considerably higher than that of rock, it is impossible for the Earth to cool abruptly by 50°C at night where the water-air system exists. In particular, the radiated power dQA/dt differs significantly from that of the Sahara because of the much higher heat storage capacity of seawater. According to the Stefan-Boltzmann equation (equation 6), the specific radiation (radiant exitance) of Earth's surface is directly related to its temperature: the higher the temperature, the greater the radiation.

However, it can be seen from the above list that the energy content of seawater also depends on the heat supply from the Earth's interior, on chemical-biological processes, on radioactive decay, and on frictional heat. The frictional heat QF depends on the wind speed. As H. Volz reported at a conference of the Bavarian Academy of Sciences and Humanities, the specific radiation at wind forces 0 and 7 differs by about ΔM = 11.1 W/m2.[31] However, the inflow/outflow balance assumes a calm sea. If you add this quantity to the specific radiation at 15°C (239.0 W/m2), you get:

M = 239.0 W/m2 + 11.1 W/m2 = 250.1 W/m2

Inserted into equation 2, this yields T = 257.7 K (-15.3°C). This temperature is 2.7°C higher than the aforementioned -18°C (255 K). The greenhouse gases then do not raise the mean temperature by 33°C but "only" by 30.3°C, insofar as the thesis is correct that the "normal" average temperature of the Earth is +15°C. How high was it actually during the Medieval Climate Optimum (11th-13th centuries) and during the Little Ice Age (14th-18th centuries)?
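The inversion used in this step can be checked directly; this is a sketch with the emissivity taken as 1.

```python
# Invert the Stefan-Boltzmann law, M = sigma * T^4, to check the
# quoted temperature (emissivity assumed to be 1 for this sketch).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_from_exitance(m_w_m2: float) -> float:
    """Temperature of a black body emitting m_w_m2 W/m^2."""
    return (m_w_m2 / SIGMA) ** 0.25

t_windy = temperature_from_exitance(250.1)  # 239.0 + 11.1 W/m^2
print(round(t_windy, 1))                    # ~257.7 K
print(round(t_windy - 255.0, 1))            # ~2.7 K above the 255 K reference
```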

There is a further point. About 50% of the solar radiation that reaches the ground is thermal radiation (near and mid IR). This is absorbed by seawater. The radiation-fixated greenhouse gas theory balances only the daily irradiation and emission, as if the world's oceans could suddenly cool to -18°C on the night side. However, the thermal inertia of the oceans is very high, with a relaxation time of up to 200 years. Consequently, the average global equilibrium temperature of the oceans without atmospheric greenhouse gases is likely to be around +4°C (water of highest density at the bottom of a frozen body of water) rather than -18°C.

Some time ago, I corresponded with a former head of a climate computing center and asked him how high the Earth's mean temperature would be without oceans but with the current atmospheric greenhouse gas content. I was told that this was an interesting question, but that it had not yet been calculated.

Summary

Finally, I would like to draw the reader's attention to the fact that the current CO2 content is assigned a different temperature effect depending on the literature reference. In the book The Global Climate, edited by J.T. Houghton, Kondratyev and Moskalenko give 7.2 K.[15] The authors cite themselves.[16] If you obtain the book, written in Cyrillic, and look at the page indicated, you will find no information about how Kondratyev and Moskalenko arrive at the 7.2 K mentioned. On the other hand, one seems quite sure of it, because the authors are often quoted.[17] However, there are contradictions: K.P. Shine[18] gives a different value, namely 12 K, and R. Lindzen[19] assumes that only about 5% of the natural greenhouse effect can be attributed to CO2. That would be 1.65 K, less than a quarter of the 7.2 K value used by the IPCC.
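The three quoted estimates can be put side by side as shares of the conventionally assumed 33 K total greenhouse effect (a simple arithmetic comparison, nothing more):

```python
# The three quoted estimates of CO2's contribution to the ~33 K
# natural greenhouse effect, expressed as shares of the total.
TOTAL_EFFECT_K = 33.0
estimates_k = {
    "Kondratyev & Moskalenko [15]": 7.2,
    "Shine [18]": 12.0,
    "Lindzen [19] (5% of 33 K)": 0.05 * TOTAL_EFFECT_K,
}
for source, kelvin in estimates_k.items():
    print(f"{source}: {kelvin:.2f} K = {kelvin / TOTAL_EFFECT_K:.0%} of the total")
```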

Dr. Markus Ott helped with this translation.

Literature

[1] Michael E. Mann, Raymond S. Bradley und Malcolm K. Hughes (1999): Northern Hemisphere Temperatures During the Past Millennium: Inferences, Uncertainties, and Limitations, in: Geophysical Research Letters, Vol. 26, No. 6, S. 759–762

[2] R. Revelle, Scientific American, 247, No. 2, Aug. 1982, 33-41

[3] J. Barrett, Spectrochim. Acta Part A, 51, 415 (1995)
[4] R.A. Hanel et al. Journal of Geophysical Research, 77, 2629-2641 (1972)

[5] H. Flohn, Nachr. Chem.Tech.Lab, 32, 305-309 (1984)

[6] L.S.Rothman et al., Appl.Opt. 26, 4058 (1987)

[7] H. Hug, Chemische Rundschau, 20. Febr., S. 9 (1998)

[8] P. S. Braterman, Spectrochim. Acta Part A, 52, 1565 (1996)

[9] K. Shine, Spectrochim. Acta Part A, 51, 1393 (1995)

[10] J. Houghton, Spectrochim. Acta Part A, 51, 1391 (1995)

[11] R. S. Courtney, Spectrochim. Acta Part A, 53, 1601 (1997)

[12] R. P. Wayne, Chemistry of Atmospheres, Oxford University Press, 2nd Edition, 44-49 (1991)

[13] Murry L. Salby, Fundamentals of Atmospheric Physics, Academic Press, 198-257 (1996)

[14] Climate Change 1990. The IPCC Scientific Assessment, p. 49

[15] K.Ya. Kondratyev, N.I. Moskalenko in J.T. Houghton, The Global Climate, Cambridge University Press, 225-233 (1984)

[16] K.Ya. Kondratyev, N.I. Moskalenko, Thermal Emission of Planets, Gidrometeoizdat, 263 pp (1977) (Russian)

[17] C.D. Schönwiese, Klimaänderungen, Springer-Verlag Berlin Heidelberg, p. 135 (1995)

[18] K. P. Shine, A. Sinha, Nature 354, 382 (1991)

[19] R. S. Lindzen, Proc. Nat. Acad. of Sciences, 94, 8335 (1997)

[20] R. Raschke, R. Hollman, Radiation Transfer in the Atmosphere, Modelling and Measurement, Preprint for the CO2 Colloquium of DECHEMA in Frankfurt/Main on 11.10.2001

[21] T. E. Graedel, Paul J. Crutzen, Chemistry of the Atmosphere, Spektrum Akademischer Verlag, Heidelberg, Berlin, Oxford 1993, p. 414

[22] IPCC, Climate Change 2001, Chap.  1.2.1 Natural Forcing of the Climate System

[23] J. T. Kiehl, K. E. Trenberth, Bull. Amer. Meteor. Soc.78 (1997) 197

[24] IPCC, Climate Change 1994, Radiative Forcing of Climate Change and Evaluation of the IPCC IS92 Emission Scenarios, Cambridge University Press, p. 174

[25] IPCC, Climate Change 2001, Working Group I: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, Chap. 7.2.1.1

[26] H. Hug, Energy & Environment, 11, 631, (2000)

[27] N. D. Coggeshall and E. L. Saier, J. Chem. Phys., 15, 65, (1947), Fig. 1

[28] Matthias Otto, Analytical Chemistry, Wiley-VCH Verlag, Weinheim (2000), p. 280 ff

[29] S. Solanki, M. Schüssler, M. Fligge, Nature, 408, 445 (2000)

[30] I. G. Usoskin, S. K. Solanki, M. Schüssler, K. Mursula, K. Alanako, Phys. Rev. Let., 91 (2003) 211101-1

[31] Round Tables of the Commission on Ecology, Climate Change in the 20th and 21st Centuries: “What Role Do Carbon Dioxide, Water and Greenhouse Gases Really Play?” Bavarian Academy of Sciences and Humanities, Verlag Dr. Friedrich Pfeil, Munich, April 2005, p. 93

[32] Water vapour graph according to H. Flohn, BdW 12/1978, p. 132

[33] U. Cubasch, Phys. Bl., 51, 269 (1995)

[34] H. Hug, Die Angsttrompeter, Signum Verlag, Munich, 2006, p. 227

[35] Private communication Jack Barrett, 2001 and 2012

[36] E. Raschke et al., Chemische Rundschau, 23 Oct., p. 9 (1998)

[37] http://www.davidarchibald.info/papers/Failure%20To%20Warm.pdf

[38] Stephen McIntyre, Ross McKitrick, Geophysical Research Letters, Vol. 32, L03710, 5 pp, 2005

[39] Henrik Svensmark, A&G, February 2007, Vol. 48, p 1.18

Heinz Hug, for EIKE; Wiesbaden August 2012
