Moving to Texas

By Andy May

“You may all go to hell and I will go to Texas” Davy Crockett, 1835

According to the Texas Tribune in 2016, Texas has become the top destination for people moving from other states, and leading the way are people from California. Beginning in 2005, Texas has outpaced all other states in population growth. Half of the growth is due to people moving to Texas. From 2005 to 2013, 5.9 million people moved to Texas, and 4.8 million of those came from other states. In 2013, net migration to Texas was 126,230, the difference between those moving to Texas and those moving away.

Further, Pew Research reports that Texas is the “stickiest” state, meaning that more than 75% of those born in Texas still live here. Why is this? Davy Crockett also wrote the following to his children in 1836 after moving to Texas:

“I must say as to what I have seen of Texas, it is the garden spot of the world.”

Texas is my adopted state; I wasn’t born here, but I do agree with Mr. Crockett. It is a beautiful place, and it is marvelously easy to live well here. John Steinbeck wrote in Travels with Charley:

“I have said that Texas is a state of mind, but I think it is more than that. It is a mystique closely approximating a religion. And this is true to the extent that people either passionately love Texas or passionately hate it and, as in other religions, few people dare to inspect it for fear of losing their bearings in mystery or paradox. But I think there will be little quarrel with my feeling that Texas is one thing. For all its enormous range of space, climate, and physical appearance, and for all the internal squabbles, contentions, and strivings, Texas has a tight cohesiveness perhaps stronger than any other section of America. Rich, poor, Panhandle, Gulf, city, country, Texas is the obsession, the proper study and the passionate possession of all Texans.”

How very true; we are all very proud of our state. Texas is also very welcoming of visitors and immigrants, and it is well known for being friendly. This is a point of pride in the state, and it shows. But there are other reasons why people flock to our state.

The late Molly Ivins was not born in Texas either, but she was raised in the affluent River Oaks neighborhood of Houston, Texas, by her father “General Jim.” She graduated from St. John’s, an independent school that is one of the top prep schools in the country. She was often a critic of our state. She once said:

“I dearly love the state of Texas, but I consider that a harmless perversion on my part, and discuss it only with consenting adults.”

Texas politics is famous, and rightly so. Texans acknowledge that some sort of government is required, but we keep it on a very short leash. Our Legislature is only allowed to meet once every two years to minimize the damage they can do. It also means all the legislators have “real” jobs and don’t rely solely on the government (in Texan that’s “gov’ment”). They are only allowed to meet for a maximum of 140 days, another rule meant to limit potential damage. About politics in Texas, Molly Ivins once said:

“Good thing we’ve still got politics in Texas – finest form of free entertainment ever invented.”

Another famous Texan is Kinky Friedman. He was born in Chicago, but his parents moved to a ranch near Austin when he was very young. He graduated from Austin High School in 1962 and the University of Texas at Austin in 1966. His nickname comes from his very curly hair. Friedman is Jewish, so naturally his band was called Kinky Friedman and the Texas Jewboys. Friedman’s father hated the name of the band, which was a big part of Kinky’s motivation to keep it. The band produced 16 albums and several singles. Friedman was also a columnist for Texas Monthly and has written several books. He ran for governor of Texas, receiving 13% of the vote. His campaign slogan was “Why the hell not?” Kinky once said:

“How can you look at the Texas legislature and still believe in intelligent design?”

Kinky was for decriminalizing marijuana, his reasoning was:

“We’ve got to clear some of the room out of the prisons so we can put the bad guys in there, like the pedophiles and the politicians.”

Texas has had its share of colorful governors; one of my favorites is Ann Richards, who was born in Texas. The photos below make that very clear.

Willie Nelson is a rare breed in Texas today: he was born here, in Abbott, Texas. His parents left him with his grandparents, who raised him. The grandparents taught singing and music and started Willie on the guitar when he was only six. When he was young, he and the family picked cotton along with the other citizens of Abbott, but Willie hated it. So, he earned extra money by singing in dance halls, taverns and honky-tonks from the age of 13. He graduated from Abbott High School, and his first band, formed by his brother-in-law, was called The Texans. Willie once said:

“I’m from Texas and one of the reasons I like Texas is because there’s no one in control.”

Think about that when you consider Richard Daley and his son running Chicago, Tom Pendergast running Kansas City, and Tammany Hall in New York. Corruption like that is not likely to happen in Texas; we never let anyone get that much control. Texas is the fifth most libertarian state in the country and the only major state with a large libertarian population. This is one of the few states where libertarians have someone on the ballot for almost every local and state office.

Government is not very important in Texas, and this gives us a great advantage over our sister states. We are not kind to politicians who want to “run” things. This goes back to our early days. Sam Houston once said (figure 1):

Figure 1

OK, we’ve discussed the natural beauty of the state, the nice people and our so-called government. Another important reason people and businesses move to Texas is the business-friendly atmosphere and the robust economy. Texas is #1 in combined foreign and domestic business investment (link). It has also been the country’s top exporter for 14 years running, with $251 billion in exports, 16% of all US exports. Texas also outpaced California in high-tech exports ($6.3 billion) for the last three years. Texas was ranked #1 as a place to do business by U.S. CEOs in 2016. Texas is also #1 in job creation, adding more than 1.8 million jobs since 2007.

More people move to Texas from California than from any other state, so contrasting the two states is instructive. Why has California become a state to escape from rather than a place to move to? Simply put, it has too much government, too many regulations and taxes that are too high (link).

The American Legislative Exchange Council (ALEC) says that California has the fourth-highest tax burden in the country. The state’s top marginal income tax rate is the worst in the country, and its top marginal corporate rate is not much better (40th). Its personal income tax progressivity is in last place.

ALEC also reports that California’s civil court system (lawsuits) is among the worst, and the state ranks 44th in economic outlook. This is very demoralizing for businesses.

Only New York had higher net domestic out-migration than California from 2004 to 2013, when 1,394,911 people abandoned California.

Last year, business location consultant Joseph Vranich wrote that

“in California, costs to run a business are higher than in other states and nations — largely due to the state’s tax and regulatory policies — and the business climate shows little chance of improving.”

Recently, Jamba Juice, Toyota, Occidental, Carl’s Jr., Jacobs Engineering and Kubota have all moved to Texas from California. In all, from 2008 to 2014, 219 businesses moved, or moved operations, from California to Texas. In the same period, California lost 1,510 businesses. Texas gained 37,553 jobs and $6.5 billion in investment from California alone. Texas has cheap energy, an educated work force, low taxes, minimal red tape and great universities. These are all very attractive to business.

From other areas, JP Morgan, Fannie Mae, and the German company Siemens Oil and Gas have also moved to Texas. The Austrian steel maker Voestalpine moved a large operation to Texas, as did the Chinese company Tianjin Pipe Group.

Figure 2 compares migration from Texas to several states with migration from those same states to Texas for 2013. In all cases, except for Oklahoma and Colorado, the net population movement is to Texas.

Figure 2 (source)

Regarding foreign immigration, the top three receiving states, in order, are California, Texas and Florida. Of the immigrants to Texas, 83% are Asian or Latino; see figure 3.

Figure 3 (source)

Many foreign-born immigrants move to Texas from California; see figure 4.

Figure 4 (source)

From the Texas publication Origins of Immigrants to Texas:

“Since 2005, Texas has outpaced all other states in annual population growth. Almost half of this growth occurred because of people moving to Texas. Close to one in six of these movers immigrated to Texas from another country. Texas, with the nation’s second largest population, attracted the second highest number of immigrants between 2005 and 2013. Although immigration to Texas experienced a strong decline during the 2007-2009 recession, it has been on the rise since 2010. This rebound occurred even as Mexican immigration to Texas fell sharply. The recent decline in Mexican immigration has been partially offset by an increase in the number of non-Latin American immigrants, particularly those of Asian-origin. As a consequence, total [net] immigration to Texas in 2013 reached 126,230, the second highest level during the 2005-2013 time period. Given the state’s high rate of natural increase, a continuation of recent immigration trends will ensure strong population growth into the foreseeable future.”

Texas maintains a welcoming and friendly atmosphere and is very pro-business. We keep regulations to the minimum and taxes low. Our government does its best to stay out of the way and let people run their own lives. This seems to work; the people just keep moving here.

Exergy and Power Plants

By Andy May

Key question: Can renewables ever replace fossil fuels and nuclear?

Understanding the value of renewables, vis-à-vis fossil fuels and nuclear power, requires recognizing that not all energy is equal in value. In fact, the quantity we call energy can be misleading, and many experts prefer the quantity called “exergy,” which is defined in economics as (source Exergy Economics):

“The maximum useful work which can be extracted from a system as it reversibly comes into equilibrium with its environment.”

Or it can be thought of as the measure of potential work embodied in a material or device. As Ayres, et al. (1998) argue, exergy is a more natural choice as a measure of resource quantity than either mass or energy. Even today, BTUs, a measure of heat of combustion, or MToe, million tonnes of oil equivalent, are commonly used and mislabeled as energy (see the Exxon Outlook, 2017 or the BP Energy Outlook, 2017). In a previous post (here) I discussed EROI, or energy returned from energy invested. I complained in that post about the inconsistency and inaccuracy in current EROI and LCOE (levelized cost of electricity) calculations. The problems mostly stemmed from comparing energy or electricity output from different sources (solar, wind, natural gas, coal, nuclear) as if all produced energy were equally valuable, which it isn’t. While comparing the heat of combustion or million tonnes of oil equivalent is clearly incorrect, Rud Istvan and Planning Engineer show that comparing the cost of producing megawatts of electricity, as the IEA and EIA do, is also incorrect; see here and here. Since exergy is a measure of useful work, it helps get around that problem. In a comment to that post, Captain Ike Kiefer posted a reference to Weißbach, et al. (2013), which has a much more valid EROI comparison (see figure 2) of conventional and renewable electricity sources in Germany. Since Germany is, in many ways, a testbed of renewable energy sources for the world, this is very helpful.
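The point that equal amounts of “energy” are not equally valuable can be made concrete with the textbook Carnot-factor formula for the exergy of heat. This is a minimal sketch; the temperatures and heat quantities below are illustrative assumptions, not figures from any of the sources cited here.

```python
# Exergy of a heat flow: heat Q delivered at temperature T, in an environment
# at temperature T0, can yield at most Q * (1 - T0/T) of useful work (the
# Carnot factor). Two equal "energy" amounts can thus have very different
# exergy. All numbers below are illustrative.

def heat_exergy(q_joules: float, t_source_k: float, t_env_k: float) -> float:
    """Maximum useful work extractable from heat q_joules at t_source_k,
    relative to an environment at t_env_k (temperatures in kelvin)."""
    if t_source_k <= t_env_k:
        return 0.0  # no work can be extracted from heat at or below ambient
    return q_joules * (1.0 - t_env_k / t_source_k)

env = 298.0  # roughly 25 C ambient

# 1 MJ of heat from a power-plant boiler (~800 K) vs. from warm water (~320 K):
high_grade = heat_exergy(1e6, 800.0, env)  # ~627,500 J of potential work
low_grade = heat_exergy(1e6, 320.0, env)   # ~68,750 J of potential work

print(f"high-grade heat exergy: {high_grade:.0f} J")
print(f"low-grade heat exergy:  {low_grade:.0f} J")
```

The same megajoule of heat carries roughly nine times more potential work at boiler temperature than at warm-water temperature, which is why heat-of-combustion totals like BTUs or MToe mislead when sources are compared as if interchangeable.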



Renewable Energy, what is the cost?

By Andy May

What are the costs of using renewable energy? The sun and wind are free; does that make wind and solar power free? Biofuels require power to plant crops, make fertilizer and spread it, harvest the plants, and make and transport the ethanol. Solar and wind require power to produce, transport and install the equipment. All renewable energy sources require lots of land per megawatt of electricity produced. We will not be able to determine a cost for renewable power in this essay, but we can discuss the components of the calculation and provide some context. A key question to think about: do renewable fuels decrease fossil fuel use, or do they increase it?


The timing of Interglacials

By Andy May

P. C. Tzedakis and co-authors have just published a new paper in the February 23, 2017 issue of Nature entitled “A simple rule to determine which insolation cycles lead to interglacials.” The paper introduces new rules for defining interglacial periods in the geological record. They come up with the same interglacial periods that Javier identified in his post Nature Unbound I: The Glacial Cycle.

The Earth has been in an ice age for the last 2.6 million years. Javier defined an ice age as:

“… any period when there are extensive ice sheets over vast land regions, as we see now.”

Tzedakis, et al. note that

“The fundamental property that underlies the concept of an interglacial is high sea-level.”

The higher sea-level is a result of melting a significant amount of land-ice during the interglacial. We are currently in the “Quaternary Ice Age,” which is either the coldest or the second coldest period in the last 500 million years as can be seen in figures 1 and 2. These are the most popular temperature reconstructions of the past 540 million years. Ice ages (or a collection of closely spaced continental glacial periods) have occurred in the geological record roughly every 150 million years in the Phanerozoic. The cause of these cold periods is not known, but we are clearly in one now.

Figure 1, source Veizer, et al., 1999 and Wikipedia

Figure 2, Phanerozoic temperatures, source Geocraft

The current (Quaternary) ice age is punctuated by warm periods, called interglacials. These warm periods are identified in the geological record by rising sea level. They persist for about 15,000 years on average and are typically 4° to 5°C warmer than the preceding glacial period, with the difference much larger at the poles than at the equator. Glacial periods are much longer than interglacials and are the norm for the Quaternary; the warm interglacials are the anomaly. As discussed in Nature Unbound I and in Tzedakis, et al., 2017, we have had 13 interglacial periods in the past one million years. These are identified with red bars in Figure 3 (Javier’s figure 12).

Figure 3, Orbital obliquity increases, which correlate to July insolation peaks at 65°N, are colored. Red identifies successful interglacials and blue identifies a failure. The labels are MIS numbers. Low late-glacial temperatures (red circles below the blue dashed line) stimulate interglacials. High insolation at 65°N, the green circles above the green dashed line also stimulate interglacials. MIS 13 is an anomaly. Source Nature Unbound I.

The same interglacials are identified, with slightly different nomenclature, in figure 2 (our figure 4) of Tzedakis et al. The numbers in figure 3 and across the top of figure 4 are the Marine Isotope Stage (MIS) number, the odd numbers refer to “interstadials” which are warmer periods, separating the even numbered “stadials” or cooler periods. Notice that both Tzedakis et al. and Javier find more than one interglacial in MIS 7 and 15. We are currently living in MIS 1. Some interstadials are significant enough (as judged by the rise in sea level) to be labeled interglacials and some are not. One of the problems in Quaternary geology is how to objectively tell a true interglacial period from a common interstadial. Javier and Tzedakis, et al. have different criteria, but come to very similar conclusions.

Figure 4, Obliquity peaks are shaded in gray, the black line is the caloric summer half-year insolation at 65°N, the red circles are insolation maxima nearest the onset of interglacials, black diamonds are continued interglacials, light blue triangles are failed interstadials. The orange line is the δ18O stack representing temperature. The upper numbers are MIS numbers for interglacials and the lower are kyrs (thousands of years) before present or the number of a continued interglacial or a failed interstadial. The “Mid-Pleistocene Transition” toward lower-frequency higher-amplitude glacial cycles is apparent near MIS 38/37. Source Tzedakis, et al., Nature, 2017.

Javier’s methodology for identifying interglacials begins with locating every period of rising obliquity, which creates a window that can initiate an interglacial. Fewer than half of these periods result in an interglacial. Next, he looks for the periods where summer insolation at 65°N exceeds 550 W/m2 and where the preceding glacial period is cold enough, as measured by a benthic δ18O threshold of 4.55‰. δ18O is a common proxy for atmospheric temperature because the colder it gets, the less 18O is found in glacier ice. The boundaries and the resulting classification are shown in figure 3.
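The three conditions just described can be written as a simple classifier. The two thresholds (550 W/m2 and 4.55‰) come from the text; the candidate records below, and the exact form of the cold-glacial test (higher benthic δ18O means a colder preceding glacial), are illustrative assumptions rather than Javier’s actual data handling.

```python
# Sketch of the classification logic: a rising-obliquity window becomes an
# interglacial only if 65N summer insolation is strong enough AND the
# preceding glacial was cold enough (high benthic d18O = cold).

def is_interglacial(obliquity_rising: bool,
                    peak_insolation_wm2: float,
                    preglacial_d18o_permil: float) -> bool:
    """Apply the two thresholds from the text to one candidate warm period."""
    return (obliquity_rising
            and peak_insolation_wm2 > 550.0
            and preglacial_d18o_permil >= 4.55)

# Hypothetical candidates, not real Marine Isotope Stage data:
candidates = [
    ("A", True, 560.0, 4.7),   # strong insolation, cold glacial -> interglacial
    ("B", True, 530.0, 4.8),   # insolation too weak -> failed interstadial
    ("C", False, 575.0, 4.9),  # obliquity falling -> no window opens
]
for name, rising, insol, d18o in candidates:
    verdict = "interglacial" if is_interglacial(rising, insol, d18o) else "not an interglacial"
    print(name, verdict)
```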

Tzedakis, et al. (2017) use a different methodology that results in the same set of interglacials for the past one million years. The methodology is summarized in figure 5.

Figure 5: Temperature peaks for the last 2.6 million years separated into successful interglacials (red dots), failed interglacials (blue diamonds), continued interglacials (black diamonds) and uncertain assignments (open symbols). The dashed black line separates successful interglacials from unsuccessful interstadials with only two misclassifications (59 and 63). The ramp in the dashed line is the “mid-Pleistocene transition.” Source: Tzedakis, et al., 2017.

Figure 5 plots the effective energy required to cause an interglacial versus time. As can be seen, more effective energy is required to initiate an interglacial over the past 600,000 years than before 1.5 million years ago. In figure 4, interglacials (red circles) were more frequent and more regular before 1.5 million years ago, when they corresponded to the 41,000-year obliquity cycle. Peak summer solstice insolation at 65°N is a function of the 21,000-year precession cycle, but rising obliquity enhances the “caloric half-year insolation at 65°N,” which is more relevant to ice loss. Prior to 1.5 million years ago, every other insolation peak at 65°N was boosted by increasing obliquity and an interglacial would occur. The idea of “caloric summer half-year insolation” originated with Milanković.

More recent interglacials occur about 100,000 years apart, meaning more insolation peaks are skipped now than before 1.5 million years ago. Thus, glacial periods are longer now and average ice volume is larger than in the past. The ramp between the two horizontal lines is the mid-Pleistocene transition (MPT). Effective energy is computed using equation one of Tzedakis, et al., 2017, from the caloric summer half-year insolation peak at 65°N (in GJ/m2) and the time since the previous interglacial period. Tzedakis, et al. explain including the time since the previous interglacial in terms of ice stability: the longer the ice has existed and the thicker it is, the more unstable it becomes.
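The structure of the effective-energy idea can be sketched as a toy formula: the insolation peak is credited with a bonus that grows with time since the last deglaciation, because older, thicker ice sheets are less stable. The coefficient `k` below is purely illustrative; the fitted form and value are in equation one of Tzedakis, et al. (2017), not reproduced here.

```python
# Toy version of "effective energy": insolation peak (GJ/m2) plus a
# time-dependent instability bonus. k is a made-up coefficient, chosen only
# to illustrate the shape of the rule, not the paper's fitted value.

def effective_energy(peak_insolation_gj_m2: float,
                     kyr_since_last_interglacial: float,
                     k: float = 0.01) -> float:
    """Insolation peak plus a bonus proportional to elapsed time (kyr)."""
    return peak_insolation_gj_m2 + k * kyr_since_last_interglacial

# The same (weak) insolation peak scores higher against very old ice:
print(effective_energy(5.5, 20.0))   # young, stable ice: 5.7
print(effective_energy(5.5, 100.0))  # old, unstable ice: 6.5
```

This is why an insolation peak too weak to end a young glaciation can still terminate one that has lasted 100,000 years.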

Why recent interglacials require more effective energy to initiate is not known. Tzedakis, et al. list several possible reasons, but do not offer a preferred theory. Why glacial periods are more severe today than prior to 1.5 million years ago is also not known.

Clark, et al. (2006) noted that the severity of glacial periods and the total land-ice volume increased dramatically after the mid-Pleistocene transition. The additional land-ice present now, versus before the MPT, represents a decrease of 50 meters of sea-level equivalent. While land-ice volume increased after the MPT, the area covered with ice did not, suggesting that average land-ice thickness increased. Clark, et al. (2006) also estimate a decrease in global deep-water ocean temperature of 1.2°C currently, relative to the pre-MPT period of 41,000-year glaciations. Thus, we are not only in a major ice age, we are also in the coldest part of the current ice age.

So, although Javier and Tzedakis, et al. used different criteria, they identified the same interglacials for the past million years. Tzedakis et al.’s method classifies all but two interglacials correctly for the past 2.6 million years, and it uses only orbital forcing and elapsed time as input. This last point is important: they found no need to incorporate either CO2 concentration or δ18O records. This suggests that glaciations are caused solely by astronomical forcing, although the reason for the MPT is unclear. Tzedakis, et al. is also important because they seem to have resolved most, if not all, outstanding problems with the original Milanković theory.

Global Climate Models

By Andy May

Global Climate Models (GCMs) are used to compute the social cost of carbon dioxide emissions and to compute man’s contribution to recent global warming. The assertion that most of “climate change” is due to man’s influence is based solely on these models. They are also the sole basis for concluding that “climate change” is dangerous. Just how accurate are they? How close are their predictions to observations?

Dr. Judith Curry has written an important white paper, for the layman, describing how the models work. It is easy to understand and well worth reading.

Her key conclusions:

GCMs have not been subject to the rigorous verification and validation that is the norm for engineering and regulatory science.

There are numerous arguments supporting the conclusion that climate models are not fit for the purpose of identifying with high confidence the proportion of the 20th century warming that was human-caused as opposed to natural.

There is growing evidence that climate models predict too much warming from increased atmospheric carbon dioxide.

Some portions of the GCMs are rooted in fundamental physics and chemistry, but there are thousands of atmospheric and surface processes that cannot be deterministically modeled and must be “parameterized” using simple empirical formulas based on observations. These empirical formulas are “tuned” or “calibrated” to make the models match observations. They are tweaked to match the twentieth century, especially the warming period from 1945 to 2000. Even with all of the tuning, the models do a very poor job matching the warming from 1910 to 1945.
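The parameterize-and-tune procedure described above can be illustrated with a toy model. Everything here is invented for illustration (the “observations,” the “forcing” series, and the one-parameter formula); it is not a real GCM parameterization, but it shows the mechanics: a free parameter is adjusted until model output matches the observational record.

```python
# Toy "tuning": replace a complex process with a one-parameter empirical
# formula (anomaly = k * forcing) and pick k to minimize the mismatch with
# observations. All data below are made up.
import statistics

obs = [0.1, 0.2, 0.35, 0.5]      # "observed" anomalies (invented)
forcing = [1.0, 2.0, 3.5, 5.0]   # "forcing" values (invented)

def mse(k: float) -> float:
    """Mean squared error between the tuned formula and the observations."""
    return statistics.fmean((o - k * f) ** 2 for o, f in zip(obs, forcing))

# Grid search over candidate parameter values, keeping the best fit:
best_k = min((k / 1000 for k in range(1, 500)), key=mse)
print(round(best_k, 3))  # the tuned parameter
```

The catch the text goes on to name is that a model tuned this way is then often evaluated against the very observations used to pick `best_k`, which is why such agreement is weak evidence of skill.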

Since all models are “tuned” to the twentieth century (see Voosen, Science, 2016), and since the “more than half” of warming is due to man conclusion is based upon comparing two model runs “from 1951 to 2010,” the validity of the computation of man’s influence is highly questionable. Dr. Curry points out:

“GCMs are evaluated against the same observations used for model tuning.”

This is not something that inspires confidence. Further, the Earth has been warming for 300 to 400 years, as Dr. Curry writes:

“Understanding and explaining the climate variability over the past 400 years, prior to 1950, has received far too little attention. Without this understanding, we should place little confidence in the IPCC’s explanations of warming since 1950.”

She adds:

“Anthropogenic (human-caused) climate change is a theory in which the basic mechanism is well understood, but of which the potential magnitude is highly uncertain.”

Precisely so.

Risk and Nuclear Power Plants

By Andy May

The financial risk is too great.

Updated post (2/21/2017)

In any discussion of the future of energy, nuclear power generation is brought up. Once a nuclear power plant is built and operating, it can produce cheap electricity reliably for decades. Further, in terms of human health, some claim it is the safest source of energy in the U.S. Others, like Benjamin Sovacool, claim the worldwide economic cost of nuclear accidents (worldwide total: $177B) is higher than for any other energy source and that nuclear power is less safe than all other sources of energy except hydroelectric power. Some of the costs could be due to an over-reaction to nuclear accidents, especially Chernobyl and Fukushima. Others have much lower fatality estimates than Sovacool; it is unclear how many later cases of cancer are, or will be, due to Chernobyl.

Permitting a new nuclear power plant and building it is a problem because there have been more than 105 significant nuclear accidents around the world since 1952, out of an IAEA total of 2,400 separate incidents. Thirty-three serious nuclear accidents compiled by The Guardian are listed and ranked here and mapped in figure 1. As figure 1 shows, these incidents have occurred all over the world; some were due to design flaws, like the Fukushima-Daiichi 2011 disaster, and some to human error, like the loss of a Cobalt-60 source in Ikitelli, Turkey.

Figure 1: All nuclear power plant incidents, source The Guardian.


Oil – Will we run out?

By Andy May

“Prediction is very difficult, especially about the future” (old Danish proverb, sometimes attributed to Niels Bohr)

In November 2016, the USGS (United States Geological Survey) reported its assessment of the recent discovery of 20 billion barrels of oil equivalent (technically recoverable) in the Midland Basin of West Texas. About the same time, IHS researcher Peter Blomquist published an estimate of 35 billion barrels. Compare these estimates with Ghawar Field in Saudi Arabia, the largest conventional oil field in the world, which contained 80 billion barrels when discovered. There is an old saying in the oil and gas exploration business: “big discoveries get bigger and small discoveries get smaller.” As a retired petrophysicist who has been involved with many discoveries of all sizes, I can say this is what I’ve always seen, although I have no statistics to back the statement up. Twenty or thirty years from now, when the field is mostly developed, it is very likely the estimated ultimate hydrocarbon recovery from the field will be larger than either of those estimates.

The technology for producing this sort of shale oil was invented very recently, well after Marion King Hubbert produced the “Hubbert curve” predicting that U.S. oil production would peak in the early 1970s. As Daniel Yergin points out in The Quest:

“Hubbert got the date right, but his projection on supply was far off. Hubbert greatly underestimated the amount of oil that would be found, and produced, in the United States. By 2010 U.S. production was four times higher than Hubbert had estimated: 5.9 million barrels per day versus Hubbert’s 1971 estimate of no more than 1.5 million barrels per day.”

A comparison of actual oil production with a version of Hubbert’s curve is shown in figure 5 (this curve is slightly different from the one Yergin used):

Figure 5, source
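For readers who have not seen it written out, the Hubbert curve is the derivative of a logistic depletion model, a symmetric bell shape in time. The sketch below uses the standard textbook form; the parameter values are illustrative only and are not Hubbert’s actual 1956 fit for the United States.

```python
# Hubbert's curve: production rate q(t) = 2*q_peak / (1 + cosh(b*(t - t_peak))),
# the derivative of a logistic cumulative-production curve. Parameters below
# are illustrative, not a fit to any real production data.
import math

def hubbert_rate(t: float, q_peak: float, t_peak: float, b: float) -> float:
    """Annual production at year t under a logistic depletion model.
    q_peak is the peak rate, t_peak the peak year, b the steepness."""
    return 2.0 * q_peak / (1.0 + math.cosh(b * (t - t_peak)))

# The curve peaks at t_peak and is symmetric around it:
print(hubbert_rate(1970, q_peak=3.5, t_peak=1970, b=0.07))  # 3.5 at the peak
print(round(hubbert_rate(1990, 3.5, 1970, 0.07), 2))
print(round(hubbert_rate(1950, 3.5, 1970, 0.07), 2))        # equals the 1990 value by symmetry
```

The model’s failure mode is visible in its assumptions: the total recoverable resource is fixed in advance, so any technology that unlocks new oil (like shale fracturing) breaks the curve on the way down.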

Technically Recoverable Reserves

So clearly Hubbert’s Malthusian curve did not predict oil supply correctly; new technology has allowed us to tap into oil that was not part of the potential supply when he did his calculation. Paul Ehrlich’s ominous 1968 prediction in The Population Bomb, that 65 million Americans would starve to death in the 1980s, was incorrect for the same reason. He could not have predicted the green technology revolution that included natural-gas-based fertilizer (the Haber-Bosch process) and Nobel Prize winner Norman Borlaug’s new hybrid strains of wheat, rice and corn. Some might say Hubbert was wrong then, but what about tomorrow? Isn’t oil still a finite resource? Let’s examine that idea. Table 1 shows a rough estimate of the technically recoverable reserves of oil and gas known today, using only known oil and gas technology. More deposits will obviously be found, and technology will improve in the future.

Table 1

The reserve estimates are in billions of barrels of oil equivalent (BBOE). NGL and oil volumes are presented as-is, and natural gas is converted to oil equivalent using the USGS conversion of 6 MCF to one barrel of oil. The table includes the “proven” worldwide oil, gas and NGL reserves from BP’s 2016 reserves summary. It also includes the 2012 USGS estimate of undiscovered “conventional” oil and gas reserves, fully risked, the EIA estimate of unconventional shale oil and gas reserves, and the IEA oil shale (kerogen) and oil sands (bitumen) reserve estimates. Our estimate of 1,682 BBOE in world-wide unconventional shale oil and gas reserves is lower than the IEA estimate of 2,781 BBOE. The spread in these estimates gives us an idea of how uncertain these numbers are. Our estimate of 781 BBO in oil sand bitumen reserves is lower than the IEA estimate of 1,000 to 1,500 BBO. So, please consider this table very conservative. Yet, it results in a 148-year supply!
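The bookkeeping behind a table like this is simple enough to sketch: convert gas at the USGS factor of 6 MCF per barrel of oil equivalent, sum everything in BBOE, and divide by annual consumption to get a years-of-supply figure. The input volumes and the consumption figure below are placeholders for illustration, not the table’s actual entries.

```python
# Years-of-supply arithmetic. Gas volumes arrive in trillions of cubic feet
# (TCF); since 1 TCF = 1e9 MCF, dividing TCF by 6 gives billions of barrels
# of oil equivalent (BBOE) directly. All input numbers are placeholders.

GAS_MCF_PER_BOE = 6.0  # USGS conversion used in the text

def years_of_supply(oil_bbo: float, ngl_bbo: float, gas_tcf: float,
                    annual_consumption_bboe: float) -> float:
    """Total reserves in BBOE divided by annual consumption in BBOE/year."""
    gas_bboe = gas_tcf / GAS_MCF_PER_BOE  # TCF / 6 is already in BBOE
    total_bboe = oil_bbo + ngl_bbo + gas_bboe
    return total_bboe / annual_consumption_bboe

# Illustrative inputs: 3,000 BBO oil, 400 BBO NGL, 10,000 TCF gas, and world
# consumption of roughly 35 BBOE per year.
print(round(years_of_supply(3000, 400, 10000, 35), 1))  # -> 144.8
```

The same arithmetic with the table’s actual entries yields the 148-year figure quoted above; the point of the sketch is only that the result scales directly with the reserve estimates, which the text argues are conservative.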

The moral of the story? Never underestimate the ingenuity of mankind and never assume that technology is static. Also, the resources that technology recognizes today are not all the planet’s resources.
