1. Introduction
Is the world heating up because of a build-up of carbon dioxide (CO2) in the atmosphere? If so, does human activity — like burning fossil fuels — produce enough CO2 to be a decisive factor, or is the process largely natural? Would such global warming be a good thing for humanity and life on Earth, or a danger? Can science give us an accurate measure of the amount of heating per unit of CO2 emission? Does such a process continue monotonically and indefinitely, or does it change character by accelerating wildly — a nonlinear or chaotic behavior — beyond a certain concentration of CO2 in the atmosphere? Can nonlinear and chaotic behavior lead to a completely new climate, like an Ice Age? How quickly can such changes take place? How soon will we know all the answers? How much control will we have over our destinies? How will the world politics of global warming play out, and how can I be a winner in that game?
This article will describe some of the technical considerations that go into making a climate model, and in this way give some context to the many claims and counterclaims made about global warming. As with any phenomenon that has the potential of changing the status quo of human socio-political and financial arrangements, there are many self-interest factions who each have a stake in the molding of public opinion on the topic. Unraveling the truth from the propaganda begins by listing the fundamental scientific considerations needed in order to understand the linked and complex phenomena we call climate.
1. Introduction
2. A historical analogy with the birth of modern physics
3. How greenhouse gases hold heat
4. Water vapor and anthropogenic greenhouse gases
5. A note about ozone
6. How climate models work
6.1 Models and links
6.2 Space and time, scales and resolution
7. Solar heat into the geartrain of climate
8. Justifying the IPCC consensus
9. Criticizing the IPCC consensus
10. The open cycle closes
Endnotes
2. A Historical Analogy with the Birth of Modern Physics
Climate research in 2007 may be at a similar point of development as physics research was in 1907, poised for revolution.
Albert Einstein (1879-1955) found that the mechanics of Isaac Newton (1642-1727) was only a low-speed, weak-gravity limit of his theories of “special” and “general relativity,” a reality where space, time and gravity are linked, as are mass and energy.
During these same years, Max Planck (1858-1938) introduced his “quantum theory,” which was soon expanded by Einstein and Niels Bohr (1885-1962). Quantum theory revolutionized the 19th century view of electromagnetics, so elegantly stated by Michael Faraday (1791-1867), James Clerk Maxwell (1831-1879), and other scientists of their time and before (e.g., Coulomb, Ampère, Biot, Savart, Hertz). The “old” electromagnetics assumed that a “luminiferous aether” existed in otherwise empty space, and that the oscillations of this massless “material” manifested as electromagnetic waves, and as a result produced all known electrical effects. This idea was a logical extension of the observation that mechanical waves in solids (e.g., elastic waves, earthquakes) and fluids (e.g., water waves, sound waves) were the motion of vibrations through matter.
The great difficulty for 19th century experimental physicists was that they could never devise any experiment to actually detect the luminiferous aether, despite the obvious reality of electrical effects and the many motors, generators, radios and other devices built by Nikola Tesla (1856-1943), Thomas Edison (1847-1931) and other electrical engineers. An experiment to detect the aether (in 1887), by Albert Michelson (1852-1931) and Edward Morley (1838-1923), became famous for establishing that the speed of light in a vacuum is a constant (299,792,458 meters per second, a standard value adopted in 1983) regardless of any motion by the measuring device itself (Einstein’s interpretation). Another paradox was that light could exhibit a wave-like nature, as when it refracted (bent) on passing through a glass-air or water-air boundary, when it dispersed (separated by color) on passing through a prism, and when it diffracted on passing through a narrow slit; and light could also exhibit a particle-like nature in its very precise and selective initiation of luminescent or electron (charged particle) emission from atoms.
Einstein and the quantum theorists resolved the paradoxes of electromagnetism with the quantum theory. It stated that the luminiferous aether did not exist (thus agreeing with all experiments) and that the seeming contradiction of light (and all electromagnetic radiation) having both a wave and particle nature simultaneously was in fact true. The “wavelength” of a particle or “quantum” of light was exactly proportional to its energy content as given by Planck’s formula, E = h×c/wavelength, where h is Planck’s constant, and c is the speed of light in a vacuum. Despite the seeming oddness of ascribing a wavelength to a single particle (quantum), this model of electromagnetic radiation has proved to be consistent with all measurements. Light has both a wave and particle nature, a fact exploited in electrical, communications, optical and photo-electronic technology.
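Planck’s relation can be checked with a few lines of arithmetic. The sketch below uses the standard SI values of the constants and an illustrative wavelength (a 500 nm green photon, chosen here as an example) to compute the energy of a single quantum of light:

```python
# Photon energy from Planck's relation E = h*c/wavelength.
H = 6.626e-34       # Planck's constant, J*s
C = 2.998e8         # speed of light in vacuum, m/s

def photon_energy(wavelength_m):
    """Energy in joules of a single quantum of light."""
    return H * C / wavelength_m

# A green photon (500 nm wavelength) carries about 4e-19 J:
e_green = photon_energy(500e-9)
print(f"{e_green:.2e} J")  # -> 3.97e-19 J
```

Shorter wavelengths mean higher energy per quantum, which is why ultraviolet light can break chemical bonds that visible light cannot.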
Now, consider the analogy to climate research today. A consensus has developed, and is voiced by the United Nations Intergovernmental Panel on Climate Change (UN IPCC), that the accumulation of CO2 in the Earth’s atmosphere does cause an accumulation of heat in the atmosphere and biosphere of the Earth. Furthermore, human activity, primarily the burning of fossil hydrocarbon fuels, is a significant cause of this CO2 accumulation. This case has not yet been definitively proved, but the majority of scientists and their professional organizations have reached the conclusion that this case passes the test of being true beyond a reasonable doubt. They see an improving agreement between the many complicated and highly regarded (for theoretical rigor and predictive abilities) numerical (computational) models of climate, and the growing body of paleo-, historical, and current climate data.
The vastness of this entangled problem makes it impossible to know and calculate every conceivable detail “exactly,” so there are many scientist critics of the IPCC consensus. Exceptional scientists and many others of equivalent learning and capability to the consensus scientists are among the critics. However, they appear to be in the minority of scientific opinion on the issue of CO2 and climate change.
We can ask, are the climate change critics of today like the relativity and quantum theory revolutionists of 1900, their ideas not yet expressed compellingly enough to overturn a highly developed consensus view like luminiferous aether, which was orthodoxy taught in the universities by the teachers of Einstein and his generation? If so, then the “real story” has yet to emerge and revolutionize thinking on climate change.
The other possibility is that the revolution in understanding climate change has already begun, being the IPCC consensus, which will be borne out as more data is gathered, bigger computers are used and models of superior refinement are devised. Are the critics resistant to adopting a still fairly nebulous new idea, and to abandon the certainties of their long-standing views — like luminiferous aether a century ago — and the technical doubts they have about the new models, doubts which some can articulate with great logic and precision?
Science will march along and in time we will know the answers. However, our social and political problem is that if the IPCC consensus is correct (and, worse yet, if it is conservative) then we have little time to do anything about the predicted negative consequences of CO2 accumulation in the atmosphere.
3. How Greenhouse Gases Hold Heat
The significant greenhouse gases are water vapor (H2O, 36-70%), carbon dioxide (CO2, 9-26%), methane (CH4, 4-9%), ozone (O3, 3-7%), nitrous oxide, sulfur hexafluoride, hydrofluorocarbons, perfluorocarbons and chlorofluorocarbons. The chemical symbol and the percentage contribution to the greenhouse effect on Earth by that species appear in parentheses for the first four gases.
Sunlight that penetrates the atmosphere and is absorbed by the lands and oceans of the Earth warms its surface. In turn, the Earth’s surface radiates heat in the form of infrared radiation up into the atmosphere. Greenhouse gases absorb and retain this heat, and this effect is due to their molecular nature.
Many types of molecules will develop a slight electrical charge imbalance when their heavy nuclei rotate and vibrate relative to each other as seen along the directions of their chemical bonds. These charged oscillations can have frequencies and energies that match those of a quantum of infrared radiation. So, such molecules readily absorb incident infrared photons (“particles” of infrared electromagnetic energy), and they apply the added energy to boost themselves into a higher state of rotational and vibrational excitation. Basically, molecules store heat “internally” by fidgeting (like little children who would rather be running around than sitting at a dinner table or in a church pew). Gases made up of isolated atoms, like helium, neon and argon, cannot store heat internally (by rotation and vibration about a chemical bond); their response to being heated is to move more quickly, and this is called kinetic energy, an “external” form of energy, which adds to the aggregate effect of an increase in pressure and temperature in a volume of gas.
Nitrogen (N2) and oxygen (O2), the major gas species in Earth’s atmosphere, do not develop a significant charge imbalance when they rotate and vibrate, because of the symmetry of their chemical structure (one end of the “dumbbell” never looks more or less positive than the other). Molecules of this type do not absorb or emit (very much) infrared radiation. Molecules with more chemical bonds, and with nuclei from several chemical elements, will have more heat storage capacity; a good example is the CFCs, chlorofluorocarbons, highly volatile fluids devised as refrigerants.
Molecules with stored heat (internal energy) can transmit this energy to other molecules and atoms by colliding with them. Such “inelastic collisions” can de-excite the rotation and vibration of molecules while boosting the speed of other molecules and atoms. In this way the internal energy of greenhouse gas molecules can contribute to the kinetic energy of atmospheric particles: the sensible heat of the atmosphere.
It is interesting to note that the air about you has 2.7×10^25 particles/meter^3, spaced by an average distance of 3.3×10^-9 meters; and that each air molecule collides 10^10 times/second, with an average travel between collisions of 6×10^-8 meters. These numbers characterize sea-level air.
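The number density and spacing quoted above follow from the ideal gas law; the sketch below rechecks them, assuming sea-level pressure and a temperature of 0 °C:

```python
# Check the quoted sea-level air figures from the ideal gas law:
# number density n = P / (k*T), mean spacing ~ n**(-1/3).
K_B = 1.381e-23       # Boltzmann's constant, J/K
P = 101325.0          # sea-level pressure, Pa
T = 273.0             # temperature, K (0 deg C)

n = P / (K_B * T)            # particles per cubic meter
spacing = n ** (-1.0 / 3.0)  # average distance between particles, m

print(f"n = {n:.1e} /m^3, spacing = {spacing:.1e} m")
# n ~ 2.7e25 /m^3 and spacing ~ 3.3e-9 m, matching the text.
```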
4. Water Vapor and Anthropogenic Greenhouse Gases
Nature supplies all the water vapor in the atmosphere, and much of the carbon dioxide, methane and ozone. Human activity supplies all of the very high heat capacity volatile organic compounds (VOCs). Obviously, a VOC gas whose molecules can each hold ten to one hundred times the internal energy of a CO2 molecule will be as effective, molecule for molecule, as ten to one hundred times its own quantity of CO2. Even with this leverage, the quantities of H2O, CO2, CH4 and O3 in the atmosphere are large enough to dominate the effect of heat retention (this does not justify emitting more VOCs). So, the emission of CO2 by human activity is our most effective contribution to atmospheric heat retention.
As CO2 accumulates, the atmosphere warms, more water is evaporated, which adds heat retention capability to the atmosphere and increases warming, a positive feedback loop. A mitigating effect is the formation of clouds from the water vapor, which has a cooling effect by reflecting sunlight. Heat retention capability is called “heat capacity” in the study of thermodynamics. The effect of CO2 emission is not merely to add its own heat capacity to the atmosphere, but to act as an agent causing a further increase in the dominant component of atmospheric heat capacity, water vapor. Humans have no control over the water cycle, but they can have some control over the emission of CO2.
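The water vapor feedback described above converges rather than running away, as long as each pass through the loop adds back only a fraction of the previous warming. The sketch below illustrates this with a purely hypothetical feedback gain (the value 0.5 is invented for easy arithmetic, not taken from any climate model):

```python
# Illustrative sketch of a converging positive feedback loop: an
# initial CO2-driven warming dT0 evaporates more water, and the extra
# water vapor adds a fraction f of that warming back on each pass.
# The gain f = 0.5 used below is hypothetical, for illustration only.
def total_warming(dT0, f, passes=100):
    """Sum dT0 * (1 + f + f^2 + ...) for feedback gain f < 1."""
    total = 0.0
    term = dT0
    for _ in range(passes):
        total += term
        term *= f          # each pass adds a fraction f of the last
    return total

# With f = 0.5 the loop converges to twice the initial warming:
print(total_warming(1.0, 0.5))  # -> approaches 2.0
```

A gain of 1 or more would mean runaway warming; the geometric series only sums to a finite total when f < 1.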
Today, there are nearly 380 ppm (parts per million) of CO2 in the atmosphere, whereas prior to 1800 (for about 10,000 years) there was usually about 280 ppm. The total emission of carbon from burning is 6.5 GT/y (giga-tons/year, for giga = 10^9, tons = metric tons of 1000 kg); of this total, 4 GT/y enters the atmosphere. Individual molecules of CO2 remain in the atmosphere for several years before being taken up by biological systems or absorbed by the oceans. However, because of the many sources and sinks of CO2 (e.g., outgassing from warming seas, like a ginger ale going flat on a hot summer day) the average concentration of atmospheric CO2 will take between 200 and 450 years to equilibrate (level out) in response to any small perturbation (increase or decrease) of its concentration. So, if all burning by human activity (anthropogenic sources) were to stop today, it might take hundreds of years for the CO2 concentration to reach an equilibrium; it would probably rise for a time, peak, then equilibrate to a steady level below the peak concentration.
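The slow equilibration can be caricatured with a one-box model, in which the excess concentration decays with a single relaxation time. This toy sketch (all parameter values illustrative, and omitting the transient rise mentioned above) shows the centuries-long timescale:

```python
# A toy one-box model of atmospheric CO2: the excess over an assumed
# equilibrium value c_eq relaxes with a single timescale tau, taken
# from the 200-450 year range quoted above. Purely illustrative.
def run_box_model(c0=380.0, c_eq=280.0, tau=300.0, years=1000, dt=1.0):
    """Relax concentration c (ppm) toward c_eq after emissions stop."""
    c = c0
    history = [c]
    for _ in range(int(years / dt)):
        c += dt * (c_eq - c) / tau   # excess decays at rate 1/tau
        history.append(c)
    return history

h = run_box_model()
print(f"after 300 y: {h[300]:.0f} ppm, after 1000 y: {h[1000]:.0f} ppm")
```

Even with emissions cut to zero at the start, the concentration is still tens of ppm above equilibrium three centuries later.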
5. A Note about Ozone
Ozone (O3) absorbs ultraviolet light, which is dangerous to human skin and many living things. In filtering this higher-energy component of sunlight, upper atmospheric ozone performs a valuable service for us. CFCs destroy ozone: chlorine freed from CFC molecules by ultraviolet light strips an oxygen atom from O3, leaving O2. CFCs are regulated by the Montreal Protocol, to address the problem of the degradation of the upper atmospheric UV shield.
Lower atmospheric (tropospheric) ozone is produced by chemical reactions that involve auto exhaust and pollution gases. Ozone is corrosive: it damages lungs, embrittles plastics, fades painted surfaces (e.g., automobiles; poetic justice?), and corrodes the stone faces of many ancient monuments. Tropospheric ozone is the species considered a greenhouse gas.
6. How Climate Models Work
6.1 Models and Links
“A climate model is a computer-based version of the Earth system, which represents physical laws and chemical interactions in the best possible way. It includes the sub-systems of the Earth system, using knowledge gained from investigations in the laboratory and measurements in the field. A global model is composed of data derived from the results of models simulating parts of the Earth system (like the carbon cycle or models of atmospheric chemistry) or, if possible with the available computer capacity, the models are directly coupled. The functionality of the models is tested by comparing simulations of the past climate with measured data we already have.”
The energy of the Sun drives the Earth’s weather and climate. We will follow this energy as it falls through the atmosphere, warming the land and the oceans, to turn over the many interlocking cycles that produce the phenomena of climate. First, consider these major subsystems of climate, and the links between them.
The atmosphere will be represented by two models, one physical (M_Atmos_phys), one chemical (M_Atmos_chem). The physics model of the atmosphere will apply mechanics and thermodynamics to account for the temperature distribution, the generation of wind, the formation of clouds, as well as the vertical variation of properties on account of gravity. The chemical model of the atmosphere will produce the concentration of species, which results from the many chemical reactions possible at any elevation, given the local temperature and density of the atmosphere.
The oceans are represented by a model (M_Ocean) that links salinity and temperature to local current, and this current conveys heat (e.g., the Gulf Stream).
The biosphere may be modeled (M_Bio) as a series of sources and sinks of gases (O2, CO2), fluids (H2O), other substances (waste production, deforestation) and heat, which interacts with the oceans (M_Ocean) and atmosphere (M_Atmos_phys and M_Atmos_chem).
The carbon cycle can be singled out as a separate model (M_CO2) acting in parallel to the biosphere model.
Links between the ocean model and the atmospheric physics model would include the force of wind on the ocean, the cycle of evaporation and precipitation, and the cycles of (infrared) radiation and heat flow (by convection) between air and water.
It is understood that the physics models of the air and oceans include the effects of the Earth’s rotation. A schematic of the global model might be as follows (M = model, L = link; directions of influence can be > [right], < [left] or <> [2-way]; see footnote):
[M_Atmos_chem]<<[M_Bio]>>[M_Ocean].
[M_Atmos_chem]<>[M_Atmos_phys]<L_heat>[M_Ocean].
[M_Atmos_chem]<>[M_Atmos_phys]>L_wind>[M_Ocean].
[M_Atmos_chem]<>[M_Atmos_phys]<L_rain>[M_Ocean].
[M_Atmos_chem]<>[M_CO2]<>[M_Ocean].
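In code, the schematic above might be coupled roughly as follows. This is a minimal structural sketch, not any real model’s architecture; the class, method and link names are placeholders:

```python
# A minimal sketch of coupling sub-models with links: each model
# advances its own state, then the links exchange quantities between
# models at every time step. All names here are hypothetical.
class Model:
    def __init__(self, name):
        self.name = name
        self.state = {}          # e.g. temperatures, concentrations

    def step(self, dt):
        pass                     # advance internal physics/chemistry

def run(models, links, steps, dt):
    for _ in range(steps):
        for m in models:
            m.step(dt)           # advance each sub-model
        for source, target, quantity in links:
            # convey a quantity (heat, wind stress, rain) along a link
            target.state[quantity] = source.state.get(quantity, 0.0)

atmos = Model("M_Atmos_phys")
ocean = Model("M_Ocean")
atmos.state["heat"] = 1.0
run([atmos, ocean], [(atmos, ocean, "heat")], steps=10, dt=1.0)
print(ocean.state["heat"])  # -> 1.0
```

Real coupled models do essentially this at enormous scale, with each `step` being a full fluid dynamics or chemistry calculation.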
One can imagine many refinements to this basic climate model. The first is obviously to include a land surface model, and link it to the atmosphere and oceans. The land surface model could be further elaborated by including dynamic aspects of vegetation (perhaps there would be overlap with the biosphere model). Another refinement is to account for the many particulates (e.g., dust, salt, droplets) in air, an aerosol model. Aerosols can scatter and absorb light (producing the “blue” of the sky), capture gas molecules on their surfaces and act as catalysts to certain chemical reactions, and they have a major impact on the formation of clouds. The injection of sulfate aerosols into the atmosphere by large volcanic eruptions has cooled the planet and affected weather globally for a time (e.g., for 5 years after the Krakatoa eruption of 1883). Given that aerosols rain out into the oceans, one could add an ocean chemistry model (especially if considering ocean sequestration of CO2 as an active scheme; this would acidify the oceans and kill a variety of marine life). Another refinement would be to include a sea-ice model (heat flow at the ocean-air interface, light reflection) with links to the ocean and atmosphere models.
6.2 Space and Time, Scales and Resolution
The limitation to model complexity is not human imagination, nor any limit placed by the inventory of known facts about natural processes; it is the finite capacity of computing machines. Computer models of the oceans and the atmosphere will be calculations performed on a three dimensional wire-mesh representation (grid) of the space taken by the air and water. Such grids may include an enormous quantity of points and yet have very coarse resolution. Typical atmosphere models have a 250 km horizontal resolution and 1 km vertical resolution; they may have 20 horizontal (spherical shell) layers in the first 30 km of elevation (90 percent of the atmosphere is below 16 km, 99.99997 percent is below 100 km). Ocean models can have 125 km to 250 km horizontal resolution and 200 m to 400 m depth resolution (ocean depth can be as much as 10,000 meters).
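A back-of-the-envelope calculation shows how many grid cells such resolutions imply. In the sketch below, the 71% ocean coverage and the choice of 10 ocean depth layers are assumptions for illustration, not figures from any particular model:

```python
# Rough cell counts implied by the resolutions quoted above.
import math

R_EARTH = 6.371e6                      # Earth radius, m
surface = 4.0 * math.pi * R_EARTH**2   # ~5.1e14 m^2

# Atmosphere: 250 km horizontal cells, 20 spherical-shell layers.
atmos_cells = surface / (250e3)**2 * 20

# Ocean: 125 km cells over ~71% of the surface, ~10 depth layers
# (assumed here from the 200-400 m depth resolution).
ocean_cells = surface * 0.71 / (125e3)**2 * 10

print(f"atmosphere: ~{atmos_cells:,.0f} cells, "
      f"ocean: ~{ocean_cells:,.0f} cells")
```

A few hundred thousand cells sounds large, yet each 250 km cell averages over an area bigger than many countries, which is why so much physics must be parameterized.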
“Small scale physical processes which are below the size of the grid cells cannot be explicitly resolved. Their net impact on the coarse scale processes is estimated and included into the model by parameterization. In the atmosphere this is in particular the case for cloud formation, in the ocean for small scale eddies and for convection processes.”
Climate models are supposed to predict general conditions many years in the future (and reproduce the record of the past). So, they calculate across “big” cells of space and “long” steps of time. They “average over” small spatial effects and those of short duration, what we would experience as local weather and day-night cycles. It is easy to see that the daily oscillations of temperature during a “hot” July we recall from our past do not diminish our memories of having lived through a continuing “hot spell.” Climate models aim to predict these seasonal, even monthly averages, rather than reproduce (or predict) the filigrees of day-to-day weather variations about the mean conditions.
But, don’t small scale and short time effects have some impact on the bigger picture of climate? For example, doesn’t the formation and dispersal of clouds, though brief localized phenomena, affect climate in that they can effectively block sunlight, so that over many stormy seasons and places they might have significantly reduced the solar heating of the planet? Yes, which is why such effects are estimated, and these estimates are included in climate models as “parameters,” or, as affectionately known to all scientists, “fudge factors.” A fudge factor might be a table or formula derived from data or other work, which pairs a given property, say percentage cloud cover, to a quantity of the model, say relative humidity (percentage of water vapor in the air). A fudge factor might be elaborate (e.g., a separate computer subroutine, evaluated at every space and time step) or very elementary (e.g., a single and constant value for the needed factor, arbitrarily specified by the programmer for each run of the program).
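The table-style fudge factor described above might look like this in code. The table values here are invented purely for illustration; a real parameterization would be fitted to observational data:

```python
# A sketch of the simplest kind of parameterization ("fudge factor"):
# a lookup table pairing relative humidity with an assumed cloud-cover
# fraction, interpolated linearly. Table values are invented.
CLOUD_TABLE = [          # (relative humidity %, cloud cover fraction)
    (0.0,   0.0),
    (60.0,  0.1),
    (80.0,  0.4),
    (100.0, 0.9),
]

def cloud_cover(rel_humidity):
    """Linearly interpolate cloud cover from the lookup table."""
    pts = CLOUD_TABLE
    if rel_humidity <= pts[0][0]:
        return pts[0][1]
    for (h0, c0), (h1, c1) in zip(pts, pts[1:]):
        if rel_humidity <= h1:
            frac = (rel_humidity - h0) / (h1 - h0)
            return c0 + frac * (c1 - c0)
    return pts[-1][1]

print(cloud_cover(70.0))  # -> 0.25 (halfway between 0.1 and 0.4)
```

The model would call such a function at every grid cell and time step, in place of actually simulating cloud formation.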
The task of any climate model scientist is to improve the spatial and temporal accuracy of the model (finer grids, bigger computers), and to eliminate as many parameters (fudge factors) as possible by replacing them with self-consistent physics and chemistry models (mathematical abstractions of the actual processes). Like any crutch, fudge factors are only a problem when we remain wedded to them instead of trying to build up our strength (knowledge) so as to eliminate them from our activity. The immensity of the problem at hand, and the reality of any person’s finite resources means that some of these fudge factors will remain in use for quite some time. Recall that fudge factors show a recognition of considerations that one does not wish to ignore even though they may be difficult to handle. I imagine that these tasks make up most of the day-to-day, nitty-gritty work of climate modeling research.
7. Solar Heat into the Geartrain of Climate
The Sun, our star, has its own cycles of behavior (e.g., sunspots with an irregular cycle of about 11 years), which have been carefully studied and are now monitored by satellites. The quantity and spectrum of solar radiation arriving at the Earth at any given time (insolation) is known. Variations of solar radiation are relatively small, and for most purposes the output of the Sun can be taken as constant. The “solar constant” (1340 watts/meter^2) is defined as the solar energy falling per unit time at normal incidence on a unit area of the Earth’s surface (ignoring the atmosphere). At any moment, Earth is intercepting 1.7×10^17 watts, or 170 million gigawatts of solar power.
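The intercepted power follows directly from the solar constant: the Earth presents a disk of area πR² to the incoming sunlight. A quick check, using the solar constant value quoted above:

```python
# Intercepted solar power = solar constant * Earth's disk area.
import math

S = 1340.0            # solar constant as quoted in the text, W/m^2
R_EARTH = 6.371e6     # Earth radius, m

power = S * math.pi * R_EARTH**2
print(f"{power:.2e} W")  # -> 1.71e+17 W, ~1.7e17 W as stated
```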
The motion of the Earth has several cycles whose collective effect influences changes in climate; these are Milankovitch cycles (Milutin Milankovitch, 1879-1958). One is a 100,000 year “ice age” cycle, which coincides with the periods of glaciation during the last few million years, the Quaternary Period. Milankovitch cycles are the net effect of three periodicities, those of eccentricity, axial tilt and precession. The eccentricity of the Earth’s orbit around the Sun is the “ovalness” of that circuit. The axial tilt of the Earth’s rotational axis (~north-south axis) is the angle between the plane of rotation (~the plane of the equator) and the orbital plane (the plane of the Earth’s orbit about the Sun). The precession is the wobble of the Earth’s axis (like the wobble of a spinning top). Milankovitch cycles are a major factor in climate change, but they do not explain everything about past climate (for which there is data).
The ultraviolet portion of the solar flux begins interacting with the tenuous and ionized upper fringes of the atmosphere (from 50 km to 1000 km), before most of it is absorbed in the ozone layer (25 km) at the threshold to the bulk of Earth’s atmosphere. The visible light streams through a generally transparent atmosphere, except where it is reflected and scattered by clouds and aerosols. Visible light eventually strikes land or water, being absorbed, or it strikes ice and snow and is largely reflected. Solar energy absorbed into the Earth warms its surface, down to a depth of perhaps 100 meters, to an average (equilibrium) temperature of 15° C (59° F). Of course, at the immediate surface (down to at most 10 meters) the temperature is set by the latitude, season and local weather. Below, say, 1 km, the heat flowing up from the Earth’s hot interior (heat left over from the planet’s formation, plus radioactive decay) becomes evident, and temperature increases with depth.
The surface of the Earth (-60° C to 50° C) radiates infrared photons of about 10^-20 joules of energy, with frequencies in the range of 15,000 GHz, and wavelengths in the range of 20 micrometers (microns). As already described, greenhouse gases can absorb these photons and add heat to the atmosphere.
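These three infrared figures are mutually consistent, as a short calculation from the 20 micrometer wavelength confirms:

```python
# Consistency check of the infrared figures: frequency and photon
# energy implied by a 20 micrometer wavelength.
H = 6.626e-34        # Planck's constant, J*s
C = 2.998e8          # speed of light in vacuum, m/s

wavelength = 20e-6               # 20 micrometers
freq = C / wavelength            # -> ~1.5e13 Hz = 15,000 GHz
energy = H * freq                # -> ~1e-20 J

print(f"f = {freq:.1e} Hz, E = {energy:.1e} J")
```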
The absorbed solar energy powers many cycles. In the oceans, the flow of heat involves currents that include changes of salinity and density (and thus of depth). The thermohaline cycle is a complex “conveyor belt” of salt and heat linking all the world’s oceans. In general, ocean currents transport heat absorbed in tropical latitudes up (and, in the Southern Hemisphere, down) to higher latitudes. For example, Ireland, Scotland, Wales and England experience warmer climate than is usual at their latitudes, comparable to those of Hudson Bay, Newfoundland, the Kamchatka Peninsula, the Bering Sea and the Aleutian Islands. Western Europe is warmed by the Gulf Stream, which emanates from the Caribbean Sea. Here, heat and evaporation produce a warm, salty and buoyant surface current that sweeps north along the Eastern Seaboard of the United States, cooling in the North Atlantic, becoming denser, freshening by mixing with glacial melt south of Greenland, and then sinking to the ocean floor to continue in a circuitous path that has it bobbing up in tropical latitudes and sinking in polar ones. One theory about the effects of global warming holds that the melting of Greenland’s ice cap will dump so much fresh water into the North Atlantic that the thermohaline current will become so fresh (free of salt) and buoyant (less dense) that it will no longer sink there, thus stopping the convection of tropical heat to colder latitudes (the actual stopping of the massive momentum of this worldwide current might take decades to a century). Without such warming, the poles would once again ice over, and these ice caps could easily extend to mid latitudes, cooling the Earth into a new Ice Age.
The heat absorbed by the atmosphere, combined with the forces imparted to it by the rotation of the Earth, will produce patterns of circulation and a distribution of temperature that will change in response to the Milankovitch cycles, as well as alterations to atmospheric chemistry introduced by human activity. The 36 percent increase in atmospheric CO2 from 280 ppm to 380 ppm represents the addition of 217 gigatons of carbon (1 gigaton = 10^9 metric tons) over the last two centuries, most of it during the last 50 years. The weight of suspended carbon has increased from the pre-industrial amount of 607 gigatons to 824 gigatons today.
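The ppm-to-gigatons arithmetic can be made explicit. One ppm of CO2 corresponds to roughly 2.13 gigatons of carbon, a standard conversion factor that follows from the total mass of the atmosphere and the carbon fraction of the CO2 molecule:

```python
# Converting the CO2 concentration rise into a mass of carbon.
GT_CARBON_PER_PPM = 2.13   # standard conversion: GtC per ppm of CO2

rise_ppm = 380.0 - 280.0
added_carbon = rise_ppm * GT_CARBON_PER_PPM
percent_rise = 100.0 * rise_ppm / 280.0

print(f"+{added_carbon:.0f} GtC, a {percent_rise:.0f}% increase")
# -> ~213 GtC added (the text's 217 GtC, to rounding), 36% increase
```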
For completeness, we note that the incidence of any low probability natural catastrophe, like the fall of a massive comet, or a caldera eruption (an extremely large volcanic eruption) could radically alter climate (and might be fun to model).
It is easy to see that there are many, many uncertainties, approximations, and links that any particular subsystem model relies on, and which in turn affect the accuracy and reliability of any global climate model. So, there is more than enough material for critics to point to as serious deficiencies. Where the criticisms are knowledgeable and specific, they will direct the efforts of climate modelers to refine their synthesis. Breakthroughs will come from scientists who put their minds to understanding why certain disagreements between climate models and reality persist. Whether such breakthroughs will put the final polish on the models, or utterly destroy them by giving birth to new conceptions, I cannot say.
8. Justifying the IPCC Consensus
The IPCC Fourth Assessment Report (2007) concluded that “Most of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” The report defines “very likely” as a probability greater than 90% that more than 50% of the observed warming is attributable to human activity.
From a scientific point of view, the IPCC is a nightmare. From a government and corporate (sadly, the same) point of view, the IPCC is a useful bureaucracy that dampens the “alarmist” potentialities of unfiltered scientific findings being broadcast to the public. From the public’s perspective, the net result may be an acceptably reliable source of sobering information that gently understates the possibilities.
The IPCC was established in 1988 by two U.N. organizations, the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP). The purpose of this panel is to evaluate the human impact on climate. The members of the panel are representatives appointed by governments, and they include scientists as well as others concerned with socio-economic (e.g., development) and policy issues. Besides an upper management and administration layer, the panel operates as three Working Groups (WG) assessing: I, the scientific research on climate; II, the vulnerability of socio-economic and natural systems; and, III, options (policies) for limiting greenhouse gas emission, and otherwise countering the potential hazards.
The “report” from the IPCC is actually in three volumes, one from each working group. The IPCC does not conduct any climate research itself; its scientists evaluate the peer-reviewed scientific literature, and their consensus on the state of the art is then further smoothed into summary reports by the process of “committee authorship.” The WGI volume of the IPCC Assessment Report would be the essential scientific (as in math, physics, chemistry) report.
Any particular technical conclusion by WGI might represent a consensus of many individual scientific efforts, perhaps hundreds of published papers by thousands of scientists. For example, the attribution to anthropogenic CO2 emission for the global warming above what would be expected from natural causes relies, in part, on the observation that climate models that include natural causes of warming and anthropogenic sources of greenhouse gases reproduce the data on global temperature rise (within a reasonable error band), while climate models that only have natural causes of warming do not reproduce this temperature history.
It appears that the variety of choices made about their parameters (fudge factors, like for cloud cover) by the many climate modelers who were sampled were not the decisive factors in determining the average temperature rise. The process of peer-reviewed publication ensured that all the works sampled by the IPCC met good technical standards. So, the IPCC is making technical conclusions based on the overall trend of scientific findings, the “state of the art.”
The IPCC’s emphasis on technical conservatism is paid for by the deliberate (perhaps slow?) pace of publishing its findings. The recent observation of methane outgassing from melting tundras — a potentially huge new source of a high heat capacity gas — is not included in the latest IPCC report. The measured trends of global warming (e.g. temperatures and sea level changes) are always at the top of the ranges of predictions published by the IPCC.
The IPCC is led by government scientists, and most of the panelists and authors are also scientists. The “political” people in the IPCC can just as easily be scientists who manage a more than purely scientific group process, which has multiple political sponsors under the UN umbrella. Clearly, scientists who distinguish themselves in the field of climate research can be invited and appointed to the panel. However, they can also be removed when their government’s key corporate sponsors find them too “alarming.” This was the case in the replacement of Robert Watson as IPCC chairman by Rajendra K. Pachauri in 2002. ExxonMobil had beseeched the Bush Administration to lobby the IPCC for this change.
Any IPCC scientist will have both compelling and restraining motivations. Their original passion for science, the interest and excitement of the work, will drive them to uncover as much of the mechanisms of climate as they can, and to tell others about their findings and their implications for human society. When their results are accepted and adopted by other scientists in their field, their esteem rises, and they become invested in maintaining their technical reputations. These two motivations, one personal, the other social, combine to push scientists into becoming advocates for their fields. However, successful government scientists are supremely political creatures who have mastered the art of extracting money from political structures to fund their activities. They understand the value (to their careers) of packaging the message for sponsor consumption; so the asperity of the raw and knotty truth emerging from science’s workbenches must be slipped into the most svelte form possible that preserves the facts. It is easy to see how these forces of personal psychology will find an equilibrium that matches the institutional character of the IPCC: a measured and deliberate style and a thorough technical conservatism (all scientists except the mad ones and the geniuses are terrified of ever being wrong). Politics slows and dampens the message from the IPCC, but it does not quash it.
9. Criticizing the IPCC Consensus
I am always happy to be in the minority. Concerning the climate models, I know enough of the details to be sure that they are unreliable. They are full of fudge factors that are fitted to the existing climate, so the models more or less agree with observed data. But there is no reason to believe that the same fudge factors would give the right behavior in a world with different chemistry, for example in a world with increased CO2 in the atmosphere.
— Freeman Dyson, 2007
The bad news is that the climate models on which so much effort is expended are unreliable because they still use fudge factors rather than physics to represent important things like evaporation and convection, clouds and rainfall. Besides the prevalence of fudge factors, the latest and biggest climate models have other defects that make them unreliable. With one exception, they do not predict the existence of El Niño. Since El Niño is a major feature of observed climate, any model that fails to predict it is clearly deficient. The bad news does not mean that climate models are worthless. They are, as Manabe said thirty years ago, essential tools for understanding climate. They are not yet adequate tools for predicting climate.
— Freeman Dyson, 1999
That portion of the scientific community that attributes climate warming to CO2 relies on the hypothesis that increasing CO2, which is in fact a minor greenhouse gas, triggers a much larger water vapor response to warm the atmosphere. This mechanism has never been tested scientifically beyond mathematical models that predict extensive warming, and are confounded by the complexity of cloud formation — which has a cooling effect…. We know that [the sun] was responsible for climate change in the past, and so is clearly going to play the lead role in present and future climate change. And interestingly… solar activity has recently begun a downward cycle.
— Ian Clark, 2004
Our team… has discovered that the relatively few cosmic rays that reach sea-level play a big part in the everyday weather. They help to make low-level clouds, which largely regulate the Earth’s surface temperature. During the 20th Century the influx of cosmic rays decreased and the resulting reduction in cloudiness allowed the world to warm up. …most of the warming during the 20th Century can be explained by a reduction in low cloud cover.
— Henrik Svensmark, 1997
I’m not saying the warming doesn’t cause problems, obviously it does. Obviously we should be trying to understand it. I’m saying that the problems are being greatly exaggerated. They take away money and attention from other problems that are much more urgent and important. Poverty, infectious diseases, public education and public health. Not to mention the preservation of living creatures on land and in the ocean.
— Freeman Dyson, 2005
This sampling of criticism of the IPCC consensus captures much of the substance of the opposition. Freeman Dyson, an extraordinary scientist, creative thinker and popular author, accurately focuses on the weakest technical elements in the entire CO2 climate computer calculation construction: fudge factors and coarse resolution (and, elsewhere, on the CO2-water vapor connection). Ian Clark, a hydrogeologist and professor at the University of Ottawa, succinctly states the doubts about the connection between CO2 and water vapor, and voices a belief in the controlling role of solar variability combined with Milankovitch cycles. Henrik Svensmark, an astrophysicist at the Danish National Space Center, describes a specific mechanism claimed to control the formation of low-level clouds and which is moderated by solar variability, hence a completely alternate theory of global warming (and climate) as a completely natural process. Finally, Dyson voices a sentiment common to the opposition critics that the failings they point to are so grave or unlikely to be overcome that the funding for climate modeling work should be drastically reduced.
Dyson’s point on fudge factors is that they stand in for physics that is missing (e.g., a detailed model of evaporation from the sea, condensation in the air, and precipitation, needed to arrive at a dynamic and spatially resolved reflectivity of the atmosphere: clouds), and that they are arbitrarily adjusted to make the calculations agree with present trends. Once a set of “good” fudge factors is arrived at by matching the data, the code is run far into the future to predict climate. However, this procedure relies on the unjustified assumption that the operation of the physics behind any fudge factor in that hypothetical future world is exactly like the operation of that physics today, even if future conditions are very different. How do we know that the evaporation-precipitation cycle of that future time will result in exactly the same cloud cover fudge factor as occurs today? If the composition of the atmosphere (gases and aerosols) is very different, this would not be the case. The only reliable course is to actually put in the physics of the processes covered over by fudge factors, and allow them to be calculated in a self-consistent way with the evolving conditions. This criticism is so clear and correct that one can only presume it is being addressed directly by cloud research and advances in climate modeling. Perhaps in a few years this will be solved; and it is even possible that the fudge factors won’t be that different.
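The hazard Dyson describes can be illustrated with a toy energy-balance calculation. Below, a fixed cloud-albedo “fudge factor” is tuned to reproduce the present (values chosen to give roughly 288 K), and an invented “responsive” albedo that drifts with temperature is tuned to match the same present; the two agree today but disagree when run in a changed atmosphere. Every coefficient here is an assumption made up for illustration, not a measured cloud response.

```python
SOLAR = 342.0    # globally averaged insolation, W/m^2
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def surface_temp(albedo, epsilon):
    """One-layer greenhouse energy balance: surface temperature in K.
    epsilon is the atmosphere's effective infrared emissivity."""
    t_effective = (SOLAR * (1.0 - albedo) / SIGMA) ** 0.25
    return t_effective / (1.0 - epsilon / 2.0) ** 0.25

def responsive_albedo(temp_k):
    """Invented stand-in for cloud physics: albedo drifts as temperature rises."""
    return 0.30 - 0.002 * (temp_k - 288.0)

# Both versions are tuned to match "today" (epsilon = 0.77, T ~ 288 K) ...
today = surface_temp(0.30, 0.77)

# ... but in a changed atmosphere (epsilon raised to 0.80) they diverge.
future_fixed = surface_temp(0.30, 0.80)

# Self-consistent version: iterate until albedo and temperature agree.
t = 288.0
for _ in range(50):
    t = surface_temp(responsive_albedo(t), 0.80)
future_responsive = t

print(f"today (both tuned):     {today:.1f} K")
print(f"future, fixed albedo:   {future_fixed:.1f} K")
print(f"future, responsive:     {future_responsive:.1f} K")
```

The two futures differ even though both versions match the present perfectly, which is exactly why a parameter fitted to today cannot be trusted in a different atmosphere.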
Dyson’s other point is that models of greater resolution in space and time, which reproduce localized and transient phenomena like El Niño (a periodic warming in the mid-Pacific Ocean, a feature large compared to cell size), will boost the credibility of futuristic predictions. One can only assume that whatever features allowed one group to predict El Niño, at the time Dyson made his comments, have been studied, duplicated and elaborated upon by others since. Again, Dyson’s critique points to what should be (and I assume is) a major focus of climate modeling efforts.
Ian Clark asks for experimental verification of the theoretical CO2-water vapor link: the idea that CO2 captures infrared energy and heats the atmosphere, which allows more water to evaporate and itself contribute to infrared absorption, thus forming an atmospheric heating positive feedback loop. As he notes, calculations of the effect readily support the hypothesis.
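The arithmetic of such a feedback loop is worth making explicit. In this minimal sketch, a direct warming causes a feedback fraction f of additional warming, which causes f of that again, and so on; the geometric series sums to 1/(1 − f). The 1.2 K direct warming and the sample values of f are illustrative round numbers of my choosing, not measured quantities.

```python
def amplified_warming(direct_warming_k, feedback_fraction):
    """Closed-form sum of the feedback loop: each degree of warming adds
    water vapor causing feedback_fraction more warming, and so on.
    dT = dT0 * (1 + f + f^2 + ...) = dT0 / (1 - f), valid only for f < 1."""
    if feedback_fraction >= 1.0:
        raise ValueError("f >= 1 means the series diverges: runaway warming")
    return direct_warming_k / (1.0 - feedback_fraction)

# Illustrative numbers: ~1.2 K of direct warming, feedback fraction f varied.
for f in (0.0, 0.3, 0.5):
    print(f"f = {f:.1f}: total warming {amplified_warming(1.2, f):.2f} K")
```

The formula also shows why the debate matters: a modest change in the assumed water vapor feedback fraction changes the predicted warming disproportionately, and f approaching 1 would mean nonlinear runaway.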
Experimental proof would have to be found in either observations in the natural world, or small-scale experiments in a laboratory. Perhaps a comparison of observations of cloud formation and regional air temperature changes over heavily industrialized and urban areas — expected to emit significant CO2 — and remote unpopulated areas might show what effect, if any, excess CO2 has on local humidity and heating, or cloudiness and cooling. I can imagine such measurements being performed from fixed weather stations, ships, airplanes and satellites carrying infrared sensing instruments (heat sensing), radars (aerosol, droplet and cloud probing) and particle sampling filters (aerosols, dust, salt). Again, I imagine cloud physics experimental scientists, following in the footsteps of Vincent J. Schaefer (1906-1993), Bernard Vonnegut (1915-1997) and Duncan C. Blanchard, among others, are actively working to measure the reality of the situation. Another avenue would be to build a laboratory cloud chamber (a chamber with an air space above liquid water, and external controls over volume and pressure), introduce CO2, irradiate it with an infrared laser (e.g., a CO2 laser) to selectively heat the CO2, and then measure the heating of the “air” (probably just N2) by inelastic collisions with CO2, and also the change in water vapor concentration. I would be happy to conduct this experiment if given a few million dollars and a plum academic appointment.
Recent findings from the study of ice cores show that at certain times in the past the average temperature began rising hundreds of years before the increases in CO2 concentration. Some critics point to this as proving that solar heating alone controls climate change, and that the rise in CO2 is a result of outgassing from warming seas and thawing tundras. This last effect is certainly true and happening today, but the occasional lag of past CO2 increases behind temperature does not prove that the reverse cannot happen. Both the data and basic physics principles support the conclusion that the presence of CO2 amplifies warming initiated by any factor. At certain times in the past, solar-orbital (solar variability and Milankovitch cycle) effects initiated a warming phase, which caused CO2 to bubble out of warming seas and thawing tundras — a lagging effect — that amplified the warming, the further evaporation of water, and so on. Today, the artificial injection of CO2 into the atmosphere has added to its heat-trapping capacity and boosted whatever warming might have been occurring from strictly natural causes — a leading effect.
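The lead/lag distinction can be made concrete with a toy iteration: an external (solar-orbital) nudge starts the warming, the warmer ocean outgasses CO2 one step later, and that CO2 adds further warming. CO2 here lags temperature, yet still amplifies the total warming. Every coefficient below is invented for illustration, not taken from ice-core data.

```python
EXTERNAL_NUDGE = 0.5   # initial solar-driven warming, K
OUTGAS = 10.0          # ppm of CO2 released per K of ocean warming
CO2_WARMING = 0.01     # K of extra warming per ppm of extra CO2

temp, co2 = 0.0, 0.0
for step in range(10):
    temp = EXTERNAL_NUDGE + CO2_WARMING * co2  # warming responds to existing CO2
    co2 = OUTGAS * temp                        # outgassed CO2 feeds back next step
    print(f"step {step}: dT = {temp:.3f} K, dCO2 = {co2:.2f} ppm")
```

The iteration settles at dT = 0.5/(1 − 0.1) ≈ 0.556 K: more than the external nudge alone, even though the CO2 rise trails the temperature rise at every step. Swap the external nudge for an artificial CO2 injection and the same amplification runs with CO2 in the lead.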
A retort often hurled back at the critics is “well, what’s your explanation?” If the IPCC consensus is wrong about climate change, then what causes it? Henrik Svensmark provides one answer. His claim is that cosmic rays dominate the formation of tropospheric clouds, and that the variability of the cosmic ray flux directly influences the variability of the Earth’s cloud cover, and as a result its solar heating, and ultimately its climate fluctuations.
Cosmic rays are very high energy photons and charged particles produced by some combination of nuclear reactions and powerful electromagnetic accelerating effects in deep outer space. The high energy of these rays makes them extremely penetrating; the debris of their collisions can be detected even deep underground. When they collide with atomic and molecular matter, the breakup scatters numerous particles (e.g., atomic ions, electrons) from the site of the collision. These collision fragments are detected in laboratories in cloud chambers. As the fragments whisk through the humid (supersaturated) atmosphere in the cloud chamber, they collide with molecules, initiating the formation of droplets, and the trail of each fragment shows as a string of droplets that can be photographed, recording the event. Svensmark’s claim is that cosmic rays that manage to interact near sea-level initiate the beginnings of cloud formation, a process called nucleation. Cloud physics scientists usually assume (and measure) that condensation nuclei are present in the form of salt particles, dust (soil, soot, pollen, microbes) and ice crystals.
Svensmark then describes how the variability of the Solar Wind (a flux of charged particles from the Sun) affects the distribution of magnetism in the space around the Earth (well known physics), and how these solar-driven magnetic fluctuations allow more or fewer of the cosmic rays to penetrate to the surface. Magnetic fields deflect charged particles (like those inside the atoms of a piece of metal you bring close to a magnet), and conversely a large flux of charged particles can bend or distort a magnetic field. When the Solar Wind is strong, the magnetism it carries out from the Sun deflects a greater portion of the cosmic ray flux away from the Earth; when the Solar Wind is weak, cosmic rays find an easier approach. So, ultimately, the variations of the Solar Wind and of the unknown sources of cosmic rays manifest as variations of tropospheric cloud cover, which in combination with Milankovitch cycles set the heating and climate of the Earth — according to the theory.
Svensmark’s model has a great deal of good and interesting physics, but to establish it as fact will require a tremendous amount of quantification. It appeals to those who prefer an explanation of global warming that does not implicate industrialized society. One questionable assumption in this theory is that cosmic ray interactions dominate cloud formation, for if they do not, then the rest of the theory is unnecessary. Cloud physics is an old and sophisticated discipline, and the observations about the role of aerosols in nucleation and condensation cannot be so easily dismissed. Svensmark’s mechanism may actually occur, but at an insignificant level. Perhaps new data will bring new insights.
Finally, we allow Freeman Dyson to sum up the sense of many critics: that climate modeling research is overfunded. Professional science is a feeding frenzy, being almost entirely a captive of government and corporate funding. The competing sales pitches of various groups and factions in science can reach such levels of hyperbole, and sometimes mendacity, that knowing onlookers become disgusted. It may well be that some climate research people are sounding the alarm of imminent doom in order to get the munificent attention of sponsors, a technique that has proved successful for the military-industrial complex. Some scientific critics of climate modeling may be people who resent their few scraps from the feeding frenzy; jealousy is not unknown among science folk. Other science critics may be allowing their ideological inclinations to overly influence their scientific judgments as regards climate modeling; again, scientists are human, and they can sometimes allow their emotions to cloud their thinking. Such people are more likely to use words like “hoax” and “myth.” Criticisms that have technical substance are valuable, whatever the critic’s judgment as to the ultimate value of climate modeling work. The best response is to improve the work.
10. The Open Cycle Closes
It is so hard to give up a comforting fantasy. The shock, denial and anger expressed about global warming are really a psychological resistance to the loss of the pleasurable illusion of the “open cycle.” There is no escape from the 2nd Law of Thermodynamics, and there is no such thing as an “open system,” even though today’s obsessed consumers and the corporate overlordship prefer to imagine otherwise. Thermodynamically and materially, we live in a fishbowl world: there is no possibility of ejecting waste from our tails and never again swimming through the consequences.
We have enjoyed many false open cycles: disposable bottles and packaging, disposable combustion engine exhaust gases, disposable chemicals and nuclear waste, disposable inner cities, disposable under-educated and under-employed populations, disposable foreign peasants encumbering resource extraction, and private profit at public expense.
The “use” we get out of any item has to be compared to the resource and energy “cost” of producing it from its raw materials, and then of absorbing it back into the processes that produce that energy and those raw materials. When we take responsibility for the impact of the entire cycle, then we are motivated to choose products (and “services”) with the highest ratios of use to cost.
As the expanding impact of global warming cracks through the filters on consciousness of more people, there will be an increasing competition to escape and profit from the consequences. One obvious example of this is the nuclear power industry’s enthusiastic adoption of the fearfulness of global warming, “we are the solution” they say. The profit motive is shameless.
Environmentalists of Luddite persuasions will urge a repentant return to a de-industrialized, agrarian style of life. The military-industrial complex will see the possibilities of “getting into the green” with sales of “green” high technology to the equally messianic capitalist elite, revolted at the idea of sliding “backward” into Third World experience, hence thrusting “forward as to war” to save “our way of life.” Photovoltaics, engineered materials and solid-state micro-electronics are impressive and capable technologies, but they cannot be produced in the quantities and at the costs needed to meet the energy needs of the Third World.
I think the best response to global warming is to greet it as the next challenge to human development — it certainly presents delectable problems to be solved by any engineer and thermodynamicist interested in devising machines and structures that convert sunlight to electricity. It is time to move beyond our dependency on the burning of paleontologic leavings. It is time to ride the wave of heat washing over the Earth from the Sun. We would leave behind many outmoded technologies, political economies, behaviors and ideas in making this change. There is nothing “dooming” humanity with the approach of global warming, except the mental inertia that seeks to preserve our petty ignorance, prejudices and greed. The laws of physics present no barrier, and economics is always an artificial construction, which we could choose to configure for the benefit of everybody.
Consider this: solar power at 1 percent conversion efficiency on 2 percent of the land area of the USA would produce the total national electrical energy use of 4×10^12 kilowatt-hours/year. That is 13,400 kWh/y for each of nearly 300 million people.
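A rough check of that arithmetic. The land area and the day/night, all-season average insolation below are round-number assumptions on my part, so treat the result as an order-of-magnitude confirmation rather than a precise figure.

```python
US_LAND_AREA_M2 = 9.2e12     # ~9.2 million km^2 of US land, in m^2
FRACTION_USED = 0.02         # 2 percent of land area
AVG_INSOLATION_WM2 = 240.0   # assumed day/night, all-season surface average
EFFICIENCY = 0.01            # 1 percent conversion
HOURS_PER_YEAR = 8766.0

power_w = US_LAND_AREA_M2 * FRACTION_USED * AVG_INSOLATION_WM2 * EFFICIENCY
energy_kwh_per_year = power_w * HOURS_PER_YEAR / 1000.0
per_capita = energy_kwh_per_year / 300e6

print(f"total:      {energy_kwh_per_year:.2e} kWh/y")
print(f"per capita: {per_capita:,.0f} kWh/y (300 million people)")
```

With these inputs the total comes out near 4×10^12 kWh/y and the per-capita figure near 13,000 kWh/y, consistent with the claim above.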
Imagine if the expense, manpower and energy that has been put into the Iraq War since 2003 had been put into solar thermal plants (up to 5 percent efficient), solar updraft towers, mountain and offshore wind (instead of oil) derricks, and residential-scale solar, wind (vortex tube) and co-generation (use of “waste” heat from water heaters) electrical generators. Imagine if we seriously tried to electrify our transportation systems and made all such networks, from the neighborhood buses and trolleys to the transcontinental rail service, as free (and quickly available) to use as sidewalks and staircases; who would drive to sit in traffic jams?
At this point we have gone beyond WGI (the science of global warming), to the topics covered in WGIII (policies in response to global warming), a good place to stop. My own conclusion is that the best response to global warming would be a fundamental change in the nature of human society. Logically, there is no requirement that human society change, but then there is also no requirement that it prosper or even survive.
Acknowledgments: Thanks to Jean Bricmont and Roger Logan for interesting questions.
(web sites active on 4-5 May 2007)