lördagen den 28:e september 2013

Reclaiming the Neanderthal Effect

As I have described previously on this blog, there appears to be a group of people, here denoted the Lukewarmers, whose mission is to convince people that the Greenhouse Effect is not the Greenhouse Effect. Or perhaps it should be rephrased as follows: the Greenhouse Effect does not refer to the Greenhouse Effect. Why would they do that? Because the Greenhouse Effect™ is now so deeply ingrained in the public consciousness that it would be simply inadmissible to let people know what it actually refers to. What it refers to in the climatology literature is a heat pump composed of so-called greenhouse gases that act so as to cool the stratosphere and warm the surface at the same time, and it does so with an astonishing efficiency. The absurdity of this is obvious, especially since it can be so easily disproven simply by looking at the planetary data. For this reason the Lukewarmers have devised at least two strategies to deal with this dilemma, one intended for gullible ordinary citizens and another for gullible scientists. In short, they go like this:


1. The Greenhouse Effect is the Tyndall Effect

2. The Greenhouse Effect is the Neanderthal Effect


We will deal with these in the proper order:

1. The strategy to redefine the Greenhouse Effect as the Tyndall Effect is common in many videos available online, often intended for defenseless school children. By shining light from some particular lamp on two different containers of gas one can apparently detect a difference in the heating rate between the two containers. One of the things you could object to is the use of a lamp in the first place. Why can't you do anything unplugged? The answer would probably be: because in the lab we don't have access to the sun. I could accept that answer were it not for the fact that, according to the canonical description of the Greenhouse Effect, the greenhouse gases are supposed to let the sunshine through; it is the terrestrial radiation that is supposedly being trapped. Moreover, if we are to believe the radiation intensities used in standard climatology, there ought to be an abundance of terrestrial radiation in the lab, hence I do not understand the use of the lamp.


2. I guess the Lukewarmers have somewhat sensed the above inconsistencies, hence the need for another strategy intended for deniers with scientific training. This one is much more cunning and deceitful, which is probably the reason why so many people have difficulty dealing with the following argument. The argument is: the Greenhouse Effect is the Neanderthal Effect. The Neanderthal Effect is simply the obstruction of radiative cooling caused by blankets, furs, aluminium foil on light bulbs, and most probably the atmosphere; in other words, a very ordinary effect known even to the Neanderthals. A common reply by skeptics is something like the following:

-Yes, the obstruction to cooling is real but it is not caused by "back-radiation".

Oh, no no no no no.......

You walked straight into the trap. Now you have, for free, given the Lukewarmers an extra degree of confusion:

3. The Greenhouse Effect is whatever effect is caused by "back-radiation"

The problem is that we don't know exactly what causes the Neanderthal Effect. Hence, you cannot say anything about the role of back-radiation in this case. All we know is that it is very ordinary and is caused by virtually any material, including CO2. And since it is caused by any material, there is no justification for picking out some particular "greenhouse gases" as responsible for some particular "Greenhouse Effect". The latter is simply an illegitimate scientific concept.

In summary:

What the Lukewarmers want you to believe is that:

Radiative heat transfer is special
CO2 is a special gas (as regards thermodynamics)

Whereas the truth reads:

Radiative heat transfer is ordinary
CO2 is an ordinary gas (as regards thermodynamics)

onsdagen den 18:e september 2013

A Discrete Model Atmosphere, UV-updated

I have updated my Discrete Model Atmosphere so that it now includes UV forcing of the upper layers, giving rise to a thermosphere. I have also included a "troposphere" where the heat absorption is uniform, leading to a constant lapse rate in that region. I stress that these kinds of toy models may very well become superfluous after a more complete simulation using the Navier-Stokes equations. Maybe Claes has some update on this. Anyway, regardless of its usefulness it gives rise to some questions of pure academic interest. Here is what it looks like now:



The thing I wanted to point out is the annoying jump discontinuity at the surface. Numerical experiments suggest that this jump cannot be made smaller than F/(2k), regardless of the mesh size and other factors. I would very much like input from some clever mathematician about the significance of this and how it could perhaps be circumvented in a more developed model; a small sketch for checking the jump numerically is given after the script below. As a side remark, I think that Miskolczi discusses the topic of jump discontinuities at the surface in his paper. The problem is that I don't understand his paper (nor can I find it on the internet anymore). Input is very welcome.


Updated script:

import numpy as np
import matplotlib.pyplot as plt


interval = 6
trop = 1        ## height of the "troposphere"
meshsize = 0.1

N = int(interval/meshsize)

weight = np.zeros(N)

## Please note that the "weight" is not the actual weight but a positive
## function taking values between 0 and 1 which increases monotonically with the
## actual weight (mass), meant to quantify the "heat absorption"

weight[0] = 1   ## The surface is given complete heat absorption


for idx in range(1,N):

    if idx*meshsize < trop:
        weight[idx] = 1*meshsize    ## Tropospheric weight set to 1 (times meshsize)
    else:
        weight[idx] =  np.exp(-(idx*meshsize - trop))*meshsize
   


A = np.zeros([N,N])
   
for idx1 in range(N):
    for idx2 in range(N):
        if idx1 == idx2:
            if idx1 == 0:
                A[idx1,idx2] = -1
            else:
                A[idx1,idx2] = -2
           
        else:
            A[idx1,idx2] = weight[idx2]

            if idx2>idx1:

                for idx3 in range(idx1+1, idx2):
                    A[idx1,idx2] = A[idx1,idx2]*(1-weight[idx3])

            else:
                for idx3 in range(idx2+1, idx1):
                    A[idx1,idx2] = A[idx1,idx2]*(1-weight[idx3])    
           

k = 1           ## Conductivity
uv = 1          ## UV-factor
screen = 0.5    ## Screening of UV-light (not rigorous, just toy model)

F = np.zeros(N)

F[0] = 1.0    ## Solar radiation incident on surface


for idx in range(N):
    F[idx] = F[idx] + uv*screen**(N-idx)    ## adding UV-forcing

 

temp = np.linalg.solve(A,-F/k)  ## Forcing vector divided by the "conductivity"
                                ## or perhaps more accurately, the diffusion parameter


x = np.arange(0,interval,meshsize)



plt.plot(x, temp)
plt.ylabel('Temperature')
plt.xlabel('Position')

plt.show()
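
If anyone wants to reproduce the F/(2k) observation, here is a minimal sketch (not part of the model itself; the function name surface_jump and the chosen mesh sizes are my own) that repeats the construction above for a few mesh sizes and prints the jump temp[0] - temp[1], which can be compared with F[0]/(2k):

import numpy as np

def surface_jump(meshsize, interval=6, trop=1, k=1, uv=1, screen=0.5):
    ## Rebuild the model above for the given meshsize and return temp[0] - temp[1]
    N = int(interval/meshsize)

    weight = np.zeros(N)
    weight[0] = 1
    for idx in range(1, N):
        if idx*meshsize < trop:
            weight[idx] = 1*meshsize
        else:
            weight[idx] = np.exp(-(idx*meshsize - trop))*meshsize

    A = np.zeros([N, N])
    for idx1 in range(N):
        for idx2 in range(N):
            if idx1 == idx2:
                A[idx1, idx2] = -1 if idx1 == 0 else -2
            else:
                A[idx1, idx2] = weight[idx2]
                low, high = min(idx1, idx2), max(idx1, idx2)
                for idx3 in range(low + 1, high):
                    A[idx1, idx2] = A[idx1, idx2]*(1 - weight[idx3])

    F = np.zeros(N)
    F[0] = 1.0
    for idx in range(N):
        F[idx] = F[idx] + uv*screen**(N - idx)

    temp = np.linalg.solve(A, -F/k)
    return temp[0] - temp[1]

for h in (0.2, 0.1, 0.05):
    print(h, surface_jump(h))   ## compare with F[0]/(2*k)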

onsdagen den 7:e augusti 2013

(Back-(Radiation)-Therapy)

To cut a long story short:

Whatever the thermodynamic effect of a colder object radiating on a warmer object is, it has already been taken into account by the coefficient of thermal conductivity which, despite its name, measures all kinds of diffusive heat transport including radiation. (How could it not?)

That is the correct solution. I could also mention that I am not alone in this opinion.

For example, G&T write the following in their first falsification paper:

"A physicist starts his analysis of the problem by pointing his attention to two fundamental thermodynamic properties, namely

the thermal conductivity, a property that determines how much heat per time unit
and temperature difference flows in a medium;"

In their reply to Halpern et al. they write:

"Speculations that consider the conjectured atmospheric CO2 greenhouse effect as an "obstruction to cooling" disregard the fact that in a volume the radiative contributions are already included in the measurable thermodynamical properties, in particular, transport coefficients."

I couldn't agree more. What I have just stated is very powerful in its simplicity, since, if you want to know the thermodynamic effect of doubling the CO2 concentration, you only need to measure the changes in the transport coefficients. These changes will of course be unmeasurable (although there is probably some tiny factual difference). And that's it. No need for any redundant radiative transfer calculations. The Greenhouse Effect is no more, gone like a fart in the wind.

The reason I am mentioning this is that there is a tendency among some people to overdo things. The overall theme of these various claims is that radiation from a colder body cannot be absorbed and/or cannot have any effect on a warmer body. My reaction to that is: why not? Look at Newton's law of cooling:

The heat transfer Q from hot (T1) to cold (T2) is given by

Q = k(T1 - T2)

Since the temperature of the colder object T2 occurs in the formula, the colder object must be doing something to the warmer object. If it did nothing we wouldn't feel the difference between 20 and -10 degrees. What is this something? Well, maybe in part it is the absorption of radiation. I don't know for sure, but some people seem to know a whole lot about those things. And if it isn't radiation it must be something else. Does this "something else" violate the second law?
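
As a toy illustration of why T2 must matter, here is a minimal sketch (my own, with made-up numbers) that integrates Newton's law of cooling for the same warm object placed first in 20-degree and then in -10-degree surroundings; the only thing that differs between the two runs is the temperature of the colder surroundings:

k = 0.1      ## made-up cooling constant, 1/minute
dt = 0.5     ## time step, minutes
T0 = 37.0    ## initial temperature of the warm object, degrees C

for T_env in (20.0, -10.0):
    T = T0
    for step in range(int(60/dt)):       ## one hour of cooling
        T = T - k*(T - T_env)*dt         ## Newton's law: dT/dt = -k*(T - T_env)
    print(T_env, T)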

There is no room to discuss all of the excessive statements here; I have already done so to a certain extent previously on this blog. I just want to point out an obvious danger.

Accepting that a warmer object can indeed absorb radiation from a colder object and that this might slow down the cooling is not the same thing as accepting the Greenhouse Effect.

If you do claim the contrary then the Lukewarmers have won. Then their "trademark" has been saved for future generations and can pop up at any time with some new twisted definition. Insisting on this naïve simplification would be a great disservice to society.



onsdagen den 22:e maj 2013

Lost in semantics

Following the long argument about "back-radiation" and its newly invented derivative "back-conduction", I have started to wonder whether most of the disagreement can be traced back to more or less semantic confusions and misunderstandings. First of all, below I have listed the modes of heat transfer most commonly thought of:

Radiation
Conduction
Convection

Now consider instead the following list

Diffusive heat transfer
Convective heat transfer

What is the difference? I would say that the first list ties in with the "actual" physical mechanisms, whereas the second list is a classification into different mathematical forms. Convection probably belongs to convective heat transfer, conduction is usually thought of as diffusive, but what about radiation? My guess is that radiation should be considered diffusive too. Now let's add some more confusion:

Thermal conductivity

The very name seems to imply that it refers only to conduction. But suppose you want to measure the thermal conductivity of a gas experimentally. Is it possible to tell the molecules: "Hey, guys! Could you stop radiating for a while, I only want to measure the conductive heat transfer"? Of course it is impossible, yet we stick to the misnomer "conductivity". Now let's move on:

Back-radiation

This is perhaps one of the most infuriating concepts of modern times. Who invented it? I don't know. If we look again at the first list, radiation occurs as one particular mode of heat transfer. Hence, if we accept back-radiation we also ought to be able to speak of

Back-conduction

But then the protagonists of back-radiation say: "Hey, wait a minute, when I speak of back-radiation I am simply speaking of down-welling electromagnetic radiation, which we can measure." OK, so in order to avoid confusion let's call it back-photons instead:

Back-photons

(People who don't like photons can instead think of "Back-electromagnetic rays".) But here comes the final nail in the coffin:

Back-phonons

What do you say now? "Well, well. OK. But photons don't stick out their fingers to measure the surrounding temperature."

I rest my case.  

måndagen den 20:e maj 2013

Derivation of the "isothermal" column

Here I outline what I believe to be a conclusive argument showing that the "kinetic" temperature is constant with height for the canonical ensemble of an ideal gas in a gravitational field. It is not taken from any "authoritative" source, so I make a reservation for errors. Recall the Boltzmann factor

exp( -( (1/2)*m*v^2 + m*g*h ) / (kB*T) )

which is the relative probability of finding a single particle at height h with speed v when the system has reached equilibrium, that is, maximum Gibbs entropy. (Notice that this kind of factorization cannot be done for the micro-canonical ensemble.) Now I define the "kinetic temperature" Tk in the following way:

Tk(h) = < (1/2)*m*v^2 >_h   (up to a constant factor, the average taken over the particles at height h)

The reason for this notation is of course that there already exists a temperature T pertaining to the system as a whole. Let N be the total number of particles; using the Boltzmann factor, the kinetic temperature can be calculated as follows:

Tk(h) = ( N * ∫ (1/2)*m*v^2 * exp( -( (1/2)*m*v^2 + m*g*h ) / (kB*T) ) dv ) / ( N * ∫ exp( -( (1/2)*m*v^2 + m*g*h ) / (kB*T) ) dv )   (up to a constant factor)

From this point it is very easy to show that Tk is independent of h, which I leave as an exercise. It can also be shown that with this definition we have

Tk = T
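
For readers who prefer a numerical check to the exercise, here is a minimal Monte Carlo sketch (my own, in arbitrary units) that samples heights and speeds from the Boltzmann factor above and bins the kinetic temperature by height; the binned values all come out close to T, independent of h:

import numpy as np

kB, T, m, g = 1.0, 1.0, 1.0, 1.0                 ## arbitrary units
rng = np.random.default_rng(0)
n = 500000

h = rng.exponential(kB*T/(m*g), n)               ## height factor exp(-m*g*h/(kB*T))
v = rng.normal(0.0, np.sqrt(kB*T/m), n)          ## one velocity component, exp(-m*v**2/(2*kB*T))

bins = np.linspace(0, 5, 11)
labels = np.digitize(h, bins)
for b in range(1, len(bins)):
    sel = labels == b
    if sel.any():
        Tk = m*np.mean(v[sel]**2)/kB             ## <(1/2)*m*v^2> = (1/2)*kB*Tk per component
        print(bins[b-1], bins[b], Tk)
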
måndagen den 6:e maj 2013

Reply to Cotton

Prompted by a comment made by Doug Cotton on one of my earliest and most read posts, "On the temperature profile of an ideal gas under the force of gravity", I will here elaborate further on this important and thought-provoking topic. There are many things to say about this; first of all, though, I am going to address the seemingly never-ending discussion about which temperature profile maximizes the Gibbs entropy of an ideal gas in a gravitational field.

Note first that what we are discussing is an idealized theoretical construct which relies on a postulate called the "ergodic hypothesis". What that is will be left for later. Most importantly, however, just because you have a theoretical model in your hands it is not self-evident that this model can be applied to any specific problem in the real world. Physicists seem to be especially vulnerable to this kind of confusion between model and reality, not least when it comes to thermodynamics; it is not only in climate science that researchers complain about the arrogant thermometers that never seem to measure the correct temperature.

Ludwig Wittgenstein said that the purpose of philosophy was to clear out linguistic misunderstandings. Karl Popper, on the other hand, was somewhat more skeptical about the usefulness of this principle in science, advocating instead that a scientific theory is nothing other than the set of its predictions. He saw a danger in the possible infinite regress of constantly dwelling on definitions. In this case, however, I think that we can indeed resolve matters with the Wittgensteinian approach.

Consider an arbitrary energetically isolated (adiabatic) thermodynamic system divided into two parts. Moreover, we impose the two measures entropy (S) and temperature (T) on this system: (S1, T1) for the first half and (S2, T2) for the second half. The entropy is supposed to be a so-called "extensive" thermodynamic variable, which means that the total entropy S is the sum of the entropies of the two subsystems:

S = S1 + S2

The same does not hold for the temperature, which is a so-called "intensive" thermodynamic variable. Indeed, unless we are in equilibrium it is not even defined for the system as a whole. Now we define the temperature of each subsystem by means of the entropy as follows:

1/T = dS/dE

Or put into words, the inverse of the temperature is the (infinitesimal) rate of change of entropy per unit change of energy. Now suppose the following:

1. The system as a whole has reached a maximum possible entropy given the available energy.

2. T1 != T2  (Arbitrarily we may assume that T1 > T2)


Now imagine that subsystem 1 loses a small amount of energy \Delta E to subsystem 2. The total change in entropy can then be calculated as follows

\Delta S = \Delta S1 + \Delta S2 = -\Delta E/T1 + \Delta E/T2 = \Delta E*(1/T2 - 1/T1) > 0

In other words: Without adding any external energy to the system as a whole we have increased the total amount of entropy thus contradicting the first assumption.

Notice the very limited number of prerequisites. We didn't say anything about an ideal gas nor anything about a gravitational field. We didn't even define the entropy other than assuming that it was extensive. This is in fact so banal that we immediately realize the following: The modern definition of temperature is constructed in such a way that the two statements

1. The (energetically isolated) system has reached a maximum possible entropy given the fixed amount of energy

2. The temperature is the same everywhere in the system

are logically equivalent. 


The paper of Coombes and Laue and related articles

According to the above analysis, a "paradox" can only arise in the minds of physicists who tacitly introduce another definition of temperature (in this case for an ideal gas) and then assume that by some logical necessity this new definition must coincide with the old one. This is what I believe has happened here; that is why I gave this other definition of temperature the name "kinetic temperature", which is the ensemble average of the kinetic energy per constituent particle (omitting constant factors). If one performs the calculations one can show that, for the canonical ensemble of an ideal gas in the absence of a gravitational field, the temperature (as defined by the Gibbs entropy) is indeed the kinetic temperature. The question you may now ask is the following:

Do the distributions maximizing the Gibbs entropy for the canonical ensemble of an ideal gas in a gravitational field imply a uniform (constant) kinetic temperature?

This question has been analyzed rigorously and the answer appears to be yes. Nowhere does Cotton present any calculation or reference to show the opposite. The key to grasping this intuitively is to realize that gravity makes both the density and the pressure decline with height according to the barometric formula, from which it follows that the temperature must be uniform. One should remember, though, that the isothermal column isn't the only hydrostatically stable column, but it is the one that maximizes the entropy.
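
Here is a minimal sketch of that intuition (my own formulation, with made-up, roughly Earth-like numbers): if the pressure and the density both decay with the same barometric factor, then the ideal-gas temperature p/(rho*R) comes out the same at every height:

import numpy as np

R, g, T0 = 287.0, 9.81, 288.0        ## gas constant J/(kg*K), gravity m/s^2, reference temperature K
p0 = 101325.0                        ## surface pressure, Pa
rho0 = p0/(R*T0)                     ## surface density from the ideal gas law

z = np.linspace(0, 10000, 6)         ## a few heights, m
H = R*T0/g                           ## scale height in the barometric formula
p = p0*np.exp(-z/H)                  ## pressure declines exponentially
rho = rho0*np.exp(-z/H)              ## density declines with the same factor
print(p/(rho*R))                     ## recovered temperature: constant, equal to T0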

There is a small mistake in the Coombes and Laue paper. They talk about an adiabatically closed system when, in fact, the derivation they are referring to pertains to the canonical ensemble. The canonical ensemble is not energetically closed but may exchange energy with an external reservoir kept at a constant temperature. The statistical ensemble corresponding to an energetically isolated system is instead called the "microcanonical ensemble". In fact, the microcanonical ensemble of an ideal gas in a gravitational field was dealt with relatively recently by S. Velasco et al. They concluded that in this case the kinetic temperature is not uniform for finite systems. The practical implications of this result are almost zero, though. First of all there is no reason to assume that the atmosphere is an energetically isolated system; moreover, the distributions of the microcanonical ensemble approach those of the canonical ensemble very quickly in the thermodynamic limit. There is, however, an important didactic value in the sense that it shows that the "absolute" temperature need not be the same as the kinetic temperature for all systems.


The experiments of Graeff

These experiments pose a serious challenge to the standard wisdom if one assumes that the thermometer does indeed measure the kinetic temperature of the gas rather than simply "its own temperature". Since this needs to be verified separately I introduced another "definition" of temperature called the "empirical temperature", which is simply the reading of some particular thermometer. One of the most difficult problems with these kinds of experiments is the question of how you verify that your system has reached equilibrium (that there is no tiny heat transport from the warmer to the colder parts). This must be taken on faith. In any case, the only thing these experiments can possibly show is that the ideal gas model with maximum Gibbs entropy is not valid for this experimental setup. Perhaps this is the case for the atmosphere as well.


Other "derivations" of the lapse rate

The "ergodic hypothesis" upon which the Gibbs entropy is based can be formulated like this

All microstates with equal energy are equally probable. 

Somewhere along the path this statement seems to have become confused with something like

The total energy density is the same everywhere

leading to derivations based on the following assumption

potential energy + kinetic energy = constant

which (almost) leads to the adiabatic lapse rate. A simple thought experiment tells us that this cannot possibly hold for a finite atmosphere in an infinite space, since that would imply an infinite amount of energy in the system. Here, too, we touch on the problem of how to treat the atmospheric boundary.
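
For what it is worth, here is one way to read the "(almost)" above (my interpretation, with approximate textbook values for dry air): if the "kinetic energy" per unit mass is taken as cv*T, the constancy assumption gives dT/dh = -g/cv, which is in the neighbourhood of, but not equal to, the adiabatic lapse rate -g/cp:

g = 9.81      ## m/s^2
cv = 718.0    ## J/(kg*K), dry air at constant volume (approximate)
cp = 1005.0   ## J/(kg*K), dry air at constant pressure (approximate)

## cv*T + g*h = constant  =>  dT/dh = -g/cv
print(-g/cv*1000)   ## about -13.7 K/km from the energy-sharing assumption
print(-g/cp*1000)   ## about -9.8 K/km, the adiabatic lapse rate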

  

The paper of Hans Jelbring and related articles

I and many others owe a great deal to Hans Jelbring for having brought to our attention the issue of the temperature gradient and its relevance to the climate debate. His various statements do, however, come in a rather incoherent form and do not, as yet, in my mind constitute a physical model of the atmosphere. The theory seems to come in two separate parts: on the one hand an assumption about a "static" gravitationally induced lapse rate, and on the other hand a conjecture that the atmospheric mass is the single most important parameter in determining the elevation of the surface temperature of the planets, as expressed in the title of his paper "Greenhouse Effect as a function of Atmospheric Mass". In hindsight one can see that there is something, perhaps unwittingly, catchy about this title. Notice in particular that it doesn't say "Pressure-induced Greenhouse Effect" or anything like that; more about this soon. People who cannot tolerate any dissent from the greenhouse gas dogma immediately assume, though, that what is implied is some kind of pressure-induced effect, against which they can use a plethora of arguments including "the second law of thermodynamics" and "static air pressure cannot create heat" etc. There might be some justification for these arguments, but it is somewhat ironic to see the same people embrace a theory stating that, instead of gravity, some magic gases create a temperature gradient in a system which would be isothermal in their absence. What appears to be missing in the Jelbring theory is an incorporation of the solar forcing (F) and a theory of the atmospheric boundary layer to produce a formula with which one can calculate the temperature field.


"Pressure/Gravity effect" versus "Blanket effect"

As I have argued in several posts, there is another approach to the atmospheric mass conjecture which I here call the "blanket effect". Put in simple terms, the atmosphere acts as a blanket whose effective "thickness" (L) is determined by its mass. If we treat the incoming sunlight as an energy source and assume that, at a certain pressure, there is some effective boundary layer (whose temperature we put to zero for simplicity), then we can derive the following simple but illuminating formula:

T = F*L/k

where T is the temperature, F the solar forcing, L the effective length or thickness of the atmospheric blanket above that altitude, and k the thermal conductivity. The concept of effective atmospheric thickness can be made more rigorous, which is done for example in the post "A Discrete Model Atmosphere", where the boundary layer is also taken care of. Since the pressure and the "effective blanket thickness" are both proportional to the mass, it is very easy to mix these up and confuse correlation with causation.
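
To make the correlation-versus-causation point concrete, here is a minimal sketch (my own, with made-up numbers) of T = F*L/k: doubling the atmospheric mass doubles the effective thickness L, and hence T, in exactly the same proportion as it would double the surface pressure:

F = 240.0    ## made-up solar forcing, W/m^2
k = 2.0      ## made-up effective conductivity of the "blanket", W/(m*K)
L = 100.0    ## made-up effective blanket thickness for the present mass, m

for mass_factor in (1, 2, 4):
    L_eff = L*mass_factor            ## blanket thickness scales with atmospheric mass
    print(mass_factor, F*L_eff/k)    ## so does the surface temperature T = F*L/k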

The concept of a pressure-induced effect is not completely alien to me, though. If we take the sun as an extreme example, I guess no physicist would consider treating the core plasma as an ideal gas.


Some answers to Doug Cotton's comments

"(a) There is no issue about the top of atmosphere or the stratosphere or thermosphere. These are regions where new absorption of incident Solar radiation dominates the much slower process of diffusion of kinetic energy. Also, the thermal gradient (aka "lapse rate') is -g/Cp where Cp is specific heat. But specific heat is only constant at a constant pressure and temperature. So the gradient approaches a limit of zero and never goes negative."

The latter part of this comment is the most confusing since it seems to assume that Cp approaches infinity. I cannot imagine the physical conditions under which it would do that. 

"(b) The paper by Coombes et al simply is not based on the Second Law of Thermodynamics. They quite incorrect assume thermal equilibrium is implied by that law. It is not, and nothing in the law implies that it must be. The law states the thermodynamic equilibrium will evolve in a state of maximum accessible entropy."

In order to understand this comment I assume that Cotton advocates a second law of thermodynamics stating that

The spontaneous tendency of any thermodynamic system is to evolve towards an equilibrium characterized by a maximum accessible entropy

As I argued in the very beginning there exists a definition of temperature which makes the conditions "maximum entropy" and "constant temperature" logically equivalent. The quantity which I guess that Cotton is more interested in is the "kinetic" temperature. I state that

Cotton is correct in asserting that the second law he assumes valid does not imply a constant kinetic temperature for all systems.

However

Analysis has shown that for the canonical ensemble of an ideal gas in a gravity field it does.


Summary

What appears to be missing in Cotton's arguments is some kind of physical model, especially one that incorporates the solar forcing and takes care of the atmospheric boundary layer. Moreover, I conclude that his claim that the adiabatic lapse rate maximizes the Gibbs entropy of a column of air in a gravity field is unsubstantiated.

söndagen den 28:e april 2013

Issues parallel to AGW

For the first time in this blog's history I will go off topic. Or maybe not. Some time ago I became aware of another scientific issue, the HIV-AIDS hypothesis, which resembles the AGW issue in many respects. The big HIV hype occurred around 1990, which was before the internet age. That might explain why the dissidents in this field of science are less known to the general public. However, since there is now a pretty well-developed AGW-skeptic network on the internet, I thought we might give our fellow deniers in that other field some extra airing. Medicine is not my area of expertise, hence I will not be able to contribute anything original and will instead leave it to you to judge for yourself and find additional sources of information.


This is something of an introduction/teaser



A more technical presentation, questioning the very existence of the retrovirus HIV, can be found below.