Henry’s Pool Tables on Global Warming/Cooling
February 21, 2013 in climate change
I took a random sample of weather stations that had daily data.
In this respect "random" means any place on Earth with a weather station that has complete or almost complete daily data, subject to the sampling procedure given below.
I made sure the sample was globally representative (most data sets aren't!):
a) The number of weather stations taken from the NH must equal the number of weather stations taken from the SH.
b) The sample must balance by latitude (average latitude as close to zero as possible).
c) The sample must also balance roughly 70/30 between stations at or near the sea and inland stations.
d) Longitude does not matter: in the end we are looking at average yearly temperatures, which include the effect of seasonal shifts and irradiation, and the Earth rotates once every 24 hours. So balancing on longitude is not required.
e) All continents must be included (unfortunately I could not get reliable daily data going back 38 years from Antarctica, so there is always that question mark, knowing that you can never get a "perfect" sample).
f) I made a special provision for months with missing data: rather than filling in a long-term average, as is usual in statistics, I take the average of that same month in the preceding year and in the following year. This is because we are studying weather patterns, which might change over time (a minimal sketch of this rule follows after this list).
As an example, here you can see the annual average temperatures for New York JFK:
You can copy and paste the first four columns of the results into Excel.
Note that in this particular case you will have to go into the months of 2002 and 2005 to see which months have missing data, apply the correction described in f), and then determine the average temperature for 2002 and 2005 from all twelve months of the year.
g) I did not look only at means (average daily temperature), like all the other data sets, but also at maxima and minima.
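As a minimal sketch of the missing-month rule in f), assume one station's monthly means are kept in a plain Python dict keyed by (year, month); the station, values and variable names below are hypothetical, purely to illustrate the correction.

```python
import numpy as np

def fill_missing_month(monthly, year, month):
    """Rule f): estimate a missing monthly mean as the average of the same
    month in the preceding year and the following year, rather than a
    long-term climatological average."""
    neighbours = [monthly.get((year - 1, month)), monthly.get((year + 1, month))]
    neighbours = [v for v in neighbours if v is not None]
    return float(np.mean(neighbours)) if neighbours else float("nan")

# Hypothetical example: monthly mean maxima (deg C) for one station,
# with May 2002 missing.
monthly = {(2001, 5): 21.4, (2003, 5): 22.0}
monthly[(2002, 5)] = fill_missing_month(monthly, 2002, 5)   # -> 21.7
# The annual mean for 2002 is then taken over all twelve (now complete) months,
# as in the New York JFK example above.
```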
At each station I determined the average change in temperature per annum from the average temperatures recorded over the period indicated (least-squares fits). The figure reported is the value before the x, i.e. the slope of the fitted line.
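A sketch of that per-station step, assuming the annual averages for one station are already held in two arrays; numpy's least-squares polyfit gives the straight line, and its slope is "the value before the x". The data below are placeholders, not actual station values.

```python
import numpy as np

def warming_rate(years, annual_means):
    """Least-squares straight line through one station's annual averages.
    Returns the slope, i.e. the change in temperature per annum."""
    slope, _intercept = np.polyfit(years, annual_means, 1)
    return slope

# Placeholder data for one station over 1975-2012 (38 annual means):
years = np.arange(1975, 2013)
annual_means = 14.0 + 0.01 * (years - 1975)           # made-up values
print(round(warming_rate(years, annual_means), 4))     # -> 0.01 degC per annum
```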
The end results at the bottom of the first table (on maximum temperatures) clearly showed a drop in the speed of warming that started around 38 years ago and continued to drop in every other period I looked at.
I did a linear fit on those 4 results for the drop in the speed of global maximum temperatures and ended up with y = 0.0018x - 0.0314, with r² = 0.96.
At that stage I knew that I had hooked a fish: I was at least 95% sure that (maximum) temperatures were falling. I had wanted to take at least 50 samples, but decided this would not be necessary with such a high correlation.
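A sketch of that fit, assuming the four per-period rates for the maxima are collected as (x, y) pairs, where x is however the four periods are indexed in the table; the numbers below are stand-ins purely to make the snippet run, and the real four values come from the first table.

```python
import numpy as np

def linear_fit_with_r2(x, y):
    """Least-squares line y = a*x + b plus the coefficient of determination r^2."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    a, b = np.polyfit(x, y, 1)
    residuals = y - (a * x + b)
    r2 = 1.0 - residuals.var() / y.var()
    return a, b, r2

# Stand-in values only; substitute the four rates from the maxima table.
x = [1.0, 2.0, 3.0, 4.0]
y = [0.010, 0.021, 0.026, 0.040]
a, b, r2 = linear_fit_with_r2(x, y)
print(f"y = {a:.4f}x {b:+.4f}, r2 = {r2:.2f}")
```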
On the same maxima data, a second-order polynomial fit, i.e. parabolic, gave me
y = -0.000049x² + 0.004267x - 0.056745
That correlation is also very high, showing a natural relationship, like the trajectory of somebody throwing a ball.
Projecting the above parabolic fit backward (5 years) showed a turn in the curve
happening around 40 years ago. You always have to be careful with forward and backward projection, but you can do so with such a high correlation (0.995).
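The parabolic fit and the backward projection can be sketched the same way. The coefficients below are the ones quoted above; the x values used for the projection are hypothetical, since they depend on how the periods were indexed.

```python
import numpy as np

def parabolic_fit(x, y):
    """Second-order least-squares fit; returns [c2, c1, c0] for y = c2*x^2 + c1*x + c0."""
    return np.polyfit(x, y, 2)

# Coefficients quoted above for the speed of change of the maxima:
coeffs = [-0.000049, 0.004267, -0.056745]

# Projecting backward (or forward) is just evaluating the parabola at new x values;
# the x values here are hypothetical.
x_new = np.array([40.0, 41.0, 42.0, 43.0, 44.0, 45.0])
print(np.polyval(coeffs, x_new))
```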
Ergo: the final curve must be a sine wave fit, with another turn of the curve happening somewhere at the bottom.
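If one wanted to attempt the proposed sine-wave fit, scipy's curve_fit can do it, but note that with only four points per data set such a fit is under-determined; the series below is entirely synthetic, and the amplitude, period and offset are illustrative assumptions, not results from the post.

```python
import numpy as np
from scipy.optimize import curve_fit

def sine_model(t, amplitude, period, phase, offset):
    """A single sine wave: the shape proposed above for the long-run curve."""
    return amplitude * np.sin(2.0 * np.pi * t / period + phase) + offset

# Entirely synthetic rates, only to show the mechanics of the fit:
t = np.arange(0.0, 80.0, 5.0)
rates = 0.03 * np.sin(2.0 * np.pi * t / 70.0 + 0.3) + 0.002

# curve_fit needs a reasonable initial guess, especially for the period.
params, _cov = curve_fit(sine_model, t, rates, p0=[0.03, 65.0, 0.0, 0.0])
print(params)   # amplitude, period (years), phase, offset
```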
Now, I simply cannot be clearer about this. The only bias might have been that I selected stations with complete or near-complete daily data. But even that in itself would not affect the randomness, in my understanding of probability theory.
Either way, you could also compare my results (in the means table) with those of Dr. Spencer, or even those reported by others, and you will find the same 0.14/decade since 1990 or 0.13/decade since 1980.
In addition, you can fit the speed of temperature change in the means and in the minima to binomials (second-order polynomials) with more than 0.95 correlation. So I do not have just 4 data points for a curve fit; I have 3 data sets with 4 data points each, and they each confirm that it is cooling. And my final proposed fit for the drop in maximum temperatures shows it will not stop cooling until 2039.