"The greatest evil is not now done in those sordid 'dens of crime' [or] even in concentration camps and labour camps. In those we see its final result. But it is conceived and ordered... in clean, carpeted, warmed and well-lighted offices, by [those] who do not need to raise their voices. Hence, naturally enough, my symbol for Hell is something like the bureaucracy of a police state or the office of a thoroughly nasty business concern."
― C.S. Lewis, The Screwtape Letters
Let me begin by saying that I buy the notion of human-caused global warming.
Now let me say that it bothers me when people conflate weather and climate. By definition, climate is very, very stable; weather, by contrast, is very, very variable.
To state flatly that an extreme weather phenomenon is caused by global warming is simply unprovable. Sure, it sort of makes sense that if one puts more energy into a system, one might see something come of that. But with the plethora of variables at work in any global temperature math model, I doubt you could come up with "proof" for any single weather outbreak.
" ... a man of notoriously vicious and intemperate disposition."
Leaving aside the proper usage of the word 'proof', it is just silly to use a single modern data point when one has access to lots of modern data points. If one is using probability to evaluate the degree of change that might be occurring, I would want a probability envelope for 'before' and another for 'after.' How much has the mean shifted? What is the probability that the shift is just noise? I would think this would have been done, or, if it hasn't, that there would be a reason why it hasn't.
While I'm watching the exchange with interest, I'm disappointed that no one is attempting to use real world observation and data. It remains an abstract exercise in technicalities entirely independent of observational data.
Are you saying this calculation is not valid? If you accept the formula given in Wiki, take the limit of SX1X2 as N1 goes to infinity. You apply L'Hopital's rule and get SX1X2 = SX1.
The first equation becomes t = (X1-X2) / (SX1/N1^0.5)
How is that invalid?
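The pooled-variance step is easy to check numerically. The sketch below uses invented numbers (a made-up control sample and a single made-up value), not any data from this thread; it only illustrates that with a "sample" of one, the pooled standard deviation collapses to the control sample's s, which is the SX1X2 = SX1 step.

```python
import numpy as np

rng = np.random.default_rng(0)
control = rng.normal(loc=50.0, scale=10.0, size=10_000)  # invented "storm speeds"
x2 = 95.0                                                # a single observed value

n1 = control.size
s1 = control.std(ddof=1)

# Pooled variance with n2 = 1: ((n1-1)*s1^2 + 0) / (n1 + 1 - 2) = s1^2,
# so the pooled SD equals s1 exactly.
sp = np.sqrt((n1 - 1) * s1**2 / (n1 + 1 - 2))

# Equal-variance two-sample t with n2 = 1; the standard error is
# sp * sqrt(1/n1 + 1/n2), here sp * sqrt(1/n1 + 1).
t = (control.mean() - x2) / (sp * np.sqrt(1.0 / n1 + 1.0))
print(abs(sp - s1) < 1e-9, round(t, 1))
```

Note the standard error in the last line: as N1 grows it tends to sp itself, so which limiting formula one writes down depends on which standard-error expression one starts from.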
Last edited by Mikebert; 01-13-2014 at 07:55 AM.
Suppose you are given a coin that is either an ordinary coin or a highly biased coin that comes up heads practically all of the time (tails has never been observed in thousands of flips). You flip it once and it comes up tails. Are you saying you can conclude nothing from that demonstration?
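For what it's worth, the information carried by that single flip can be made precise with Bayes' rule. The numbers below are invented for illustration: say the biased coin shows tails once in 10,000 flips, and either coin was equally likely to be handed to you.

```python
# Probability of tails under each hypothesis (illustrative numbers)
p_tails_fair = 0.5
p_tails_biased = 1e-4
prior_fair = 0.5  # no idea which coin we were handed

# Posterior probability the coin is fair after seeing one tails
posterior_fair = (p_tails_fair * prior_fair) / (
    p_tails_fair * prior_fair + p_tails_biased * (1 - prior_fair)
)
print(round(posterior_fair, 4))  # 0.9998: one flip can be very informative
```

A single observation really can settle things here, because the two hypotheses assign wildly different probabilities to it.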
Nawww. Get ready for polar vortex, the sequel: Action packed. Polar bears on the Potomac.
and
Ditto.
Originally Posted by bad dog
http://www.bovada.lv/
MBTI step II type : Expressive INTP
There's an annual contest at Bond University, Australia, calling for the most appropriate definition of a contemporary term:
The winning student wrote:
"Political correctness is a doctrine, fostered by a delusional, illogical minority, and promoted by mainstream media, which holds forth the proposition that it is entirely possible to pick up a piece of shit by the clean end."
He might be disappointed. 1983 saw similar arctic weather patterns and we had the slowest hurricane season of the satellite era. Of course these arctic conditions aren't exactly rare either (contrary to all of the media hype). 1985 had a cold spell and also a busy hurricane season. 1996 was also a busy hurricane season with a winter cold snap. That said, '96 was also absent El Niño and La Niña, similar to 2013, and yet we had a hurricane season well below average this past year.
Of course it was a slow year for weather in general. Tornadoes were at their lowest in decades. Always remember: when folks around you are talking about "extreme weather," be sure to ask them which of the two extremes they happen to be referring to.
The thing I might guess at is melting in the Arctic. With this much cold air coming south, riding on a jet stream that is curling north and south more than usual, I suspect it is warmer way up north than usual. Less thick winter ice might be building up. Thus, late next summer, there might be more blue water up there.
I vaguely remember a very cold Siberia a while back, followed by a media storm that the north pole was melting. The North Pole has continued melting, but the rate hasn't been quite so alarming. I still have a vague superstitious belief in the conservation of energy. If it is cold here, it must be warmer somewhere.
But there is so much going on, so many interactions, that I can't presume that my guess as to what might happen next will be any better than the next person's.
1. Yes, the winter of 1983-1984 was bad here. The Siberian Express was going and we had snow on the ground for 6 weeks in the latter part of Jan all through Feb.
2. Before that, winter of 1976-1977, all lakes froze over.
3. Summer of 1980 - major heat wave
4. Summer of 2010 - major heat wave.
Weather forecasts are like poker. Oftentimes you're not going to make your hand and end up with a busted straight or flush.
Originally Posted by Accuweather
"Of course it was a slow year for weather in general. Tornados were at their lowest in decades. Always remember: when folks around you are talking about 'extreme weather' be sure to ask them which of the two extremes they happen to be referring to."
It's more than that.
1. Lots of severe thunderstorms ... or not.
2. Heat waves: easily a month's worth of record highs.
3. Cold waves: also easily a month's worth of record lows; lakes freeze over here.
4. Snowfall exceeds 12 inches/season ... or no snow.
5. Etc.
Do you understand the difference between population and sample? The assumption is equal variance of the populations from which the samples are drawn. So yes the single result is drawn from a population that has a specific variance. And the control sample is drawn from a population that has a specific variance. The null hypothesis is that the treatment has no effect, that is, both are drawn from the same population, in which case the variance is necessarily equal.
Yes. Do you?
"The assumption is equal variance of the populations from which the samples are drawn."
Is that assumption valid for your single storm sample? How can you possibly know?
The actual test for determining whether the variances are equal enough to even use a t-test requires chi-square analysis. The degrees of freedom for a chi-square are n-1 for each sample.
"So yes the single result is drawn from a population that has a specific variance."
But what is the variance for your sample? That's what the statistical confidence is based on.
"And the control sample is drawn from a population that has a specific variance."
Is your control set a sample or is it the population itself?
"The null hypothesis is that the treatment has no effect, that is, both are drawn from the same population, in which case the variance is necessarily equal."
Nope. Have you actually looked at the test statistic? The calculation makes use of the sample variances, not the population variance.
The test is also for mean differences. What exactly is a mean based on a single value? Are you really still trying to claim that a single measure is a sample set? The t-test is also based on the assumption of random sampling. In what universe is your picking one particular storm a random sample?
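To make the variance point concrete, here is a sketch with invented numbers (the groups and the single "storm" value are made up). A variance-equality pretest needs a sample variance from each group with n-1 degrees of freedom apiece, and a single observation simply has none.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 30)   # control group (invented)
b = rng.normal(0.0, 1.5, 30)   # comparison group (invented)

# The usual variance-equality pretest compares the sample variances as a
# ratio (an F statistic), with n-1 degrees of freedom for each sample.
f_ratio = b.var(ddof=1) / a.var(ddof=1)
print(len(a) - 1, len(b) - 1, round(f_ratio, 2))

# A "sample" of one observation has no sample variance at all:
single = np.array([107.0])     # one storm's wind speed, made up
print(single.var(ddof=1))      # nan -- zero degrees of freedom
```

The nan in the last line is the whole objection in miniature: with n = 1 there is nothing to divide by.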
Last edited by Vandal-72; 01-15-2014 at 04:08 AM.
What do you mean is it valid? That is the null hypothesis!
Look, the null hypothesis is that there is no effect. If there is no effect, then the storm in question comes from the same population as the controls. The assumption of equal variance necessarily holds if the null hypothesis is correct.
Now if the probability calculated for the single value is sufficiently low, you reject the null hypothesis. This means the assumption of equal variance is NOT valid. The variance of the population from which the single storm came is probably a lot larger.
But what does a larger variance mean? It means that extreme events are more likely. But this is precisely one of the notions being considered.
"The actual test for determining that the variances are equal enough to even use a t-test require chi-square analysis. The degrees of freedom for a chi-square are n-1 for each sample."
This statement suggests to me that your approach to statistics is rote. You learned to use tools and when to apply them, but you don't know the reasoning behind the tools and the rules.
"Is your control set a sample or is it the population itself?"
It is a sample from a semi-infinite population. The population is assumed to have a normal distribution with standard deviation sigma. A random sample from this population will have a t-distribution with standard deviation s.
As the sample size gets larger, the t-distribution --> normal distribution and s --> sigma. So for a large sample size you do have a very good handle on the population distribution. A single value that is many sigmas out is unlikely to belong to that population.
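That convergence claim is standard and easy to illustrate with textbook table values: the two-sided 95% critical value of Student's t shrinks toward the normal's 1.960 as the degrees of freedom grow.

```python
# Two-sided 95% critical values of Student's t (standard table values),
# keyed by degrees of freedom, compared with the normal's 1.960.
t_crit = {2: 4.303, 10: 2.228, 30: 2.042, 120: 1.980}
z_crit = 1.960

for df, t in sorted(t_crit.items()):
    print(df, t, round(t - z_crit, 3))  # gap shrinks as df grows
```

By roughly df = 30 the difference is already in the second decimal place, which is why large control samples pin down the reference distribution so well.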
"Nope. Have you actually looked at the test statistic? The calculation makes use of the sample variances not population variance."
I went through this already. I showed that for a sufficiently large control population and a null hypothesis of no effect (meaning equal variance holds), the t statistic is independent of the treatment sample variance. In this case a sample variance is not needed.
"What exactly is a mean based on a single value?"
It is the value. The mean is the sum of all member values of the set divided by the number of members, which is defined for a set of one.
In the storm situation the key observation is the occurrence of a special storm of probability p within a specific window of time chosen in advance, during which there are N storms. p is such that the probability that any one of the N storms would be special (Np) is less than 0.05.
It is assumed that any window of time will contain a random sampling of storms, and so the window chosen will have such a random sample.
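The Np figure above is the usual first-order approximation to the exact probability that at least one of N independent storms is "special." A quick check with invented values of p and N (the thread gives neither, only that Np < 0.05):

```python
p = 0.001   # probability any given storm is "special" (invented)
N = 40      # storms in the window chosen in advance (invented)

approx = N * p                    # the Np approximation used above
exact = 1 - (1 - p) ** N          # exact P(at least one special storm)
print(round(approx, 4), round(exact, 4))  # 0.04 0.0392
```

For small p the two agree closely, so the Np shortcut is harmless at these magnitudes.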
Vandal, your approach to statistics led you to say this:
I asked: Are you saying there is no difference between someone being dealt a royal flush out of the blue and one getting dealt one immediately after expressing the intent to get one?
You answered: Statistically? Yeah, there is no difference.
Practically every week someone gets a royal flush. Happens all the time. But nobody predicts it will happen to them. The entire basis of games of chance like poker, coin flips or lotteries is that it is impossible to predict the outcome.
Intuitively you must know this, so why did you say there was no difference? That makes no sense.
In hold'em your chance of getting pocket aces is 1 in 221 (odds of 220-to-1).
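The pocket-aces figure follows directly from counting two-card combinations in a standard 52-card deck:

```python
from math import comb

# 6 ways to pick two of the four aces, out of 1326 two-card starting hands
p_aa = comb(4, 2) / comb(52, 2)
print(comb(4, 2), comb(52, 2), round(1 / p_aa))  # 6 1326 221
```

That is a probability of 1/221, which is commonly quoted as odds of 220-to-1 against.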
"The actual test for determining that the variances are equal enough to even use a t-test require chi-square analysis. The degrees of freedom for a chi-square are n-1 for each sample."
OK. What about this business of "running bad" in hold'em? You know, getting trash hands like 10/4, 7/2, K/3, 2/3, etc.? I got this trash for 3 days straight at Lake Charles when I lived in Houston.
"But what is the variance for your sample? That's what the statistical confidence is based on."
52 cards, taken 2 at a time.
"Is your control set a sample or is it the population itself?"
If my session is long enough, I should get a sample of all 169 unique hold'em hands.
"Nope. Have you actually looked at the test statistic? The calculation makes use of the sample variances not population variance."
If you pay me, I can log all hands dealt to me on Bovada. That would be a sample, and you can do population stuff with the hands I get dealt. I also extend my offer to Mikebert. Uh, do y'all want to know which hands I play and which ones I fold as well?
"The test is also for mean differences. What exactly is a mean based on a single value? Are you really still trying to claim that a single measure is a sample set? The t-test is also based on the assumption of random sampling. In what universe is your picking one particular storm a random sample?"
Bovada is an online site, so my cards are random, based on a computer random number generator. Y'all are writing lots of posts about "storm". What I'd like to know is if said storm is tropical, subtropical, mid-latitude, or polar.
I'd think hold'em hands would be easier 'cause your sample is just 52 cards. You don't have to worry about stuff like sunspots fouling up your samples!
Bad dog and I want to know more about hold'em anyhow.
I'm wary of encouraging them, Rags. We don't want to be held responsible for all the variance they are going to encounter.
Poker players predict cards all the time. Sometimes they even get lucky.
The odds of calling a hand and then getting it? No greater than the odds of getting the hand in the first place. The call (a completely random guess) has zero influence over the deal (a physical event). Welcome to game theory.
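The independence claim is easy to simulate. This is a toy sketch with arbitrary choices: two specific cards are "named in advance," and we count how often a shuffled deck deals exactly those two, which is 1/C(52,2) = 1/1326 ≈ 0.00075 whether or not anyone announced them.

```python
import random

random.seed(7)
deck = list(range(52))
target = {48, 49}        # two specific cards "called" before the deal
trials = 100_000

hits = 0
for _ in range(trials):
    random.shuffle(deck)
    if set(deck[:2]) == target:
        hits += 1

# The announcement has no causal channel to the shuffle, so the hit
# rate should hover near 1/1326 regardless.
print(hits / trials)
```

The call changes nothing about the physics of the deal; it only changes what we notice afterward.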
No it isn't. The null hypothesis of a two sample t-test is that the population mean for the control group and the population mean for the treatment group are equal. That is not the same thing as what you are saying.
"If there is no effect, then that means the storm in question comes from the same population as the controls."
No. By definition a treatment group is a different population!
"The assumption of equal variance necessarily holds if the null hypothesis is correct."
Went to bed last night troubled by your last post. I knew it was wrong but I wasn't happy with my response to it. On my way to work this morning, it coalesced for me.
1 - Your treatment of the two sample t-test is in fact just a one sample t-test. Using a single value instead of a sample mean for x-bar2, coupled with its zero value for the variance of sample two, just makes your test a one sample t-test! In a one sample test you are trying to determine if the mean of a sample set is significantly different from an arbitrarily chosen value. This can be a very useful tool for certain situations, but one thing it absolutely cannot do is determine treatment effects, because the single value is arbitrary. Even if you use the wind speed of a natural storm, you arbitrarily chose the sigma level for the requisite storm's minimum wind speed.
Basically what you are doing is claiming that the value of a particular storm is significantly different from the mean for all storms. Well duh! That tells you nothing about any treatment effect.
2 - You have completely abandoned any understanding of what the variables in the two sample t-test statistic are! The x-bars are estimators for the unknown population means. How on this earth can you claim that the wind speed of a single storm is a good estimator for the population mean of all post-warming storms? And that is what a two sample t-test really is. Are the mean values from two populations different? We use independent sample means as estimators for those unknowable population means.
3 - Even if we allow you to run your "not really a" two sample test and you reject the null hypothesis, that does not mean that the alternative hypothesis automatically gets accepted! Your result may say reject null but there are actually two possible conclusions. A: "Reject null and accept alternative." B: "Reject null and do not accept alternative." Option B basically means that your test was so poorly designed or the data was so random that you can not conclude anything about your original question.
Guess how you increase confidence in accepting the alternative hypothesis... larger sample sizes. With a test sample size of one, you have zero confidence in accepting the alternative. That's why proper statistical studies require either equal sample sizes between the two samples, or, if that isn't possible, at least enough measurements for one group to reduce your beta appropriately. That's why my original complaint still stands! You cannot conclude anything about a treatment effect from a test sample size of one.
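The sample-size point can be shown with a toy power simulation. Everything here is invented (shift, spread, group sizes); the statistic is a simplified z-style comparison that plugs in the treatment group's true sd, precisely because a sample of one supplies no variance estimate of its own.

```python
import numpy as np

rng = np.random.default_rng(3)

def reject_rate(n_treat, shift=5.0, sd=10.0, n_ctrl=100, trials=2000, crit=1.96):
    """Fraction of simulated experiments that reject 'no shift' at ~5%."""
    hits = 0
    for _ in range(trials):
        ctrl = rng.normal(0.0, sd, n_ctrl)
        treat = rng.normal(shift, sd, n_treat)
        # z-style statistic; the treatment group's known sd is plugged in,
        # since with n_treat = 1 there is no sample variance to estimate
        se = np.sqrt(ctrl.var(ddof=1) / n_ctrl + sd**2 / n_treat)
        if abs(treat.mean() - ctrl.mean()) / se > crit:
            hits += 1
    return hits / trials

# Power to detect a genuine 0.5-sigma shift collapses as the treatment
# sample shrinks toward one.
rates = {n: reject_rate(n) for n in (1, 5, 25)}
print(rates)
```

Even with a real half-sigma shift present, the n = 1 "sample" rejects the null only a little more often than chance, while n = 25 detects it most of the time.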
You are still spouting statistical nonsense.
"Now if the probability calculated for the single value is sufficiently low you reject the null hypothesis."
No you don't, because you aren't running an actual two sample t-test. All you have done is calculated a probability. That isn't a test of any sort of hypothesis.
"This means the assumption of equal variance is NOT valid. The variance of the population from which the single storm came is probably a lot larger."
Except you are cherry-picking from the treatment group by completely ignoring the fact that variance works both ways: more large storms and more weak storms.
"But what does a larger variance mean? It means that extreme events are more likely. But this is precisely one of the notions being considered."
"This statement suggests to me that your approach to statistics is rote. You learned to use tools and when to apply them, but you don't know the reasoning behind the tools and the rules."
And you never learned what the variables in the equations actually are. You are trying to apply mathematical techniques to variables in equations without taking into account what the variables actually represent.
"It is a sample from a semi-infinite population. The population is assumed to have a normal distribution with standard deviation sigma."
Storm speeds are normally distributed? Since when?
"A random sample from this population will have a t-distribution with standard deviation s."
Fine. Your treatment sample is not part of that population by definition, however.
"As the sample size gets larger, t-distribution --> normal distribution and s --> sigma. So for large sample size you do have a very good handle on the population distribution. A single value that is many sigmas out is unlikely to belong to that population."
"I went through this already. I showed that for a sufficiently large control population and a null hypothesis of no effect (meaning equal variances holds) the t statistic is independent of treatment sample variance. In this case a sample variance is not needed."
Yeah, you turned a two sample t-test that can be used to identify treatment effects into a one sample t-test that cannot. It's statistical fraud.
"It is the value. The mean is the sum of all member values of the set divided by the number of members, which is defined for a set of one."
Yeah, like I said earlier, I didn't like some of my first response. I hadn't yet recognized your fraudulent use of a one sample t-test in place of the necessary two sample t-test. You can disregard this part of my critique; it has been superseded.
What if everyone starts predicting a royal flush with every hand? Do you conclude collusion when the two inevitably match up?
"The entire basis of games of chance like poker, coin flips or lotteries is that it is impossible to predict the outcome."
You are concluding collusion based on imagined evidence. I prefer to use actual evidence. I may suspect collusion, but you have not presented any actual evidence of such.
"Intuitively you must know this, so why did you say there was no difference? That makes no sense."
(This side discussion is very illuminating concerning your obsession with a single low probability storm wind speed as somehow special with absolutely no corroborating evidence.)