Variance question

12-01-2015 , 12:54 PM
^^^ finally, someone gets it.
12-01-2015 , 01:08 PM
Quote:
Originally Posted by BrianTheMick2
You can also show that the minimum SD is ~3.26 (using only whole numbers for the individual data points).

Not sufficient information to calculate a range for the population, but SD is definitely greater than zero.
I'd be interested in seeing the numbers you used for the 3.26. I tried to think of how to approach that for sample values but decided it was too much work. How likely (ballpark) is the actual data for the numbers you picked?

Population sd certainly greater than 0. But if you allow the data to be extremely unlikely for a population (with values still consistent with the data) then I believe my last example below shows 0 as a greatest lower bound for a population sd.


PairTheBoard


Quote:
Originally Posted by PairTheBoard
As I thought about it some more, my upper bound could be made a lot bigger without violating masque's condition requiring only values consistent with the data. The data would just be a lot more likely for the one I gave.

I suggested:

69 apples on the tree.

1 apple with $2 in gold
1 apple with $132
1 apple with $270
1 apple with $365
65 apples with $0

That has a sd of about 56.
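
A quick sanity check of that figure (a minimal Python sketch; it assumes nothing beyond the 69-apple configuration just listed):

Code:
import math

# PairTheBoard's proposed population: 4 gold apples plus 65 empty ones
values = [2, 132, 270, 365] + [0] * 65
n = len(values)                       # 69 apples, total gold $769
mean = sum(values) / n                # ~11.14
ss = sum((v - mean) ** 2 for v in values)
print(round(math.sqrt(ss / n), 2))        # population sd -> 55.82, "about 56"
print(round(math.sqrt(ss / (n - 1)), 2))  # sample sd     -> 56.23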

If you consider the singleton apple as something of an anomaly (why just 1 that day?), the data for the other 3 days is not at all terribly unlikely for that distribution.

However, the sd can be a lot larger if you let the data be less likely by doing this:

132 apples on the tree.

1 apple with $2 in gold
22 apples with $132
22 apples with $270
22 apples with $365
65 apples with $0


And the sd can be as small as you like by choosing N>=65 large in:

N+4 apples on the tree.

1 apple with $2 in gold
1 apple with $132
1 apple with $270
1 apple with $365
N apples with $0
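
A minimal sketch of that limit (Python; the N values are arbitrary illustrations). Since the four gold values are fixed, the variance is roughly 223553/(N+4), so the population sd decays like 1/sqrt(N):

Code:
import math

def pop_sd(n_zeros):
    # 4 fixed gold apples plus n_zeros empty ones
    values = [2, 132, 270, 365] + [0] * n_zeros
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((v - mean) ** 2 for v in values) / n)

for n_zeros in (65, 1_000, 100_000, 1_000_000):
    print(n_zeros, round(pop_sd(n_zeros), 3))
# sd falls from ~55.8 toward 0 as the tree grows, so 0 is the infimum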



PairTheBoard
12-01-2015 , 01:40 PM
Quote:
Originally Posted by PairTheBoard

And the sd can be as small as you like by choosing N>=65 large in:

N+4 apples on the tree.

1 apple with $2 in gold
1 apple with $132
1 apple with $270
1 apple with $365
N apples with $0
Actually, I think this example with N large for the "magical" tree (or orchard) may be the most reasonable of all, the reason being that the whole thing smells like a setup engineered by some guy who has salted a few apples with gold, trying to pull off some kind of con job. Anybody want to buy an apple orchard with "magical" golden apples? What's your bid based on the "data"?


PairTheBoard
12-01-2015 , 02:04 PM
So if, in some world-critical situation, you imagine there is a 50% chance the distribution doesn't have outliers, making estimation possible within at least a factor of 3 (instead of offering nothing, say), and a 50% chance it has outliers rendering the data impossible to use through the CLT (something I had said from the beginning of the talk on outliers, with the example of the 10 6s I provided), people will just come out and say no, I am not offering any guess...

Sure, whatever.


Because we can't simply answer that every time we apply a method, it works under the conditions it is supposed to work in (I already specified, via the Berry-Esseen inequality, when it works; put any distribution into the inequality and see how close it is for n=25), and then give the estimate as this value plus/minus (or times/divided by) some big error factor that comes from only 3-4 sum points, and thereby have a guess for the distribution whenever CLT convergence over sums of 20-27 is decent (and most textbook distributions that do not have well-separated disjoint parts will pass that test). Even if you could afford to be wrong by half an order of magnitude to buy yourself some time, you won't take that guess. It's better to take a guess 2 orders of magnitude off at random, right?
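
For reference, the Berry-Esseen inequality invoked here bounds the CDF error of the standardized sum by C*rho/(sigma^3*sqrt(n)), with rho = E|X-mu|^3. A rough numeric check in Python for an Exponential(1) distribution (my example, not one specified in the thread; C = 0.4748 is the best published constant I am aware of):

Code:
import math
from scipy.integrate import quad

# Exponential(1): mu = sigma = 1; rho = E|X - mu|^3 by numeric integration,
# split at the kink x = 1
mu = sigma = 1.0
f = lambda x: abs(x - mu) ** 3 * math.exp(-x)
rho = quad(f, 0, 1)[0] + quad(f, 1, math.inf)[0]   # ~2.415

C = 0.4748  # best known upper bound on the Berry-Esseen constant
for n in (20, 25, 27):
    print(n, round(C * rho / (sigma ** 3 * math.sqrt(n)), 3))
# ~0.23 at n = 25: a worst-case cap on the CDF error, quite loose in practice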

Or can't we then make other assumptions, introduce bimodal thinking, and see which parameters fit best, if we could afford a second guess too? Or make a combo guess by taking the geometric average of 2 guesses... (e.g. CLT, and then a bimodal mix of 2 bell curves with weights)?


Basically, if we had a real-life problem and were unable to collect any more data, all scientists would give up and say no, we won't make any prediction.

If you had for example 3 guesses to make, the first one wouldn't be the CLT idea. Of course not!


Sure. Because in science we only claim things we are 100% certain are facts, unless we are totally cornered and have no other choice but to improve the effort. As if we didn't know that all progress is made in steps toward more correct proposals. The problem is not daring to make an estimate; the problem is being arrogant about how accurate it can be when you make it.


If some curious guy collecting data had these 3-4 points, he would not amuse himself (until tomorrow comes) with an idea that works for a ton of distributions... say, ones that arise from diffusion of material into objects whose size distribution is perhaps known (i.e. apples), and which do not have disjoint, far-away, narrow spikes.

If this were poker sessions, we wouldn't imagine fast CLT convergence to Gaussian in the sums. Because, I suppose, a guy has win-rate differences between sessions of a couple hundred hands, and between opponents/tables, that in the near term eclipse the natural volatility of the game, lol! Sure. Because a per-100-hands sd of 80bb is not a ton of times larger than any win rate the guy could have, i.e. 0.1bb/hand or 0.2bb/hand or 0.03bb/hand, leaving only 1/5th to 1/10th of the sd to worry about in any multimodal scenario (and therefore no disjoint/outlier issue that would break the summation approximation).

We would simply give up and say no, I won't offer any estimate. When you are on Mars, alone and with no help, you won't take a guess in the end if all else is lost. You will not make the best guess you could make if the method were valid. No, you won't; you will stay silent and die, unwilling to attempt anything.


Well, we are different people then.

Last edited by masque de Z; 12-01-2015 at 02:20 PM.
12-01-2015 , 02:31 PM
Quote:
Originally Posted by PairTheBoard
I'd be interested in seeing the numbers you used for the 3.26. I tried to think of how to approach that for sample values but decided it was too much work. How likely (ballpark) is the actual data for the numbers you picked?

Population sd certainly greater than 0. But if you allow the data to be extremely unlikely for a population (with values still consistent with the data) then I believe my last example below shows 0 as a greatest lower bound for a population sd.


PairTheBoard
I used the means of each group (adjusted to keep the numbers whole*), so:

Qty : value
8 : $6
12 : $7
3 : $12
18 : $13
1 : $2
13 : $13
14 : $14

So we did, in fact, calculate the sample SD range. It is somewhere between 3.26** and 56.23 using the actual numbers provided and with no unnecessary (read: unwarranted) assumptions.

*out of an abundance of laziness. You can get it lower if you use the actual sample means instead of working with whole numbers.

**a bit lower than that if we feel like using those pesky decimals
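
A quick check of that 3.26 (a minimal Python sketch, taking the table above at face value):

Code:
import math

# (quantity, whole-dollar value) per group, as listed above; the groups pair
# up into the four observed days: 132 (20 apples), 270 (21), 2 (1), 365 (27)
groups = [(8, 6), (12, 7), (3, 12), (18, 13), (1, 2), (13, 13), (14, 14)]
values = [v for qty, v in groups for _ in range(qty)]
assert len(values) == 69
mean = sum(values) / len(values)
samp_sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))
print(round(samp_sd, 2))  # -> 3.26 (sample sd, n-1 in the denominator)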
12-01-2015 , 05:04 PM
Quote:
Originally Posted by BrianTheMick2
I used the means of each group (adjusted to keep the numbers whole*), so:

Qty : value
8 : $6
12 : $7
3 : $12
18 : $13
1 : $2
13 : $13
14 : $14

So we did, in fact, calculate the sample SD range. It is somewhere between 3.26** and 56.23 using the actual numbers provided and with no unnecessary (read: unwarranted) assumptions.

*out of an abundance of laziness. You can get it lower if you use the actual sample means instead of working with whole numbers.

**a bit lower than that if we feel like using those pesky decimals

Yea, I thought of doing that. I vetoed the idea because that sample population would be extremely unlikely to produce the actual data (assuming apples chosen randomly). For my 56-sd example, if you throw out the singleton as an anomaly, the 20-, 21-, and 27-apple sums are not terribly unlikely for the sample population I gave.

It does work though.


PairTheBoard
12-01-2015 , 07:20 PM
Quote:
Originally Posted by PairTheBoard
Yea, I thought of doing that. I vetoed the idea because that sample population would be extremely unlikely to produce the actual data (assuming apples chosen randomly). For my 56-sd example, if you throw out the singleton as an anomaly, the 20-, 21-, and 27-apple sums are not terribly unlikely for the sample population I gave.

It does work though.


PairTheBoard
That is just because you lack the heart of a champion.
12-02-2015 , 01:57 PM
Quote:
Originally Posted by BrianTheMick2
I used the means of each group (adjusted to keep the numbers whole*), so:

Qty : value
8 : $6
12 : $7
3 : $12
18 : $13
1 : $2
13 : $13
14 : $14

So we did, in fact, calculate the sample SD range. It is somewhere between 3.26** and 56.23 using the actual numbers provided and with no unnecessary (read: unwarranted) assumptions.

*out of an abundance of laziness. You can get it lower if you use the actual sample means instead of working with whole numbers.

**a bit lower than that if we feel like using those pesky decimals
Quote:
Originally Posted by PairTheBoard
Yea, I thought of doing that. I vetoed the idea because that sample population would be extremely unlikely to produce the actual data (assuming apples chosen randomly). For my 56-sd example, if you throw out the singleton as an anomaly, the 20-, 21-, and 27-apple sums are not terribly unlikely for the sample population I gave.

It does work though.
Of course, the assumption that the apples fall at random, independently of the days, may be a poor one. It may be that when the tree is shaken, apples with about the same gold content are primed to fall at the same time. If that's the case, then the data could be quite likely for BTM's proposed sample population above. And if you picked an apple randomly from the population that fell, the sd would be as he calculates, about 3.26.


PairTheBoard
12-02-2015 , 02:38 PM
Quote:
Originally Posted by PairTheBoard
Of course, the assumption that the apples fall at random, independently of the days, may be a poor one. It may be that when the tree is shaken, apples with about the same gold content are primed to fall at the same time. If that's the case, then the data could be quite likely for BTM's proposed sample population above. And if you picked an apple randomly from the population that fell, the sd would be as he calculates, about 3.26.


PairTheBoard
All the cool kids like nonmarkovian processes.
12-02-2015 , 03:07 PM
And exactly why would the apples with nearly the same value fall at the same time? (What physics supports this? The weight of the gold is tiny, only about 2 grams even for $100, far too little to correlate with anything that affects how easily an apple drops; of course none of this makes any physics sense as a story, it's metaphorical.) But even if they did do that, why would the 13s fall split almost down the middle across 2 different days, while all the 6s, 7s, 12s, and 14s fell on the same day?

Of course, we need to specify from the beginning whether the tree produces this gold (or whatever real object this symbolism stands for) by some common process, or whether it was just placed there at random without any connection, and whether we are trying to estimate the sd of the apples that dropped or of the apples of that tree in general, as in the apples this mechanism delivers through some common process.

If no process is known, or can even be conjectured, that leads to a distribution, then all we can do is look at what was received, and then of course only mathematical bound arguments can be offered for the range, based on the extremes of all configurations with respect to the sd.

Since the OP doesn't post to explain things further, one may just imagine all possible interpretations of the question to cover all cases.

The other thing one can also do is start considering integer partitions of the observed sums into 20, 21, 1, and 27 elements respectively. It would be nice if one could weight these partitions by some method that correlates probability with the size of the gold (or with clustering), but that's another issue (entropy arguments, or the energy of gold configurations, lol, i.e. partition functions). Of course, actually listing all the partitions of e.g. 132 into 20 parts is a terribly hard task, though merely counting them is easy, as sketched below.
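
Counting uses the classic recurrence p(n, k) = p(n, k-1) + p(n-k, k) for partitions of n into at most k parts. A sketch (Python):

Code:
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions_at_most(n, k):
    """Partitions of n into at most k positive parts (equivalently,
    into exactly k nonnegative parts with order ignored)."""
    if n == 0:
        return 1
    if n < 0 or k == 0:
        return 0
    # either fewer than k parts are used, or every one of the k parts
    # is >= 1 (subtract 1 from each part)
    return partitions_at_most(n, k - 1) + partitions_at_most(n - k, k)

print(partitions_at_most(132, 20))  # ways the 20-apple day could split $132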

Last edited by masque de Z; 12-02-2015 at 03:35 PM.
12-02-2015 , 04:37 PM
Exactly why would they necessarily follow a Gaussian distribution instead of any sort of highly skewed distribution?

I just calculated the min and double checked the max that had already been calculated by someone else. Nothing more, nothing less.

Oh, and they were salted by a guy who has been trying to sell the damn tree for years. He isn't very good at quality control, leading to the results we obtained.
12-02-2015 , 05:03 PM
Exactly why the F do you define yourself as a permanent troll?

And exactly why do you insist on saying something I never did? I never gave them a Gaussian distribution at the origin. I considered only the cases of distributions whose sums converge fast to near normal, so that the maximization of the likelihood is computable and the problem can have an answer in those cases where such a guess would be important enough to dare to offer (a ton of distributions, even highly skewed ones like the exponential, the Pareto, or the parabolic, satisfy it at n=25, and of course all the skewed Maxwell-Boltzmann types do). I never even argued that the CLT will always converge fast (I tried to offer a criterion for when that may happen, as a condition on the answer from the 2 moments of the distribution), and I was the first to offer a counterexample of a distribution that is problematic for this approach, an extreme outlier case: 10 dice where all 6s pays 6^10 but everything else pays just the sum of the dice.

One can hope this is the last troll hit for the day, let alone week, but like hell it will be...

Last edited by masque de Z; 12-02-2015 at 05:12 PM.
12-02-2015 , 05:14 PM
If it isn't a Gaussian distribution, then you can't use the method you used to estimate/calculate the SD. That means you assumed it. Period. Doesn't even matter whether you realize you were assuming it. You were.

Why do you constantly ****ing define yourself as someone incapable of learning something new by being wrong?!? I know super sensitive insecure teenagers who have less trouble with the concept of learning new things when they are wrong. It is how you improve. Do you want to die knowing nothing more than you do today?!? Does that sound like a good life to you?!?
12-02-2015 , 05:56 PM
Really? Where exactly is normality assumed in order to do the calculation? It is a property of sums that the average of the sum is the sum of the averages. It is a property of sums that the variance of the sum is the sum of the variances (in the case of independent, i.e. uncorrelated, random variables). This is true for any distribution (for which these are defined); it doesn't have to be normal.

When the convergence to normal is satisfactory at n=20-27 (for these functions it is, and most of the famous ones qualify; for those that don't, I gave you criteria), the likelihood function is very close to a product of 3 Gaussians over the sample sum results. When you maximize that to get the sd, you get the prescription for the sd calculation. Where in all of this is the assumption that the starting distribution is normal? Nowhere.
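
For concreteness, here is my reading of that prescription as a Python sketch. The closed forms are what maximizing a product of Gaussians N(n_i*mu, n_i*sigma^2) over the day sums gives; matching the "3 Gaussians" above, the 1-apple day is left out, since a single draw is not a sum the CLT can help with. Treat the details as assumptions, not a restatement of anyone's exact earlier calculation:

Code:
import math

# day sums treated as S_i ~ N(n_i*mu, n_i*sigma^2); maximizing the joint
# likelihood gives closed forms:
#   mu_hat     = sum(S_i) / sum(n_i)
#   sigma2_hat = average over days of (S_i - n_i*mu_hat)^2 / n_i
days = [(20, 132), (21, 270), (27, 365)]   # (n_i, S_i); 1-apple day omitted
mu_hat = sum(s for _, s in days) / sum(n for n, _ in days)
sigma2_hat = sum((s - n * mu_hat) ** 2 / n for n, s in days) / len(days)
print(round(mu_hat, 2), round(math.sqrt(sigma2_hat), 2))  # ~11.28, ~14.44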

Retract what you said then.

Furthermore, point to a case in this thread where I was wrong and refused to learn from it. It is you who doesn't understand what I was doing all along. I set out very clearly, from the start of the thread, the conditions under which I was doing what I was doing, and I also spotted cases of failure.

If I died tomorrow knowing only what I know today, it would still have been a very good life, thank you. Your simplistic description of how I learn is idiotic and, frankly, inflammatory. How did I learn all that I know? Every day that passes I typically learn something, because I keep myself busy with problems and new ideas daily. I probably edit my own posts more than anyone here, if you did a study of the percentage of edits I make; I consistently revisit what I say to fix things. So how the f do you dare caricature my process of learning like that? What are you, some boss of others here? You treat everyone with a sarcastic, never-satisfied-with-their-ideas attitude.

You are a politician. You distort things to fit your arguments, and you selectively pick what you like from what someone has said. See how wrong you are here regarding how I used the normal function in this thread: I never started with a normal distribution. The closest I came was the sum of 2 weighted, distant normals, to produce easier convolutions. My argument never depended on a starting normal. Is there any chance you will own the lies you have told about what I did?

Last edited by masque de Z; 12-02-2015 at 06:11 PM.
12-02-2015 , 07:20 PM
Quote:
Originally Posted by masque de Z
And exactly why would the apples with nearly the same value fall at the same time? (What physics supports this? The weight of the gold is tiny, only about 2 grams even for $100, far too little to correlate with anything that affects how easily an apple drops; of course none of this makes any physics sense as a story, it's metaphorical.) But even if they did do that, why would the 13s fall split almost down the middle across 2 different days, while all the 6s, 7s, 12s, and 14s fell on the same day?
With your great imagination, if you can't come up with ideas, I think it's because you're not trying. Off the top of my head, the gold could have infused the apples during some unusual brief event. The amount infused could have depended on the state of maturation of the apple at the time. Thus apples coming ripe and dropping off at the same time were at the same stage of maturation at the time of gold infusion, and therefore have about the same amount of gold in each of them.

Or the con man could be injecting the gold each morning as he pinches off the stems making them ready to drop. His gold injection mechanism is set somewhat differently each day.

We're really in no position to speculate on the physics of gold in magical apples.



Quote:
Originally Posted by masque de Z
If no process is known, or can even be conjectured, that leads to a distribution, then all we can do is look at what was received, and then of course only mathematical bound arguments can be offered for the range, based on the extremes of all configurations with respect to the sd.

I think this is pretty much where we're at, and it is what we've done. So the answer to the OP's question, "Can we know/estimate the variance" of individual apples, is pretty much "No", according to what most people mean by the term "estimate".


PairTheBoard
12-02-2015 , 07:22 PM
The problem is to come up with a method that doesn't have to imagine creative stories to fit the results, one where they come about naturally because the universe makes them possible more often than the alternatives for certain starting profiles. Like some statistical-physics argument.

Here is another possibly interesting way to attack this problem. We step away from distributions each apple obeys for now and consider arrangements.

Imagine this tree had only 69 apples to talk about. Each apple has a value of gold, in integers (let's settle for integers for now), call it a_i.

Now imagine all possible a_i that solve a_1+...+a_20 = 132, a_21+a_22+...+a_41 = 270, a_42 = 2, a_43+...+a_69 = 365 (other reorderings may still be solutions, of course).

And for each such solution set S_k = {a_1k, a_2k, ..., a_69k}, imagine the probability, call it p_k, of shaking the tree of 69 ready-made apples (all made up front) and obtaining the results we did, in the order we received them (all apples dropping with the same probability, or picked one by one at random). The highest-p_k solution wins. If there are ties, all are considered equally possible.

One may even find the sd of each solution and create a density function for the sd based on this probability (across all its possible values).

The solutions seen so far will be part of that system. That is clearly a supercomputer project, of course.


The answer to the OP depends on what exactly he/she is asking: distribution, or arrangement, etc. If it is a distribution, we do not know what convergence this distribution has, so we cannot be sure in applying the CLT method, although we can use it to test many distributions (which pass) if, for other reasons not yet given, we have a subset of possible ones. But as I said, if forced into a life-and-death decision on the common distribution interpretation of the question, I choose the CLT guess over offering nothing.

If no distribution is in place and it's just some arrangement out of all possible arrangements, I still want to consider, together with the bounds you provided, the distribution of the sd itself based on a prescription like the one I suggested in this post (or maybe a better future suggestion for assigning weights to all these solutions).
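
A hedged Monte Carlo sketch (Python) of the scoring step only; the candidate sets and trial count are illustrative, and the exhaustive search over all solution sets, the supercomputer part, is not attempted:

Code:
import random

DAYS = [(20, 132), (21, 270), (1, 2), (27, 365)]  # (apples, observed sum)

def hit_rate(candidate, trials=100_000):
    """Estimate the probability that a random pick order of this 69-apple
    multiset reproduces all four observed day sums."""
    values = list(candidate)
    hits = 0
    for _ in range(trials):
        random.shuffle(values)
        i = 0
        for n, target in DAYS:
            if sum(values[i:i + n]) != target:
                break
            i += n
        else:
            hits += 1
    return hits / trials

max_sd = [132, 270, 2, 365] + [0] * 65   # the ~56-sd configuration
all_2s = [94, 230, 313] + [2] * 66       # a lower-sd alternative raised later
print(hit_rate(max_sd), hit_rate(all_2s))  # ~5e-4 vs ~4e-2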

Last edited by masque de Z; 12-02-2015 at 07:41 PM.
12-02-2015 , 07:26 PM
Don't listen to PTB, he wanted to chop the tree down.
12-02-2015 , 07:30 PM
Quote:
Originally Posted by lastcardcharlie
Don't listen to PTB, he wanted to chop the tree down.

12-02-2015 , 07:56 PM
Quote:
Originally Posted by masque de Z
Really? Where exactly is normality assumed in order to do the calculation? It is a property of sums that the average of the sum is the sum of the averages. It is a property of sums that the variance of the sum is the sum of the variances (in the case of independent, i.e. uncorrelated, random variables). This is true for any distribution (for which these are defined); it doesn't have to be normal.

When the convergence to normal is satisfactory at n=20-27 (for these functions it is, and most of the famous ones qualify; for those that don't, I gave you criteria), the likelihood function is very close to a product of 3 Gaussians over the sample sum results. When you maximize that to get the sd, you get the prescription for the sd calculation. Where in all of this is the assumption that the starting distribution is normal? Nowhere.

Retract what you said then.

Furthermore, point to a case in this thread where I was wrong and refused to learn from it. It is you who doesn't understand what I was doing all along. I set out very clearly, from the start of the thread, the conditions under which I was doing what I was doing, and I also spotted cases of failure.

If I died tomorrow knowing only what I know today, it would still have been a very good life, thank you. Your simplistic description of how I learn is idiotic and, frankly, inflammatory. How did I learn all that I know? Every day that passes I typically learn something, because I keep myself busy with problems and new ideas daily. I probably edit my own posts more than anyone here, if you did a study of the percentage of edits I make; I consistently revisit what I say to fix things. So how the f do you dare caricature my process of learning like that? What are you, some boss of others here? You treat everyone with a sarcastic, never-satisfied-with-their-ideas attitude.

You are a politician. You distort things to fit your arguments, and you selectively pick what you like from what someone has said. See how wrong you are here regarding how I used the normal function in this thread: I never started with a normal distribution. The closest I came was the sum of 2 weighted, distant normals, to produce easier convolutions. My argument never depended on a starting normal. Is there any chance you will own the lies you have told about what I did?
I am working on getting a new computer up and running (which will help with these sorts of questions), but you keep claiming that you get convergence to normal at 20-27 given an unknown underlying distribution. You simply don't. Highly skewed distributions do not converge to a Gaussian distribution with such a small sum. I gave a link showing so (and it was only a wee bit skewed compared to the possibilities). Did you not read it for fear of discovering that you can be more right tomorrow? Have you goofed around with Pareto distributions for fun and profit so as to learn more?

I'll get around to the rest shortly. It is well past Beer-O'clock and I'll have to get around to vetting your first attempt to quantify the population variance from 3 data points before I can fully answer. You did, of course, use n-1=2 in your initial calculations and establish a 95% confidence range, right?
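
A quick simulation of this convergence point (Python; the lognormal is a stand-in for "highly skewed", not a distribution anyone in the thread specified):

Code:
import numpy as np

rng = np.random.default_rng(0)
reps, n = 200_000, 25
sums = rng.lognormal(mean=0.0, sigma=1.0, size=(reps, n)).sum(axis=1)
z = (sums - sums.mean()) / sums.std()
print(round(float((z ** 3).mean()), 2))  # sample skewness of the 25-sum
# ~1.2 here, vs 0 for a Gaussian -- sums of 25 draws are still clearly skewed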
12-02-2015 , 08:04 PM
Quote:
Originally Posted by masque de Z
The problem is to come up with a method that doesn't have to imagine creative stories to fit the results, one where they come about naturally because the universe makes them possible more often than the alternatives for certain starting profiles. Like some statistical-physics argument.

Here is another possibly interesting way to attack this problem. We step away from distributions each apple obeys for now and consider arrangements.

Imagine this tree had only 69 apples to talk about. Each apple has a value of gold, in integers (let's settle for integers for now), call it a_i.

Now imagine all possible a_i that solve a_1+...+a_20 = 132, a_21+a_22+...+a_41 = 270, a_42 = 2, a_43+...+a_69 = 365 (other reorderings may still be solutions, of course).

And for each such solution set S_k = {a_1k, a_2k, ..., a_69k}, imagine the probability, call it p_k, of shaking the tree of 69 ready-made apples (all made up front) and obtaining the results we did, in the order we received them (all apples dropping with the same probability, or picked one by one at random). The highest-p_k solution wins. If there are ties, all are considered equally possible.

One may even find the sd of each solution and create a density function for the sd based on this probability (across all its possible values).

The solutions seen so far will be part of that system. That is clearly a supercomputer project, of course.

I suspect my upper bound 56-sd configuration would actually win that game.



PairTheBoard
12-02-2015 , 08:30 PM
Quote:
Originally Posted by PairTheBoard
I suspect my upper bound 56-sd configuration would actually win that game.



PairTheBoard
I also imagined that this is possible, due to the simple nature of a solution with all the gold in 4 pieces, say, and the rest 0, giving it a substantial chance to lead to what was observed (accidents are rarer; the lone 2 is the main problem, but not the only one). Other, more complex solutions may have too many nonzero elements to survive; the combinations are too specialized to avoid failure. But to be more rigorous, I am not exactly certain about possibly nearby cases involving 1s, or most importantly 2s, with 3(?) big numbers, or about what happens in some counterintuitive way down the road.

Last edited by masque de Z; 12-02-2015 at 08:44 PM.
12-02-2015 , 08:47 PM
Quote:
Originally Posted by PairTheBoard
I suspect my upper bound 56-sd configuration would actually win that game.



PairTheBoard
IIRC it was 56.27 or something. Simple logic dictates that the maximum possible answer will be the maximum possible answer. No need for a supercomputer at all.
12-02-2015 , 08:51 PM
Quote:
Originally Posted by BrianTheMick2
IIRC it was 56.27 or something. Simple logic dictates that the maximum possible answer will be the maximum possible answer. No need for a supercomputer at all.
The supercomputer is for constructing the sd density/distribution function, unless upon analysis it becomes easier to bound it quickly near the neighborhood of the top. A somewhat rigorous answer on the 2s argument still merits study, because we would like a way to improve the chance that the lone apple is a 2 rather than one of the 0s. This may mess up the big ones, which now have to be selected a few 2s below the maximum so that the other pieces add up to the day sums, but it may make things worse too. I mean, if there were 3 big ones and the rest were all 2s, no 0s, with the big ones being things like 132-19*2, 270-20*2, 365-26*2, this makes it easy for the lone apple to be a 2 very often. Unless I am missing something, this reduces the sd significantly from the 56 level, because the top ones are now smaller in size and the 0s are elevated to 2s, closer to the avg.

So why be so negative about a somewhat more thorough study?

Last edited by masque de Z; 12-02-2015 at 09:00 PM.
12-02-2015 , 09:05 PM
Quote:
Originally Posted by masque de Z
The supercomputer is for constructing the sd density/distribution function, unless upon analysis it becomes easier to bound it quickly near the neighborhood of the top. A somewhat rigorous answer on the 2s argument still merits study, because we would like a way to improve the chance that the lone apple is a 2 rather than one of the 0s. This may mess up the big ones, which now have to be selected a few 2s below the maximum so that the other pieces add up to the day sums, but it may make things worse too. I mean, if there were 3 big ones and the rest were all 2s, no 0s, with the big ones being things like 132-19*2, 270-20*2, 365-26*2, this makes it easy for the lone apple to be a 2 very often. Unless I am missing something, this reduces the sd significantly from the 56 level, because the top ones are now smaller in size and the 0s are elevated to 2s, closer to the avg.

So why be so negative about a somewhat more thorough study?
He already calculated the maximum. It is the actual maximum. Showing that there are numbers smaller than the maximum tells you nothing more about the maximum.

Same for the minimum. It can be calculated.
12-02-2015 , 09:14 PM
You really are now so blinded by pathetic hatred that you no longer even understand what I say here (the alternative, persistent malicious trolling, is too unbearable to imagine). F*cking get it together, man, and stop being so negative about everything others say, or I promise to permanently stop answering your posts. There is nothing wrong with you as a person, as I see it, other than this permanent propensity for aggressively trolling or tilting others. Do you like picking fights in real life?


We are talking about how often a solution wins the proposed likelihood-comparison test.

Don't you agree that a combination like 66 2s plus 94, 230, 313 is better, in that sense, than 132, 270, 365, 2 plus 65 0s, which gave the maximum sd?

That gives a lower-sd solution set a higher probability of being observed when the apples are randomly picked one by one, no? (You need to get lucky for the lone 2 to be selected over the many 0s, for example, on top of each big one having to land alone.) Having all 2s, no zeros, and the 3 reduced big ones makes it more likely now, in fact a lot more likely. But that solution has a lower sd.

It's more like 46.8, I think, if I am not missing something; a quick check is below.
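
A quick check of that 46.8 (minimal Python sketch):

Code:
import math

values = [94, 230, 313] + [2] * 66   # 132-19*2, 270-20*2, 365-26*2, rest 2s
n = len(values)
mean = sum(values) / n
print(round(math.sqrt(sum((v - mean) ** 2 for v in values) / n), 1))  # 46.8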

Last edited by masque de Z; 12-02-2015 at 09:21 PM.