Two Plus Two Poker Forums > Other Topics > Science, Math, and Philosophy

Ask a probabilist
12-17-2008, 04:37 PM   #136
Re: Ask a probabilist

So if the proof is wrong, why does this argument still produce a correct answer? Perhaps pairwise independence is sufficient for this example, though I would find that surprising and nontrivial. Do you have an example off the top of your head of a case where independence and pairwise independence would produce different results?

Also, I couldn't agree more with the goal of reducing the fraction of traffic in this forum based on repeating the same stupid religious arguments, so thumbs up there.
gumpzilla
12-17-2008, 06:36 PM   #137
Re: Ask a probabilist

Jason,

What are your thoughts on the parameterization of the gamma/exponential distributions, and why can't everyone just agree on one standard?
Justin A
12-17-2008, 06:45 PM   #138
Re: Ask a probabilist

Jason,

For k << N, it seems reasonable that

P(A_1 ∩ A_2 ∩ ... ∩ A_k) ≈ P(A_1)P(A_2)···P(A_k)

But for k ≈ N, this would break down.

Could the argument be made to work for k ≈ N if we look at N as infinitely large?
Fly
12-17-2008, 06:47 PM   #139
Re: Ask a probabilist

Quote:
Originally Posted by gumpzilla
Do you have an example off the top of your head of a case where independence and pairwise independence would produce different results?
Here's a page with an example of both ways, one where there is pairwise independence but not mutual, and vice versa.

http://www.morris.umn.edu/~sungurea/...ependence.html
Justin A
12-18-2008, 03:27 PM   #140
Re: Ask a probabilist

Quote:
Originally Posted by gumpzilla
Do you have an example off the top of your head of a case where independence and pairwise independence would produce different results?
Yes: the two most important theorems in probability theory. The law of large numbers requires only pairwise independence, but the central limit theorem requires independence. (There are examples of pairwise independent, identically distributed sequences for which the central limit theorem fails.)
jason1990
12-18-2008, 03:37 PM   #141
Re: Ask a probabilist

Quote:
Originally Posted by Justin A
What are your thoughts on the parameterization of the gamma/exponential distributions, and why can't everyone just agree on one standard?
I have never had a problem with this. Context has always been sufficient for me to understand what I am reading. When writing, I usually will say something like, "Let X be exponentially distributed with mean ..." That solves the problem right away.

The reason there are two "standards" is that both the mean and its reciprocal have very natural interpretations. As you may know, these distributions are intimately connected to the Poisson process. The mean of an exponential is the mean interarrival time of the associated Poisson process; its reciprocal is the rate of arrivals.
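
As a rough illustration of how the two conventions show up in software (a small Python sketch; numpy and scipy are used here only as familiar examples, and the variable names are arbitrary):

Code:
# Sketch: the same exponential distribution under the two common conventions.
import numpy as np
from scipy import stats

rate = 2.0          # lambda, the arrival rate of the associated Poisson process
mean = 1.0 / rate   # the mean interarrival time

# numpy's sampler is parameterized by the mean (its "scale" argument):
samples = np.random.default_rng(0).exponential(scale=mean, size=100_000)
print(samples.mean())                    # roughly 0.5

# scipy.stats.expon also takes scale = mean:
print(stats.expon(scale=mean).mean())    # exactly 0.5

# Reading "X is exp(2)" as rate 2 gives mean 0.5; reading it as mean 2 gives
# mean 2.0, so stating the mean explicitly (as in the sentence above) removes
# the ambiguity.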
jason1990
12-18-2008, 03:55 PM   #142
Re: Ask a probabilist

Quote:
Originally Posted by Fly
For k << N, it seems reasonable that

P(A_1 ∩ A_2 ∩ ... ∩ A_k) ≈ P(A_1)P(A_2)···P(A_k)
This is even true for k ≈ n. The problem is that the given "proof" does not prove it.

In the course of using inclusion-exclusion in the "standard" proof, we are led to the formula

P(A_1 ∩ ... ∩ A_n) = \sum_{i=0}^n (-1)^i/i!

On the other hand,

P(A_1)...P(A_n) = (1 - 1/n)^n
= \sum_{i=0}^n (nCi)(-1)^i/n^i
= \sum_{i=0}^n [n!/((n-i)!n^i)] (-1)^i/i!

Are these values close to one another for large n? Well, of course. They are both close to 1/e. But more specifically, we can just look directly at how they differ. The second expression has an additional factor in front of each term. It can be written as

(n/n)((n - 1)/n)((n - 2)/n)...((n - (i - 1))/n).

For terms that correspond to small i, this additional factor is close to 1, so it makes a negligible difference in the final value. For terms that correspond to moderate and large values of i, the extra factor is very small. However, those terms are quite negligible in the overall summation, so changing them (or even eliminating them altogether) has very little effect on the final value.

So in the end, the approximation is seen to be valid. But the given "proof" does not demonstrate this, not even on a heuristic level. I see no easy way to prove this without using inclusion-exclusion.
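
For anyone who wants to see how close the two expressions above really are, here is a quick numerical check (a small Python sketch of the formulas already derived, nothing more):

Code:
# Compare  sum_{i=0}^n (-1)^i / i!  with  (1 - 1/n)^n  for several n.
# Both approach 1/e as n grows, as claimed above.
from math import factorial, e

def inclusion_exclusion(n):
    # P(A_1 ∩ ... ∩ A_n) from the inclusion-exclusion formula
    return sum((-1) ** i / factorial(i) for i in range(n + 1))

def product_of_marginals(n):
    # P(A_1) ... P(A_n) = (1 - 1/n)^n, the value the flawed "proof" targets
    return (1 - 1 / n) ** n

for n in (5, 10, 50, 100):
    print(n, round(inclusion_exclusion(n), 6), round(product_of_marginals(n), 6))
print("1/e =", round(1 / e, 6))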
jason1990
12-18-2008, 05:59 PM   #143
Re: Ask a probabilist

Quote:
Originally Posted by Justin A
Here's a page with an example of both ways, one where there is pairwise independence but not mutual, and vice versa.

http://www.morris.umn.edu/~sungurea/...ependence.html
This is not possible. Mutual independence -- more commonly called just "independence" -- implies pairwise independence.

Quote:
Definition: A collection of events -- possibly more than just two of them -- is mutually independent if and only if, for any finite subset A_1, ..., A_n of the collection, we have P(A_1 ∩ ... ∩ A_n) = P(A_1)P(A_2)···P(A_n).

So, for example, if you want to check that A_1, A_2, A_3, and A_4 are independent, it is not enough to check that

P(A_1 ∩ A_2 ∩ A_3 ∩ A_4) = P(A_1)P(A_2)P(A_3)P(A_4).

You must also check that

P(A_1 ∩ A_2 ∩ A_3) = P(A_1)P(A_2)P(A_3),
P(A_1 ∩ A_2 ∩ A_4) = P(A_1)P(A_2)P(A_4),
P(A_1 ∩ A_3 ∩ A_4) = P(A_1)P(A_3)P(A_4),
P(A_2 ∩ A_3 ∩ A_4) = P(A_2)P(A_3)P(A_4),
P(A_1 ∩ A_2) = P(A_1)P(A_2),
P(A_1 ∩ A_3) = P(A_1)P(A_3),
P(A_1 ∩ A_4) = P(A_1)P(A_4),
P(A_2 ∩ A_3) = P(A_2)P(A_3),
P(A_2 ∩ A_4) = P(A_2)P(A_4), and
P(A_3 ∩ A_4) = P(A_3)P(A_4).

As you can see, pairwise independence is built right into the definition of (mutual) independence. The conclusion at the bottom of the page you linked to is incorrect.
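
To make the contrast concrete in the direction that is possible, here is a small Python sketch of the standard textbook example (two fair coin flips and the event that they differ): every pair of the three events satisfies the product rule, but the triple does not.

Code:
# Standard example of pairwise (but not mutual) independence:
# two fair coin flips X, Y with A = {X = 1}, B = {Y = 1}, C = {X != Y}.
from itertools import product

outcomes = list(product([0, 1], repeat=2))   # four equally likely (x, y) pairs

def prob(event):
    return sum(1 for w in outcomes if event(w)) / len(outcomes)

def A(w): return w[0] == 1
def B(w): return w[1] == 1
def C(w): return w[0] != w[1]

# Every pair of events satisfies the product rule ...
for e, f in [(A, B), (A, C), (B, C)]:
    print(prob(lambda w: e(w) and f(w)), "=?", prob(e) * prob(f))   # 0.25 =? 0.25

# ... but the three-way intersection does not, since A, B, C cannot all occur:
print(prob(lambda w: A(w) and B(w) and C(w)), "=?", prob(A) * prob(B) * prob(C))
# prints 0.0 =? 0.125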
jason1990
12-18-2008, 06:28 PM   #144
Re: Ask a probabilist

Quote:
Originally Posted by jason1990
Yes: the two most important theorems in probability theory. The law of large numbers requires only pairwise independence, but the central limit theorem requires independence. (There are examples of pairwise independent, identically distributed sequences for which the central limit theorem fails.)
Is there a deep reason for this (beyond the fact that independence makes the Fourier analysis in the CLT work out the way you want it to)?
blah_blah
12-18-2008, 07:03 PM   #145
Re: Ask a probabilist

Cool thread. Maybe I'll do something like this next year. It would probably be more interesting in about 5 years but who knows if I'll still be posting here.
vhawk01
12-18-2008, 08:02 PM   #146
Re: Ask a probabilist

Quote:
Originally Posted by jason1990
I have never had a problem with this. Context has always been sufficient for me to understand what I am reading. When writing, I usually will say something like, "Let X be exponentially distributed with mean ..." That solves the problem right away.

The reason there are two "standards" is that both the mean and its reciprocal have very natural interpretations. As you may know, these distributions are intimately connected to the Poisson process. The mean of an exponential is the mean interarrival time of the associated Poisson process; its reciprocal is the rate of arrivals.
Unfortunately, some of my professors don't always state the mean and just say something like "X is exp(2)" on our homework. It's impossible to figure out from context, and I usually have to ask for clarification.

I don't get to the Poisson process until next semester, so I wasn't aware of that reasoning; I'm going to look it up right now for a primer.

Edit: I just realized that must be why one of my professors always uses lambda for the exponential distribution, and he's the only one who is consistent about using it as the reciprocal of the mean.

Quote:
Originally Posted by jason1990
This is not possible. Mutual independence -- more commonly called just "independence" -- implies pairwise independence.


So, for example, if you want to check that A_1, A_2, A_3, and A_4 are independent, it is not enough to check that

P(A_1 ∩ A_2 ∩ A_3 ∩ A_4) = P(A_1)P(A_2)P(A_3)P(A_4).

You must also check that

P(A_1 ∩ A_2 ∩ A_3) = P(A_1)P(A_2)P(A_3),
P(A_1 ∩ A_2 ∩ A_4) = P(A_1)P(A_2)P(A_4),
P(A_1 ∩ A_3 ∩ A_4) = P(A_1)P(A_3)P(A_4),
P(A_2 ∩ A_3 ∩ A_4) = P(A_2)P(A_3)P(A_4),
P(A_1 ∩ A_2) = P(A_1)P(A_2),
P(A_1 ∩ A_3) = P(A_1)P(A_3),
P(A_1 ∩ A_4) = P(A_1)P(A_4),
P(A_2 ∩ A_3) = P(A_2)P(A_3),
P(A_2 ∩ A_4) = P(A_2)P(A_4), and
P(A_3 ∩ A_4) = P(A_3)P(A_4).

As you can see, pairwise independence is built right into the definition of (mutual) independence. The conclusion at the bottom of the page you linked to is incorrect.
That's what I get for being a googletard. Thanks for the clarification.

Last edited by Justin A; 12-18-2008 at 08:17 PM.
Justin A
12-19-2008, 09:10 AM   #147
Re: Ask a probabilist

Quote:
Originally Posted by vhawk01
Cool thread. Maybe I'll do something like this next year. It would probably be more interesting in about 5 years but who knows if I'll still be posting here.
I can do one in...

God, kill me now.
madnak
12-19-2008, 02:03 PM   #148
Re: Ask a probabilist

Quote:
Originally Posted by blah_blah
Is there a deep reason for this (beyond the fact that independence makes the Fourier analysis in the CLT work out the way you want it to)?
Well, the law of large numbers is about convergence to a constant. A random variable is constant if and only if its variance is zero. And the variance of a sum is completely determined by the pairwise joint distributions of the summands.

A more idea-level explanation might be the following. The law of large numbers is a first-order approximation, S_n ≈ nμ. The central limit theorem is a second-order approximation, S_n ≈ nμ + n^{1/2}σZ. Higher order approximations typically require more structure on the object being approximated. For instance, when x is small, f(x) ≈ f(0) requires only that f be continuous at 0, whereas f(x) ≈ f(0) + f'(0)x requires that f be differentiable at 0.
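
A rough simulation of those two levels of approximation (a small Python sketch; the Exponential(1) summands are an arbitrary choice, so mu = sigma = 1):

Code:
# First-order (LLN) vs second-order (CLT) behavior of S_n = X_1 + ... + X_n.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, trials = 1.0, 1.0, 1_000, 10_000

S = rng.exponential(scale=mu, size=(trials, n)).sum(axis=1)

# Law of large numbers: S_n / n concentrates at mu.
print("S_n/n:", (S / n).mean(), "+/-", (S / n).std())

# Central limit theorem: (S_n - n*mu) / (sigma * sqrt(n)) looks standard normal.
Z = (S - n * mu) / (sigma * np.sqrt(n))
print("Z mean:", Z.mean(), " Z std:", Z.std())
print("P(Z <= 1) ~", (Z <= 1).mean(), " (standard normal value is about 0.841)")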
jason1990
12-19-2008, 03:26 PM   #149
Re: Ask a probabilist

Quote:
Originally Posted by jason1990
A more idea-level explanation might be the following. The law of large numbers is a first-order approximation, S_n ≈ nμ. The central limit theorem is a second-order approximation, S_n ≈ nμ + n^{1/2}σZ. Higher order approximations typically require more structure on the object being approximated. For instance, when x is small, f(x) ≈ f(0) requires only that f be continuous at 0, whereas f(x) ≈ f(0) + f'(0)x requires that f be differentiable at 0.
But this additional structure is largely (mostly?) manifested in the additional moments that you require in the central limit theorem as well.

Anyways, it was a bit of a silly question, but thanks for the reply.
blah_blah
12-20-2008, 01:59 PM   #150
Re: Ask a probabilist

Quote:
Originally Posted by thylacine
So I take it that you're not too enamored with the idea that every time I flip a coin reality splits into two equally real realities?
I just wrote up a little script that generates a random number between 0 and 1 with 1000 decimal digits.

Every time I run this script, am I really splitting the universe into 10^1000 equally real realities?

That would make creating universes a little trivial, don't you think? It's nice in theory, but highly improbable in reality.
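
For what it's worth, one way such a script might look (a sketch only; the post does not say how the original script was written, so Python's SystemRandom is just a stand-in digit source):

Code:
# Sketch: print a "random" number in (0, 1) with 1000 decimal digits.
import random

rng = random.SystemRandom()
digits = "".join(str(rng.randrange(10)) for _ in range(1000))
print("0." + digits)

# There are 10^1000 equally likely digit strings, which is where the
# "10^1000 equally real realities" count in the post comes from.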
CanadaLowball
