Ask a probabilist

12-20-2008 , 03:53 PM
Quote:
Originally Posted by CanadaLowball
i just wrote up a little script that generates a random number between 0 and 1 with 1000 decimal digits.
Nuh-uh.
12-20-2008 , 04:19 PM
Quote:
Originally Posted by CanadaLowball
i just wrote up a little script that generates a random number between 0 and 1 with 1000 decimal digits.

every time i run this script, am i really splitting the universe into 10^1000 equally real realities?

would make creating universes a little trivial, dont you think? its nice in theory, but highly improbable in reality.
I agree with you completely in every possible universe.
12-21-2008 , 11:38 PM
Quote:
Originally Posted by blah_blah
But this additional structure is largely (mostly?) manifested in the additional moments that you require in the central limit theorem as well.
Not really. Compare the L² weak law of large numbers with the central limit theorem.

Quote:
L² weak law: Let X_1, X_2, ... be uncorrelated random variables with E[X_i] = μ and Var(X_i) ≤ C. Let S_n = X_1 + ... + X_n. Then (S_n)/n → μ in L² and in probability.
Quote:
CLT: Let X_1, X_2, ... be independent and identically distributed with E[X_i] = μ and Var(X_i) = σ². Let S_n = X_1 + ... + X_n. Then (S_n - nμ)/(σn^{1/2}) → Z in distribution, where Z is a standard normal.
Both theorems require a second moment. (We cannot even talk about L² convergence without it.) The additional requirement is in the joint structure of the sequence. Being uncorrelated is a much weaker condition than (mutual) independence.
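For illustration, here is a minimal simulation sketch of that gap. The construction X_k = ε_k·|W| is just one convenient example of an uncorrelated but dependent sequence (my own toy example, not anything from Durrett): the ε_k are iid ±1 signs and |W| is a single random scale shared by the whole sequence. The X_k are then uncorrelated and identically distributed with bounded variance, so the L² weak law applies, but they are not independent, and S_n/√n converges to the scale mixture |W|·N(0,1) rather than to a normal.

```python
import numpy as np

# X_k = eps_k * |W|: eps_k are iid +/-1 signs, |W| is one random scale
# shared by the whole sequence (here |W| = 1 or 5).  For i != j,
# Cov(X_i, X_j) = E[eps_i] E[eps_j] E[W^2] = 0, so the sequence is
# uncorrelated with bounded variance, but it is clearly not independent.

rng = np.random.default_rng(0)
n, trials = 2_000, 5_000

W = np.where(rng.random(trials) < 0.9, 1.0, 5.0)   # shared scale, one per trial
eps = rng.choice([-1.0, 1.0], size=(trials, n))    # iid signs
S_n = eps.sum(axis=1) * W                          # S_n = X_1 + ... + X_n

# Weak law: S_n/n concentrates near the common mean 0.
print("average |S_n/n|:", np.abs(S_n / n).mean())

# CLT fails: S_n/sqrt(n) is the scale mixture |W| * N(0,1), so its excess
# kurtosis stays well above the normal value of 0.
Z = S_n / np.sqrt(n)
print("excess kurtosis of S_n/sqrt(n):", ((Z - Z.mean())**4).mean() / Z.var()**2 - 3)
```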
12-22-2008 , 04:08 AM
I was mostly thinking of the SLLN, i.e., that S_n/n → μ a.s. under the assumption that the X_i's are iid and E|X_i| < ∞. Here the assumptions are strictly weaker than in the most basic form of the CLT, but we can still get a strong result.


Your point about weak laws is interesting and well-taken though. If we relax the condition that the X_i's are iid, then (clearly) we need more (uniformly bounded) moments AND can't get a.s. convergence. Are there any strong laws which do not require the X_i's to be iid (as you mentioned, independence is a very powerful condition)?

I am currently on vacation and have no references with me, so I hope I haven't said anything that is too silly or too factually incorrect.
12-22-2008 , 03:49 PM
Quote:
Originally Posted by blah_blah
Your point about weak laws is interesting and well-taken though. If we relax the condition that the X_i's are iid, then (clearly) we need more (uniformly bounded) moments
Actually, we don't.

Quote:
Originally Posted by blah_blah
AND can't get a.s. convergence.
Actually, we can.

Quote:
Originally Posted by blah_blah
Are there any strong laws which do not require the X_i's to be iid (as you mentioned, independence is a very powerful condition)?
Of course. That is why I brought up the example in the first place. Just your basic strong law of large numbers, which you can find in Durrett, is this:

Quote:
Strong law of large numbers. Let X_1, X_2, ... be pairwise independent identically distributed random variables with E|X_i| < ∞. Let EX_i = μ and S_n = X_1 + ... + X_n. Then S_n/n → μ a.s. as n → ∞.
But actually, if we look at the proof of this theorem, we see that the hypotheses can be relaxed quite a bit, although it makes for a theorem which is a little awkward to state. Given a sequence of random variables, X_1, X_2, ..., let us define the truncated sequence, Y_1, Y_2, ..., as follows: Y_k = X_k if |X_k| ≤ k; and Y_k = 0 otherwise. With this notation, here is a more general strong law:

Quote:
Strong law of large numbers. Let X_1, X_2, ... be nonnegative random variables with EX_i < ∞. (Any sequence whose positive and negative parts satisfy the hypotheses of this theorem will also satisfy this strong law.)

We do not require them to be pairwise independent, but we do require that, for each i and j, Y_i and Y_j are uncorrelated. (Some truncation is required, since correlation can only be defined for random variables with a second moment.)

We do not require that they be identically distributed, but we do require that EX_i = μ for all i, and that there exists a constant C such that P(X_i > x) ≤ CP(X_1 > x) for all i and x. (This condition prevents mass from escaping to infinity. It plays a role similar to the condition Var(X_i) ≤ C in the weak law.)

Under these conditions, if S_n = X_1 + ... + X_n, then S_n/n → μ a.s. as n → ∞.
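As a sanity check, here is a small numerical sketch. It only exercises the iid special case (Pareto variables with α = 1.5, so the mean is finite but the variance is infinite), where pairwise independence and the tail condition hold trivially; the point is that S_n/n still settles down to μ even though the variance-based L² argument is unavailable, and that the truncated sequence Y_k barely differs from the original.

```python
import numpy as np

# Nonnegative iid Pareto(alpha = 1.5) variables on [1, inf):
# the mean mu = alpha/(alpha - 1) = 3 is finite, but the variance is
# infinite, so the L^2 weak law does not apply.  The strong law still does.

rng = np.random.default_rng(1)
alpha = 1.5
n = 1_000_000

X = (1.0 - rng.random(n)) ** (-1.0 / alpha)   # inverse-CDF sampling of Pareto(alpha)
k = np.arange(1, n + 1)
Y = np.where(X <= k, X, 0.0)                  # truncated sequence Y_k from the statement

for m in (10_000, 100_000, 1_000_000):
    print(f"n = {m:>9,}:  S_n/n = {X[:m].mean():.3f},  truncated = {Y[:m].mean():.3f}")
# Both running averages drift toward mu = 3 as n grows.
```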

Last edited by jason1990; 12-22-2008 at 04:00 PM.
12-22-2008 , 04:26 PM
Jason: I'm just smart enough to know I'm a dumb-ass so if this seems like a ******ed question, please ignore.

I'm trying to model a standard raise/fold table for myself, and I can't for the life of me figure out what hand I can unexploitably shove from the SB.

If all starting hands average Q7o, then when UTG folds his hand, the average starting hand for all the players goes up, if we assume UTG is always playing top hands and always folding everything else.

As it folds around the table, the value of each unopened hand increases and becomes significant as we move closer and closer to the BB.

My experience in poker tells me that QT is the first hand favored over the BB, but when I tried to model it, I came up with a nonsensical answer. I used the simplest method I could think of - assigning a whole number value to each starting hand, with Q7o having the value of 0 - but now I realize I can't use that, because the values of the hands do not increase incrementally (JTo is stronger than J9s is stronger than J9).

So....what hand can I unexploitably shove from the SB?
12-22-2008 , 04:50 PM
Quote:
Originally Posted by jason1990
Of course. That is why I brought up the example in the first place. Just your basic strong law of large numbers, which you can find in Durrett, is this:
I'm really embarrassed to say that I (apparently) forgot that the SLLN holds for pairwise independent identically distributed L^1 random variables and not just i.i.d. ones. In that light, you can probably write off much of my last few posts. (I can even see why pairwise independence is all that is necessary, now.)

Is there a good reference for various versions of the LLN and CLT? I think Stroock's book has some versions, but, again, I don't have it at hand.
12-23-2008 , 03:53 PM
Quote:
Originally Posted by unrealzeal
If all starting hands average Q7o, then when UTG folds his hand, the average starting hand for all the players goes up, if we assume UTG is always playing top hands and always folding everything else.
It looks like you are talking about the bunching effect, or card-removal effect. This has been discussed a lot in these forums. You can find a lot of information here about it, much more than I could give you in any brief answer. I would suggest you start with this thread, and also check out the threads that it links to.
12-23-2008 , 04:07 PM
Quote:
Originally Posted by blah_blah
Is there a good reference for various versions of the LLN and CLT? I think Stroock's book has some versions, but, again, I don't have it at hand.
I do not know of one, off the top of my head. I think it would be a nice little project to take several books, Durrett, Chung, Stroock, Shiryaev, etc., and put all the different versions you can find in one set of notes. That would be pretty handy.
01-10-2009 , 09:42 AM
Suppose the probability of an event happening is an irrational number. Is it possible to outline briefly how Bayesians and frequentists interpret this statement?
01-11-2009 , 11:37 AM
Quote:
Originally Posted by lastcardcharlie
Suppose the probability of an event happening is an irrational number. Is it possible to outline briefly how Bayesians and frequentists interpret this statement?
Unless I misunderstand your question, they interpret it the same way they would if it were rational. You might enjoy reading an old post of mine about probability interpretations. The subjective and logical interpretations are the two flavors of "Bayesianism." For convenience, here is the relevant part:

Quote:
First: how do we define "probability of an event"? We define it as a limit of the ratio of the events we are interested in to the total number of trials, as the number of trials goes to infinity.
In Euclidean geometry, the word "point" is undefined. Rather than being explicitly defined, it is characterized by the axioms of geometry. A point is any object which satisfies those axioms. Similarly, the probability of an event is simply a number between 0 and 1 which satisfies the axioms of probability. There is no mathematical definition beyond that.

What you are wondering about is the interpretation of probability. There are many competing philosophical interpretations of probability. The four major competitors are the frequency, propensity, subjective, and logical interpretations.

Here is a very loose description of the four interpretations. Imagine we flip a biased coin and we wonder about the probability of heads. According to the frequency interpretation, we can only talk about this probability if the flip is part of a long sequence of flips. In that case, the probability of heads is simply the limiting ratio of the number of heads to the number of flips. This is what you mentioned in your OP. If the coin is only flipped once and then subsequently destroyed, then according to the frequency interpretation, it does not make sense to talk about the probability of heads.

The propensity interpretation, however, does not require a long sequence of flips. Suppose we claim the probability of heads is 0.6. According to the propensity interpretation, this means the coin (or rather, the entire experimental setup, including the coin, the flipper, the air in the room, etc.) has a propensity for producing, in the long run, 6 heads for every 10 flips. It does not matter if the coin is destroyed after one flip. The probability of 0.6 refers to the potential frequency that would arise if we could flip it many times.

Both the frequency and propensity interpretations regard the probability of heads as describing some real physical property. With frequency, it is a property of the sequence. With propensity, it is a property of the experimental setup. The subjective and logical interpretations, on the other hand, do not regard probabilities as representing physical realities. Instead, they represent degrees of belief.

Suppose Joe says the probability of heads is 0.6. According to the subjective interpretation, this means that if Joe was offered a choice between these two bets:

(a) win $4 on heads, lose $6 on tails,
(b) lose $4 on heads, win $6 on tails,

then Joe would be indifferent as to which bet to take. The probability of 0.6 represents Joe's personal betting preferences regarding this event. Another person, say Jack, may have different preferences. Jack might, for instance, say the probability of heads is 0.3. In the subjective interpretation, neither is right or wrong. The statements they are making are not contradictory. They are simply subjective. This looks like it might be a totally useless interpretation, but there is one caveat which somewhat fixes this subjectivity. In the subjective interpretation, Joe's and Jack's probabilities must be "coherent," which means that they must obey the axioms of probability. These axioms ensure that if the coin is flipped many times, then Joe and Jack will no longer disagree about the probability of heads. Their subjective opinions will get closer and closer to each other, and in the limit of infinitely many flips, they will exactly agree on the probability of heads. In fact, their subjective opinions will match the frequency of heads to total coin flips. However, if the coin is flipped only once, then their opinions may differ dramatically, and (in the subjective interpretation) no one can say who is right or wrong, because in fact neither is right or wrong.

The logical interpretation is similar to the subjective. Probabilities represent degrees of belief, but they are not meant to be subjective. In the logical interpretation, probabilities arise because we have uncertainty about some proposition or event. This uncertainty exists because we have only partial information about the thing in question. In the logical interpretation, it is postulated that there exists some ideal form of reasoning that can be applied to this partial information which will yield a probability. In other words, if Joe and Jack have the same information about the coin flip, then they should arrive at the same degree of belief. In the logical interpretation, probabilities do not belong to the person stating the probability, but rather to the information which that person possesses.
01-11-2009 , 01:37 PM
So suppose there's a sequence of coin flips, and let x_n = (no of Heads after nth flip)/n. If I understand correctly, a frequentist says P(Heads) = lim(x_n). But what if the sequence (x_n) has no limit? Could such a "coin" exist?
01-11-2009 , 07:00 PM
I haven't read all the questions, so I apologize if this has been asked already.

If we know that an event has happened exactly once in the last 10 billion years, is it reasonable to expect it to happen again over the next 10 billion years? Or is there not enough data? What if we know something occurs once every 100,000 years and it's overdue by 20,000 years? If it's a cataclysmic event, should we be worried?

And I also have the same question about life. We know it has happened at least once on one of the hundreds of billions of potential places that can support some type of life. What's more realistic? That earth is probably the only planet that hosts life? Or that it is very improbable that earth is the only planet that supports life?

I guess my question really comes down to: When there is only 1 occurrence of an event, can we come up with reasonable odds on a 2nd event? Or is once always insufficient data? Thanks.
01-12-2009 , 12:57 PM
Quote:
Originally Posted by Lestat
If we know that an event has happened exactly once in the last 10 billion years, is it reasonable to expect it to happen again over the next 10 billion years?
I know I'm not a probabilist, but:

Jerry Yang won the WSOP exactly once in the last 10 billion years. Is it reasonable to expect that Jerry Yang will win the WSOP again in the next 10 billion years?
01-12-2009 , 02:09 PM
Quote:
Originally Posted by madnak
I know I'm not a probabilist, but:

Jerry Yang won the WSOP exactly once in the last 10 billion years. Is it reasonable to expect that Jerry Yang will win the WSOP again in the next 10 billion years?
Well, I would say yes, but only because we can be a bit more precise about what the odds are of winning an event. If he's a winning tournament player, then it can't be more than 1 in however many entrants. If he's a superior player, then it's less. If he's terrible, then it's more. But the odds of winning a single tournament with, say, 500,000 people can never be less than 1 in 10 billion.

I'm referring to things like cataclysmic collisions with comets when we have no idea what the odds are, yet we do know that such a collision has happened at least once. Or something like life. Or even a cataclysmic volcano eruption which we expect to happen once every 100,000 years, but has not happened in the last 140,000 years. Can we ever put an 'expectation' on these events?

Like in craps (or any game relying on odds), nothing is ever overdue. The streets are littered with cab drivers who went broke thinking, "Gee, the banker has won 12 times in a row. Surely it can't win the next 5!" So with the volcano example, even though we expect it to happen once every 100,000 years, is it correct to become concerned in year 140,000 if it hasn't erupted again? Instinct says no. But I don't know nothing about probability.
01-12-2009 , 02:56 PM
Quote:
Originally Posted by Lestat
Well, I would say yes, but only because we can be a bit more precise about what the odds are of winning an event. If he's a winning tournament player, then it can't be more than 1 in however many entrants. If he's a superior player, then it's less. If he's terrible, then it's more. But the odds of winning a single tournament with, say, 500,000 people can never be less than 1 in 10 billion.
We're not discussing whether the chance is greater than 1/10^9 this year. We're discussing whether the probability is high that it will happen in the next 10^9 years. If that one seems too likely, then how about this - Hannibal invaded Italy exactly once in the last 10^9 years. Is it likely that Hannibal will invade Italy in the next 10^9 years?

Quote:
I'm referring to things like cataclysmic collisions with comets when we have no idea what the odds are, yet we do know that such a collision has happened at least once. Or something like life. Or even a cataclysmic volcano eruption which we expect to happen once every 100,000 years, but has not happened in the last 140,000 years. Can we ever put an 'expectation' on these events?
These are all differing events with differing underlying assumptions. If you're asking about a comet collision, you don't just know that the event has happened at least once.
01-13-2009 , 12:58 PM
Quote:
Originally Posted by lastcardcharlie
So suppose there's a sequence of coin flips, and let x_n = (no of Heads after nth flip)/n. If I understand correctly, a frequentist says P(Heads) = lim(x_n). But what if the sequence (x_n) has no limit?
I suppose, then, by the frequentist's definition, the probability remains undefined.

Quote:
Originally Posted by lastcardcharlie
Could such a "coin" exist?
I think that with some practice, I could learn to flip a coin in a way that produces such a sequence.
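For instance (a deterministic toy construction, not a claim about how real coins behave): flip in blocks of lengths 1, 2, 4, 8, ..., alternating all heads and all tails. The running proportion of heads then swings between roughly 2/3 and roughly 1/3 forever, so lim(x_n) does not exist.

```python
import numpy as np

# Flips come in blocks of lengths 1, 2, 4, 8, ..., alternating all-heads (1)
# and all-tails (0).  After each heads block the running proportion of heads
# is close to 2/3, after each tails block close to 1/3, so x_n has no limit.

flips = []
for i in range(22):                       # 22 blocks, about 4 million flips
    flips.extend([1 - (i % 2)] * 2**i)
flips = np.array(flips)

x = flips.cumsum() / np.arange(1, len(flips) + 1)   # x_n = (heads after n flips) / n
block_ends = np.cumsum([2**i for i in range(22)]) - 1
print(np.round(x[block_ends][-6:], 3))              # alternates near 0.667 and 0.333
```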
01-13-2009 , 01:16 PM
Quote:
Originally Posted by Lestat
When there is only 1 occurrence of an event, can we come up with reasonable odds on a 2nd event? Or is once always insufficient data?
Sometimes we do not even need 1 occurrence. A trivial example would be rolling any die shaped like a platonic solid. We can deduce the probabilities through symmetry alone, without ever needing to observe any actual die rolls. A less trivial example is this. Using statistical mechanics, I believe we can calculate the (astronomically small) probability that all the air in the room I am in happens to randomly "wander" over into the corner of the room. I do not think an event like this has ever happened in the history of the universe.
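For a rough sense of the scale (a back-of-the-envelope with made-up round numbers: about 10^27 molecules treated as independently and uniformly positioned, and asking only that each one land in a fixed half of the room):

$$P \approx \left(\tfrac{1}{2}\right)^{N}, \qquad N \approx 10^{27} \quad\Longrightarrow\quad \log_{10} P \approx -N \log_{10} 2 \approx -3 \times 10^{26},$$

so the probability has on the order of 10^26 zeros after the decimal point, and an actual corner of the room would make it smaller still.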

In order to determine probabilities, we need information about the phenomenon in question; the nature of this information will depend on the phenomenon. This information might or might not include past observations of 1 or more similar phenomena.

Regarding your specific question about life, I have made several comments already in this thread, starting with post #226.
03-02-2009 , 03:40 PM
This must have been discussed in this or another thread but I missed it. That spinner situation where each has three numbers on it:
Spinner A: 1, 5, 9
Spinner B: 2, 6, 7
Spinner C: 3, 4, 8
and when they play HU the highest number spun wins. Then A > C > B > A. Meaning that the relation "is better than" is not necessarily transitive. What, if any, are the most important consequences of this in probability?
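A brute-force check, assuming each spinner lands on each of its three numbers with probability 1/3, so every head-to-head matchup has nine equally likely outcomes:

```python
from itertools import product

spinners = {"A": (1, 5, 9), "B": (2, 6, 7), "C": (3, 4, 8)}

def win_prob(x, y):
    """P(spinner x beats spinner y), enumerating the 9 equally likely pairs."""
    return sum(a > b for a, b in product(spinners[x], spinners[y])) / 9

for x, y in [("A", "C"), ("C", "B"), ("B", "A")]:
    print(f"P({x} beats {y}) = {win_prob(x, y):.3f}")   # each is 5/9
```

Each favorite in the cycle wins exactly 5/9 of the time; the spinners are essentially a set of nontransitive dice built from the numbers 1 through 9.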
03-03-2009 , 07:25 PM
Quote:
Originally Posted by lastcardcharlie
This must have been discussed in this or another thread but I missed it. That spinner situation where each has three numbers on it:
Spinner A: 1, 5, 9
Spinner B: 2, 6, 7
Spinner C: 3, 4, 8
and when they play HU the highest number spun wins. Then A > C > B > A. Meaning that the relation "is better than" is not necessarily transitive. What, if any, are the most important consequences of this in probability?
I don't know anything about 'consequences', but isn't there a similar situation in poker, where say AKo > JTs > 22 > AKo if the two hands are heads-up? Perhaps it doesn't mean anything other than that 'better than' isn't an equivalence relation, since it's not transitive?
03-04-2009 , 01:16 PM
Quote:
Originally Posted by Pyromantha
Perhaps it doesn't mean anything other than that 'better than' isn't an equivalence relation since it's not transitive?
Well "better than" isn't symmetric anyway. Let me rephrase the question: if I'm teaching it in class, what do I say other than "here's a real neat probability trick, kids"?
03-04-2009 , 06:55 PM
I'm working on a math presentation that I need to do, and I chose to discuss how an ICM calculator calculates a player's equity. I plugged the numbers into an online ICM calculator (to check them), and my answers for the probability of each player placing 1st and 2nd all match. However, my numbers are off from the calculator's answers when I try to solve for the probability of Player A placing 3rd.

Player A : t5000
Player B : t1500
Player C : t2000
Player D : t5000

Player (stack)      Pr(X 1st)    | Pr(X 2nd)       Payouts
Player A (t5000):   .370370370   | .328573457      1st: $1,000
Player B (t1500):   .111111111   | .150042626      2nd: $600
Player C (t2000):   .148148148   | .192810459      3rd: $400
Player D (t5000):   .370370370   | .328573457      4th-9th: $0


A vs. B: t5000 / t6500 = .769230769
A vs. C: t5000 / t7000 = .714285714
A vs. D: t5000 / t10000 = .5


(Pr(A 3rd)) =(Pr(B 1st))(Pr(C 2nd))(Pr(A beats D for 3rd)) +
(Pr(B 1st))(Pr(D 2nd))(Pr(A beats C for 3rd)) +
(Pr(C 1st))(Pr(B 2nd))(Pr(A beats D for 3rd)) +
(Pr(C 1st))(Pr(D 2nd))(Pr(A beats B for 3rd)) +
(Pr(D 1st))(Pr(B 2nd))(Pr(A beats C for 3rd)) +
(Pr(D 1st))(Pr(C 2nd))(Pr(A beats B for 3rd)) =

(Pr(A 3rd)) = (.111111111)(.192810459)(.5) +
(.111111111)(.328573457)(.714285714) +
(.148148148)(.150042626)(.5) +
(.148148148)(.328573457)(.769230769) +
(.370370370)(.150042626)(.714285714) +
(.370370370)(.192810459)(.769230769) =


This gives me an answer of .188665479 while the online calculator gives something around .21. This then leads to a $10 difference in final equity.
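For reference, here is a sketch of the recursion I assume the online calculator is using (the standard Malmuth-Harville model: the next finisher is chosen with probability proportional to chips among the players not yet placed, applied recursively). The difference from the sum above is that the second-place factor is conditional on who took first, e.g. Pr(C 2nd | B 1st) = 2000/12000 rather than the unconditional .192810459; with these stacks the conditional version comes out to about .215, in line with the calculator's .21.

```python
from itertools import permutations

# Malmuth-Harville model (assumed to be what the online calculator uses):
# the next finisher is chosen with probability proportional to chips among
# the players who have not yet been placed.
stacks = {"A": 5000, "B": 1500, "C": 2000, "D": 5000}
payouts = [1000, 600, 400, 0]

def order_prob(order, stacks):
    """P(players finish 1st, 2nd, 3rd, 4th in exactly this order)."""
    p, remaining = 1.0, dict(stacks)
    for player in order:
        p *= remaining[player] / sum(remaining.values())
        del remaining[player]
    return p

# P(A finishes 3rd): sum over all finishing orders that put A in 3rd place.
p_a_third = sum(order_prob(o, stacks) for o in permutations(stacks) if o[2] == "A")
print("Pr(A 3rd) =", round(p_a_third, 6))   # about 0.215

# Full equities: weight each payout by the probability of each finishing order.
equity = {p: 0.0 for p in stacks}
for o in permutations(stacks):
    pr = order_prob(o, stacks)
    for place, player in enumerate(o):
        equity[player] += pr * payouts[place]
print({p: round(e, 2) for p, e in equity.items()})
```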

Thanks. I hope you can help.
03-04-2009 , 09:05 PM
i am taking a probability and stats class for my poli sci major next year. i need to get A's in them, yet i am terrible in math. what can i do to prepare myself for these classes?
03-04-2009 , 09:52 PM
Quote:
Originally Posted by jason1990


I am no expert on philosophical issues related to probability, but I have read and thought about this topic a little bit and can offer my opinions.

But feel free to ask anything and I will do my best.

Ok, you left the door open; not sure if this can be answered, but I'm curious.

How could someone, or can someone, mathematically prove/disprove the existence of god/bigfoot/aliens, or any "question of existence"? I know this sounds stupid, but as an undergrad I majored in Biochem, though I found my math and especially probability and statistics classes fascinating. I hear people say you can't prove a negative, or prove that something doesn't exist, only that it does.

FWIW I don't believe in any of this (reasons not for this thread), and for example in the bigfoot case, I argue with ppl that the odds that someone would have found a carcass, bones, hair, feces, teeth, eaten prey, bite patterns on said prey, or burial sites (ppl say this is why we haven't found a body) by now are very high. Also, the caloric intake needed to support such a creature as described is immense, estimated at upwards of 10K calories a day. Many bigfoot believers say they live off "berries and vegetation" within the woods. I find this statement outrageous; there is no way such an environment could support such a being.

We have discovered the existence of dinosaurs, yet even with active scientific and amateur projects to find bigfoot, we have yet to find ONE piece of verifiable evidence of something still currently living. That says to me that the existence of such a creature is highly unlikely.

Or am I missing the point of this thread? Smart ppl ITT
03-06-2009 , 08:45 AM
Hi guys. I thought I'd throw in something that has been bothering me for a while. Hope you can help.

I'm wondering about degrees of freedom (d.o.f.) and their use in determining standard deviation; specifically, why do we divide by the d.o.f. in the sample standard deviation, but not in the population one?

I understand that d.o.f. is the number of independent quantities that can vary. The argument in most textbooks seems to be that since the sample mean is known, only n-1 of the quantities can vary, so we divide the sum of the squared deviations by the d.o.f.

It seems to me that this explanation would also apply to the population. If we knew n-1 values, and the population mean, shouldn't we be dividing by the d.o.f. too?

As a sub-question, why do we need to divide by the d.o.f. at all? It is my understanding that in calculating the variance, we are getting an average squared deviation. To get an average, don't we need to divide by n?
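To make it concrete, here is a small simulation sketch with made-up numbers (normal samples of size n = 5 with true variance 4). With the sample mean, dividing by n comes out biased low and dividing by n - 1 fixes it; with the true population mean, dividing by n is already unbiased, which is exactly the sample-versus-population distinction I'm asking about.

```python
import numpy as np

# Toy numbers: many samples of size n = 5 from a normal population with
# known mean 10 and known variance sigma^2 = 4.  Compare three ways of
# averaging the squared deviations.

rng = np.random.default_rng(0)
sigma2, mean, n, reps = 4.0, 10.0, 5, 200_000

x = rng.normal(mean, np.sqrt(sigma2), size=(reps, n))
xbar = x.mean(axis=1, keepdims=True)

ss_sample_mean = ((x - xbar) ** 2).sum(axis=1)   # deviations from the sample mean
ss_true_mean   = ((x - mean) ** 2).sum(axis=1)   # deviations from the true mean

print("sample mean, divide by n  :", (ss_sample_mean / n).mean())        # ~3.2, biased low
print("sample mean, divide by n-1:", (ss_sample_mean / (n - 1)).mean())  # ~4.0, unbiased
print("true mean,   divide by n  :", (ss_true_mean / n).mean())          # ~4.0, unbiased
```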
