Ask a probabilist

07-28-2008 , 04:35 PM
Quote:
Originally Posted by blah_blah
do you play poker?
Not anymore.

Quote:
Originally Posted by blah_blah
how would you solve the following problem?

Let X_i be iid and such that X_i converges almost surely to X. Then X is a.s. constant.
Convergence a.s. implies convergence in probability, which implies {X_i} is Cauchy in probability. Hence, P(|X_i - X_j| > \epsilon) --> 0 as i,j --> \infty. But since the X_i are iid, this probability does not depend on i and j (for i ≠ j), so it must be identically 0. Hence, X_i = X_j a.s. for all i ≠ j. Since the only random variable that is independent of itself is an a.s. constant, the whole sequence is just a constant sequence.
Ask a probabilist Quote
07-28-2008 , 04:42 PM
Quote:
Originally Posted by thylacine
What is the most general form of the Lovasz Local Lemma? Reference?
Sorry, never heard of it until now. All I can do is copy and paste from Wikipedia:

Quote:
For other versions, see Alon and Spencer (2000).
07-28-2008 , 05:40 PM
Quote:
Originally Posted by jason1990
Convergence a.s. implies convergence in probability, which implies {X_i} is Cauchy in probability. Hence, P(|X_i - X_j| > \epsilon) --> 0 as i,j --> \infty. But this probability does not depend on i and j, so it must be identically 0. Hence, X_i = X_j a.s. for all i,j. Since the only random variable that is independent of itself is a constant, the whole sequence is just a constant sequence.
nifty solution -- it's better than the one i know. i want to make sure i have the details right:

X_i-X_j has the same characteristic function for all i,j, and the characteristic function determines the distribution (e.g. by the inversion formula) so P(|X_i-X_j|>\varepsilon) = 0 for all \varepsilon. is there a better way to see the first part? (e.g., without characteristic functions)

By independence, we get that EX_1^2 = (EX_1)^2. But EX_1^2 \geq (E|X_1|)^2 by the Schwarz inequality, and equality holds iff X_1 is constant a.s.

I think the rest is clear, thanks.

Last edited by blah_blah; 07-28-2008 at 05:47 PM.
07-28-2008 , 05:54 PM
Quote:
Originally Posted by LongLiveYorke
I flip a coin and record whether I get heads or tails. I do this many times and calculate the average number of flips before I get the pattern HTH. Call this number A. I do it again and calculate the average number of flips before I get HTT. Call this number B.

Is A==B, A > B or A < B?
This is a cool problem that was given to our second year stats class at UofT.

It turns out that the solution given by my professor involved solving 3 equations and 3 unknowns. However, I learned of a simpler solution that only requires 2 equations and 2 unknowns.

Let's work out the HTH case, as the other sequence (HTT) can be solved analogously.

A is the expected number of flips until HTH occurs.
B is the expected number of additional flips until HTH occurs, given that the last flip was an H.

We have the following equations:

A = 1/2*(1+A) + 1/2*(1+B)
B = 2*1/4 + 1/2*(1+B) + 1/4*(2+A)

The first equation conditions on the first flip: a tail (probability 1/2) wastes one flip and restarts the count, while a head (probability 1/2) uses one flip and leaves us in the situation described by B. The second equation conditions on the flips after that first H: TH may follow immediately, completing the pattern in two more flips (contribution 2*1/4); we may flip another H, leaving us back in situation B after one flip (contribution 1/2*(1+B)); or we may flip TT, restarting the count after two flips (contribution 1/4*(2+A)).

Solving the system we get that A = 10 and B = 8
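For anyone who wants to check these numbers empirically, here is a quick Monte Carlo sketch (the function name and trial counts are my own choices, not anything from the thread):

```python
import random

def expected_flips(pattern, trials=100_000, seed=0):
    """Monte Carlo estimate of the expected number of fair-coin flips
    until `pattern` (e.g. "HTH") first appears."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        window = ""
        flips = 0
        while window != pattern:
            # Keep only the last len(pattern) flips; that is all that matters.
            window = (window + rng.choice("HT"))[-len(pattern):]
            flips += 1
        total += flips
    return total / trials

print(expected_flips("HTH"))  # close to 10
print(expected_flips("HTT"))  # close to 8
```

Both estimates agree with the answers obtained from the two-equation system.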
07-28-2008 , 06:06 PM
i don't think the solution is any simpler just because you have one less equation. you have to do a little more work to come up with your second equation, whereas if you use 3 equations it's more or less mindless to write down what they are.

let X_0 = your A, X_1 = your B, X_2 = expected number of flips until HTH occurs starting with HT

then

X_0 = 1/2(1+X_0) + 1/2(1+X_1)
X_1 = 1/2(1+X_1) + 1/2(1+X_2)
X_2 = 1/2(1) + 1/2(1+X_0)

the second problem is the same except that the equation for X_2 is X_2 = 1/2(1)+1/2(1+X_1).

here's my other solution.

we condition on the first flip and, when it is H, on the position of the first subsequent T together with the flip that follows it:

X_0 = 1/2(1+X_0) + \sum_{n\geq 3} [n/2^n + (n+X_0)/2^n]

(the n-th term covers the length-n beginnings H...HTH, which complete the pattern, and H...HTT, which restart it), which yields X_0 = 5/2 + 3X_0/4, or X_0 = 10.
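The series manipulation can be double-checked with exact rational arithmetic (the truncation point and variable names below are mine):

```python
from fractions import Fraction

# Truncate the two series at n = 300; the neglected tails are astronomically small.
sum_n = sum(Fraction(n, 2**n) for n in range(3, 300))  # sum of n/2^n over n >= 3, equals 1
sum_1 = sum(Fraction(1, 2**n) for n in range(3, 300))  # sum of 1/2^n over n >= 3, equals 1/4

# X_0 = 1/2 (1 + X_0) + 2*sum_n + X_0*sum_1, solved for X_0:
x0 = (Fraction(1, 2) + 2 * sum_n) / (1 - Fraction(1, 2) - sum_1)
print(float(x0))  # 10.0
```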
07-28-2008 , 09:48 PM
Could you show the form of a simple 2,3, or n dimensional SDE and then provide a simple intuitive explanation of what the terms are describing in the sense of local probability flows? If so, could you give a real world example where you might think you know how the local probability should be flowing and thereby just write down the corresponding SDE to model the situation?

My thought has been that the first-order term describes the local linear drift, while the second-order term describes the local shape of the expansion of "point masses" of probability. I was never able to get a teacher to say something like this, though. All I could ever get in response to such requests for an intuitive concept was the intricate technical definition, with a laborious proof of the appropriate stochastic limit of the sequence of appropriate discrete jump paths. Or something like that.

Thanks,

PairTheBoard
07-28-2008 , 10:27 PM
Quote:
Originally Posted by blah_blah
your answer is all i really wanted. what's your opinion of courant and UCLA for probability?
I would have put Courant and UBC over Columbia and Stanford on Jason's original list but otherwise agree with it. Prior to hiring Berger, I would have said that UCLA was noticeably below the other schools on his list and that it didn't really have a good culture for young probabilists. Noam is both very strong and energetic, which helps, but unless you are interested in the sort of research that he and Marek do I wouldn't go there.
07-29-2008 , 12:59 AM
SHELDON ROSS!!!!!!!!!
07-29-2008 , 01:21 AM
Quote:
Originally Posted by blah_blah
nifty solution -- it's better than the one i know. i want to make sure i have the details right:

X_i-X_j has the same characteristic function for all i,j, and the characteristic function determines the distribution (e.g. by the inversion formula) so P(|X_i-X_j|>\varepsilon) = 0 for all \varepsilon. is there a better way to see the first part? (e.g., without characteristic functions)
Characteristic functions are probably as good as anything. I would have just said that the law of (X_i,X_j) (that is, the probability measure on R^2 induced by (X_i,X_j)) is just the product measure m*m, where m is the law of X_1. For each \varepsilon, there exists a Borel set A such that

P(|X_i - X_j| > \varepsilon) = P((X_i,X_j) in A) = m*m(A),

and this does not depend on i or j.

Quote:
Originally Posted by blah_blah
By independence, we get that EX_1^2 = (EX_1)^2. But EX_1^2 \geq (E|X_1|)^2 by the Schwarz inequality, and equality holds iff X_1 is constant a.s.
You want to be careful here that you are not assuming the existence of any moments. You could also say that, by independence, F(x) = P(X_1 <= x) satisfies F(x) = (F(x))^2. Hence, for each x, F(x) = 0 or F(x) = 1. Since F is nondecreasing and right-continuous, there exists a constant c such that F(x) = 0 for x < c and F(x) = 1 for x >= c.
07-29-2008 , 02:06 AM
Quote:
Originally Posted by jason1990
You want to be careful here that you are not assuming the existence of any moments.
ok, now i feel silly
07-29-2008 , 02:20 AM
what drew you to math as opposed say physics, engineering or computer science?
07-29-2008 , 03:35 AM
Quote:
Originally Posted by PairTheBoard
Could you show the form of a simple 2,3, or n dimensional SDE
Here is a simple 2-dimensional SDE:

dX = m(X) dt + s(X) dB.

The solution, X, will be a process taking values in R^2; B is a 2-dimensional Brownian motion. The function m maps R^2 to R^2, and the function s maps a point in R^2 to a 2 x 2 matrix. The above equation is shorthand for the pair of equations

dX_1 = m_1(X) dt + s_{11}(X) dB_1 + s_{12}(X) dB_2,
dX_2 = m_2(X) dt + s_{21}(X) dB_1 + s_{22}(X) dB_2.


(Think of X, m(X), and dB as 2 x 1 column vectors; dt is a scalar; and s(X) is a 2 x 2 matrix.) In turn, these are shorthand for the integral equations

X_i(t) = X_i(0) + \int_0^t m_i(X(s)) ds
+ \int_0^t s_{i1}(X(s)) dB_1(s)
+ \int_0^t s_{i2}(X(s)) dB_2(s),

where the last two integrals are Ito integrals.
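Under standard Lipschitz conditions on m and s, these integral equations can be approximated numerically with the Euler–Maruyama discretization. The sketch below is my own illustration, not something from the thread (all names and parameter choices are mine):

```python
import math
import random

def euler_maruyama_2d(m, s, x0, T=1.0, n_steps=1000, seed=0):
    """Simulate one path of dX = m(X) dt + s(X) dB in R^2 via
    X_{k+1} = X_k + m(X_k) dt + s(X_k) dB_k, where dB_k has two
    independent N(0, dt) components."""
    rng = random.Random(seed)
    dt = T / n_steps
    x = list(x0)
    for _ in range(n_steps):
        dB = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(2)]
        drift, diff = m(x), s(x)
        x = [x[i] + drift[i] * dt + diff[i][0] * dB[0] + diff[i][1] * dB[1]
             for i in range(2)]
    return x  # an approximate sample of X(T)

# Example: unit drift in the first coordinate, identity diffusion matrix.
m = lambda x: (1.0, 0.0)
s = lambda x: ((1.0, 0.0), (0.0, 1.0))
print(euler_maruyama_2d(m, s, (0.0, 0.0)))
```

With the diffusion matrix set to zero, the scheme reduces to Euler's method for the ODE dX/dt = m(X), which is a handy sanity check.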

Quote:
Originally Posted by PairTheBoard
and then provide a simple intuitive explanation of what the terms are describing in the sense of local probability flows?
I am not sure what you mean by "local probability flows." Does this refer to the way the probability density function of the solution changes with time? Under suitable assumptions on m and s, the solution X(t) has a density function f(x,t) that satisfies the Kolmogorov forward equation (also called the Fokker–Planck equation).

Personally, I do not find this to be a very intuitive way of thinking about the dynamics described by the SDE. I prefer to think about the paths of X, rather than the densities f(x,t). Informally, I would write

dX/dt = m(X) + s(X) W(t),

where W(t) = dB/dt. The "white noise" term W(t) exists only as a generalized function, but you can think of it informally as a stationary, R^2-valued process that gives an infinitesimal random push at time t to X. The push has no preferred direction and is independent of the pushes given prior to time t. The usual example is a pollen grain on the surface of still water being pushed by collisions with the surrounding water molecules.

In this case, the term m(X) gives a deterministic drift, and the term s(X) serves to modify the magnitude of the noise term and, possibly, give it a preferred direction. (Here, "direction" does not mean a vector, but a line through the origin. The modified noise term will still have mean 0.) In this sense, the term s(X) is changing the shape of the density of the noise term. But this is all completely informal. Rigorously, the noise term is not even a random variable (in the usual sense), so there is no density whose shape can change.

Quote:
Originally Posted by PairTheBoard
If so, could you give a real world example where you might think you know how the local probability should be flowing and thereby just write down the corresponding SDE to model the situation?
There should be some nice examples in Oksendal that illustrate what I described above. If not, you should be able to cook something up yourself. Maybe you could model a pollen grain on a gently drifting pool of water whose temperature varies spatially.
07-29-2008 , 03:53 AM
Would you rather live forever or die?
07-29-2008 , 04:18 AM
Quote:
Originally Posted by furyshade
what drew you to math as opposed say physics, engineering or computer science?
Originally, as an undergrad, it was probably just the fact that math was what I perceived as my strongest subject. But I did always enjoy my math classes much more than any others, so I am sure that was a factor on some level.

Personally, I have a hard time understanding a lot of physicists, engineers, and computer scientists. Maybe it's the skeptic in me, but I need to see the rigor before I feel I truly understand. I had a math professor in grad school who was originally a physicist. He was an older guy and said he was considering going back into physics after all these years. He explained to us that he originally left physics for math so that he could figure out what all those physicists were actually talking about. Well, now he knew, so he was ready to return.

I also like the flexibility I have. I can work on mathematical models in any field of my choice. Not only that, I am free to play with the models in any way I like, as long as it is mathematically interesting. And all of my research can be done with a pen and paper. That I like.
07-29-2008 , 04:18 AM
Quote:
Originally Posted by furyshade
what drew you to math as opposed say physics, engineering or computer science?
I expect the reaction of most math people will be "lol".
07-29-2008 , 04:23 AM
Quote:
Originally Posted by Fly
I expect the reaction of most math people will be "lol".

ummmm... no
07-29-2008 , 05:08 AM
Quote:
Originally Posted by Max Raker
Would you rather live forever or die?
I've seen some Twilight Zones where the guy gets a seemingly amazing gift, then there's a twist, and the so-called gift becomes a fate worse than death. So out of risk-aversion, I think I would rather keep reality as it is and die the death I've got coming to me.

However, had you asked:

Quote:
Would you rather be an immortal in the universe of the Highlander films, getting in all kinds of sword-fighting adventures as you wander the globe and battle the forces of darkness, or die?
I think I'd go for it.
07-29-2008 , 06:27 AM
This is a cross-post from the probability forum, but it's pretty quiet there, so i'll try asking you...

I was in a live tournament, and when the cards were being dealt one guy had his second card flip over, and it was a Q. After he had been given a new card, he checked them and said "i have a Q, but i dont know which of these cards was the first one, so it's 50-50 that i would've gotten a pair of queens". Now i thought his logic was false and that it would be more than 50% to get the pair, since when the first cards were dealt there were four queens in the deck and for the second card only three. But i couldn't prove it better than that, and i have no idea where to start if i had to compute the correct odds of catching the pair, so what's the best way to do it?

Thanks,

Saku
07-29-2008 , 07:11 AM
Quote:
Originally Posted by KSakuraba
I was in a live tournament, and when the cards were being dealt one guy had his second card flip over and it was a Q. After he had been given new card, he checked them and said "i have a Q, but i dont know which of these cards was the first one, so it's 50-50 that i would've gotten a pair of queens".
The result is correct. For simplicity, imagine that his original two cards were the top two cards in the deck, and his replacement card is the third from the top. We first learn

A = Card 2 is a Q.

Then we learn

B = Cards 1 and 3 are a Q and a non-Q, in some order.

Given A and B, we want the probability of

C = Card 1 is a Q,

which is

P(C | A and B) = P(A and B and C)/P(A and B)
= P(Q, Q, non-Q)/[P(Q, Q, non-Q) + P(non-Q, Q, Q)].

We now compute

P(Q, Q, non-Q) = (4/52)*(3/51)*(48/50), and
P(non-Q, Q, Q) = (48/52)*(4/51)*(3/50).

But both of these are just (4*3*48)/(52*51*50), so they are equal and the final answer is 1/2.
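The arithmetic is easy to verify with exact rationals:

```python
from fractions import Fraction

# The two card orderings consistent with the observed information.
p_QQx = Fraction(4, 52) * Fraction(3, 51) * Fraction(48, 50)  # (Q, Q, non-Q)
p_xQQ = Fraction(48, 52) * Fraction(4, 51) * Fraction(3, 50)  # (non-Q, Q, Q)

print(p_QQx == p_xQQ)             # True: both equal (4*3*48)/(52*51*50)
print(p_QQx / (p_QQx + p_xQQ))    # 1/2
```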
07-29-2008 , 01:07 PM
How much physics do you know or have you tried to learn?

[It is interesting to me because I know tons of people who are interested in learning physics but do not know enough math so I am curious about people who know enough math but simply aren't that interested]
07-29-2008 , 01:10 PM
Quote:
Originally Posted by jason1990
Quote:
Originally Posted by David Sklansky
"This statement is a lie" is a Russell paradox. What about "This statement is 90% to be a lie"?
In my opinion, the latter sentence is not even meaningful, let alone a paradox.
I still think the sentence is meaningless for reasons that have nothing to do with what I am about to post. But, for fun, I am going to pretend everything in sight is sensible and be very loose with the "meaning" of things.

Suppose we construct, on some probability space, an event S and a [0,1]-valued random variable X that satisfy P(S | X = p) = p for all p in [0,1]. (Or, more rigorously, P(S | X) = X.) Then the event {X = p} might be loosely translated into English as "The probability of S is p". In fact, this is just a probability mixture model, and it is what we might use to model a bent coin, for example. (With S = "The coin lands heads" and X having some distribution on [0,1], perhaps uniform.)
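The bent-coin mixture model is easy to simulate, and the defining property P(S | X = p) = p shows up as a conditional frequency (the interval and sample size below are my own choices):

```python
import random

def conditional_frequency(lo=0.6, hi=0.7, trials=100_000, seed=0):
    """Draw X ~ Uniform[0,1], then let S occur with conditional
    probability X.  Estimate P(S | X in [lo, hi)); it should be
    close to the midpoint of the interval."""
    rng = random.Random(seed)
    in_interval = s_count = 0
    for _ in range(trials):
        x = rng.random()
        s_occurs = rng.random() < x  # S occurs with probability x, given X = x
        if lo <= x < hi:
            in_interval += 1
            s_count += s_occurs
    return s_count / in_interval

print(conditional_frequency())  # close to 0.65
```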

Now suppose we constructed such an S and X, and then discovered that S = {X = 0}. Then, in informal language, we would have

S = "The probability of S is 0."

or

S = "The probability of this event is 0."

or

S = "The probability of this event not occurring is 1."

One might consider this the probabilistic version of "This statement is a lie." Does such a construction exist?

Quote:
Does there exist an event S and a random variable X, defined on the same probability space, such that P(S | X) = X and S = {X = 0}?
Spoiler:
No. Suppose S and X exist with the above properties. Then

X = P(S | X) = P(X = 0 | X) = 1_{X = 0} a.s.,

where 1_A denotes the indicator function of the event A (that is, 1_A = 1 if A occurs, and 1_A = 0 if A does not occur).

But if X = 0, then 1_{X = 0} = 1; and if X != 0, then 1_{X = 0} = 0. Hence, X can never equal 1_{X = 0}, which is a contradiction.

Now, let us suppose, as before, that we construct S and X such that P(S | X = p) = p for all p. But this time, we discover that S = {X = 0.1}. Then, in informal language, we would have

S = "The probability of S is 0.1."

or

S = "The probability of this event is 0.1."

or

S = "The probability of this event not occurring is 0.9."

One might consider this a version of "This statement is 90% to be a lie." Does such a construction exist?

Quote:
Does there exist an event S and a random variable X, defined on the same probability space, such that P(S | X) = X and S = {X = 0.1}?
Spoiler:
Yes. Take any probability space and let S be the empty set and X the random variable that is identically 0.

In fact, this is the only example, modulo a set of measure zero. To see this, suppose S and X satisfy the above conditions. Then

X = P(S | X) = P(X = 0.1 | X) = 1_{X = 0.1} a.s.

In other words, P(X = 1_{X = 0.1}) = 1. This implies that X = 0 a.s. (Why?) Hence, S = {X = 0.1} is a set of measure zero. (That is, P(S) = 0.)

So, informally, we have shown that it is possible to construct the event "The probability of this event not occurring is 0.9." And moreover, the event must necessarily have probability 0.

On a different note, only 2 more posts to go. Actually, I'm just assuming that my title will change at 1000 posts. I don't really know for sure, since I've never paid much attention to the titles. And if it does change, I don't even know what it's supposed to change to. I think it would be pretty cool if it changed to "probabilist".
07-29-2008 , 02:30 PM
Quote:
Originally Posted by Max Raker
How much physics do you know or have you tried to learn?
I know enough first-year physics to speak sensibly when I present examples to my calculus students. I have read some mathematical discussions of special relativity, and they seemed quite comprehensible to me at the time. I bought a book on general relativity once, but never got beyond the first few pages. I decided that I wanted to go back and thoroughly relearn basic differential geometry before picking it up again. Unfortunately, I will probably never have time to do that, unless I find a way to incorporate stochastic differential equations on manifolds into my research. I would actually like to do that in the near future.

I did, however, spend a good deal of time trying to learn the mathematics of quantum mechanics. That was a little easier, since I use a lot of functional analysis in my work. Ultimately, I wanted to get a fairly complete understanding of the core mathematics, and then try to understand some of the various interpretations. But other things came up and that whole project got cut short some time back. I intend to pick it up again soon, though.
07-29-2008 , 04:41 PM
do you play poker very seriously? if so, do you think being a probabilist has given you a deeper understanding of many concepts compared to other players at your level, whatever level that is? i guess more simply, do you think knowing advanced math in your field is really relevant to poker?
07-29-2008 , 05:17 PM
Do you have some cool probability trick/story/funny/puzzle I can use to play mindgames on my friends to drive them crazy?

Last edited by tame_deuces; 07-29-2008 at 05:23 PM.
07-29-2008 , 06:25 PM
Do you (or any of your colleagues) have any strong opinions on the "controversy" in theoretical physics currently with respect to string theory?