Ask a probabilist

08-17-2008 , 09:43 PM
This is a far more pedestrian question than the rest. I have no expertise or higher education in math, but I still find myself very interested in it, and even more so in logic and some physics. This led me to get into the show Numb3rs.

From someone who actually understands all the jargon they throw around on the show, how much of it has some fidelity to actual math concepts and solutions? How much is just rhetoric and mumbo-jumbo? And do a lot of math guys like this show, or is it like when we see shows try to do poker-based episodes and it's so dumbed-down that it's laughable? (interestingly, Numb3rs has done both poker and blackjack based episodes)
08-18-2008 , 12:16 PM
Quote:
Originally Posted by shronkj
This led me to get into the show Numb3rs.
My initial impression when it came out was that Numb3rs is to mathematics what Indiana Jones movies are to archaeology.

Charlie Eppes has an unrealistically broad knowledge of mathematics, but I don't think the show gets too outrageous about that. To me, it feels like the show has less math now than earlier. It seems to have started focusing more on the crime drama aspect. My wife (also a mathematician) and I like the show, but we also like a lot of other crime shows. I have a colleague who loves the show, but he says it is because of the characters and the relationships between the Eppes brothers and their father. I have another colleague who hates it. I don't know any mathematicians who love it for the math.

Some of the stuff they say makes me laugh. But I tend to defend the show against my wife, who very frequently criticizes the math parts. The stuff they say is usually vague enough to allow me to put up a reasonable defense. And they do use professional consultants to try to keep it realistic.

One of the biggest laughs I got was once when Charlie had some formula on a chalkboard. A rival of his entered, saw it, immediately recognized it, and went on to give a detailed criticism. What was funny is that the formula was some totally mundane thing that was completely meaningless out of context, like \sum a_i b_i, or something like that. Also, I don't know anyone who does so much actual work on multiple, large chalkboards. I think most of us use paper, but that would not be as compelling to watch on TV.
08-18-2008 , 11:04 PM
Quote:
Originally Posted by jason1990
My initial impression when it came out was that Numb3rs is to mathematics what Indiana Jones movies are to archaeology.

Charlie Eppes has an unrealistically broad knowledge of mathematics, but I don't think the show gets too outrageous about that. To me, it feels like the show has less math now than earlier. It seems to have started focusing more on the crime drama aspect. My wife (also a mathematician) and I like the show, but we also like a lot of other crime shows. I have a colleague who loves the show, but he says it is because of the characters and the relationships between the Eppes brothers and their father. I have another colleague who hates it. I don't know any mathematicians who love it for the math.

Some of the stuff they say makes me laugh. But I tend to defend the show against my wife, who very frequently criticizes the math parts. The stuff they say is usually vague enough to allow me to put up a reasonable defense. And they do use professional consultants to try to keep it realistic.

One of the biggest laughs I got was once when Charlie had some formula on a chalkboard. A rival of his entered, saw it, immediately recognized it, and went on to give a detailed criticism. What was funny is that the formula was some totally mundane thing that was completely meaningless out of context, like \sum a_i b_i, or something like that. Also, I don't know anyone who does so much actual work on multiple, large chalkboards. I think most of us use paper, but that would not be as compelling to watch on TV.
All very interesting to a casual fan.

From my limited math knowledge, it seems like all Charlie's "solutions" lately (the past two seasons, really) have just been different ways of saying "I'll write an equation that will sift through the data."

I think it's becoming apparent, even to regular viewers like me, that they're running out of ways to have the math solve the crime. Like your colleague, I also really like the father/sons relationships and I still really like their more abstract discussions that focus on logic and reasoning, but the actual a+b=thebutlerdidit parts seem a little strained lately.

Any other specifics that were completely laughable?

08-19-2008 , 12:15 PM
Quote:
Originally Posted by shronkj
From my limited math knowledge, it seems like all Charlie's "solutions" lately (the past two seasons, really) have just been different ways of saying "I'll write an equation that will sift through the data."
I agree, and this is a great way of putting it.

Quote:
Originally Posted by shronkj
Any other specifics that were completely laughable?
Nothing jumps out at me off the top of my head. I do remember being a little bothered by the way he talked about Brownian motion in the episode "Rampage." But I would have to watch the episode again to give you any details.
09-06-2008 , 03:12 PM
Let A and B be two events. Prove the following relations by the elementwise method.
(a) (A-[A intersect B]) union B = A union B
(b) (A union B) - [A intersect B] = [A intersect not B] union [not A intersect B]

I'm trying for a stats/math minor and am having difficulty figuring out the elementwise method. Can you help lead me towards how to solve this problem, and how it should be done?
09-06-2008 , 03:38 PM
Quote:
Originally Posted by imjoshsizemore
Let A and B be two events. Prove the following relations by the elementwise method.
(a) (A-[A intersect B]) union B = A union B
(b) (A union B) - [A intersect B] = [A intersect not B] union [not A intersect B]

I'm trying for a stats/math minor and am having difficulty figuring out the elementwise method. Can you help lead me towards how to solve this problem, and how it should be done?
(a) Let x be in LHS. Then x is in (A-[A intersect B]) OR x is in B.
Must show x is in RHS. If x is in B, done. If x is not in B, x must be in
(A-[A intersect B]), since x is in LHS. That means x is in A (just not that part of A which intersects with B). Therefore x is either in B and done, OR in A and also done. x in LHS ==> x in RHS.

Must also show x in RHS ==> x in LHS. Let x be in RHS. Then x is in A OR x is in B. If x is in B then x is in LHS and done. If x is not in B then x is in A (and x is not in B), since x is in RHS. x not in B implies x not in [A intersect B]. So x is in A and x is not in [A intersect B]. Thus, x is in (A - [A intersect B]) and therefore x is in LHS. So done and done.

I'll leave you to unravel things in (b).

PairTheBoard
09-06-2008 , 07:17 PM
Annie Duke played a blackjack dealer on an episode of Numb3rs.
09-06-2008 , 08:36 PM
An attempt at b? :\

Let x be in the LHS, so x is either an element of A-AB or of B-AB. If x is an element of A-AB, then x is an element of ABnot, so x is an element of the union of ABnot and AnotB.

If x is an element of B-AB, then x is an element of AnotB, so x is an element of the union of ABnot and AnotB.
09-07-2008 , 09:59 AM
Quote:
Originally Posted by imjoshsizemore
Let A and B be two events. Prove the following relations by the elementwise method.
(a) (A-[A intersect B]) union B = A union B
(b) (A union B) - [A intersect B] = [A intersect not B] union [not A intersect B]

I'm trying for a stats/math minor and am having difficulty figuring out the elementwise method. Can you help lead me towards how to solve this problem, and how it should be done?
I will also illustrate (a), in my own words. First, let me try to use some more readable notation:

(a) (A \ (A ∩ B)) ∪ B = A ∪ B.

To prove this, we want to separately prove two different statements.

(a1) (A \ (A ∩ B)) ∪ B ⊂ A ∪ B,
(a2) A ∪ B ⊂ (A \ (A ∩ B)) ∪ B.

Step 1: proof of (a1). The so-called "elementwise method" means we first assume the premise

(P) x ∈ (A \ (A ∩ B)) ∪ B,

and then use (P) to prove the conclusion

(C) x ∈ A ∪ B.

Okay, so let us assume (P). By the definition of the symbol "∪", this means that x ∈ A \ (A ∩ B) or x ∈ B. So let us treat these two cases separately.

Case 1. x ∈ A \ (A ∩ B)
In this case, we assume that x ∈ A \ (A ∩ B). By the definition of the symbol "\", this means that x ∈ A and x ∉ (A ∩ B). So we know that x ∈ A. But this implies that x ∈ A ∪ B, which is (C). So we have proved (C).

Case 2. x ∈ B
In this case, we assume x ∈ B. But this implies x ∈ A ∪ B, which is (C). So again we have proved (C).

In summary, (P) implies either Case 1 or Case 2 is true. In both cases, (C) is true. We have therefore proved that (P) implies (C). This is logically equivalent to (a1). So we have proved (a1).

Step 2: proof of (a2). As before, we first assume

(P') x ∈ A ∪ B,

and then use (P') to prove

(C') x ∈ (A \ (A ∩ B)) ∪ B.

So let us assume (P'). This means x ∈ A or x ∈ B.

Case 1. x ∈ A
This case is a little tricky, so we will break it down into two subcases.

Subcase 1.1. x ∈ A and x ∈ B
In this subcase, we assume x ∈ A and x ∈ B. But x ∈ B implies x ∈ (A \ (A ∩ B)) ∪ B, which is (C').

Subcase 1.2. x ∈ A and x ∉ B
In this subcase, we assume x ∈ A and x ∉ B. But this implies that x ∉ A ∩ B. So we can say that x ∈ A and x ∉ A ∩ B. But by definition, this means x ∈ A \ (A ∩ B), which implies x ∈ (A \ (A ∩ B)) ∪ B, and this is (C').

Case 2. x ∈ B
In this case, we assume x ∈ B. But this implies x ∈ (A \ (A ∩ B)) ∪ B, which is (C').

In all cases (C') is true, so we have proved that (P') implies (C'), which means we have proved (a2). Since (a) is logically equivalent to "(a1) and (a2)," Steps 1 and 2, combined, give a complete proof of (a), and we are finished.

It may be useful to take a big-picture survey of what we did to prove (a). We proved it by separately proving (a1) and (a2) in two distinct steps. In each step, we proved something of the form

"left" ⊂ "right".

We did this by assuming that x ∈ "left", and then using this assumption to prove that x ∈ "right". This is the skeleton that all problems like this will have.

The details contained within the two separate steps will depend very much on what "left" and "right" actually look like. You will not always have to break the individual steps into multiple cases/subcases, but this is a common and useful technique.
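
(A footnote for programmers: identities like these are easy to sanity-check by brute force before writing the elementwise proof. Here is a minimal Python sketch -- my own illustration, not part of the proof -- that verifies (a) and (b) over every pair of subsets of a small universe.)

Code:
from itertools import chain, combinations

def subsets(universe):
    """All subsets of the universe, as frozensets."""
    items = list(universe)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

U = {1, 2, 3, 4}
for A in subsets(U):
    for B in subsets(U):
        # (a): (A \ (A ∩ B)) ∪ B = A ∪ B
        assert (A - (A & B)) | B == A | B
        # (b): (A ∪ B) \ (A ∩ B) = (A ∩ not B) ∪ (not A ∩ B)
        assert (A | B) - (A & B) == (A - B) | (B - A)
print("both identities hold for all 256 pairs of subsets")

Of course, a finite check is evidence, not proof; the elementwise argument above is what actually establishes the identities.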

12-17-2008 , 05:17 PM
In an attempt to elevate the non-religious content of this forum, I thought I would resurrect this old thread. I have said it before, but it is worth repeating that probability, depending on the context, can be considered a science, a branch of mathematics, and a subject of deep philosophical thought. It also is (or should be) of extreme importance and interest to any community of gamblers.

Since I am not a big fan of the no-content bump, I thought I would take this opportunity to answer the following question, which I received by PM:

Quote:
I recently encountered an interesting solution to the derangements problem using probability theory but the solution has a dangerous step that I would appreciate your input on.

Q. What is the probability that a permutation of [n] chosen at random has no fixed points?

Let A_i be the event that point i is not fixed. P( A_i ) = (n-1)/n. Clearly, A_i and A_j are not independent but as the size of the set increases, the dependence between them weakens. If we let n go to inf, we can treat the events as if they were independent.

P(no fixed) = lim (n-> inf) [ ( (n-1)/n ) ^ n ] = lim (n-> inf) [ ( 1 - 1/n ) ^ n ] = 1/e

Is it a valid technique to treat the events as independent in the limit, if so when can this be applied ?
This "proof" looks clever and leads to the right answer. But of course, that does not confer upon it any degree of credibility. Let us first look at this piece:

Quote:
Let A_i be the event that point i is not fixed. P( A_i ) = (n-1)/n. Clearly, A_i and A_j are not independent but as the size of the set increases, the dependence between them weakens. If we let n go to inf, we can treat the events as if they were independent.
This is not very precise, but it is correct. It is easy to calculate both P(A_i ∩ A_j) and P(A_i)P(A_j). When we do this, we see that they are very close. Their difference is approximately 1/n³.
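
(For the curious: this is easy to verify exactly for small n by enumerating permutations. A quick Python sketch -- mine, purely for illustration -- shows the difference is exactly 1/(n²(n − 1)), which is approximately 1/n³.)

Code:
from fractions import Fraction
from itertools import permutations

def difference(n, i=0, j=1):
    """P(A_i ∩ A_j) - P(A_i)P(A_j) for a uniform random permutation."""
    perms = list(permutations(range(n)))
    both = sum(1 for p in perms if p[i] != i and p[j] != j)
    p_joint = Fraction(both, len(perms))
    p_single = Fraction(n - 1, n)
    return p_joint - p_single * p_single

for n in [4, 5, 6, 7]:
    # the two printed fractions agree for every n
    print(n, difference(n), Fraction(1, n ** 2 * (n - 1)))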

Now, the next step.

Quote:
P(no fixed) = lim (n-> inf) [ ( (n-1)/n ) ^ n ]
This notation is a little unclear, but I think the intent is understandable. The goal is to prove that

\lim_{n → ∞} P(no fixed points) = 1/e.

The author is trying to use here the "fact" that

P(A_1 ∩ A_2 ∩ ... ∩ A_n) ≈ P(A_1)P(A_2)...P(A_n) = ((n - 1)/n)^n.

But this supposed "fact" relies on a fallacy. The first part of the proof argued that

P(A_i ∩ A_j) ≈ P(A_i)P(A_j) for all i and j.

This is true, as we mentioned above. In probability, we call this pairwise independence. But pairwise independence does not imply independence. So we cannot automatically conclude (even approximately) that the same thing is true for more than two events.

So this "proof" is not valid. The problem with it is not simply a lack of rigor, or faulty notation. The problem is that it relies on the false premise that pairwise independence implies independence. The standard, correct, probabilistic proof uses inclusion-exclusion, and the Taylor series expansion for e^{-1}.
12-17-2008 , 05:37 PM
So if the proof is wrong, why does this argument still produce a correct answer? Perhaps pairwise independence is sufficient for this example, though I would find that surprising and nontrivial. Do you have an example off the top of your head of a case where independence and pairwise independence would produce different results?

Also, I couldn't agree more with the goal of reducing the fraction of traffic in this forum based on repeating the same stupid religious arguments, so thumbs up there.
12-17-2008 , 07:36 PM
Jason,

What are your thoughts on the parameterization of the gamma/exponential distributions, and why can't everyone just agree on one standard?
12-17-2008 , 07:45 PM
Jason,

For k <<<< N, it seems reasonable that

P(A_1 and A_2 and .... and A_k) ~= P(A_1)*P(A_2)*.....*P(A_k)

But for k ~= N, this would break down.

Could the argument be made to work for k ~= N if we look at N as infinitely large?
12-17-2008 , 07:47 PM
Quote:
Originally Posted by gumpzilla
Do you have an example off the top of your head of a case where independence and pairwise independence would produce different results?
Here's a page with an example of both ways, one where there is pairwise independence but not mutual, and vice versa.

http://www.morris.umn.edu/~sungurea/...ependence.html
12-18-2008 , 04:27 PM
Quote:
Originally Posted by gumpzilla
Do you have an example off the top of your head of a case where independence and pairwise independence would produce different results?
Yes: the two most important theorems in probability theory. The law of large numbers requires only pairwise independence, but the central limit theorem requires independence. (There are examples of pairwise independent, identically distributed sequences for which the central limit theorem fails.)
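
For a small, concrete instance of pairwise independence without mutual independence, there is the standard two-coin construction. A Python sketch of it (the example is classical; the code is my own illustration):

Code:
from itertools import product
from fractions import Fraction

# Two fair coin flips; each of the four outcomes has probability 1/4.
omega = list(product([0, 1], repeat=2))

def P(event):
    return Fraction(len(event), len(omega))

A = {w for w in omega if w[0] == 1}      # first coin is heads
B = {w for w in omega if w[1] == 1}      # second coin is heads
C = {w for w in omega if w[0] == w[1]}   # the two coins agree

# Any two of the three events are independent...
for X, Y in [(A, B), (A, C), (B, C)]:
    assert P(X & Y) == P(X) * P(Y)

# ...but the triple is not:
print(P(A & B & C), "vs", P(A) * P(B) * P(C))  # 1/4 vs 1/8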
12-18-2008 , 04:37 PM
Quote:
Originally Posted by Justin A
What are your thoughts on the parameterization of the gamma/exponential distributions, and why can't everyone just agree on one standard?
I have never had a problem with this. Context has always been sufficient for me to understand what I am reading. When writing, I usually will say something like, "Let X be exponentially distributed with mean ..." That solves the problem right away.

The reason there are two "standards" is that both the mean and its reciprocal have very natural interpretations. As you may know, these distributions are intimately connected to the Poisson process. The mean of an exponential is the mean interarrival time of the associated Poisson process; its reciprocal is the rate of arrivals.
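
To make the ambiguity concrete in code: NumPy's exponential sampler, for instance, is parameterized by the mean (which it calls the scale), so a rate must be inverted first. A small sketch:

Code:
import numpy as np

rng = np.random.default_rng(0)

rate = 2.0          # lambda: arrivals per unit time
mean = 1.0 / rate   # mean interarrival time

# NumPy's scale parameter is the mean, i.e. 1/lambda.
samples = rng.exponential(scale=mean, size=100000)
print(samples.mean())  # approximately 0.5

# So "X is exp(2)" could mean rate 2 (mean 0.5) or mean 2 (rate 0.5);
# writing "exponentially distributed with mean 1/2" removes the doubt.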
12-18-2008 , 04:55 PM
Quote:
Originally Posted by Fly
For k <<<< N, it seems reasonable that

P(A_1 and A_2 and .... and A_k) ~= P(A_1)*P(A_2)*.....*P(A_k)
This is even true for k ≈ n. The problem is that the given "proof" does not prove it.

In the course of using inclusion-exclusion in the "standard" proof, we are led to the formula

P(A_1 ∩ ... ∩ A_n) = \sum_{i=0}^n (-1)^i/i!

On the other hand,

P(A_1)...P(A_n) = (1 - 1/n)^n
= \sum_{i=0}^n (nCi)(-1)^i/n^i
= \sum_{i=0}^n [n!/((n-i)!n^i)] (-1)^i/i!

Are these values close to one another for large n? Well, of course. They are both close to 1/e. But more specifically, we can just look directly at how they differ. The second expression has an additional factor in front of each term. It can be written as

(n/n)((n - 1)/n)((n - 2)/n)...((n - (i + 1))/n).

For terms that correspond to small i, this additional factor is close to 1, so it makes a negligible difference in the final value. For terms that correspond to moderate and large values of i, the extra factor is very small. However, those terms are quite negligible in the overall summation, so changing them (or even eliminating them altogether) has very little effect on the final value.

So in the end, the approximation is seen to be valid. But the given "proof" does not demonstrate this, not even on a heuristic level. I see no easy way to prove this without using inclusion-exclusion.
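
Both expressions are easy to compute side by side, which makes the comparison concrete. A short Python sketch (my illustration):

Code:
import math

def inclusion_exclusion(n):
    # P(A_1 ∩ ... ∩ A_n) = \sum_{i=0}^n (-1)^i/i!
    return sum((-1) ** i / math.factorial(i) for i in range(n + 1))

def product_form(n):
    # P(A_1)...P(A_n) = (1 - 1/n)^n
    return (1 - 1 / n) ** n

for n in [5, 10, 50, 500]:
    a, b = inclusion_exclusion(n), product_form(n)
    print(n, a, b, abs(a - b))  # both approach 1/e ≈ 0.367879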
12-18-2008 , 06:59 PM
Quote:
Originally Posted by Justin A
Here's a page with an example of both ways, one where there is pairwise independence but not mutual, and vice versa.

http://www.morris.umn.edu/~sungurea/...ependence.html
This is not possible. Mutual independence -- more commonly called just "independence" -- implies pairwise independence.

Quote:
Definition: A collection of events -- possibly more than just two of them -- are mutually independent if and only if for any finite subset A_1, ..., A_n of the collection we have P(A_1 ∩ A_2 ∩ ... ∩ A_n) = P(A_1)P(A_2)...P(A_n).

So, for example, if you want to check that A_1, A_2, A_3, and A_4 are independent, it is not enough to check that

P(A_1 ∩ A_2 ∩ A_3 ∩ A_4) = P(A_1)P(A_2)P(A_3)P(A_4).

You must also check that

P(A_1 ∩ A_2 ∩ A_3) = P(A_1)P(A_2)P(A_3),
P(A_1 ∩ A_2 ∩ A_4) = P(A_1)P(A_2)P(A_4),
P(A_1 ∩ A_3 ∩ A_4) = P(A_1)P(A_3)P(A_4),
P(A_2 ∩ A_3 ∩ A_4) = P(A_2)P(A_3)P(A_4),
P(A_1 ∩ A_2) = P(A_1)P(A_2),
P(A_1 ∩ A_3) = P(A_1)P(A_3),
P(A_1 ∩ A_4) = P(A_1)P(A_4),
P(A_2 ∩ A_3) = P(A_2)P(A_3),
P(A_2 ∩ A_4) = P(A_2)P(A_4), and
P(A_3 ∩ A_4) = P(A_3)P(A_4).

As you can see, pairwise independence is built right into the definition of (mutual) independence. The conclusion at the bottom of the page you linked to is incorrect.
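
The definition translates directly into code. Here is a sketch of a checker -- my own, assuming the events are given as subsets of a finite sample space with equally likely outcomes -- that tests the product rule for every sub-collection of size two or more:

Code:
from itertools import combinations
from fractions import Fraction

def mutually_independent(events, omega):
    """True iff P of each intersection equals the product of the P's,
    for every sub-collection of size >= 2; outcomes of omega are
    assumed equally likely."""
    def P(E):
        return Fraction(len(E), len(omega))
    for r in range(2, len(events) + 1):
        for sub in combinations(events, r):
            intersection = set(omega)
            prod = Fraction(1)
            for E in sub:
                intersection &= E
                prod *= P(E)
            if P(intersection) != prod:
                return False
    return True

For four events, this performs exactly the eleven checks listed above.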
12-18-2008 , 07:28 PM
Quote:
Originally Posted by jason1990
Yes: the two most important theorems in probability theory. The law of large numbers requires only pairwise independence, but the central limit theorem requires independence. (There are examples of pairwise independent, identically distributed sequences for which the central limit theorem fails.)
Is there a deep reason for this (beyond the fact that independence makes the Fourier analysis in the CLT work out the way you want it to)?
12-18-2008 , 08:03 PM
Cool thread. Maybe I'll do something like this next year. It would probably be more interesting in about 5 years but who knows if I'll still be posting here.
12-18-2008 , 09:02 PM
Quote:
Originally Posted by jason1990
I have never had a problem with this. Context has always been sufficient for me to understand what I am reading. When writing, I usually will say something like, "Let X be exponentially distributed with mean ..." That solves the problem right away.

The reason there are two "standards" is that both the mean and its reciprocal have very natural interpretations. As you may know, these distributions are intimately connected to the Poisson process. The mean of an exponential is the mean interarrival time of the associated Poisson process; its reciprocal is the rate of arrivals.
Unfortunately some of my professors don't always state the mean and just say something like X is exp(2) on our homework. It's impossible to figure out from context and I usually have to ask for clarification.

I don't get to the Poisson process until next semester, so I wasn't aware of that reasoning; gonna look it up right now for a primer.

Edit: I just realized that must be why one of my professors always uses lambda for the exponential distribution, and he's the only one who's consistent about using it as the reciprocal of the mean.

Quote:
Originally Posted by jason1990
This is not possible. Mutual independence -- more commonly called just "independence" -- implies pairwise independence.


So, for example, if you want to check that A_1, A_2, A_3, and A_4 are independent, it is not enough to check that

P(A_1 ∩ A_2 ∩ A_3 ∩ A_4) = P(A_1)P(A_2)P(A_3)P(A_4).

You must also check that

P(A_1 ∩ A_2 ∩ A_3) = P(A_1)P(A_2)P(A_3),
P(A_1 ∩ A_2 ∩ A_4) = P(A_1)P(A_2)P(A_4),
P(A_1 ∩ A_3 ∩ A_4) = P(A_1)P(A_3)P(A_4),
P(A_2 ∩ A_3 ∩ A_4) = P(A_2)P(A_3)P(A_4),
P(A_1 ∩ A_2) = P(A_1)P(A_2),
P(A_1 ∩ A_3) = P(A_1)P(A_3),
P(A_1 ∩ A_4) = P(A_1)P(A_4),
P(A_2 ∩ A_3) = P(A_2)P(A_3),
P(A_2 ∩ A_4) = P(A_2)P(A_4), and
P(A_3 ∩ A_4) = P(A_3)P(A_4).

As you can see, pairwise independence is built right into the definition of (mutual) independence. The conclusion at the bottom of the page you linked to is incorrect.
That's what I get for being a googletard. Thanks for the clarification.

12-19-2008 , 10:10 AM
Quote:
Originally Posted by vhawk01
Cool thread. Maybe I'll do something like this next year. It would probably be more interesting in about 5 years but who knows if I'll still be posting here.
I can do one in...

God, kill me now.
12-19-2008 , 03:03 PM
Quote:
Originally Posted by blah_blah
Is there a deep reason for this (beyond the fact that independence makes the fourier analysis in the CLT work out the way you want it to)?
Well, the law of large numbers is about convergence to a constant. A random variable is constant if and only if its variance is zero. And the variance of a sum is completely determined by the pairwise joint distributions of the summands.

A more idea-level explanation might be the following. The law of large numbers is a first-order approximation, S_n ≈ nμ. The central limit theorem is a second-order approximation, S_n ≈ nμ + n^{1/2}σZ. Higher order approximations typically require more structure on the object being approximated. For instance, when x is small, f(x) ≈ f(0) requires only that f be continuous at 0, whereas f(x) ≈ f(0) + f'(0)x requires that f be differentiable at 0.
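
To go with the parenthetical above: here is a simulation sketch of one classical construction of a pairwise independent, identically distributed sequence for which the central limit theorem fails (products of independent fair ±1 signs over the nonempty subsets of m indices). The construction is folklore; the code is my own illustration:

Code:
import random

def pairwise_sample(m):
    """N = 2^m - 1 pairwise independent +/-1 variables: the products
    of m independent fair signs over every nonempty subset of indices."""
    x = [random.choice([-1, 1]) for _ in range(m)]
    ys = []
    for mask in range(1, 2 ** m):
        prod = 1
        for i in range(m):
            if (mask >> i) & 1:
                prod *= x[i]
        ys.append(prod)
    return ys

m = 10
N = 2 ** m - 1
norm_sums = [sum(pairwise_sample(m)) / N ** 0.5 for _ in range(2000)]

# LLN behavior: S_N / N is almost always -1/N, i.e. essentially 0.
# CLT failure: S_N / sqrt(N) is about -0.03 on nearly every run, with
# occasional spikes near +32 -- nothing remotely Gaussian.
print(min(norm_sums), max(norm_sums))
print(sum(1 for s in norm_sums if abs(s) < 0.1) / len(norm_sums))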
12-19-2008 , 04:26 PM
Quote:
Originally Posted by jason1990
A more idea-level explanation might be the following. The law of large numbers is a first-order approximation, S_n ≈ nμ. The central limit theorem is a second-order approximation, S_n ≈ nμ + n^{1/2}σZ. Higher order approximations typically require more structure on the object being approximated. For instance, when x is small, f(x) ≈ f(0) requires only that f be continuous at 0, whereas f(x) ≈ f(0) + f'(0)x requires that f be differentiable at 0.
But this additional structure is largely (mostly?) manifested in the additional moments that you require in the central limit theorem as well.

Anyway, it was a bit of a silly question, but thanks for the reply.
12-20-2008 , 02:59 PM
Quote:
Originally Posted by thylacine
So I take it that you're not too enamored with the idea that every time I flip a coin reality splits into two equally real realities?
I just wrote up a little script that generates a random number between 0 and 1 with 1000 decimal digits.

Every time I run this script, am I really splitting the universe into 10^1000 equally real realities?

That would make creating universes a little trivial, don't you think? It's nice in theory, but highly improbable in reality.