Ask a probabilist

11-12-2009 , 08:28 PM
Quote:
Originally Posted by PairTheBoard
Is it the case that when you put the electron in the non time dependent potential the initial value of the wave function, f(x), doesn't matter because under those conditions the wave function evolves to a stationary solution (except for phase) which is the same for all initial values f(x)?


PairTheBoard
I'm not exactly sure what you mean by f(x). Is the solution to the full Schrodinger equation f(x) times a time-dependent component? If that's the case, it does matter what f(x) is, because we need it to compute observables.
11-12-2009 , 09:38 PM
Quote:
Originally Posted by PairTheBoard
"Is it the case that when you put the electron in the non time dependent potential the initial value of the wave function, f(x), doesn't matter because under those conditions the wave function evolves to a stationary solution (except for phase) which is the same for all initial values f(x)?"



Quote:
Originally Posted by Max Raker
I'm not exactly sure what you mean by f(x). Is the solution to the full Schrodinger equation f(x) times a time-dependent component? If that's the case, it does matter what f(x) is, because we need it to compute observables.
I mean f(x) as in jason1990's post #322, the initial value of the wave function at time t=0 when "you put the electron in the non time dependent potential". Does the wave function then evolve to a stationary solution at times t>0? Does that stationary solution depend on f(x)? If it doesn't, then you've answered jason1990's request for an experimental way to produce an actual calculable wave function. If the stationary solution does depend on the initial value of the wave function, f(x), then you haven't answered his question, because without an actual calculable f(x) to start with, you don't get an actual calculable stationary solution by putting the electron in the non-time-dependent potential.


PairTheBoard
11-13-2009 , 01:24 AM
Quote:
Originally Posted by PairTheBoard
"Is it the case that when you put the electron in the non time dependent potential the initial value of the wave function, f(x), doesn't matter because under those conditions the wave function evolves to a stationary solution (except for phase) which is the same for all initial values f(x)?"





I mean f(x) as in jason1990's post #322, the initial value of the wave function at time t=0 when "you put the electron in the non time dependent potential". Does the wave function then evolve to a stationary solution at times t>0? Does that stationary solution depend on f(x)? If it doesn't, then you've answered jason1990's request for an experimental way to produce an actual calculable wave function. If the stationary solution does depend on the initial value of the wave function, f(x), then you haven't answered his question, because without an actual calculable f(x) to start with, you don't get an actual calculable stationary solution by putting the electron in the non-time-dependent potential.


PairTheBoard
If the potential is not time dependent, then H(x,t) can (maybe) be written in a different form which allows me to actually solve it. Obviously I can't solve it in the abstract, where H is an arbitrary function of x and t. Maybe we are still not on the same page....
11-13-2009 , 08:57 AM
Quote:
Originally Posted by Max Raker
If the potential is not time dependent, then H(x,t) can (maybe) be written in a different form which allows me to actually solve it. Obviously I can't solve it in the abstract, where H is an arbitrary function of x and t. Maybe we are still not on the same page....

I don't think we need a nice closed-form solution for the wave function. A numeric solution would be fine for me, and I think for jason1990 as well. What we want is a calculable solution. But you can't calculate a solution for the wave function if you don't have an initial value f(x) for it. Look at jason1990's request again.

Quote:
My understanding was this. If the wave function at time t = 0 is f(x), then the wave function at time t, denoted by Ψ(x,t), can be found by solving the initial value problem

iℏ ∂Ψ/∂t = HΨ,  Ψ(x,0) = f(x),

where H is the Hamiltonian operator. In other words, we need the initial wave function to obtain the wave function at a later time. The Schrodinger equation just tells us how the wave function evolves. In order to calculate something explicit, we need to know what it is evolving from. Is this right? If so, I am just wondering how the experimenters can set things up so as to produce a specific given initial wave function.
We want to be able to "calculate something explicit". I don't think it matters whether we do that numerically or by way of a nice closed-form solution using separable techniques in a special case. The problem is we can't get anything explicit without starting out with something explicit: an explicit f(x).
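To be concrete about the numerical route: once an explicit f(x) is in hand, the evolution really is a routine computation. A minimal sketch using the split-step Fourier method, where the Gaussian initial f and the harmonic potential are my illustrative assumptions (units with ℏ = m = 1):

```python
import numpy as np

# Spatial grid and time step (units with hbar = m = 1).
N, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dt = 0.005

# Assumed ingredients for illustration: a harmonic potential and a Gaussian f(x).
V = 0.5 * x**2
f = np.exp(-(x - 2.0) ** 2)
psi = (f / np.sqrt(np.sum(np.abs(f) ** 2) * dx)).astype(complex)  # normalized

# Split-step Fourier evolution of i dpsi/dt = H psi:
# half step in V, full kinetic step in Fourier space, half step in V.
for _ in range(2000):
    psi *= np.exp(-0.5j * V * dt)
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * V * dt)

# The scheme is unitary, so the norm stays 1; the profile |psi|^2 itself,
# however, depends entirely on the chosen initial f.
print(np.sum(np.abs(psi) ** 2) * dx)
```

The point of the sketch is the last comment: the computation is easy, but everything it produces is conditioned on the f(x) you fed in.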

My thought was that in an explicit non-time-dependent potential the wave function might evolve to a stationary solution which might not depend on the initial condition f(x). But if the solution in this case - stationary or not - does depend on f(x), you cannot get an explicit solution without an explicit initial value f(x). So you are back where you started. You've not provided an experimental setup where the wave function is actually known. Of course, we don't need you to show an actual Hamiltonian for your special case. We understand that in principle you would have one. What you haven't shown is that, even in principle, you have an actual initial value f(x) from which your nice solution can actually be calculated using your actual Hamiltonian.

PairTheBoard
11-13-2009 , 02:05 PM
I think the key point is that we should probably ignore post 322. If H is some completely unknown differential operator, we can't prescribe a set way to solve the equation. What we should do is specify the Hamiltonian, which is always known in an experimental setup, then show that the solution is separable, and then actually solve it.
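Writing out the separation of variables makes the dependence on the initial value explicit. A sketch of the standard expansion (assuming, for simplicity, that H has a discrete spectrum):

```latex
% Time-independent H: solve the eigenvalue problem H \varphi_n = E_n \varphi_n,
% then expand the initial wave function in this basis:
f(x) = \sum_n c_n \varphi_n(x), \qquad c_n = \int \varphi_n^*(x)\, f(x)\, dx.
% Each mode just picks up a phase, so the full solution is
\Psi(x,t) = \sum_n c_n \varphi_n(x)\, e^{-i E_n t/\hbar}.
% This is stationary only when a single c_n is nonzero; in general the
% solution depends on f through every coefficient c_n.
```

Note that separability makes the calculation tractable, but the coefficients c_n still carry the full dependence on f.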
11-13-2009 , 03:14 PM
Quote:
Originally Posted by jason1990
Is this right? If so, I am just wondering how the experimenters can set things up so as to produce a specific given initial wave function.
Probably the most common way of doing something like this is by making some sort of measurement of the system as a form of preselection. A standard textbook example would be something like the Stern-Gerlach experiment, where a thermal beam of particles is collimated and fired into a magnet oriented to have some non-zero gradient along a particular direction. The particles will feel a force on that axis, and whether it is positive or negative depends on the projection of the spin along that axis. If you go far enough away from this magnet, you can resolve the beams that come out and select one of them. Under the standard interpretation of QM, the measurement process leads to a reduction of the wavefunction from a superposition of allowed spin projections to precisely the spin projection that you have measured. Thus, as far as that degree of freedom is concerned, you've prepared the state. (What's going on with the other degrees of freedom generally depends on the Hamiltonian you're considering, and whether what you're measuring is entangled with the rest of the system. In simple or idealized cases you don't necessarily care but for some things it's a big deal.)

I think the ability to do it that cleanly is fairly rare, but projective measurement of that sort is probably the canonical way of preparing a particular system in a given state. There are other tricks that work when you're making the same measurement many times. A fairly common one is to work at temperatures that are low compared to the energy scales involved in your system, when the thermal distributions are very heavily weighted towards occupation of the ground state. From the ground state, you can perform manipulations to move the state into another state of your choosing. These manipulations will fail when your starting state happens to be one of the low-probability starting points that was not in the ground state, but you can take that into account.
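The "heavily weighted towards occupation of the ground state" claim is easy to quantify with Boltzmann weights. A sketch for an assumed harmonic-oscillator spectrum with level spacing ten times k_B T (the numbers are purely illustrative):

```python
import numpy as np

# Assumed example: harmonic-oscillator levels E_n = (n + 1/2) * hbar_omega,
# with hbar_omega = 10 * kT, i.e. temperature low compared to the level spacing.
hbar_omega, kT = 10.0, 1.0
n = np.arange(200)                 # enough levels for the partition sum to converge
E = (n + 0.5) * hbar_omega

weights = np.exp(-E / kT)
p = weights / weights.sum()        # thermal (Boltzmann) occupation probabilities

# Ground-state occupation is 1 - e^{-10}, i.e. about 0.99995.
print(p[0])
```

So at this level spacing, fewer than one system in twenty thousand starts outside the ground state, which is what makes the "prepare the ground state, then manipulate" strategy workable.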
11-14-2009 , 03:02 AM
Quote:
Originally Posted by gumpzilla
Probably the most common way of doing something like this is by making some sort of measurement of the system as a form of preselection...

...There are other tricks that work when you're making the same measurement many times. A fairly common one is to work at temperatures that are low compared to the energy scales involved in your system, when the thermal distributions are very heavily weighted towards occupation of the ground state. From the ground state, you can perform manipulations to move the state into another state of your choosing...
This seems to make sense. So through ground states (what PairTheBoard called stationary solutions), measurement-induced collapses (is "reduction" a better word than "collapse"?), and some other manipulations, we can create our desired wave function.

This seems to really limit the wave functions we can create. In principle, every L² function with unit norm is a wave function, but I imagine most of them cannot be created in this way. Is that right?

Here is a concrete example. Suppose I want to study a free particle in one dimension, so that the Hamiltonian is H = −(ℏ²/2m) ∂²/∂x².
I would like my initial wave function to be
where
How can we set this up experimentally? Is it even possible?

Last edited by jason1990; 11-14-2009 at 03:12 AM.
11-29-2009 , 08:24 PM
Do you have an opinion on financial economics, such as the Black-Scholes equation?
12-04-2009 , 11:59 AM
Quote:
Originally Posted by river_tilt
Do you have an opinion on financial economics, such as the Black-Scholes equation?
I have two opinions.

Learning mathematical finance is the best way to learn the basics of stochastic differential equations.

Quants who lack a coherent understanding of the concept of randomness, no matter how technically and mathematically skilled they may be, should not be trusted with my money.
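As a small illustration of the first opinion: the basic Black-Scholes asset model dS = μS dt + σS dW, simulated with the Euler-Maruyama scheme, is the usual first exercise in stochastic differential equations. A sketch (parameter values are arbitrary, not calibrated to anything):

```python
import numpy as np

# Euler-Maruyama for the Black-Scholes asset model dS = mu*S dt + sigma*S dW.
mu, sigma, S0, T = 0.05, 0.2, 100.0, 1.0
steps, paths = 250, 100_000
dt = T / steps

rng = np.random.default_rng(1)
S = np.full(paths, S0)
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), paths)   # Brownian increments
    S = S + mu * S * dt + sigma * S * dW

# Sanity check: E[S_T] = S0 * exp(mu*T) exactly for geometric Brownian motion,
# so the Monte Carlo mean should land close to it.
print(S.mean(), S0 * np.exp(mu * T))
```

Comparing the simulated mean against the exact expectation is a standard way to check the discretization before trusting it for anything harder.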
12-28-2009 , 12:29 AM
Hi,

Let's say you have 2 groups A and B of 1000 poker players.

The average winrate for group A is equal to that of group B.

You randomly pick 20 players out of the 1000 from group A. The best player of those 20 has a winrate of 2.00 BB/100 over 10k hands with a standard deviation of 20 BB/100. This is player X.

You randomly pick 500 players out of the 1000 from group B. The best player of those 500 has a winrate of 2.01 BB/100 over 10k hands with the same standard deviation of 20 BB/100. This is player Y.


In summary, players X and Y have played the same number of hands and their winrates have the same standard deviation; however, player Y has a slightly higher winrate.

However, I would not say that player Y necessarily rates to be better because I believe it is more likely that player Y has been lucky.

How would you take this into account from a statistical/probabilist point of view?

thank you
01-02-2010 , 02:58 AM
Quote:
Originally Posted by lastcardcharlie
What's the solution to the Two Envelopes Problem?
Buy one big envelope and save on postage.
01-06-2010 , 10:21 AM
If someone were to ask me "what is the use of trying to generalize the Borel-Cantelli lemma?", how would you explain it? I know it has to do with events that are almost sure to happen, but I can't quite see the link. I'm asking because when I give a talk about my thesis, I want to start by saying something about why this research is necessary.
01-08-2010 , 11:15 AM
Quote:
Originally Posted by mastertop101
Let's say you have 2 groups A and B of 1000 poker players.

...

How would you take this into account from a statistical/probabilist point of view?
This is not much different from your previous question, and the answer is the same. This is best dealt with using Bayesian methods.
01-08-2010 , 11:24 AM
Quote:
Originally Posted by Styhn
If someone were to ask me "what is the use of trying to generalize the Borel-Cantelli lemma?", how would you explain it? I know it has to do with events that are almost sure to happen, but I can't quite see the link. I'm asking because when I give a talk about my thesis, I want to start by saying something about why this research is necessary.
Borel-Cantelli deals with the question of whether or not a certain sequence of events will happen infinitely often. Perhaps you could try to find an example from mathematical finance which uses Borel-Cantelli, and begin with that. Or maybe just make one up.
01-11-2010 , 09:59 AM
Quote:
Originally Posted by jason1990
Borel-Cantelli deals with the question of whether or not a certain sequence of events will happen infinitely often. Perhaps you could try to find an example from mathematical finance which uses Borel-Cantelli, and begin with that. Or maybe just make one up.
Well, I've seen the BC-lemma in action before (for instance in the proof of the strong law of large numbers) so I know that it's being used a lot in proving theorems but I'm asking the question on a more fundamental level.

As I now understand it, the lemma gives conditions for when a sequence of events happens infinitely often. So, if we have a sequence {A_n}, then if the sum of the probabilities P(A_n) converges, only finitely many of the A_n occur, and if the sum diverges and the A_n are independent, then the A_n occur infinitely often. Right?

But why is this useful? Why are mathematicians so interested in generalizing this lemma? For my thesis I'm reading several articles which deal with relaxing the independence condition or with finding out what conditions must hold in order to have P(A_n i.o.) ∈ [0,1], etcetera... Is the only reason we want to do this that "we can find more sequences of events for which P(A_n i.o.) is 0 or 1 (or in [0,1])"? Why is this lemma so important to probability theory to warrant so much interest?

I'm sorry if I'm asking silly questions but I want to be able to explain why it is useful to investigate this lemma and why we want to be able to generalize it. I'd like to be able to give a more fundamental answer than "it has applications in this or this area".

Last edited by Styhn; 01-11-2010 at 10:07 AM.
01-13-2010 , 01:30 PM
Quote:
Originally Posted by Styhn
Well, I've seen the BC-lemma in action before (for instance in the proof of the strong law of large numbers) so I know that it's being used a lot in proving theorems but I'm asking the question on a more fundamental level.

As I now understand it, the lemma gives conditions for when a sequence of events happens infinitely often. So, if we have a sequence {A_n}, then if the sum of the probabilities P(A_n) converges, only finitely many of the A_n occur, and if the sum diverges and the A_n are independent, then the A_n occur infinitely often. Right?
Correct.

Quote:
Originally Posted by Styhn
But why is this useful?
One reason is that it is a zero-one law. If I have a sequence of independent events, then even without knowing any probabilities, I can say that P(A_n i.o.) is either 0 or 1. It cannot be something in between. So, for instance, if I can find some way to prove that the probability is positive, then it must be 1.
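The two regimes are easy to see in a quick simulation (the probabilities 1/n² and 1/n and the cutoff of 200,000 events are my illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(1, 200_001)

# Convergent case: P(A_n) = 1/n^2. The probabilities sum to pi^2/6 < infinity,
# so by the first Borel-Cantelli lemma only finitely many A_n occur, a.s.
occurs_conv = rng.random(n.size) < 1.0 / n**2

# Divergent independent case: P(A_n) = 1/n. The sum diverges, so by the
# second Borel-Cantelli lemma the A_n occur infinitely often, a.s.
occurs_div = rng.random(n.size) < 1.0 / n

print("occurrences, P = 1/n^2:", int(occurs_conv.sum()))  # stays small: mean pi^2/6
print("occurrences, P = 1/n:  ", int(occurs_div.sum()))   # keeps growing like log n
```

In the convergent case the occurrences bunch up near the start and then stop; in the divergent case they keep appearing at all scales, which is the "infinitely often" behavior.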

Quote:
Originally Posted by Styhn
Why are mathematicians so interested in generalizing this lemma?
It is a very foundational lemma in probability theory. Many things (including the strong law of large numbers) use it in their proofs. When we expand the scope of BC, we are also hoping to expand the scope of the many things which rely on it.

Quote:
Originally Posted by Styhn
For my thesis I'm reading several articles which deal with relaxing the independence condition or with finding out what conditions must hold in order to have P(A_n i.o.) ∈ [0,1], etcetera... Is the only reason we want to do this that "we can find more sequences of events for which P(A_n i.o.) is 0 or 1 (or in [0,1])"?
Well, I would not phrase it this way, because it makes it sound like we are looking for sequences. Rather, we typically already have the sequence and we want to know its properties.

But more than that, as I wrote above, it is not just about finding more sequences to apply BC to (which is what I mean by expanding the scope of BC). It is about expanding the scope of all those things that rely on BC. Many times, in the course of doing some research, we are trying to prove something and we realize that BC can be used to prove it. But then we must check if BC applies in our particular case. The broader the hypotheses in BC, the better chance we have of being able to use it.

Quote:
Originally Posted by Styhn
Why is this lemma so important to probability theory to warrant so much interest?
Again, I would say it is because of the foundational nature of it that I described above.
01-19-2010 , 07:37 AM
Hi,

Let's say I own a roulette table with a fixed bet of 2500 only: a standard roulette table with normal payouts and one zero.

How much of a bankroll would I need to make sure I have only a 0.1% chance of ruin?

If you could post the formula you used too, it would be greatly appreciated.

Also, what's the chance that with a 1.3 million bankroll I would go broke when the betting is 3000?

Also, is variance increased by having 3 tables instead of 1?

Thanks in advance
01-19-2010 , 07:42 AM
Your help has been very useful Jason, thanks.
01-29-2010 , 01:38 AM
I haven't read through most of the thread, but I'm taking a graduate level class in Stochastic Differential Equations this semester. I haven't had any background in measure-theoretic probability, and although I feel like I understand the material on martingales and Brownian motion (we haven't gone much farther), I feel lost when reading the probability review, even though I have read it quite a few times. Which books would you recommend I read to get a better grasp of probability?
02-06-2010 , 01:03 PM
Hi Jason, great threads happening here!

I would like to ask: does the study of Bayesian probability fall under your profession as a probabilist? And if so, does applying Bayesian probability help with the mathematics of poker?

Also, I am considering undertaking a mathematical science degree, hoping to apply this mathematical approach to areas of gambling: poker, blackjack, sports betting, and slot machines. Do you think knowing the statistical mathematics behind these games could provide benefits toward gambling?

Cheers
Chippiie
03-22-2010 , 12:10 PM
Can an event with probability zero occur?
03-23-2010 , 01:17 PM
Quote:
Originally Posted by lastcardcharlie
Can an event with probability zero occur?
I wrote about this back in 2008. See this post and its follow-ups.

Here are some additional comments. The phrase "an event with probability zero" is well-defined, mathematically. Given a probability space (Ω, F, P), an "event with probability zero" is any element A ∈ F such that P(A) = 0. However, the phrase "A occurs" is not well-defined, mathematically. The events are not occurring. They are just sitting there, in the collection F.

Let us, therefore, try to rigorously define the concept of "occurring". Let us say that an "occurrence assignment" is a map O: F → {0,1} -- if O(A) = 0, then A "did not occur"; and if O(A) = 1, then A "did occur" -- which is logically consistent, i.e.
  1. O(Ø) = 0,
  2. if A ⊂ B and O(A) = 1, then O(B) = 1,
  3. if O(A) = 1 and O(B) = 1, then O(A ∩ B) = 1,
  4. for all A ∈ F, either O(A) = 1 or O(Aᶜ) = 1.
Note that these imply that for all A ∈ F, O(A) = 1 iff O(Aᶜ) = 0.

Occurrence assignments always exist. Given any fixed ω ∈ Ω, we can define O_ω(A) = 1_A(ω). In other words, A occurs if and only if ω ∈ A. But are all occurrence assignments of this form? I think not. I believe it can be proven (using Zorn's lemma) that if G ⊂ F satisfies (i) Ω ∈ G, (ii) A ∈ G and A ⊂ B implies B ∈ G, and (iii) A ∈ G and B ∈ G implies A ∩ B ∈ G, then there exists an occurrence assignment O such that O(A) = 1 for all A ∈ G.

Now that the phrase "A occurs" is rigorously defined, let us reconsider the original question. Suppose we want to believe that events of probability zero cannot occur, and events of probability one must occur. To show that this position is sensible, we must show that given any arbitrary probability space (Ω, F, P), it is always possible to find an occurrence assignment O such that P(A) = 0 implies O(A) = 0, and P(A) = 1 implies O(A) = 1. Can we prove this? Yes, if I am right about the Zorn's lemma idea above. We simply take G = {A ∈ F: P(A) = 1}.

For example, consider Ω = [0,1], F the Borel subsets of Ω, and P the uniform measure. Can we find an occurrence assignment O such that P(A) = 0 implies O(A) = 0, and P(A) = 1 implies O(A) = 1? Well, not if we restrict our attention to occurrence assignments of the form O_ω(A) = 1_A(ω). For each of these, we will have P({ω}) = 0, but O({ω}) = 1. However, Zorn's lemma should guarantee the existence of an occurrence assignment such that O({ω}) = 0 for all ω ∈ [0,1], even if we cannot give an explicit description of it.
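On a finite sample space, the point-based assignments O_ω can be checked directly against conditions 1-4; the Zorn's-lemma construction only matters for infinite spaces like [0,1]. A small sketch (the sample space and the choice ω = 2 are arbitrary):

```python
from itertools import chain, combinations

# A small finite sample space and its full event algebra F = the power set.
omega_space = {1, 2, 3}
F = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(omega_space), r) for r in range(len(omega_space) + 1))]

def O(A, w=2):
    """Point-based occurrence assignment O_w(A) = 1 iff w is in A."""
    return 1 if w in A else 0

# Verify the four conditions of an occurrence assignment.
assert O(frozenset()) == 0                              # 1: empty event never occurs
for A in F:
    for B in F:
        if A <= B and O(A) == 1:
            assert O(B) == 1                            # 2: monotone under inclusion
        if O(A) == 1 and O(B) == 1:
            assert O(A & B) == 1                        # 3: closed under intersection
    assert O(A) == 1 or O(omega_space - A) == 1         # 4: A or its complement occurs
print("all four conditions hold for O_w")
```

The interesting (non-point-based) assignments, by contrast, cannot be exhibited this concretely, which is why the argument above needs Zorn's lemma.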

For some related info, see this Wiki link.
03-23-2010 , 01:34 PM
Quote:
Originally Posted by kidpokes
Hi,

Let's say I own a roulette table with a fixed bet of 2500 only: a standard roulette table with normal payouts and one zero.

How much of a bankroll would I need to make sure I have only a 0.1% chance of ruin?

If you could post the formula you used too, it would be greatly appreciated.

Also, what's the chance that with a 1.3 million bankroll I would go broke when the betting is 3000?

Also, is variance increased by having 3 tables instead of 1?

Thanks in advance
Since my time is limited, I will probably not have a chance to answer this one. I recommend you post this question in the probability forum, where there are several members who are quite qualified to answer this for you.
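For anyone who wants a starting point before asking in the probability forum: under strong simplifying assumptions of my own (flat even-money bets only, house edge from the single zero, and the betting public treated as an infinitely rich opponent), the classical gambler's-ruin formula gives a rough sketch:

```python
import math

# Gambler's-ruin sketch for the house side of a single-zero roulette table.
# Simplifying assumptions: flat even-money bets only, one unit = the table's
# fixed bet, opponent bankroll effectively infinite.
p = 19 / 37          # house win probability on an even-money bet (single zero)
q = 1 - p
bet = 2500.0

# With an edge (p > q), the ruin probability starting from n units is (q/p)^n.
# Solve (q/p)^n <= target for the smallest integer n.
target = 0.001
n = math.ceil(math.log(target) / math.log(q / p))
print(n, n * bet)    # units needed, and the corresponding bankroll
```

This is only a sketch: real roulette allows many bet types with different variance, so the even-money case should be taken as a rough lower bound on the required bankroll, not an answer to the question.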
03-23-2010 , 01:40 PM
Quote:
Originally Posted by Myrmidon7328
I haven't read through most of the thread, but I'm taking a graduate level class in Stochastic Differential Equations this semester. I haven't had any background in measure-theoretic probability, and although I feel like I understand the material on martingales and Brownian motion (we haven't gone much farther), I feel lost when reading the probability review, even though I have read it quite a few times. Which books would you recommend I read to get a better grasp of probability?
You might try this book. But realize that developing a full understanding of the foundations of measure theoretic probability theory is no small task.
03-23-2010 , 01:49 PM
Quote:
Originally Posted by chippiie
I would like to ask: does the study of Bayesian probability fall under your profession as a probabilist? And if so, does applying Bayesian probability help with the mathematics of poker?
There is no such thing as "Bayesian probability". There is only probability, and it is inherently Bayesian. Anything that one might call "non-Bayesian" necessarily falls under the umbrella of statistics.

Quote:
Originally Posted by chippiie
Also, I am considering undertaking a mathematical science degree, hoping to apply this mathematical approach to areas of gambling: poker, blackjack, sports betting, and slot machines. Do you think knowing the statistical mathematics behind these games could provide benefits toward gambling?
You might like this book. I believe it will be released this summer.
