The Official Math/Physics/Whatever Homework questions thread

02-05-2014 , 05:12 AM
I look forward to bombarding this thread with my engineering math questions once the semester starts.
02-05-2014 , 10:02 AM
Quote:
Originally Posted by BruceZ
Check out 73/81 = 0.9012345679012345679...

(no 8).
Was this in response to me?
02-05-2014 , 10:57 AM
Can anyone help me get started on figuring out the asymptotic bounds of this recurrence? We spent most of our time using the Master Method but it doesn't seem to be applicable.

T(n^2) = n*T(n) + n, i.e. T(n) = sqrt(n)*T(sqrt(n)) + sqrt(n)
I tried drawing out the tree but that got kind of confusing.

My intuition is that it's going to be O(n*log(n)), but honestly that's just a guess.
02-05-2014 , 11:14 AM
Substitute n = 2^(2^m), i.e. m = lg(lg(n)),

and let U(m) = T(2^(2^m)).

Now the recurrence becomes

U(m) = 2^(2^(m-1)) * U(m-1) + 2^(2^(m-1)).

I'm not yet sure if an explicit expression can be found for U(m), but the recurrence for it looks much simpler to me.

Edit: now of course let V(m) = U(m) / 2^(2^m);

then dividing through by 2^(2^m) gives

V(m) = V(m-1) + 2^(-2^(m-1)).

Hence V(m) is of course a partial sum of a hyperexponentially convergent series,

V(m) = V(0) + 2^(-1) + 2^(-2) + 2^(-4) + ... + 2^(-2^(m-1)).

Last edited by coon74; 02-05-2014 at 11:43 AM.
02-05-2014 , 11:50 AM
Since that series is bounded (V(m) < V(0) + 1 for all m), we get

T(n) = 2^(2^m) * V(m) = Θ(2^(2^m)) = Θ(n),

or, if you prefer the 'semi-closed' form with the summation sign as mathematicians do,

T(n) = n * ( T(2)/2 + sum_{k=1}^{m} 2^(-2^(k-1)) ), where m = lg(lg(n)).

Edit: oops, the third formula of the previous post should certainly read

U(m) = 2^(2^(m-1)) * (U(m-1) + 1),

but of course you've understood me.

So the key thing was the substitution of variables exploiting the fact that

sqrt(2^(2^m)) = 2^(2^(m-1)).
Last edited by coon74; 02-05-2014 at 12:04 PM.
02-05-2014 , 12:22 PM
Thank you very much, that makes a lot of sense. So actually an algorithm that is represented by this recurrence is Θ(n)?
02-05-2014 , 12:49 PM
Quote:
Originally Posted by Acemanhattan
Was this in response to me?
No, just a curious fact I stumbled across. It's related to this:

1/81 = 0.012345679012345679...
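If you want to see where the pattern comes from: 1/81 = (1/9)^2, and 1/(1-x)^2 = 1 + 2x + 3x^2 + 4x^3 + ..., so with x = 1/10 you get 1/81 = 0.01*(1 + 0.2 + 0.03 + 0.004 + ...) = 0.012345679..., where the 8 gets bumped up to a 9 by the carries coming from the two-digit terms 10, 11, 12, ... further out. And 73/81 = 9/10 + (1/10)*(1/81), which is the same pattern shifted one place with a 9 in front.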
02-05-2014 , 01:38 PM
Quote:
Originally Posted by SenorKeeed
Thank you very much, that makes a lot of sense. So actually an algorithm that is represented by this recurrence is Θ(n)?
Yes (surprisingly). Or do you see a mistake of mine?

Speaking in programmers' language, m is the depth of the recursion tree. So I guess rewriting the sought function as a function of the tree depth is a common method used to solve such recurrences, though I've never studied computer science in such depth.

So whenever a recurrence expresses T(n) in terms of T(sqrt(n)), it's a good idea to switch variables from n to log(log(n)); when T(n) is expressed in terms of T(n/a), switch from n to log(n) (the Master method).
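If you want a quick numerical sanity check of that Θ(n) answer, something like the following works. This is just a minimal Python sketch; the floor-based recurrence T(n) = isqrt(n)*T(isqrt(n)) + isqrt(n) with T(n) = 1 for n <= 2 is my guess at the integer version, not the actual code from the assignment.

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Base case: constant work once the input is tiny.
    if n <= 2:
        return 1
    r = math.isqrt(n)  # floor(sqrt(n))
    # cost of r identical subcalls of size r, plus r local operations
    # (the cache collapses the identical subcalls into one evaluation)
    return r * T(r) + r

for n in (10**3, 10**6, 10**9, 10**12):
    print(n, T(n), T(n) / n)  # T(n)/n should stay bounded if T(n) = Theta(n)

The printed ratios stay bounded (all below 1 here) instead of creeping up with n, which is what Θ(n) predicts.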

Last edited by coon74; 02-05-2014 at 01:48 PM.
02-05-2014 , 01:57 PM
No, it's just surprising, like you say. But looking through the code that the recurrence comes from, I see that if you call the function with n = 1000, the first call does 31 operations and calls itself 31 times. Each of those calls then does 5 operations and calls itself 5 times, those do 2 operations and call themselves twice, and so on, so it very quickly runs out of work to do even though the first branching can be very large. It will actually run faster than an algorithm that only branches into 2 subproblems, like something represented by T(n) = 2T(n/2) + n.
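To put the same observation into a formula (a rough tally, ignoring floors): the top call does sqrt(n) work, the sqrt(n) calls below it do sqrt(n)*n^(1/4) = n^(3/4) work in total, the level after that does n^(3/4)*n^(1/8) = n^(7/8), and in general level k does n^(1 - 1/2^(k+1)) = n*2^(-lg(n)/2^(k+1)) work. Reading the levels from the bottom up, those totals are roughly n/2, n/4, n/16, n/256, ..., so everything together sums to less than n (plus the base cases). It's the same hyperexponentially convergent series as in coon74's post.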
02-05-2014 , 02:08 PM
Quote:
Originally Posted by coon74
So whenever a recurrence expresses T(n) in terms of T(sqrt(n)), it's a good idea to switch variables from n to log(log(n)); when T(n) is expressed in terms of T(n/a), switch from n to log(n) (the Master method).
Ha! Of course the Master method is a change of variables like you describe, but it didn't quite sink in as such when the professor covered it in lecture. He did a brief derivation of it, but seemed to stress memorizing the three cases and applying them more than the derivation itself. Thanks for your help.
02-05-2014 , 02:12 PM
Keeed,

I haven't actually gone through the arithmetic of what c74 just posted, so it's quite possible that this is linear. But you can get better than n*log(n) straight from the Master Theorem.

T(n^2) = nT(n) + n

Let n^2 = 2^m, so m = 2 log_2(n)
So n = (2^m)^(1/2) = 2^(m/2)

Call S(m):=T(2^m)

So we get S(m) = 2^(m/2)S(m/2) + 2^(m/2)

Now let 2^m U(m) = S(m) [this is to get rid of the 2^(m/2)]

2^m U(m) = 2^(m/2) 2^(m/2) U(m/2) + 2^(m/2)

= 2^m U(m/2) + 2^(m/2)

U(m) = U(m/2) + 2^(-m/2)

So, U(m) <= U(m/2) + 1. By the Master Theorem,
U(m) = O(log(m)).

So S(m) = O(2^m log m)

So T(2^m) = O(2^m log m)

So T(n) = O(n log log n)
02-05-2014 , 02:25 PM
Is log (log n) close enough to a constant that n*(log log n) is functionally no different from a linear algorithm?

Last edited by SenorKeeed; 02-05-2014 at 02:31 PM.
02-05-2014 , 02:36 PM
How about this one:

T(n) = T(n/2)+ n*lg(n)

T(n) = n*lg(n)+n/2*lg(n/2)+n/4*lg(n/4)+...

T(n) = n*lg(n) + n/2*(lg(n)-1) + n/4*(lg(n)-2) + ...

T(n) = n*lg(n) + n/2*lg(n) - n/2 + n/4*lg(n) - n/2 + ...

therefore

T(n) is Θ(n*lg(n))
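To justify the 'therefore' with one more line (a rough bound, ignoring floors and the constant-size base case): T(n) = sum_{k>=0} (n/2^k)*(lg(n) - k) <= n*lg(n) * sum_{k>=0} (1/2^k) = 2*n*lg(n), and the k = 0 term alone already gives T(n) >= n*lg(n), so T(n) is Θ(n*lg(n)).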
02-05-2014 , 03:32 PM
Quote:
Originally Posted by SenorKeeed
Is log (log n) close enough to a constant that n*(log log n) is functionally no different from a linear algorithm?
lol

So in terms of the people refereeing your paper, O(n) and O(n log log n) are from different planets.

In practice, the asymptotic runtime of your algorithm doesn't matter nearly as much as the constants that big-O already buries, because whatever you're running is probably going to run on inputs whose size you can bound, so for all intents and purposes the thing is O(1). But that doesn't really help you, does it?
02-05-2014 , 03:38 PM
For instance, I've read a bunch of papers where a function grows as Theta(n*α(n)), where α(n) is the inverse Ackermann function of n.

See: http://en.wikipedia.org/wiki/Ackermann_function#Inverse

Basically α(n) <= 4 for any number we could ever possibly want to evaluate; it's only in the academic sense that n*α(n) counts as superlinear.

That something with that growth exists in "nature" is kind of cool, but utterly useless in terms of algorithms. Would I rather use an algorithm that's O(n*α(n)) or one that's O(n)? Well, if the former is actually n*α(n) and the latter is 2^50 * n, I'll take the former.
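For scale: at n = 10^9, n*α(n) <= 4*10^9, while 2^50 * n is about 1.1*10^24, roughly fourteen orders of magnitude more.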
02-05-2014 , 05:22 PM
Quote:
Originally Posted by Wyman
lol

So in terms of the people refereeing your paper, O(n) and O(n log log n) are from different planets.
And presumably the people grading my homework and exams as well

Quote:
In practice, the asymptotic runtime of your algorithm doesn't matter nearly as much as the constants that big-O already buries, because whatever you're running is probably going to run on inputs whose size you can bound, so for all intents and purposes the thing is O(1). But that doesn't really help you, does it?
Yeah this is what I was asking.
02-05-2014 , 05:49 PM
Quote:
Originally Posted by SenorKeeed
Yeah this is what I was asking.
My recommendation re:algorithms from several years in industry:

Start with something that works. Don't sweat the runtime. You can usually make it faster from there. But honestly, you might not need to.
02-05-2014 , 06:31 PM
Been working on this proof for an hour, don't know how to do it. Can anybody help?

Let PosZ = {z ∈ Z : z > 0}. Consider the function f : PosZ → PosZ defined as follows:
• f(1) = 1
• If z ∈ PosZ and z > 1 then f(z) is the largest integer that divides z but is distinct from z. (For
example f(41) = 1 and f(36) = 18.)
Prove that f is surjective.
02-05-2014 , 06:47 PM
You want to show that every integer n > 1 is, for some z, the largest integer that divides z while being distinct from z. That's true because every such n is exactly that divisor for z = 2n.
02-05-2014 , 07:01 PM
Quote:
Originally Posted by BruceZ
You want to show that every integer n > 1 is, for some z, the largest integer that divides z while being distinct from z. That's true because every such n is exactly that divisor for z = 2n.
Thanks, but it's still a little unclear to me. Like I understand intuitively why this is true, but don't know what to write down.

Like would I just say:
Suppose z is in the pos integers and z > 1.

For all z in pos int, there exists some n that is a pos integer that is the largest distinct integer to divide into z.

So, for every f(z) there is some n.

Therefore, f is surjective.

Does that work?
02-05-2014 , 07:17 PM
Quote:
Originally Posted by bassflow
For all z in pos int, there exists some n that is a pos integer that is the largest distinct integer to divide into z.

So, for every f(z) there is some n.

Therefore, f is surjective.
That's backwards. You want to show that for every n > 1, n= f(z) for some z. So simply note that for all n > 1, n = f(2n).
02-05-2014 , 07:41 PM
Quote:
Originally Posted by Wyman
U(m) = U(m/2) + 2^(-m/2)

So, U(m) <= U(m/2) + 1. By the Master Theorem,
U(m) = O(log(m)).
2^(-m/2) < 1 is a very weak estimate. The series of these exponentials, 2^(-1) + 2^(-2) + 2^(-4) + 2^(-8) + ..., is convergent (so U(m) is bounded), while the series of 1's isn't (the estimate based on it only gives U(m) = O(log(m)), because its partial sums grow linearly with the depth in the recursion tree).

Otherwise you basically ran the same argument as me.
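Spelling the sharper bound out (assuming m is a power of 2 so the repeated halving bottoms out exactly at m = 1): unrolling U(m) = U(m/2) + 2^(-m/2) gives U(m) = U(1) + 2^(-1) + 2^(-2) + 2^(-4) + ... + 2^(-m/2) < U(1) + 1, so U(m) = Θ(1), S(m) = 2^m * U(m) = Θ(2^m), and T(n) = Θ(n), matching the answer upthread.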
02-05-2014 , 07:46 PM
Quote:
Originally Posted by coon74
2^{-m/2}<1 is a very weak estimate. The series of these exponentials - 2^{-1}+2^{-2}+2^{-4}+2^{-8}... - is convergent (so U(m) is bounded), while the series of 1's isn't (the estimate based on it is U(m)=O(log(m)) because its partial sums grow linearly with the depth in the recursion tree).

Otherwise you basically ran the same argument as me.
Yeah, I figured, and obviously that estimate is weak. Just wanted a quick and dirty transform into something I could bang at with the Master Thm.
02-05-2014 , 07:47 PM
Quote:
Originally Posted by BruceZ
That's backwards. You want to show that for every n > 1, n= f(z) for some z. So simply note that for all n > 1, n = f(2n).
What about the situation where f(41) = 1? In that case n does not equal f(2n), right?
02-05-2014 , 08:04 PM
Quote:
Originally Posted by bassflow
What about the situation where f(41) = 1? In that case n does not equal f(2n), right?
You already know 1 is mapped to, so we only need to show n = f(2n) for all n > 1.
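In case it helps to see it written out, a proof along these lines should be enough (my wording, not the only way to phrase it):

Let n ∈ PosZ. If n = 1, then f(1) = 1 = n. If n > 1, take z = 2n. Every divisor d of 2n with d ≠ 2n satisfies 2n/d >= 2, i.e. d <= n, and n itself divides 2n with n ≠ 2n; so n is the largest divisor of 2n distinct from 2n, i.e. f(2n) = n. Hence every n ∈ PosZ equals f(z) for some z ∈ PosZ, so f is surjective.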