Unable to understand the complexity of this recurrence

I'm learning a little bit about the Master Theorem, and I'm trying to figure out the running time of an algorithm that solves a problem of size n with a recursive solution of 2 subproblems of size n-1, combining the solutions in constant time.
So the formula is:

    T(N) = 2T(N - 1) + O(1)
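For concreteness, the recursion I have in mind looks roughly like this (just a toy Python sketch of my own; solve is a placeholder name, not a real algorithm):

    def solve(n):
        """Toy shape of the algorithm: two subproblems of size n-1, O(1) combine."""
        if n <= 1:
            return 1                     # base case, constant work
        left = solve(n - 1)              # first subproblem of size n-1
        right = solve(n - 1)             # second subproblem of size n-1
        return left + right              # combine the solutions in constant time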


But I'm not sure how to state the conditions of the Master Theorem here. I mean, we don't have a T(N/b) term, so would b in the Master Theorem formulas be b = N/(N-1) in this case? If so, then obviously a > b^k with k = 0, and the bound would be O(N^z), where z = log of 2 to base N/(N-1). How can I work that out, assuming I'm right so far?



4 answers


ah, enough with hints. the solution is actually pretty simple. z-transform both sides, group the terms, and then inverse z-transform to get the solution.

first consider the problem as

    x[n] = a x[n-1] + c

      

apply the z-transform to both sides (there are some technicalities regarding the ROC, but let's ignore them this time)

    X(z) = (a X(z) / z) + (c z / (z-1))

      

solve for X(z) to get

    X(z) = c z^2 / [(z - 1) * (z-a)]

      

now note that this formula can be rewritten as follows:

    X(z) = r z / (z-1) + s z / (z-a)

      

where r = c / (1-a) and s = - ac / (1-a)
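(as a quick sanity check on the partial-fraction step, not part of the derivation: plug in arbitrarily chosen values for a and c and a few test points z away from the poles, and the two forms of X(z) agree)

    # sanity check of the partial-fraction decomposition with arbitrary sample values
    a, c = 3.0, 5.0
    r = c / (1 - a)
    s = -a * c / (1 - a)
    for z in (1.7, 2.4, -0.9):                     # arbitrary test points, avoiding the poles z = 1 and z = a
        lhs = c * z**2 / ((z - 1) * (z - a))
        rhs = r * z / (z - 1) + s * z / (z - a)
        print(abs(lhs - rhs) < 1e-9)               # True, True, True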

In addition, note that

    X(z) = P(z) + Q(z)

      

where P(z) = rz / (z-1) = r / (1 - (1/z)) and Q(z) = sz / (z-a) = s / (1 - a (1/z))

apply the inverse z-transform to get:

    p[n] = r u[n] 

      

and

    q[n] = s exp(log(a)n) u[n]

      

where log denotes the natural log, and u[n] is the unit step (Heaviside) function (i.e. u[n] = 1 for n >= 0 and u[n] = 0 for n < 0).

Finally, by linearity of the z-transform:

    x[n] = (r + s exp(log(a) n))u[n]

      

where r and s are as defined above.



so let's go back to the original problem,

    T(n) = a T(n-1) + c

      

then

    T(n) = (c/(a-1))(-1+a exp(log(a) n))u[n]

      

where exp(x) = e^x, log(x) is the natural logarithm of x, and u[n] is the unit step function.

What does this tell you?

Unless I've made a mistake, T grows exponentially in n. It is indeed an exponentially growing function under the reasonable assumption that a > 1. The exponent is determined by a (more precisely, by the natural log of a).

As a further simplification, note that exp(log(a) n) = exp(log(a))^n = a^n:

    T(n) = (c/(a-1))(-1+a^(n+1))u[n]

      

so O(a^n) in big-O notation.
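(if you want to convince yourself of this closed form numerically, here is a small sketch of my own; a and c are arbitrary example values, and note that the derivation above implies T(0) = c)

    # compare the closed form against direct iteration of T(n) = a*T(n-1) + c
    a, c = 2, 3                                  # arbitrary example values with a > 1
    T = c                                        # the closed form gives T(0) = c
    for n in range(10):
        closed = (c / (a - 1)) * (-1 + a ** (n + 1))
        assert T == closed, (n, T, closed)
        T = a * T + c                            # step the recurrence forward
    print("closed form matches direct iteration")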

And now here's the easy way:

put T(0) = 1

    T(n) = a T(n-1) + c

    T(1) = a * T(0) + c = a + c
    T(2) = a * T(1) + c = a*a + a * c + c
    T(3) = a * T(2) + c = a*a*a + a * a * c + a * c + c
    ....

      

Note that this produces a pattern, namely:

    T(n) = a^n + c sum(a^j, j=0,...,n-1)

      

put c = 1 gives

    T(n) = sum(a^j, j=0,...,n)

      

this is a geometric series, which evaluates to:

    T(n) = (1-a^(n+1))/(1-a)
         = (1/(1-a)) - (a/(1-a)) a^n
         = (1/(a-1))(-1 + a^(n+1))

      

for n >= 0.

Note that this formula is the same as the one above for c = 1 using the z-transform method. Again, O(a^n).
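(as one more cross-check, not part of the argument above: if you happen to have SymPy installed, its rsolve solver should reproduce the same closed form for the a = 2, c = 1 case)

    from sympy import Function, rsolve, symbols

    n = symbols('n', integer=True)
    T = Function('T')

    # T(n+1) = 2*T(n) + 1 with T(0) = 1
    print(rsolve(T(n + 1) - 2 * T(n) - 1, T(n), {T(0): 1}))   # expected: 2*2**n - 1, i.e. 2^(n+1) - 1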



Don't even think about the Master Theorem. You can only use the Master Theorem when the recurrence is given in the general form T(n) = aT(n/b) + f(n) with b > 1.



Instead, think of it this way. Each call decreases the size of the input, n, by 1 and spawns two further recursive calls, and on every call the non-recursive cost is a constant O(1). The input size keeps decreasing until it reaches 1, so the recursion tree has depth n, and doubling at every level gives on the order of 2^n calls in total. Adding up the constant overhead over all of those calls, it takes O(2^n).
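To see this concretely, here is a small sketch of my own (solve and the counter are hypothetical names) that instruments a toy version of the recursion and counts the constant-cost calls:

    call_count = 0

    def solve(n):
        """Toy recursion: two subproblems of size n-1 and O(1) work per call."""
        global call_count
        call_count += 1
        if n <= 1:
            return
        solve(n - 1)
        solve(n - 1)

    for n in range(1, 8):
        call_count = 0
        solve(n)
        print(n, call_count)        # 1, 3, 7, 15, ... = 2^n - 1 calls, so O(2^n) total work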



It looks like you cannot formulate this problem in terms of the Master Theorem.

A good start is to draw a recursion tree to understand the pattern and then prove it using the substitution method. You can also expand the formula a couple of times and see where it leads.
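For instance, in the recursion tree for T(n) = 2T(n-1) + O(1), level i holds 2^i nodes, each doing constant work, and the tree reaches depth n-1 (sizes n, n-1, ..., 1). A tiny sketch of that level-by-level sum (my own illustration):

    n = 10
    total = sum(2 ** i for i in range(n))    # level i contributes 2^i constant-cost nodes, i = 0 .. n-1
    print(total)                             # 2**n - 1, i.e. Theta(2^n) total work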

See also this question, which solves two subproblems instead of one: Time constraint for recursive algorithm with constant combination time



Maybe you could think of it this way.

when

    n = 1, T(1) = 1
    n = 2, T(2) = 2
    n = 3, T(3) = 4
    n = 4, T(4) = 8
    n = 5, T(5) = 16

It is easy to see that this is the geometric series 1 + 2 + 4 + 8 + 16 + ..., whose sum is (first term) * (ratio^n - 1)/(ratio - 1). For this series that gives

    1 * (2^n - 1)/(2 - 1) = 2^n - 1.

      

The dominant term here is 2^n, so the function belongs to Theta(2^n). You can check this by taking lim(n->inf) [2^n / (2^n - 1)], which is a positive constant. Therefore, the function belongs to Big Theta(2^n).
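A quick numeric version of that limit check (my own sketch, taking the constant combine cost as 1): iterate the full recurrence and watch the ratio 2^n / T(n) settle.

    T = 1                                     # T(1) = 1
    for n in range(2, 25):
        T = 2 * T + 1                         # T(n) = 2*T(n-1) + 1
        print(n, (2 ** n) / T)                # the ratio tends to 1, a positive constant, so T(n) is Theta(2^n)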







