Why doesn't induction always work with Big-O?
I am currently watching the OCW video lectures on algorithms, and in the second lecture I got stuck at the point where the professor proves the following expression by induction:
n = O(1)

Proof:
Base case: 1 = O(1).
Inductive step: suppose n-1 = O(1). Then
1 + (n-1) = O(1) + O(1)
=> n = O(1).
Hence it is proved.
But the professor said we cannot do induction over Big-O like this, and that the above proof is wrong. Why?
You can use induction together with Big-O, but this particular induction is wrong when n is a variable instead of a constant.

When you write 1 = O(1), the interpretation is:

if f_1(n) = 1 for all n, then f_1(n) = O(1),

which is correct.

Now, if we define f_a(n) = a for all n, then by induction on a we can prove that f_a(n) = O(1) for all a, which is also correct.

But if n is not a constant but a variable, you cannot get n = O(1) as a result of your induction, since there is no k such that f_k(n) = n for all n.
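To make the distinction concrete, here is a small numeric sketch (my own illustration, not a proof — Big-O is a statement about all sufficiently large n): every constant function f_a stays under a fixed bound, while the identity function f(n) = n eventually exceeds any candidate constant.

```python
# Sketch: a constant function f_a(n) = a is bounded by a single constant C,
# but f(n) = n outgrows every candidate constant C.

def f_a(a, n):
    return a  # the constant function f_a(n) = a, ignores n

def f(n):
    return n  # the variable function f(n) = n

C = 5  # an arbitrary candidate Big-O constant
ns = range(1, 1000)

# f_5 is bounded by C = 5 for every n we try:
print(all(f_a(5, n) <= C for n in ns))   # True

# but no fixed C bounds f(n) = n for all n:
print(all(f(n) <= C for n in ns))        # False
```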
Mathematical induction is a tool for proving statements parameterized by integers. In this case it cannot be applied, because the statement you are proving changes halfway through: you take the constant 1 for the base case, and then you state the inductive hypothesis about the function f(n) = n-1.

What the proof correctly shows is that any constant n is in O(1).
Basically, mathematical induction is a tool to take a well-controlled step forward from a known position. The step does not have to be n → n+1 or n → 2⋅n, but most of the time it is. And most of the time the problem is that the well-controlled step has constant size, so in Big-O it simply gets lost.
But in some cases you can use Big-O just to simplify a complexity calculation, for example

(n + a + 1)(n - 2⋅b)(n^2 - 3) = n^4 + O(n^3) (for constants a and b)
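That expansion can be checked numerically (my own illustration; the constants a = 3, b = 7 are arbitrary): if the remainder really is O(n^3), then the difference divided by n^3 must stay bounded as n grows.

```python
# Check numerically that (n+a+1)(n-2b)(n^2-3) - n^4 grows like O(n^3):
# the difference divided by n^3 should stay bounded as n grows.
a, b = 3, 7  # arbitrary fixed constants

def p(n):
    return (n + a + 1) * (n - 2 * b) * (n * n - 3)

ratios = [(p(n) - n ** 4) / n ** 3 for n in (10, 100, 1000, 10 ** 6)]
print(ratios)  # settles near a constant (the n^3 coefficient, a + 1 - 2*b)
```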
There are also cases where you can read the complexity off the induction step.
Example:

Find the complexity of f(n) = f(n-1) + f(n-2) with n ≥ 2 and f(0) = f(1) = 1.
The step n → n+1 gives

2⋅f(n-1) ≤ f(n+1) ≤ 2⋅f(n),

so the value essentially doubles with each +1 step. You need about n steps of +1 to get from 1 up to n, which means the initial value doubles about n times. The start (f(1)) is in O(1), so the result is in 2^n ⋅ O(1) = O(2^n).
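A small check (my own illustration, not from the answer) that the doubling sandwich and the resulting O(2^n) bound hold for this recurrence:

```python
# f(n) = f(n-1) + f(n-2) with f(0) = f(1) = 1, computed iteratively.
def f(n):
    a, b = 1, 1  # f(0), f(1)
    for _ in range(n - 1):
        a, b = b, a + b
    return b if n >= 1 else a

for n in range(2, 20):
    # the sandwich from the induction step: 2*f(n-1) <= f(n+1) <= 2*f(n)
    assert 2 * f(n - 1) <= f(n + 1) <= 2 * f(n)
    # the resulting O(2^n) bound
    assert f(n) <= 2 ** n
print("bounds hold for n = 2..19")
```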
Finally, there are cases where you can prove complexity by induction.

Example:

Find the complexity of n!.
Attempt: 1 is in O(1). Suppose (n-1)! is in O(1). Then

n! = n⋅(n-1)! = n⋅O(1) = O(n).

Hell, n! cannot be in O(1), because then (n+1)! would also have to be in O(1), but it isn't. Next try: suppose (n-1)! is in O(n^k). Then

n! = n⋅(n-1)! = n⋅O(n^k) = O(n^(k+1)).
Damn, bad luck again. If n! were in O(n^k), the induction step would require (n+1)! to be in

O((n+1)^k) = O(n^k + k⋅n^(k-1) + ... [smaller monomials]) = O(n^k),

but it isn't. For k = n, however, it would work: if n! is in O(n^n), then (n+1)! is supposed to be in O((n+1)^(n+1)), and it actually is — but that is not a tight upper bound. To prove a better bound, you need better methods than induction.
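As a sanity check on the k = n case (my own illustration): n! really is bounded by n^n, and the growing ratio n^n / n! shows how loose that bound is.

```python
from math import factorial

# The induction lands on n! = O(n^n): check n! <= n^n directly.
for n in range(1, 15):
    assert factorial(n) <= n ** n

# The ratio n^n / n! grows quickly, so the bound is far from tight.
print(15 ** 15 / factorial(15))
```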
So this is more of an example against proving complexity by induction.
In general, I would not say that no complexity can be proved by induction, but in most cases there are better ways to prove it.