Karatsuba Algorithm vs. the "*" Operator?

Today I heard about the Karatsuba algorithm, a fast multiplication algorithm. I'm curious: what does "fast" mean in this context?

Typically we treat multiplication with the * operator as O(1) when calculating the time complexity of a piece of code. If that is always the case, why do we have an asymptotically faster algorithm? Or should * not be considered O(1) when executed on very large numbers, where Karatsuba's algorithm might be useful?

And at the machine level, compilers already do some optimization on *, for example using bit shifts to multiply a number by 2^n. Is Karatsuba's algorithm actually used by * in real implementations?



3 answers


Classic (schoolbook) multiplication is O(n^2), where n is the number of digits in the numbers being multiplied.

When measuring normal computer code, you are dealing with fixed-size (usually 32-bit or 64-bit) numbers, so multiplication becomes O(1) (since the size does not change).



Once you start working with BigIntegers, this becomes very important.
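A quick way to see this from Python, where the built-in int is an arbitrary-precision (BigInteger-style) type, is to time a single * at growing operand sizes. A rough sketch (exact timings will vary by machine):

    import timeit

    # Rough demonstration that big-integer multiplication is not O(1):
    # the cost of a single * grows with the number of digits.
    for digits in (1_000, 10_000, 100_000):
        x = int("7" * digits)
        y = int("3" * digits)
        secs = timeit.timeit(lambda: x * y, number=100)
        print(f"{digits:>7} digits: {secs:.4f}s for 100 multiplications")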



This algorithm applies to long numbers, i.e. numbers longer than the CPU's register size. From Wikipedia:



Karatsuba's algorithm is a fast multiplication algorithm. It was invented by Anatoly Alekseevich Karatsuba in 1960 and published in 1962. It reduces the multiplication of two n-digit numbers to at most 3 n^(log_2 3) ≈ 3 n^1.585 single-digit multiplications in general (and exactly n^(log_2 3) when n is a power of 2).
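To make the quoted bound concrete, here is a minimal, purely illustrative Python sketch of the three-multiplication recursion. It splits the operands on bits rather than decimal digits (my choice, for convenience); real bignum libraries work on arrays of machine-word digits, not on Python ints, which are already arbitrary precision:

    import random

    def karatsuba(x: int, y: int) -> int:
        """Illustrative Karatsuba multiplication for non-negative ints."""
        # Base case: operands small enough to multiply directly.
        if x < 10 or y < 10:
            return x * y
        # Split both operands around the middle bit position.
        half = max(x.bit_length(), y.bit_length()) // 2
        hi_x, lo_x = x >> half, x & ((1 << half) - 1)
        hi_y, lo_y = y >> half, y & ((1 << half) - 1)
        # Three recursive products instead of the naive four.
        a = karatsuba(hi_x, hi_y)                # high * high
        b = karatsuba(lo_x, lo_y)                # low * low
        c = karatsuba(hi_x + lo_x, hi_y + lo_y)  # cross terms via one product
        # Recombine: x*y = a*2^(2*half) + (c - a - b)*2^half + b
        return (a << (2 * half)) + ((c - a - b) << half) + b

    # Sanity check against the built-in operator.
    for _ in range(100):
        x, y = random.getrandbits(256), random.getrandbits(256)
        assert karatsuba(x, y) == x * y

The key step is that c = (hi_x + lo_x) * (hi_y + lo_y) lets you recover both cross terms as c - a - b, trading one multiplication for a few additions, which is where the improvement from n^2 to n^1.585 comes from.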



The problem with this question is that the * operator, as you call it, is not an algorithm. It is left entirely up to some combination of the compiler (or interpreter) and the CPU to determine how the answer is computed.

I'm not sure where the claim that the built-in multiplication is O(1) comes from, but that can't be true unless there are some constraints on the input (for example, that n must be small enough to fit in a CPU register) or some lookup table is used.

As SLaks says, when the multiplication happens in the CPU (on most processors), the numbers are always the same size: 32 or 64 bits. Even though the number 1 can be represented by a single bit, it still takes up 32 bits of space (in most implementations).

Big-O notation simply tells you that there is some input size beyond which a more efficient algorithm (in Big-O terms) will be faster than a less efficient one.

Bit shifting cannot be applied to arbitrary multiplications, so while it is useful in practice, it can only be compared algorithmically with other methods that likewise apply only to multiplication by powers of two.
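For example (Python syntax, though the same identities hold for C-style integers; x is just an arbitrary value):

    x = 37

    # Multiplying by a power of two is a single shift:
    assert x << 3 == x * 8                  # x * 2**3

    # An arbitrary constant needs shifts plus adds,
    # e.g. 10 = 8 + 2, so x * 10 == (x << 3) + (x << 1):
    assert (x << 3) + (x << 1) == x * 10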

Most languages have a special type to handle numbers larger than 32 bits, and it is even possible that their * multiplies them using the Karatsuba algorithm.
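If I remember correctly, that is exactly what happens in practice: CPython's built-in int switches from schoolbook multiplication to Karatsuba once the operands exceed an internal digit cutoff, and Java's BigInteger and the GMP library likewise move to Karatsuba (and to asymptotically even faster methods such as Toom-Cook and FFT-based multiplication) for sufficiently large operands.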
