JavaScript 32-bit integer multiplication

I just want to multiply two ints with the overflow semantics of a 32-bit integer.

Some articles suggest this is easy to do with bitwise operations, e.g.:

function add32bit( a, b )
{
    return (a+b)|0;
}


and likewise:

function mul32bit( a, b )
{
    return (a*b)|0;
}


But it doesn't work.

On a 32-bit integer system that allows integer overflow, this calculates:

12312311 * 1231231211 = -236858179


but with JavaScript:

(12312311 * 1231231211)|0 = -236858180


Is there a way to completely eliminate this discrepancy?



1 answer


Solution (?)

Following Richie Frame's hint, I tried to use the Karatsuba algorithm to write a function that multiplies two integers and returns the result as a signed 32-bit integer.

// base algorithm from: https://en.wikipedia.org/wiki/Karatsuba_algorithm
// modified to discard values unnecessary for 32-bit operations

function int32_mul(x, y)
{
    // We use B = 2 and m = 16, because it makes sure that we only do multiplications with
    // 16 bits per factor, so that each result has less than 32 bits in total (which fits well
    // into a double).
    var bm = 1 << 16;

    var x0 = x % bm;
    var x1 = (x - x0) / bm;

    var y0 = y % bm;
    var y1 = (y - y0) / bm;

    // z1 + z0. We can discard z2 completely, as it only contains a value outside our relevant bounds.
    // Both z1 and z0 are 32 bits, but we shift out the top 16 bits of z1.
    return (((x1 * y0 + x0 * y1) << 16) + (x0 * y0)) | 0;
}
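
The underlying identity is x * y = x1*y1 * 2^32 + (x1*y0 + x0*y1) * 2^16 + x0*y0; modulo 2^32 the first term vanishes completely, which is why z2 can be discarded. As a quick sanity check (not part of the original answer): modern engines (ES2015+) provide the built-in Math.imul, which performs the same truncating 32-bit multiplication for inputs that already fit in 32 bits, so it can be used to cross-check the function:

console.log(int32_mul(12312311, 1231231211)); // -236858179
console.log(Math.imul(12312311, 1231231211)); // -236858179, built-in equivalent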


I've also done some tests to make sure this works, but I'm not a professional in this field, so I can't guarantee that it works for all combinations. TBH, my brain is still a little confused by this problem.

var tests = [
    [ 0, 0, 0 ],
    [ 0, 1, 0 ],
    [ 1, 1 << 8, 256 ],
    [ 1, 1 << 16, 65536 ],
    [ 1, 1 << 24, 16777216 ],
    [ 1, 0x7fffffff, 2147483647 ],
    [ 1, 0x80000000, -2147483648 ],
    [ 1, 0xffffffff, -1 ],
    [ 2, 1 << 8, 512 ],
    [ 2, 1 << 16, 131072 ],
    [ 2, 1 << 24, 33554432 ],
    [ 2, 0x80000000, 0 ],
    [ 2, 0x7fffffff, -2 ],
    [ 2, 0xffffffff, -2 ],
    [ 256, 256, 65536 ],
    [ 65536, 65536, 0 ],
    [ -2, 2, -4 ],
    [ -65536, 65536, 0 ],
    [ -2, -2, 4 ],
    [ -2147483648, 1, -2147483648 ],
    [ -2147483649, 1, 2147483647 ],
    [ 12312311, 1231231211, -236858179 ],
];

for (var i = 0; i < tests.length; ++i)
{
    var test = tests[i];
    if (int32_mul(test[0], test[1]) !== test[2])
    { console.log(test[0], '*', test[1], '!==', test[2]); }
}


If someone with more expertise finds a problem with this, please leave a comment and/or tell us about further tests.

Also, I cannot say exactly what range of values this function accepts. I'm sure it will work for all signed 32-bit integers, but it may also work for unsigned 32-bit integers, or even for any combination of integers that does not exceed 68 bits in total (the 16 bits we split off + the 52 bits of Number.MAX_SAFE_INTEGER, see https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER), both negative and positive, since the sign has its own bit in doubles.
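
As a rough sketch of how one might verify that bound today (this uses BigInt and BigInt.asIntN, which are ES2020 features and not part of the original answer):

// Hypothetical cross-check: BigInt.asIntN(32, v) truncates the exact product
// to a signed 32-bit value, like a C (int32_t) cast.
function int32_mul_exact(x, y)
{
    return Number(BigInt.asIntN(32, BigInt(x) * BigInt(y)));
}

// A 37-bit by 17-bit multiplication, well below the 68-bit bound:
console.log(int32_mul(123456789012, 98765) === int32_mul_exact(123456789012, 98765)); // true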

Explanation of the problem

Here is an explanation of why JavaScript gives us the wrong result. I ran some simple tests in C (play with it at http://ideone.com/ELSgD0 ):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    printf("%d\n", (int32_t)(12312311L * 1231231211L));
    printf("%llu\n", (uint64_t)12312311L * 1231231211L);
    printf("%.32f\n", 12312311. * 1231231211.);
    printf("%.8f\n", 15159301582738621.);
    return 0;
}


Output:

-236858179
15159301582738621
15159301582738620.00000000000000000000000000000000
15159301582738620.00000000


JavaScript uses doubles for all calculations (even integer ones). From the tests above, I conclude that a double cannot represent the value 15159301582738621.
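
You can check this directly in JavaScript (Number.isSafeInteger is ES2015+):

// The literal already rounds to the nearest representable double:
console.log(15159301582738621 === 15159301582738620); // true
console.log(Number.isSafeInteger(15159301582738621)); // false, above 2^53 - 1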

This has to do with how floating-point data types such as float (32 bit), double (64 bit), and quad (128 bit) work. Basically, they don't store exact values, but two values x and y in the form x * 2^y, which allows them to store both very large and very small values. We (humans) use a similar notation for very large and small numbers, e.g. 1e9 for one billion or 1e-5 for 0.00001, where e is a shorthand for * 10^.

Now say you are calculating 10001 * 10001, which is obviously 100020001, but imagine you could only store 5 digits and an exponent. To store the result, you have to round it and use, for example, 1.0002e8 or 10002e4. As you can see, you have to drop the 1 at the end. The problem with doubles in your example is very similar, just on a larger scale, and with base 2 instead of 10.

The last two printf statements prove that doubles cannot hold the value 15159301582738621, which is the exact result of your example. If you try it with 12312311 * 1231231212 (a 2 at the end of the second number instead of a 1), you will see that not every number in this range is unrepresentable as a double, since that calculation works fine even with your original function.
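
You can verify that case yourself: the exact product 12312311 * 1231231212 = 15159301595050932 is even and below 2^54, so the double holds it exactly, and the plain | 0 truncation already gives the correct 32-bit result:

console.log(12312311 * 1231231212);       // 15159301595050932, no rounding this time
console.log((12312311 * 1231231212) | 0); // -224545868, same as int32_mul gives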
