Different computation results in JavaScript than in Java

I think I have a problem with large numbers.

Calculation in Java:

int n = 4451 + 554 * 57;
n = n << 13 ^ n;
System.out.println(n * (n * n * 15731 + 789221) + 1376312589);
=> 587046333


In JavaScript:

var n = 4451 + 554 * 57;
n = n << 13 ^ n;
console.log(n * (n * n * 15731 + 789221) + 1376312589);
=> 4.043454188561781e+29


What is the problem with the JavaScript version and how can I fix it so that the JavaScript output is identical to the Java output?

EDIT: I tried using https://github.com/jtobey/javascript-bignum, but the result is 0:

var test = new BigInteger(295120061).multiply( new BigInteger(295120061) 
                                      .multiply(new BigInteger(295120061)) 
                                      .multiply(new BigInteger(15731)) 
                                      .add(new BigInteger(789221)))
                                      .add(new BigInteger(1376312589));


=> test = 0


3 answers


The problem, as @ajb pointed out, is that JavaScript is loosely typed and does double-precision floating-point arithmetic, whereas here we need strict 32-bit integer arithmetic.

For the multiplications there is a function for exactly this purpose, Math.imul. It is not yet supported in Internet Explorer, but its documentation page contains a replacement function that mimics imul in older browsers by multiplying the top and bottom halves of the numbers separately.
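
For reference, that replacement looks roughly like this (a sketch modelled on the MDN polyfill):

Math.imul = Math.imul || function (a, b) {
    var aHi = (a >>> 16) & 0xffff;  // top 16 bits of each operand
    var aLo = a & 0xffff;           // bottom 16 bits
    var bHi = (b >>> 16) & 0xffff;
    var bLo = b & 0xffff;
    // The low halves multiply exactly; the cross terms only affect the top
    // 16 bits, and the final |0 wraps the result to a signed 32-bit integer.
    return ((aLo * bLo) + (((aHi * bLo + aLo * bHi) << 16) >>> 0)) | 0;
};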

For the additions, we can OR the operands with 0. This works because any bitwise operation coerces a JavaScript number to a 32-bit integer, and ORing with 0 doesn't otherwise change the value:

Math.iadd = function(a, b) { return ((a|0) + (b|0))|0; }
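
For example (values chosen so the sum overflows; the exact numbers are just an illustration):

console.log(Math.iadd(2000000000, 2000000000)); // -294967296, wraps like a Java int
console.log(2000000000 + 2000000000);           // 4000000000, plain JS addition doesn't wrap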

Now use it:

var n = Math.iadd(4451, Math.imul(554, 57));
n = n << 13 ^ n;
console.log(Math.iadd(Math.imul(n, Math.iadd(Math.imul(Math.imul(n, n), 15731), 789221)), 1376312589));


A bit long and messy, but it works. The output is 587046333, identical to Java.


JavaScript has no integer arithmetic; all numbers are stored as 64-bit floats (a double in Java). When JavaScript sees a bitwise operator such as << or ^, it temporarily converts the operands to 32-bit integers to perform the operation, but then converts the result back to a 64-bit float. So the final multiplication is done as a floating-point operation in JavaScript, while in Java it is still an integer operation. This Java code does the same thing as your JavaScript (I've just tested it and the result is the same):

int n = 4451 + 554 * 57;
n = n << 13 ^ n;
double x = n;
System.out.println(x * (x * x * 15731 + 789221) + 1376312589);
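
To see exactly where the two versions diverge, here is the JavaScript side annotated (a sketch; the intermediate value 295120061 is the same n that appears in the BigInteger attempt in the question):

var n = 4451 + 554 * 57;  // 36029, exactly representable as a double
n = n << 13 ^ n;          // bitwise ops use 32-bit ints: n is now 295120061, same as Java
// The multiplications below are plain double arithmetic, so the result grows
// to about 4.04e+29 instead of wrapping around like Java's int.
console.log(n * (n * n * 15731 + 789221) + 1376312589);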

If you want JavaScript code that behaves like the Java version, you need multiplication and addition that overflow exactly the way Java's do. That is, the result of every operation must be wrapped into the range -2^31 to 2^31-1. There is really no reliable way to do this with JavaScript's native arithmetic: even if both operands have only 31 significant bits, their product can have 62 significant bits, while the JavaScript number type only has about 52 bits of mantissa, so some bits are lost. There might be a JavaScript library that lets you do this kind of exact integer arithmetic, but I'm not a JavaScript expert, so I don't know which one to recommend. Maybe someone else can chime in.
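
To make the bit loss concrete, here is a small sketch (the operand values are mine, purely for illustration):

// Two operands with about 31 significant bits each.
var a = 1234567891;
var b = 1876543219;
// The exact product needs roughly 61 bits, more than a double can represent
// exactly, so the low-order digits of this result are rounded away.
console.log(a * b);
// Math.imul keeps only the low 32 bits and wraps them to a signed value,
// which is what Java's int multiplication would produce for these operands.
console.log(Math.imul(a, b));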


Use https://github.com/iriscouch/bigdecimal.js:

var n = 4451 + 554 * 57;
n = n << 13 ^ n;
var test = new BigDecimal(n).multiply(new BigDecimal(n)
                            .multiply(new BigDecimal(n))
                            .multiply(new BigDecimal(15731))
                            .add(new BigDecimal(789221)))
                            .add(new BigDecimal(1376312589));
test.intValue();


This outputs the correct result: the full-precision value is computed first, and intValue() then truncates it to the low-order 32 bits, the way a Java int overflows.
