Loss of floating point precision in JavaScript arrays as they get really big?

I am relatively unfamiliar with JavaScript and was recently told that a JavaScript array has a length property of type Number, which is automatically kept in sync with the number of elements in the array.

However, I was also told that internally JavaScript uses a 64-bit floating point representation for Number values, and we know that floating point arithmetic cannot accurately represent all integers within its range.

So my question is: what happens with large arrays where length + 1 cannot accurately represent the next integer in the sequence?
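To illustrate the concern: once integers exceed Number.MAX_SAFE_INTEGER (2^53 − 1), consecutive values can no longer be distinguished as 64-bit doubles. A quick sketch:

```javascript
// Past 2^53 - 1, adjacent integers collapse to the same double:
const max = Number.MAX_SAFE_INTEGER;        // 9007199254740991
console.log(max + 1 === max + 2);           // true: both round to 9007199254740992
console.log(Number.isSafeInteger(max + 1)); // false
```

So if an array's length could ever reach this range, incrementing it would silently stop working.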


2 answers


According to the spec, the maximum length of an array is 4,294,967,295. Number.MAX_SAFE_INTEGER is 9,007,199,254,740,991, which is far larger, so you don't have to worry: the engine won't let you get anywhere near the unsafe range. For example:

new Array(4294967296); // RangeError: Invalid array length


The relevant part of the spec:



  a. Let newLen be ToUint32(Desc.[[Value]]).
  b. If newLen is not equal to ToNumber(Desc.[[Value]]), throw a RangeError exception.

So, given our example length of 4294967296:

var length = 4294967296;
var uint32length = length >>> 0; // ToUint32: convert to an unsigned 32-bit integer
uint32length === 0;              // 4294967296 wraps around to 0
length !== uint32length;         // therefore a RangeError is thrown




The maximum length of an array, per the ECMA-262 5th Edition specification, is bound to an unsigned 32-bit integer due to the ToUint32 abstract operation, so the longest possible array can have 2^32 − 1 = 4,294,967,295 (about 4.29 billion) elements.
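This is why the precision worry never materializes: every value an array length can take sits far below the 2^53 − 1 exact-integer limit of a 64-bit double, so length and length + 1 are always represented exactly. A quick check:

```javascript
// The largest possible length, 2^32 - 1, is well inside the
// safe-integer range (2^53 - 1), so length arithmetic stays exact.
console.log(Number.isSafeInteger(4294967295)); // true
console.log(4294967295 + 1 === 4294967296);    // true, exact
console.log(2 ** 32 - 1 === 4294967295);       // true
```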



So I think @RGraham is right.
