Declaring 'long' over 'int' in Java

In Java, if an int is enough for a field, but I use a long for some reason, will it cost me more memory? Does it depend on the types?

+2




7 replies


In Java, yes: a long is 8 bytes and an int is 4 bytes. This Java tutorial covers the primitive data types. Whether that matters depends on the number of allocations: if you allocate, say, five million of these variables, the difference becomes more than negligible. For typical use, however, it doesn't matter.

(Then again, you're using Java; memory is being allocated all over the place anyway.)
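To make the "five million variables" point concrete, here is a rough back-of-the-envelope sketch (the class name and counts are my own, purely illustrative):

    public class ArrayMemoryDemo {
        public static void main(String[] args) {
            final int COUNT = 5_000_000;
            // Payload sizes only, ignoring array object headers:
            long intBytes  = (long) COUNT * Integer.BYTES; // 4 bytes each, ~20 MB
            long longBytes = (long) COUNT * Long.BYTES;    // 8 bytes each, ~40 MB
            System.out.println("int[] payload:  " + intBytes  + " bytes");
            System.out.println("long[] payload: " + longBytes + " bytes");
        }
    }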



There is a performance consideration in native languages: a 32-bit value fits in a single register on a 32-bit architecture, but a 64-bit value does not; on 64-bit architectures it obviously does. I'm not sure what optimizations the JVM applies to its integers, but the same may be true at runtime. There are alignment issues to worry about as well; you see those more with shorts and bytes.

The best practice is to use the type you need. If the value will never exceed 2^31 - 1, don't use long.
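As an example of a value that does exceed that range (my own illustration, not from the answer): millisecond timestamps overflow an int, which is why System.currentTimeMillis() returns a long.

    public class RangeDemo {
        public static void main(String[] args) {
            long now = System.currentTimeMillis(); // already exceeds 2^31 - 1
            System.out.println("millis now: " + now);
            System.out.println("int max:    " + Integer.MAX_VALUE); // 2147483647
            // Narrowing to int silently discards the high bits:
            int truncated = (int) now;
            System.out.println("truncated:  " + truncated); // meaningless value
        }
    }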

+8




Assuming from your previous questions that you are asking about Java: the int data type is four bytes and the long data type is eight bytes.

However, whether the difference in size really translates into a difference in memory usage depends on the situation.



If it is a local variable, it is allocated on the stack. Since the stack is already allocated, using more stack space will not use more memory unless you run out of stack.

If it is a member of a class, it depends on how the members are aligned. Members are not always packed compactly in memory; padding may be inserted so that some members start at an aligned address. If, for example, a class has a byte followed by an int, there may be three padding bytes between them so that the int starts at the next address divisible by four.
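One way to observe that layout (a sketch of my own; the JOL library is an assumption here, not something the answer mentions, and it is not part of the JDK) is OpenJDK's Java Object Layout tool:

    // Requires the org.openjdk.jol:jol-core dependency.
    import org.openjdk.jol.info.ClassLayout;

    public class PaddingDemo {
        static class Padded {
            byte b; // 1 byte
            int i;  // 4 bytes; the JVM may insert padding before this field
        }

        public static void main(String[] args) {
            // Prints field offsets, sizes, and any alignment gaps the JVM inserted.
            System.out.println(ClassLayout.parseClass(Padded.class).toPrintable());
        }
    }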

+4




int is 32-bit and long is 64-bit, so a long takes up twice as much memory (which is still quite small for most applications).

+2




long in Java is 64 bits and int is 32 bits, so obviously a long uses more memory (8 bytes instead of 4 bytes).

+2




ifwdev guessed correctly. Java defines int as a 32-bit signed integer and long as a 64-bit signed integer. If you declare a variable as long, then yes, it will take twice as much memory as the same variable declared as int. In practice, int is usually the default numeric type, even for values that would fit in smaller types such as short. Unless you have a specific reason to store values greater than 2^31 - 1, use int.

+1




... would it cost me more memory?

You would use twice as much memory.

Before worrying about whether you are using more memory or not, you should profile.

To use 1 extra megabyte of memory by using long rather than int, you would have to declare 262,144 long variables (or use them indirectly throughout your program).

So if for some reason you declare one or two long variables where an int would do, you are using 4 or 8 bytes more memory. Not much to worry about (your application probably has worse memory problems than that).
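The arithmetic behind the 262,144 figure (a quick sketch of mine):

    public class ExtraMemoryMath {
        public static void main(String[] args) {
            // Each long costs 4 bytes more than an int:
            int extraBytes = Long.BYTES - Integer.BYTES;  // 8 - 4 = 4
            int oneMegabyte = 1024 * 1024;                // 1,048,576 bytes
            System.out.println(oneMegabyte / extraBytes); // 262144 variables
        }
    }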

Taken from the Java Tutorial, here are the definitions of int and long:

int: The int data type is a 32-bit signed two's complement integer. It has a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647 (inclusive). For integral values, this data type is generally the default choice unless there is a reason (like the above) to choose something else. This data type will most likely be large enough for the numbers your program will use, but if you need a wider range of values, use long instead.

long: The long data type is a 64-bit signed two's complement integer. It has a minimum value of -9,223,372,036,854,775,808 and a maximum value of 9,223,372,036,854,775,807 (inclusive). Use this data type when you need a range of values wider than those provided by int.
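Those bounds are exposed directly as constants on the wrapper classes; a trivial sketch:

    public class RangeConstants {
        public static void main(String[] args) {
            // The bounds quoted above, straight from the standard library:
            System.out.println(Integer.MIN_VALUE); // -2147483648
            System.out.println(Integer.MAX_VALUE); //  2147483647
            System.out.println(Long.MIN_VALUE);    // -9223372036854775808
            System.out.println(Long.MAX_VALUE);    //  9223372036854775807
        }
    }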

But remember: "Premature optimization is the root of all evil," according to Donald Knuth (although in my opinion, copy/paste is the root of all evil).

+1




If you know your data will fit in a specific data type (say, short int in C), the only reason to use a larger one is performance, right? And if that is your goal, however marginal the gain, as a general rule of thumb you want to use a size that matches your architecture's word size (so on a typical 32-bit target system, use a 32-bit type).

If you are targeting more than one system, you can use the data type that matches the most common one.

0








