Can Java be faster than C++ in any situation?
Unlike some people, I'm going to be reasonable and answer the question I know the OP wanted to ask, even though it could have been worded better.
Yes, a typical Java implementation can be faster than a typical C++ implementation on real workloads. Although Java's design as a safe, VM-based language imposes several disadvantages, it also enables some optimizations.
First, because Java has a very abstract memory-management scheme that does not allow manipulation of raw pointers or untyped blocks of memory, it can use a moving (compacting) garbage collector. In C++, uninitialized memory regions, unions, etc. rule this out. So, when the GC doesn't need to run, an allocation in Java can be just a pointer bump. There is no practical way to do this in C++, since a fully relocating GC cannot be implemented in a language that supports the kinds of low-level manipulation C++ does.
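As a rough illustration of cheap allocation (not a rigorous benchmark; class and method names here are my own invention), here is a sketch in which each `new` is typically just a thread-local pointer bump in HotSpot:

```java
// Rough sketch only: millions of short-lived allocations, each of which
// HotSpot usually serves from a thread-local allocation buffer (TLAB) as a
// pointer increment. (A sufficiently clever JIT may even scalar-replace the
// allocations entirely via escape analysis.)
public class AllocDemo {
    static final class Node {
        final int value;
        Node(int value) { this.value = value; }
    }

    public static void main(String[] args) {
        long sum = 0;
        long start = System.nanoTime();
        for (int i = 0; i < 10_000_000; i++) {
            sum += new Node(i).value; // allocation: roughly a pointer bump
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("sum=" + sum + " elapsedMs=" + elapsedMs);
    }
}
```

The equivalent loop in C++ using heap `new`/`delete` would go through a general-purpose allocator on each iteration, which is the point the paragraph above is making.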
Also, the virtual machine in a typical Java implementation is a double-edged sword relative to a typical statically compiled C++ implementation. The VM has some overhead, but it also enables additional optimizations. For example, suppose you have a virtual call like this (I apologize for any wrong syntax, as I don't use Java regularly):
abstract class Foo {
    void stuff() {}
}

class Foo1 extends Foo {
    void stuff() {
        System.out.println("Foo1");
    }
}

class Foo2 extends Foo {
    void stuff() {
        System.out.println("Foo2");
    }
}

// Somewhere in program initialization:
Foo foo;
if (args[0].equals("Foo1"))
    foo = new Foo1();
else
    foo = new Foo2();

for (int i = 0; i < 1000000; i++)
    foo.stuff();
In C++, the virtual dispatch on foo.stuff() would typically have to happen on all 1,000,000 iterations. The Java VM can replace it with a direct call (and even inline it) after observing at runtime that foo can only ever refer to, say, a Foo1 and never to an object of class Foo2.
Languages don't have speed. A good Java compiler can generate more efficient code than a bad C++ compiler, and vice versa. So which one is the "fastest"?
Execution speed depends on several factors:
- The compiler. Different compilers generate different code from the same input.
- The source. Some operations are cheap in one language but expensive in another. (E.g., allocating memory with `new` is much slower in C++ than in a managed language like C# or Java.)
- The system it runs on. CPUs differ in how quickly they can execute a given piece of code. What if your Java compiler generates code that runs well on a Core 2, but my C++ compiler generates code that runs well on a Phenom?
But the language itself doesn't really matter. Each language provides certain guarantees that can prevent certain optimizations. But a clever compiler can determine that those guarantees can be safely circumvented in a particular case and perform the optimization anyway. (For example, a Java compiler will often try to eliminate the bounds checking the language requires.) So it depends on the code you are testing (Java code ported to C++ will probably perform better in the Java version, and vice versa), as well as how you compile it and where you run it.
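To make the bounds-checking point concrete, here is a hedged sketch (names are my own) of the canonical loop shape in which HotSpot's JIT can prove every array access is in range and drop the per-access check:

```java
// Sketch: in the canonical loop form 0 <= i < a.length, the JIT can prove
// each a[i] is in bounds and eliminate the check, so the language-mandated
// safety guarantee costs nothing in this hot loop.
public class SumArray {
    static long sum(int[] a) {
        long total = 0;
        for (int i = 0; i < a.length; i++) {
            total += a[i]; // bounds check provably redundant -> eliminated
        }
        return total;
    }

    public static void main(String[] args) {
        int[] a = new int[1000];
        for (int i = 0; i < a.length; i++) a[i] = i;
        System.out.println(sum(a)); // prints 499500
    }
}
```

If the loop bound were some unrelated variable instead of `a.length`, the compiler could not make this proof and the checks would stay.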
As Martin York says, this is a silly question. It's impossible to answer. Of course Java can be faster than C++ in some situations. For example, if you write really good Java code and very bad C++ code. Or if you use a lousy C++ compiler. Or if any of a million other things happen to favor the Java version.
Say it with me: languages have no speed.
Which is faster, English or French? Both are just ways of attaching meaning to sounds, or squiggles on paper.
The same is true of programming languages. A programming language is simply a way of associating semantics with sequences of characters in one or more files.
I think there are many similar questions you can look through. See for example: C++ performance versus Java/C#
You can see a C++ vs. Java language comparison in which several of the Java cases are faster. I've also added the tables for Java and C++ below, and you may spot some other cases where Java beats C++.
Lower numbers are better. For example, the binary-trees benchmark is faster in Java: 2.89 secs versus 4.47 secs in C++.
http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=javaxx&lang2=javaxx
Java 6 -Xms64m measurements
Program              Time (secs)   Memory (KB)   Size (B)   N
binary-trees 2.89 39,436 603 16
chameneos-redux 17.01 17,316 1429 6,000,000
fannkuch 11.08 8,996 555 11
fasta 21.40 9,300 1240 25,000,000
k-nucleotide 15.57 83,308 1052 1,000,000
mandelbrot 3.25 11,136 665 3,000
meteor-contest 0.80 14,196 5177 2,098
n-body 14.84 11,652 1424 20,000,000
nsieve 2.22 15,748 296 9
nsieve-bits 5.04 13,468 523 11
partial-sums 9.14 8,600 474 2,500,000
pidigits 1.92 9,112 938 2,500
recursive 6.82 12,180 427 11
regex-dna 7.60 75,192 921 500,000
reverse-complement 1.13 61,124 592 2,500,000
spectral-norm 24.00 12,268 514 5,500
startup 17.23 112 200
sum-file 4.11 10,084 226 21,000
thread-ring 134.99 27,628 530 10,000,000
http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=gpp&lang2=gpp
C++ GNU g++ measurements
Program              Time (secs)   Memory (KB)   Size (B)   N
binary-trees 4.47 6,996 541 16
chameneos-redux 16.69 1,004 1729 6,000,000
fannkuch 7.78 844 554 11
fasta 18.72 788 1248 25,000,000
k-nucleotide 7.46 9,304 1380 1,000,000
mandelbrot 3.02 896 1097 3,000
meteor-contest 0.15 792 5311 2,098
n-body 14.62 932 1705 20,000,000
nsieve 2.08 5,764 313 9
nsieve-bits 3.86 3,316 494 11
partial-sums 4.05 852 531 2,500,000
pidigits 1.66 1,052 652 2,500
recursive 2.40 1,008 566 11
regex-dna 5.58 12,704 1588 500,000
reverse-complement 0.54 13,288 810 2,500,000
spectral-norm 23.84 900 442 5,500
startup 0.86 108 200
sum-file 6.47 852 260 21,000
thread-ring 101.28 2,960 626 10,000,000
If I believed what some Java evangelists say, I would answer yes to the first question; i.e., a Java program "can" be faster. Not always, though... These evangelists point to better memory management and avoiding the overhead of `new`, etc.
To answer the second question: release mode for a C/C++ compiler means compiling without debugging information. The latter stores extra data, such as the line numbers corresponding to the generated code (making debugging and error reporting easier), and debug builds also typically disable optimization (which can reorder the code and make that line information inaccurate).
Thus, a release build is generally faster and smaller. And it might crash where the debug build runs fine! (Rare, but I've seen it.)
Release mode means you are building your program in the form you want to publish. Usually the compiler will try to make the executable as fast and as small as possible. That often means stripping the symbol information needed to produce a crash backtrace, and using a higher optimization level. The latter slows down compilation, which is why it isn't used for debug builds.
Most of the discussions compare applications compiled from source. However, if you have an old library, e.g. from a third party, you may find that the old Java library is much faster, since the JIT can still use the latest instructions and techniques, whereas an old library already compiled to native code cannot.
I think the most interesting thing you can say on this topic is that a modern JVM performs speculative optimizations: optimizations based on guesses about the stability of certain values, with the ability to deoptimize the code if a value changes later.
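Here is a hedged sketch (all names are my own invention) of the kind of situation a speculative JIT exploits. While only one implementation has ever been observed at a call site, HotSpot can speculate on the receiver type and inline the call; if a second implementation shows up later, the compiled code is invalidated and recompiled:

```java
// Sketch of speculative optimization. While only Impl1 has been seen at the
// call site in drive(), the JIT can treat the virtual call as direct and
// inline run(). When Impl2 first arrives, that speculation is invalidated
// (deoptimized) and the loop is recompiled with a more general dispatch.
public class SpeculationDemo {
    interface Task { int run(int x); }
    static final class Impl1 implements Task { public int run(int x) { return x + 1; } }
    static final class Impl2 implements Task { public int run(int x) { return x * 2; } }

    static int drive(Task t, int n) {
        int acc = 0;
        for (int i = 0; i < n; i++) acc += t.run(i); // monomorphic -> speculated
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(drive(new Impl1(), 1000)); // only Impl1 observed so far
        System.out.println(drive(new Impl2(), 1000)); // new type -> speculation revised
    }
}
```

The deoptimization itself is invisible at the source level; this only shows the call-site shape the VM bets on.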
OTOH, there is nothing inherent to the languages in question here. I saw a research project a couple of years ago that managed to squeeze an extra 20% out of some natively compiled code (possibly even C++) by running it in an emulator that emulated the same processor as the hardware but performed VM-style optimizations on the code.