Speed up compiled programs using runtime information, like the JVM does?
Java programs can outperform statically compiled languages such as C on specific tasks. This is because the JVM has runtime information available and JIT-compiles as needed (I think).
(example: http://benchmarksgame.alioth.debian.org/u32/performance.php?test=chameneosredux )
Is there something like this for a compiled language? (I'm primarily interested in C)
After compiling the source, the developer runs the program under a simulated typical workload. The tool collects profiling information during the run, and the program is then recompiled based on this information.
gcc has -fprofile-arcs
from the manpage:
-fprofile-arcs
    Add code so that program flow arcs are instrumented. During execution the program records how many times each branch and call is executed and how many times it is taken or returns. When the compiled program exits it saves this data to a file called auxname.gcda for each source file. The data may be used for profile-directed optimizations (-fbranch-probabilities), or for test coverage analysis (-ftest-coverage).
Yes, there are tools like this; I think it is known as "profile-guided optimization".
There are a number of such optimizations. Notable ones are based on branch prediction as well as on code cache usage. Many modern processors have a code cache, perhaps a second-level code cache or a second-level unified data and code cache, and perhaps a third-level cache.
The simplest technique is to gather all of your most frequently used functions into one place in the executable, say at the beginning. A more involved one moves rarely taken branches out to a completely different part of the file.
Some instruction set architectures, such as PowerPC, have branch prediction hint bits in their machine code. Profile-guided optimization tries to set these bits profitably.
Apple used this technique for Macintosh programs - for Classic Mac OS - with a tool called "MrPlus". I think GCC can do it too. I expect LLVM can, but I don't know how.