Dynamic optimization of running programs
I was told that running programs generate probability data that is used to optimize frequently executed instructions.
For example, if the condition of an if-then-else structure evaluates to TRUE 8 out of 10 times, then the next time that if-then-else statement is executed, there is an 80% chance the condition will be TRUE. The hardware uses these statistics to speculatively load the appropriate data into registers, assuming the result will be TRUE. The goal is to speed things up: if the condition actually evaluates to TRUE, the data is already in the right registers; if it evaluates to FALSE, the other data is loaded and simply overwrites what was deemed "more likely".
I find it difficult to understand how the cost of computing these probabilities does not outweigh the savings from the decisions it is trying to improve. Is this something that really happens? Does it happen at the hardware level? Is there a name for it? I can't find any information on the topic.
Yes, this is done. It is called branch prediction. The cost is nontrivial, but it is handled by dedicated hardware, so the cost is almost entirely in the additional circuitry: it does not add to the time it takes to execute the code.
This means the real cost is an opportunity cost: could a processor that spent that same amount of circuitry on something else get more out of it? My best guess is that the answer is usually no; branch predictors are generally very efficient in terms of return on investment.
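To make the idea concrete, here is a toy software model of one classic hardware scheme, the 2-bit saturating-counter predictor. This is a simplified sketch for illustration (real CPUs use far more elaborate, history-based predictors); the function names and the 80%-taken branch pattern are made up to mirror the example in the question.

```python
def predict_and_update(counter, taken):
    """One step of a 2-bit saturating counter.

    States 0-1 predict not-taken, states 2-3 predict taken.
    The counter moves toward the actual outcome but saturates
    at 0 and 3, so a single surprise does not flip a strong bias.
    """
    prediction = counter >= 2
    new_counter = min(counter + 1, 3) if taken else max(counter - 1, 0)
    return prediction, new_counter

def simulate(outcomes):
    """Run the predictor over a sequence of branch outcomes
    and return the fraction of correct predictions."""
    counter = 0  # start in "strongly not-taken"
    correct = 0
    for taken in outcomes:
        prediction, counter = predict_and_update(counter, taken)
        if prediction == taken:
            correct += 1
    return correct / len(outcomes)

# A branch taken 8 out of every 10 times, as in the question:
outcomes = ([True] * 8 + [False] * 2) * 100
print(f"accuracy: {simulate(outcomes):.2f}")
```

Note that the predictor settles into correctly guessing the seven "taken" outcomes it has already been trained on in each cycle, mispredicting only around the transitions; the two-bit hysteresis is the design choice that keeps one FALSE from immediately flipping a strongly-TRUE bias.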