Why put the most likely condition first in an `if-else` statement?

I've heard this often enough that I finally want to question it: a lot of people say that in an if-else statement you should put the condition that is most likely to be true first. So if `condition` is likely to be false most of the time, you should test `!condition` in the `if`, otherwise test `condition`. Some contrived illustration of what I mean:

if (likely) {
    // do this
} else {
    // do that
}

if (!unlikely) {
    // do this
} else {
    // do that
}


Some say this is more efficient, perhaps due to branch prediction or some other optimization; I never asked when the topic came up. But as far as I can tell, there will always be exactly one test, and either path can involve a jump ...

So my question is: is there a compelling reason (where a “compelling reason” might be even a tiny efficiency gain) why the condition that is most likely to be true should come first in the if-else statement?





1 answer


There are two reasons why the order might matter:

  1. The branches of a multi-way if / else if / else if / else statement have different probabilities because of a known data distribution.

Example: sorting apples, most of which are yellow (90%), but some are orange (5%) and some are other colors (5%).

 if (apple.isYellow) {...}
 else if (apple.isOrange) {...}
 else {...}


vs.

 if (!apple.isYellow && !apple.isOrange) {...}
 else if (apple.isOrange) {...}
 else {...}


In the first example, 90% of the apples are handled with just one if check and 10% need two, while in the second example only 5% are handled with one check and 95% need two.



So if you know there is a meaningful difference between the chances of the individual branches being taken, it can be worth moving the most likely one into the first condition (a small counting sketch follows below).

Note that your sample is a single if with only two branches, so this point does not really apply to it.
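As a rough illustration (this sketch is not part of the original answer; the Apple struct, the 90/5/5 simulation and the counter names are invented for the example), here is a small C++ program that counts how many if-conditions each ordering evaluates:

    // Counts how many if-conditions each ordering evaluates for a
    // 90% yellow / 5% orange / 5% other distribution (assumed, not measured).
    #include <cstdio>
    #include <cstdlib>

    struct Apple { bool isYellow; bool isOrange; };

    int main() {
        const int N = 100000;
        long checksLikelyFirst = 0;   // if (isYellow) ... else if (isOrange) ... else ...
        long checksUnlikelyFirst = 0; // if (!isYellow && !isOrange) ... else if (isOrange) ... else ...

        srand(42);
        for (int i = 0; i < N; ++i) {
            int r = rand() % 100;                 // 0-89 yellow, 90-94 orange, 95-99 other
            Apple a = { r < 90, r >= 90 && r < 95 };

            // Ordering 1: most likely branch first.
            ++checksLikelyFirst;                  // evaluate isYellow
            if (!a.isYellow) ++checksLikelyFirst; // fell through: evaluate isOrange

            // Ordering 2: least likely branch first.
            ++checksUnlikelyFirst;                // evaluate (!isYellow && !isOrange)
            if (!(!a.isYellow && !a.isOrange)) ++checksUnlikelyFirst; // fell through: evaluate isOrange
        }

        printf("likely branch first:   %.2f condition checks per apple\n",
               (double)checksLikelyFirst / N);    // roughly 1.10
        printf("unlikely branch first: %.2f condition checks per apple\n",
               (double)checksUnlikelyFirst / N);  // roughly 1.95
    }

With the assumed distribution it prints roughly 1.10 checks per apple for the likely-first ordering and roughly 1.95 for the unlikely-first one, which matches the counts above.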

  2. Low-level CPU optimization that can favor one of the branches (this is also more about consecutive incoming data repeatedly taking the same branch).

Earlier / simpler processors may have to flush the instruction decoding pipeline when a conditional branch is taken, compared to the case where execution simply continues sequentially. That may be the origin of this advice.

Example (made-up assembly):

    IF R1 > R2 JUMP ElseBranch
    ADD R2, R3 -> R4  // CPU decodes this command WHILE executing previous IF
    ....
    JUMP EndCondition
ElseBranch:
    ADD 4, R3 -> R4   // CPU will have to decode this command AFTER the jump
                      // and drop the results of decoding  ADD R2, R3 -> R4
    ....
EndCondition:
    ....


Modern processors shouldn't have this problem, as they will decode instructions for both branches. They even have branch prediction logic: if a condition has mostly resolved one way, the processor assumes it will resolve the same way again and starts executing the code in that branch before the check is complete. As far as I know, on current processors it doesn't matter whether that is the first branch or the alternate branch. See Why is it faster to process a sorted array than an unsorted array? for more information on this; a small sketch of the effect follows below.
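Here is a hedged C++ sketch of that effect, in the spirit of the linked question (none of it is from the original answer; the array size, the 128 threshold and the timing loop are arbitrary, and an optimizing compiler may replace the branch with a conditional move, which would hide the difference):

    // Times the same branchy loop on unsorted vs. sorted data: the branch is the
    // same either way, only its predictability changes.
    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    static long long sumAbove128(const std::vector<int>& data) {
        long long sum = 0;
        for (int v : data) {
            if (v >= 128) sum += v;   // the conditional branch being measured
        }
        return sum;
    }

    int main() {
        std::vector<int> data(1 << 20);
        for (int& v : data) v = rand() % 256;

        auto measure = [&](const char* label) {
            auto start = std::chrono::steady_clock::now();
            long long sum = 0;
            for (int i = 0; i < 100; ++i) sum += sumAbove128(data);
            auto end = std::chrono::steady_clock::now();
            double ms = std::chrono::duration<double, std::milli>(end - start).count();
            printf("%s: %.1f ms (sum=%lld)\n", label, ms, sum);
        };

        measure("unsorted (branch hard to predict)");
        std::sort(data.begin(), data.end());
        measure("sorted   (branch easy to predict)");
    }

Whatever the absolute numbers turn out to be, the cost comes from how predictable the branch is, not from which branch happens to be written first.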









