C# compiler optimization

What is the compiler doing to optimize my code?

I have these functions:

public void x1() {
  x++;
  x++;
}
public void x2() {
  x += 2;
}
public void x3() {
  x = x + 2;
}
public void y3() {
  x = x * x + x * x;
}


And this is what I see with ILSpy after compiling in Release mode:

// test1.Something
public void x1()
{
    this.x++;
    this.x++;
}

// test1.Something
public void x2()
{
    this.x += 2;
}
// test1.Something
public void x3()
{ 
    this.x += 2;
}
// test1.Something
public void y3()
{
    this.x = this.x * this.x + this.x * this.x;
}


x2 and x3 look fine. But why is x1 not optimized to the same result? Is there any reason to keep it as two increments? And why isn't y3 compiled to x = 2 * (x * x)? Shouldn't that be faster than x * x + x * x?

This leads to the question: what does the C# compiler optimize, if not such simple things?

When I read articles, I hear a lot about writing code: write it readably and the compiler will do the rest. But in this case, the compiler does almost nothing.


Adding another example:

public void x1() {
  int a = 1;
  int b = 1;
  int c = 1;
  x = a + b + c;
}


and using ILSpy:

// test1.Something
public void x1()
{
    int a = 1;
    int b = 1;
    int c = 1;
    this.x = a + b + c;
}


Why isn't this.x = 3?


2 answers


The compiler cannot do this optimization without assuming that the variable x is not accessed concurrently while your method is running. Otherwise, the optimization could change the behavior of your method in an observable way.

Consider a situation where the object referenced by this is accessed simultaneously from two threads. Thread A sets x to zero multiple times; thread B calls x1() multiple times.

If the compiler optimized x1 to be equivalent to x2, the only observable states for x after the experiment would be 0 and 2:

  • If A finishes before B, you get 2.
  • If B finishes before A, you get 0.

If A pre-empts B in the middle, you still get 2.

However, the original version of x1 allows three outcomes: x might be 0, 1 or 2.

  • If A finishes before B, you get 2.
  • If B finishes before A, you get 0.
  • If B is pre-empted after its first increment, A then finishes, and B completes afterwards, you get 1.
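The third interleaving above can be replayed sequentially. This is a minimal sketch (the class and field names are hypothetical, and the thread switches are simulated by the statement order) showing the final state x == 1, which the x += 2 version could never produce:

```csharp
using System;

// Sequential replay of the interleaving described above:
// thread B runs the first increment of x1(), thread A pre-empts
// and zeroes x, then B finishes with the second increment.
class Race
{
    static int x = 0;

    static void Main()
    {
        x = x + 1;              // B: first half of x1()
        x = 0;                  // A: pre-empts and resets x
        x = x + 1;              // B: second half of x1()
        Console.WriteLine(x);   // 1 -- a final state x2() could never produce
    }
}
```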


x1 and x2 are NOT the same:

If x were a public field available in a multithreaded environment, another thread could mutate x between the two increments, which would not be possible with the code in x2.

For y3, if + and/or * were overloaded for the type of x, then x * x + x * x may differ from 2 * (x * x).
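To make that concrete, here is a hypothetical struct (a "tropical" number, where + means max and * means addition) for which the two expressions give different results, so a compiler that rewrote one into the other would change the program's output:

```csharp
using System;

// A hypothetical "tropical" number: '+' is max, '*' is integer addition.
struct Tropical
{
    public int V;
    public Tropical(int v) { V = v; }

    public static Tropical operator +(Tropical a, Tropical b)
        => new Tropical(Math.Max(a.V, b.V));   // '+' is max
    public static Tropical operator *(Tropical a, Tropical b)
        => new Tropical(a.V + b.V);            // '*' is addition
}

class Demo
{
    static void Main()
    {
        var x = new Tropical(5);
        var two = new Tropical(2);

        Console.WriteLine((x * x + x * x).V);   // max(5+5, 5+5) = 10
        Console.WriteLine((two * (x * x)).V);   // 2 + (5+5)     = 12
    }
}
```

Because x * x + x * x yields 10 while 2 * (x * x) yields 12, the compiler cannot treat the rewrite as behavior-preserving for arbitrary overloaded operators.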

The compiler will optimize things like the following (not an exhaustive list):



  • removing local variables that are not used (freeing registers)
  • removing code that does not affect the logical flow or output
  • inlining calls to simple methods
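The first item can be seen directly. Below is a small sketch (hypothetical class and names; verify with ILSpy in Release mode, as in the question) where the never-read local is a candidate for elimination while the observable assignment survives:

```csharp
using System;

// Hypothetical example: 'dead' is assigned but never read, so it does
// not affect the program's output and can be elided from Release IL.
public class Something
{
    private int x;

    public void SetX()
    {
        int dead = 42;          // never read: candidate for removal
        x = 3;                  // observable effect: must be kept
        Console.WriteLine(x);   // 3
    }

    public static void Main() => new Something().SetX();
}
```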

Compiler optimizations should not change the behavior of the program (although this does sometimes happen). So reordering or combining math operations is out of scope for these optimizations.

"write it readably and the compiler will do the rest"

Well, the compiler can do some optimization, but there is still a lot you can do to improve performance during development. Yes, readable code is definitely valuable, but the compiler's job is to generate working IL that matches your source code, not to rewrite your code to make it faster.
