Why increment a value this way in Python?

I found some example code in python and I understand the code, but there is one line that I don't know why it is written like this:

c = 1    
c = (c + 1) & 255

      

I know that it increments the value of c by one, but I don't understand the & 255 part.

Can someone explain this to me?



4 answers


What this code does is reset c to zero when it reaches 256, so the sequence goes:

1 -> 2 -> 3 -> ... -> 254 -> 255 -> 0 -> 1 -> ...

      

This is another way to write:



c = (c + 1) % 256

      

One way to think about it is that % 256 limits the range of outcomes to 0..255.

Note that & 255 only works because 256 is a power of two, whereas the modulo formulation works for any modulus.
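A quick sketch showing the wraparound and the equivalence with % 256 (the loop count and variable names are just for illustration):

```python
# Increment two counters 300 times: one masked with & 255, one reduced
# with % 256. For non-negative values they stay in lockstep.
c_and = c_mod = 0
for _ in range(300):
    c_and = (c_and + 1) & 255
    c_mod = (c_mod + 1) % 256
    assert c_and == c_mod

# 300 increments from 0 wrap past 255 back to 0 and land on 300 % 256.
print(c_and)  # 44
```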



The & operator is bitwise AND: each bit in the result is 1 if and only if the corresponding bits in both arguments are 1. Let's look at a few examples:

>>> a = 10   # 0b1010
>>> b = 12   # 0b1100
>>> bin(a & b)
'0b1000'
>>> a = 300  # 0b100101100
>>> b = 255  # 0b011111111
>>> bin(a & b)
'0b101100'

      

You may have noticed that 255 is 0b011111111, and given the way & works, this means the result will have the same last 8 bits as the other argument, and no other bits.

This is a handy way to mask a value into a byte (an 8-bit unsigned integer), "wrapping around" and ignoring any overflow. Similarly, you can use 15 or 65535 to mask a value into a nibble or a word.
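The same masking idea at other widths, as a small sketch (the value 0x12345 is arbitrary):

```python
# 0xF keeps the low nibble (4 bits), 0xFF a byte (8 bits),
# 0xFFFF a word (16 bits).
value = 0x12345

print(hex(value & 0xF))     # '0x5'
print(hex(value & 0xFF))    # '0x45'
print(hex(value & 0xFFFF))  # '0x2345'
```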

So this code:

c = (c + 1) & 255

      

... means: treat c as a byte, increment it, and wrap around, ignoring any overflow. *



This can be useful if c really is a low-level, byte-oriented value, say, a counter for an LED on an RPi board that cycles through 256 positions and then starts over from the beginning.


In C, some people use it as a shortcut for % 256 even when the value is not really meant to represent a byte, simply because it gives the same result and, at least in C on 1970s hardware, was faster. On 2015 hardware it may not be faster any more, and in Python it almost certainly is not. (In fact, from a quick test, it's a tiny bit slower, but so close that even if it isn't measurement error, you'll never care.)
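A minimal way to repeat that quick test outside of IPython, using the standard timeit module (the loop count here is arbitrary):

```python
import timeit

# Time both forms of the wraparound increment; which one wins can vary
# from run to run, and the difference is negligible either way.
t_mod = timeit.timeit("(c + 1) % 256", setup="c = 100", number=1_000_000)
t_and = timeit.timeit("(c + 1) & 255", setup="c = 100", number=1_000_000)
print(f"% 256: {t_mod:.3f}s per 1M ops, & 255: {t_and:.3f}s per 1M ops")
```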

So, if you ever see someone writing this as an "optimization" in Python, take away their interpreter and make them work in assembly for a week until they learn better.


* As sapi points out in a comment on NPE's answer, it would probably make more sense to write this as & 0xFF or maybe & 0b11111111, because either of those reads more obviously as "the maximum byte value" than 255 does.



x % (2**N) is equal to x & ((2**N) - 1). & is a bitwise operator. This is used as an optimization, because modulo is slow and bitwise operations are fast.
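A brute-force check of that identity for small powers of two (it holds for non-negative x; in C the two forms can differ for negative operands):

```python
# Exhaustively verify x % (2**n) == x & ((2**n) - 1) for small n and x.
for n in range(1, 9):
    mask = (1 << n) - 1          # 2**n - 1, i.e. the n low bits set
    for x in range(1024):
        assert x % (1 << n) == x & mask
print("identity verified")
```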

http://en.wikipedia.org/wiki/Bitwise_operation#AND



It's a little faster.

Modulo:

%timeit 2 % 255
10000000 loops, best of 3: 37.7 ns per loop

      

bitwise:

%timeit 2 & 255
10000000 loops, best of 3: 33.3 ns per loop

      

I think using modulo is more pythonic.

Other timings suggest both methods take the same amount of time:

%timeit 100 % 256
10000000 loops, best of 3: 34 ns per loop

%timeit 100 & 256
10000000 loops, best of 3: 33 ns per loop

      

If you do this often enough, you will find the opposite relationship:

%timeit 256 % 256
10000000 loops, best of 3: 33.7 ns per loop

%timeit 256 & 256
10000000 loops, best of 3: 34.5 ns per loop

      

Running the test again will give slightly different numbers. Therefore, sometimes one is faster than the other, and vice versa.


