C# — get the top 4 bits of a uint32, starting at the first significant bit
I have some (essentially pseudo-random) uint32, and I need its first 4 bits, starting at the first bit that is not 0. For example:

... 000100101 => 1001
1000 ... 0001 => 1000
... 0001 => 0001
... 0000 => 0000

etc. I understand that I have to use something like this:

uint num = 1157; // some random number
uint high = num >> offset;

The problem is that I don't know where the first set bit is, so I cannot shift by a constant. Can someone please explain how to find this offset?
You can first compute the index of the highest set bit (hsb) and then shift accordingly. You can do it like this:
int hsb = -4;
for(uint cnum = num; cnum != 0; cnum >>= 1, hsb++);
if(hsb < 0) {
hsb = 0;
}
uint result = num >> hsb;
So, we first try to find the index of the most significant set bit (or rather that index minus four). We do this by incrementing hsb while shifting cnum (a copy of num) to the right until cnum has no more bits set.
Next, we clamp the shift amount to zero: if the value has fewer than four significant bits, nothing is shifted. Finally, the original num is shifted right by hsb.
If I run this on 0x123, I get 0x9 in an interactive csharp shell:
csharp> uint num = 0x123;
csharp> int hsb = -4;
csharp> for(uint cnum = num; cnum != 0; cnum >>= 1, hsb++);
csharp> if(hsb < 0) {
> hsb = 0;
> }
csharp> uint result = num >> hsb;
csharp> result
9
0x123 is 0001 0010 0011 in binary. So:

0001 0010 0011
   1 001

And 1001 in binary is 9.
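For reuse, the loop can be wrapped in a small helper (a sketch; the method name Top4Bits is my own invention, not from the answer):

```csharp
using System;

static uint Top4Bits(uint num)
{
    // Index of the most significant set bit, minus four.
    int hsb = -4;
    for (uint cnum = num; cnum != 0; cnum >>= 1, hsb++) ;
    if (hsb < 0)
        hsb = 0; // fewer than 4 significant bits: keep the value as-is

    return num >> hsb;
}

Console.WriteLine(Top4Bits(0x123u)); // 9
Console.WriteLine(Top4Bits(0x1u));   // 1
Console.WriteLine(Top4Bits(0x0u));   // 0
```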
Earlier revisions of this answer overlooked some edge cases. This solution normalizes the value by doubling it until the top bit is set, and should (in theory) be optimized by the compiler into an appropriate bit-scan instruction for whatever architecture you're working on:
if (num < 8)
    return num; // fewer than 4 significant bits: return the value as-is
while (num < 0x80000000u)
    num *= 2;   // shift left until the most significant bit is set
return num >> 28; // keep only the top 4 bits
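Filled out as a complete method (a sketch: the method name, the guard for values below 8, and the 32-bit threshold 0x80000000 that stops the loop once the top bit is set are my additions):

```csharp
using System;

static uint Top4Bits(uint num)
{
    if (num < 8)
        return num; // fewer than 4 significant bits: nothing to trim

    // Multiply by 2 (shift left) until the most significant bit is set;
    // the answer is then simply the top 4 bits of the word.
    while (num < 0x80000000u)
        num *= 2;

    return num >> 28;
}

Console.WriteLine(Top4Bits(0x123u));      // 9
Console.WriteLine(Top4Bits(0x80000001u)); // 8
```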
Determining the position of the most significant non-zero bit is equivalent to computing the base-2 logarithm. There are well-known bit-twiddling tricks for doing this quickly, without branches, on a modern processor:
int GetLog2Plus1(uint value)
{
    // Smear the highest set bit into every lower position...
    value |= value >> 1;
    value |= value >> 2;
    value |= value >> 4;
    value |= value >> 8;
    value |= value >> 16;
    // ...then count the set bits (a parallel popcount).
    value -= value >> 1 & 0x55555555;
    value = (value >> 2 & 0x33333333) + (value & 0x33333333);
    value = (value >> 4) + value & 0x0F0F0F0F;
    value += value >> 8;
    value += value >> 16;
    value &= 0x0000003F;
    return (int) value;
}
This will return a number between 0 and 32:
Value                                      | Log2 + 1
-------------------------------------------+---------
0b0000_0000_0000_0000_0000_0000_0000_0000U |  0
0b0000_0000_0000_0000_0000_0000_0000_0001U |  1
0b0000_0000_0000_0000_0000_0000_0000_0010U |  2
0b0000_0000_0000_0000_0000_0000_0000_0011U |  2
0b0000_0000_0000_0000_0000_0000_0000_0100U |  3
...                                        |
0b0111_1111_1111_1111_1111_1111_1111_1111U | 31
0b1000_0000_0000_0000_0000_0000_0000_0000U | 32
0b1000_0000_0000_0000_0000_0000_0000_0001U | 32
...                                        |
0b1111_1111_1111_1111_1111_1111_1111_1111U | 32
(Math nitpickers have noticed that the logarithm of 0 is undefined. However, I hope the above table demonstrates how this is handled and makes sense for this problem.)
You can then extract the most significant non-zero 4 bits. If the value has fewer than 4 significant bits (i.e. it is less than 8, where log2 + 1 is < 4), you simply keep the value as-is:
var log2Plus1 = GetLog2Plus1(value);
var bitsToShift = Math.Max(log2Plus1 - 4, 0);
var upper4Bits = value >> bitsToShift;
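As an aside (my addition, not part of the original answer): on .NET Core 3.0 and later, the hand-rolled log2 can be replaced with System.Numerics.BitOperations.Log2, which maps to a hardware bit-scan instruction where one is available:

```csharp
using System;
using System.Numerics;

static uint Top4Bits(uint value)
{
    if (value == 0)
        return 0;

    // BitOperations.Log2 returns the index of the highest set bit,
    // so the value has Log2(value) + 1 significant bits.
    // Shift everything but the top 4 of them away.
    int bitsToShift = Math.Max(BitOperations.Log2(value) - 3, 0);
    return value >> bitsToShift;
}

Console.WriteLine(Top4Bits(0x123u)); // 9
Console.WriteLine(Top4Bits(0x7u));   // 7
```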