Why does this happen differently in C# and C++?

In C#

var buffer = new byte[] {71, 20, 0, 0, 9, 0, 0, 0};

var g = (ulong) ((uint) (buffer[0] | buffer[1] << 8 | buffer[2] << 16 | buffer[3] << 24) |
                    (long) (buffer[4] | buffer[5] << 8 | buffer[6] << 16 | buffer[7] << 24) << 32);

      

In C++

#define byte unsigned char
#define uint unsigned int
#define ulong unsigned long long

byte buffer[8] = {71, 20, 0, 0, 9, 0, 0, 0};

ulong g = (ulong) ((uint) (buffer[0] | buffer[1] << 8 | buffer[2] << 16 | buffer[3] << 24) |
                    (long) (buffer[4] | buffer[5] << 8 | buffer[6] << 16 | buffer[7] << 24) << 32);

      

C# outputs 38654710855, C++ outputs 5199.

Why? I've been scratching my head over this for hours...

Edit: C# has the correct output.

Thanks for the help, everyone :) Jack Aidley's answer came first, so I'll mark it as the accepted answer. The other answers were correct as well, but I can't accept multiple answers.

+3




4 answers


C++ doesn't work because you're casting to long, which is usually 32-bit in most current C++ implementations but whose exact width is left to the implementation. You want long long.



Also, please read Bikeshedder's more complete answer below. He is quite right that fixed-size typedefs are a more reliable way to do this.
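
For example, a minimal sketch of the fix (reusing the #defines from the question, and assuming your compiler's long long is 64 bits, which the standard guarantees as a minimum) is to widen the second cast:

byte buffer[8] = {71, 20, 0, 0, 9, 0, 0, 0};

ulong g = (ulong) ((uint) (buffer[0] | buffer[1] << 8 | buffer[2] << 16 | buffer[3] << 24) |
              (long long) (buffer[4] | buffer[5] << 8 | buffer[6] << 16 | buffer[7] << 24) << 32);
// g == 38654710855, matching the C# result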

+6




The problem is that the long type in C++ is still 4 bytes, or 32 bits, on most compilers, so your calculation overflows it. In C#, long is equivalent to C++'s long long and is 64 bits, so the result of the expression fits in the type.



+4




Your unsigned long is not 64 bits. You can easily check this with sizeof(unsigned long), which should return 4 (= 32 bits) instead of 8 (= 64 bits).
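
For instance (a minimal sketch; the exact numbers depend on your platform and compiler):

#include <iostream>

int main() {
    // Likely 4 on the asker's platform (and on Windows even in 64-bit builds);
    // 8 on LP64 systems such as 64-bit Linux.
    std::cout << sizeof(unsigned long) << std::endl;
    // At least 8 everywhere.
    std::cout << sizeof(unsigned long long) << std::endl;
    return 0;
}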

Don't use int / short / long if you expect them to be of a particular size. The standard only specifies that short <= int <= long <= long long and defines minimum sizes; they could all be the same size. long is guaranteed to be at least 32 bits and long long at least 64 bits (see page 22 of the C++ standard). Even so, I would strongly recommend against relying on this and would stick with stdint if you really want to work with a specific size.

Use <cstdint> (C++11) or <stdint.h> (C++98), which define the types uint8_t, uint16_t, uint32_t, and uint64_t.

Corrected C++ code

#include <stdint.h>
#include <iostream>

int main(int argc, char *argv[]) {
    uint8_t buffer[8] = {71, 20, 0, 0, 9, 0, 0, 0};
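    // Assemble the low and high 32-bit halves from the little-endian bytes,
    // then shift the high half up and combine them into one 64-bit value.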
    uint64_t g = (uint64_t) ((uint32_t) (buffer[0] | buffer[1] << 8 | buffer[2] << 16 | buffer[3] << 24) |
                              (int64_t) (buffer[4] | buffer[5] << 8 | buffer[6] << 16 | buffer[7] << 24) << 32);
    std::cout << g << std::endl;
    return 0;
}

      

Demo with output: http://codepad.org/e8GOuvMp

+2




There is a subtle mistake in your casts.

long in C# is a 64-bit integer. long in C++ is usually a 32-bit integer.

So your (long) (buffer[4] | buffer[5] << 8 | buffer[6] << 16 | buffer[7] << 24) << 32 means something different depending on whether it runs under C# or C++.
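
As a sketch of what that difference looks like (assuming a compiler where long is 32 bits; shifting a 32-bit value by 32 is undefined behaviour, and on common x86 compilers the shift count is simply masked to 0, which matches the 5199 you observed):

#include <iostream>

int main() {
    long low  = 71 | 20 << 8;   // 5191, the low half from buffer[0..3]
    long high = 9;              // buffer[4]
    // With a 32-bit long, "high << 32" is undefined; in practice it often
    // leaves the value unchanged, so the OR gives 5191 | 9 = 5199.
    long wrong = low | high << 32;
    // Widening to 64 bits before shifting gives the intended result.
    long long right = (long long) low | (long long) high << 32;
    std::cout << wrong << std::endl;   // typically 5199
    std::cout << right << std::endl;   // 38654710855
    return 0;
}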

+1








