C++ Double to Binary Representation (Reinterpret)
I recently decided to write a program that prints the exact bit pattern of an instance of any type in C++. I started with the primitive built-in types, and I am running into a problem when printing the binary representation of a double.
Here's my code:
#include <iostream>
using namespace std;

void toBinary(ostream& o, char a)
{
    const size_t size = sizeof(a) * 8;
    for (int i = size - 1; i >= 0; --i) {
        bool b = a & (1UL << i);
        o << b;
    }
}

void toBinary(ostream& o, double d)
{
    const size_t size = sizeof(d);
    for (int i = 0; i < size; ++i) {
        char* c = reinterpret_cast<char*>(&d) + i;
        toBinary(o, *c);
    }
}

int main()
{
    int a = 5;
    cout << a << " as binary: ";
    toBinary(cout, static_cast<char>(a));
    cout << "\n";

    double d = 5;
    cout << d << " as double binary: ";
    toBinary(cout, d);
    cout << "\n";
}
My output is as follows:
5 as binary: 00000101
5 as double binary: 0000000000000000000000000000000000000000000000000001010001000000
However, I know that the floating-point representation of 5 is: 01000000 00010100 00000000 00000000 00000000 00000000 00000000 00000000
Maybe I am misunderstanding something here, but didn't I write the line reinterpret_cast<char*>(&d) + i precisely to treat the double* as a char*, so that adding i advances the pointer by sizeof(char) instead of sizeof(double)? Which is exactly what I want here. What am I doing wrong?
If you reinterpret a numeric type as a sequence of bytes, you are exposed to the machine's endianness: on some platforms the least significant byte is stored first (little-endian), while on others the most significant byte comes first (big-endian). Your machine is evidently little-endian, so iterating through memory visits the bytes from least significant to most significant. Read your output in 8-bit groups from the last group to the first and you get exactly the pattern you expected.
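To print the bytes most-significant first regardless of host byte order, you can copy the double's bits into a 64-bit integer and shift from the top bit down. This is a minimal sketch of that idea; it returns a string rather than writing to a stream (a deliberate change from the question's signature), and uses memcpy, which is the well-defined way to do this kind of type punning:

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Render a double's bit pattern most-significant bit first, independent of
// host byte order, by copying the bits into a 64-bit integer before shifting.
std::string toBinaryMsbFirst(double d)
{
    static_assert(sizeof(double) == sizeof(std::uint64_t), "expects 64-bit double");
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits); // well-defined alternative to reinterpret_cast
    std::string out;
    for (int i = 63; i >= 0; --i) {
        out += char('0' + ((bits >> i) & 1u));
        if (i % 8 == 0 && i != 0)
            out += ' ';  // space between 8-bit groups
    }
    return out;
}
```

Calling `toBinaryMsbFirst(5.0)` yields the grouping you expected: `01000000 00010100 00000000 ...`, matching the IEEE 754 encoding 0x4014000000000000.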
Note that the same issue arises with integers as well: printing the bytes of 5 (as a 32-bit int) in memory order on a little-endian machine gives
00000101-00000000-00000000-00000000
and not
00000000-00000000-00000000-00000101
as you might expect.
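You can check your machine's byte order directly with the same byte-inspection trick. A small sketch (the function name is my own, not from the question):

```cpp
#include <cstdint>
#include <cstring>

// Report the host byte order by inspecting where the low-order byte
// of a known integer lands in memory.
bool isLittleEndian()
{
    std::uint32_t v = 1;
    unsigned char first;
    std::memcpy(&first, &v, 1); // first byte of v in memory
    return first == 1;          // little-endian stores the low byte first
}
```

Since C++20 you can skip the probe entirely and test `std::endian::native == std::endian::little` from `<bit>`.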