Connection between Float and unsigned char array?

I am parsing a stream of bytes which, upon receipt, is stored in a uint8 array.

It is known in advance what the contents of the array should be: an integer, a string, or a float. All that is needed is to reinterpret the data as one of these types. Float is giving me some trouble, though.

My question is: will the following approach work as expected, without any surprises (union aliasing, padding, endianness, etc.)? And if not, what would be the best way to achieve this with minimal, sane code?

union BytesToFloat{
    float f;
    uint8 bytes[4];
};


As background, this data comes from saved files, so the machine that recorded the data may not be the same as the machine reading it.

EDIT

After reading one of the comments about endianness: would this structure and helper function be a better fit, or does endianness remain a problem (or is there something else to trip over beyond that)?

union IntToFloat{
    float f;
    uint32 i;
};

/* Assemble a big-endian uint32 from four bytes. */
uint32 CharToLong(unsigned char * c){
    uint32 val = c[0];
    val <<= 8;
    val |= c[1];
    val <<= 8;
    val |= c[2];
    val <<= 8;
    val |= c[3];
    return val;
}



2 answers


You can slightly improve the robustness of your union (more in theory than in practice) by replacing 4 with sizeof(float).

However, you still face other portability problems. There is no guarantee that both ends of the connection use the IEEE 754 floating-point format (IBM's zSeries mainframes, for example). There is also no guarantee that both sides use the same byte order (Intel architectures are little-endian; most other architectures are big-endian). You need to know the byte order of both the source and the destination machine to interpret the data correctly (though the DRDA protocol, used by IBM for communicating with SQL DBMSs, works on the "receiver makes it right" convention).



Byte order and data size are the practical issues; the floating-point format tends to be less of a problem unless you expect to interoperate with mainframe systems (they tend to deviate from IEEE 754, primarily because their formats were established before IEEE 754 was standardized).

Often the best way to send data is a plain-text format. It has the advantage of being easy to debug and avoids many (but not all) of the thorny problems of number representation. However, if the rest of your protocol is binary, it would be odd to switch to text just for floating point.


Yes, there may be surprises. At both ends.

How this code behaves is implementation-defined, on both the reading side and the writing side.



On the positive side, the transport medium (network or disk) is often so slow that adding code to decode on read and encode on write will not have a noticeable performance impact.

Another good thing: code rarely needs to run on every platform. Make sure what you are doing works on every platform you actually support, and you are fine.
