Implement an existing network interface defining bit-fields with respect to endianness in C++11

I am currently developing a C++11 library for the remote server of an existing control interface described in an Interface Control Document (ICD).

The interface is based on TCP/IPv4 and uses network byte order (aka Big Endian).

Requirement: The library must be developed cross-platform.

Note: I want to develop a solution without (ab)using the preprocessor.

After a short search on the web, I discovered Boost.Endian, which solves the problems related to the byte ordering of multibyte data types. My approach is as follows:

  • Serialize (multibyte) data types to a stream via std::basic_ostream::write, more precisely via os.write(reinterpret_cast<char const *>(&kData), sizeof(kData)).
  • Convert the std::basic_ostream to a std::vector<std::uint8_t>.
  • Send the std::vector<std::uint8_t> via Boost.Asio over the network (a sketch of this pipeline follows below).
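
For reference, here is a minimal sketch of that pipeline. The WriteBigEndian helper is my own naming, not something Boost.Endian provides; the actual byte swapping is done by boost::endian::native_to_big:

#include <boost/endian/conversion.hpp>
#include <cstdint>
#include <ostream>
#include <sstream>
#include <string>
#include <vector>

// Write a value to the stream in network byte order (big-endian).
template <typename T>
void WriteBigEndian(std::ostream &os, T value)
{
  T const kData = boost::endian::native_to_big(value);
  os.write(reinterpret_cast<char const *>(&kData), sizeof(kData));
}

int main()
{
  std::ostringstream os;
  WriteBigEndian<std::uint16_t>(os, 0x0400); // serialized as 04 00
  WriteBigEndian<std::uint32_t>(os, 42);     // serialized as 00 00 00 2a

  // Copy the stream's buffer into a byte vector for Boost.Asio.
  std::string const buffer = os.str();
  std::vector<std::uint8_t> bytes(buffer.begin(), buffer.end());
}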

So far so good. Everything seems to work as planned and the solution should be platform independent.

Now comes the tricky part: the ICD describes multi-word messages, where a word is 8 bits long. A message can contain multiple fields, and the fields do not have to be byte-aligned, which means that one word can contain multiple fields.

Example: Consider the following message format (the message body starts with word 10):

Word | Bit(s) | Name
-----|--------|----------
 10  |  0-2   |  a
 10  |   3    |  b
 10  |   4    |  c
 10  |   5    |  d
 10  |   6    |  e
 10  |   7    |  RESERVED
 11  |   16   |  f


etc.
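
To make the layout concrete, word 10 could be assembled by shifting the fields into place. The following is only my illustration and assumes that bit 0 is the least significant bit of the word; the ICD may define the numbering the other way round:

#include <cstdint>

// Hypothetical helper that packs the fields of word 10 from the example.
std::uint8_t PackWord10(std::uint8_t a, bool b, bool c, bool d, bool e)
{
  std::uint8_t word = 0;
  word |= static_cast<std::uint8_t>(a & 0x07);  // bits 0-2: a
  word |= static_cast<std::uint8_t>(b) << 3;    // bit 3: b
  word |= static_cast<std::uint8_t>(c) << 4;    // bit 4: c
  word |= static_cast<std::uint8_t>(d) << 5;    // bit 5: d
  word |= static_cast<std::uint8_t>(e) << 6;    // bit 6: e
  return word;                                  // bit 7: RESERVED, left 0
}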

So now I need a solution to model and serialize such a bit-based interface.

I have considered the following approach:

code:

// Using an arithmetic type from the boost::endian namespace does not work.
using Byte = std::uint8_t;
using BitSet = boost::dynamic_bitset<Byte>;
BitSet bitSetForA{3, 1};      // 3 bits, value 1
BitSet bitSetForB{1};         // 1 bit
// [...]
BitSet bitSetForF{16, 0x400}; // 16 bits, value 1024

So, 1024 in the above example always serializes to 00 04 instead of 04 00 on my machine.
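
If I dump the underlying blocks, e.g. via boost::to_block_range, the least significant block comes out first, which is presumably where the 00 04 comes from:

#include <boost/dynamic_bitset.hpp>
#include <cstdint>
#include <cstdio>
#include <iterator>
#include <vector>

int main()
{
  using Byte = std::uint8_t;
  boost::dynamic_bitset<Byte> bitSetForF{16, 0x400}; // 1024

  // to_block_range emits the blocks least significant first ...
  std::vector<Byte> bytes;
  boost::to_block_range(bitSetForF, std::back_inserter(bytes));

  // ... so this prints "00 04" instead of the network order "04 00".
  for (Byte const byte : bytes) {
    std::printf("%02x ", static_cast<unsigned>(byte));
  }
  std::printf("\n");
}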

I really don't know what the most pragmatic approach to my problem is. Maybe you can point me in the right direction.

In conclusion, I need a recipe for implementing the existing network interface definition, with its bit-level fields, in a platform-independent way, regardless of the native byte ordering of the machine on which the library is compiled.

1 answer


Someone recently kindly pointed me to a good article from which I got some inspiration.

std::bitset has a method to_ulong that can be used to obtain an endianness-independent integer representation of the bit field, and the following code will print your output in the correct order:

#include <iostream>
#include <iomanip>
#include <bitset>

int main()
{
  std::bitset<16> flags;
  flags[10] = true;                          // value 0x0400 (1024)
  unsigned long rawflags = flags.to_ulong(); // endianness-independent integer
  std::cout << std::setfill('0') << std::setw(2) << std::hex
            << ((rawflags >> 8) & 0x0FF)     // high byte first (network order)
            << std::setw(2)
            << (rawflags & 0x0FF)            // low byte last
            << std::endl;                    // prints: 0400
}
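
Applied to the question's example, where field f is a 16-bit value of 1024, the same idea can be used to fill the byte vector that is handed to Boost.Asio. A rough sketch:

#include <bitset>
#include <cstdint>
#include <vector>

int main()
{
  std::bitset<16> f{0x400}; // field f from the question, value 1024

  unsigned long const raw = f.to_ulong();

  // Emit the bytes explicitly in network (big-endian) order,
  // independent of the host's byte order.
  std::vector<std::uint8_t> bytes;
  bytes.push_back(static_cast<std::uint8_t>((raw >> 8) & 0xFF)); // 0x04
  bytes.push_back(static_cast<std::uint8_t>(raw & 0xFF));        // 0x00
  // bytes now holds { 0x04, 0x00 }.
}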


Note that no solution using bit-fields will work in this case, as the bits will be swapped on little-endian machines!



E.g. in a structure like this:

struct bits {
  unsigned char first_bit:1; // the layout of bit-fields is implementation-defined
  unsigned char rest:7;
};
union {
  bits b;
  unsigned char raw;
} u;

Setting u.b.first_bit to 1 will result in a raw value (u.raw) of 1 or 128, depending on the endianness of the machine!

HTH
