Creating an interface for serializable types in C#?

In Haskell, we can do something like this:

class Binary a where
    encode :: a -> ByteString
    decode :: ByteString -> a

instance Binary Int where
    encode x = ...
    decode bytes = ...

      

This defines the type class Binary and an instance of it for Int. Now we can convert an Int to a byte array and vice versa. I want to implement the same thing in C#, and my first instinct is to create an interface:

interface Binary<T> {
    byte[] Encode(T value);
    static T Decode(byte[] bytes);
}

      

But this fails, because an interface member such as Decode cannot be static.

How could I implement this in C# as cleanly as possible?

Note that I don't want a solution that creates an "empty" or partially initialized T and then fills it in by calling a non-static Decode: that would be messy and leave a window during which using the object is a potential source of error.

Thanks!

+3




3 answers


There is no real way to add a static method to an interface directly; however, it is possible either to add such a method to an extension class or to implement it inside a derived class (which is pretty much all we can do with static methods).

But what I would do in this case (if for some reason I decided not to use the standard .NET serialization framework) is probably create my own serializer class:

interface ISerializer<T> : IDisposable
{
   byte[] Serialize(T instance); 
   T DeSerialize(byte[] stream);
}

class MySerializer<T> : ISerializer<T>
{
   public byte[] Serialize(T instance)
   {
      // .. serialization logic
   }

   public T DeSerialize(byte[] stream)
   {
      // .. deserialization logic
   }

   public void Dispose()
   {
      // .. dispose all managed resources here
   }
}

class MyClass
{
}

      



Using:

MyClass instance = new MyClass();
MyClass newInstance = null;    

using(ISerializer<MyClass> serializer = new MySerializer<MyClass>())
{
    byte[] bytes = serializer.Serialize(instance);    
    newInstance = serializer.DeSerialize(bytes);
}
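The method bodies above are left empty. As one possible way to fill them in (my addition, not part of the answer), here is a minimal sketch using System.Text.Json, which can serialize an object to UTF-8 bytes; the JSON encoding is just one choice among many:

```csharp
using System;
using System.Text.Json;

interface ISerializer<T> : IDisposable
{
    byte[] Serialize(T instance);
    T DeSerialize(byte[] stream);
}

// Hypothetical concrete serializer filling in the answer's skeleton.
class MySerializer<T> : ISerializer<T>
{
    public byte[] Serialize(T instance) =>
        JsonSerializer.SerializeToUtf8Bytes(instance);

    public T DeSerialize(byte[] stream) =>
        JsonSerializer.Deserialize<T>(stream);

    public void Dispose()
    {
        // nothing unmanaged to release in this sketch
    }
}
```

Any other encoding (binary, XML, ...) would slot into the same two methods.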

      

+2




I usually end up splitting code like this into two types: one base class or interface (depending on my needs), usually generic, that represents the actual data, and one static helper class with static and extension methods. For this example, I would probably do something like this:

public interface IBinary<T>
{
    byte[] Encode(); // Alternative definition as extension method below
}

public static class Binary
{
    public static T Decode<T>(byte[] bytes) where T : IBinary<T>
    {
        // deserialization logic here
    }

    // If you want, you can define Encode() as an extension method instead:
    public static byte[] Encode<T, TBinary>(this TBinary binary)
        where TBinary : IBinary<T>
    {
        // serialization logic here
    }
}

      

You would then use this hierarchy by creating a class like this:



public class BinaryEncodableInteger : IBinary<BinaryEncodableInteger>
{
    // must have this if defined in interface,
    // but if defined as extension method you get it for free
    public byte[] Encode()
    {
        // serialization logic here
    }
}

      

and use the class like this:

var binInt = new BinaryEncodableInteger();
var bytes = binInt.Encode();
var decoded = Binary.Decode<BinaryEncodableInteger>(bytes);
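To make that concrete, here is a minimal sketch of what the bodies might look like for an integer wrapper. The Value field and the BitConverter encoding are my assumptions, not part of the answer, and I have specialized Decode here: a fully generic Decode<T> would additionally need some way to construct a T, e.g. a new() constraint or a factory delegate.

```csharp
using System;

public interface IBinary<T>
{
    byte[] Encode();
}

public class BinaryEncodableInteger : IBinary<BinaryEncodableInteger>
{
    public int Value;

    // BitConverter uses the CPU's native byte order (little-endian on most platforms).
    public byte[] Encode() => BitConverter.GetBytes(Value);
}

public static class Binary
{
    // Specialized rather than generic, to avoid the T-construction problem.
    public static BinaryEncodableInteger Decode(byte[] bytes) =>
        new BinaryEncodableInteger { Value = BitConverter.ToInt32(bytes, 0) };
}
```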

      

+1




I think there are too many differences between interfaces in C# and Haskell to solve this problem the same way in both. I would rather use implicit operators, although as far as I know they do not offer a general solution to this problem either, since they also work through static methods.

I have defined a simple class Binary containing a byte array, as well as conversion operators from and to int.

class Binary
{
    public Binary(byte[] value)
    {
        this.Value = value.ToArray();
    }

    public byte[] Value { get; private set; }

    // User-defined conversion from Binary to int 
    public static implicit operator int(Binary b)
    {
        return b.Value[0] + (b.Value[1] << 8) + (b.Value[2] << 16) + (b.Value[3] << 24);
    }
    //  User-defined conversion from int to Binary
    public static implicit operator Binary(int i)
    {
        var result = new byte[4];
        result[0] = (byte)(i & 0xFF);
        result[1] = (byte)(i >> 8 & 0xFF);
        result[2] = (byte)(i >> 16 & 0xFF);
        result[3] = (byte)(i >> 24 & 0xFF);

        return new Binary(result);
    }
}
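With those operators in place, conversions happen implicitly on plain assignment. Using the Binary class just defined:

```csharp
Binary b = 1500;   // int -> Binary via the implicit operator
int i = b;         // Binary -> int via the other implicit operator
// i == 1500 again; the bytes in b.Value are { 0xDC, 0x05, 0x00, 0x00 }
```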

      

With NUnit, I can test it like this:

[TestCase(5, 0x05, 0x00, 0x00, 0x00)]
[TestCase(1500, 0xDC, 0x05, 0x00, 0x00)]
public void TestConvert(int i, byte b0, byte b1, byte b2, byte b3)
{
    Binary testBinary = i;
    Assert.AreEqual(b0, testBinary.Value[0]);
    Assert.AreEqual(b1, testBinary.Value[1]);
    Assert.AreEqual(b2, testBinary.Value[2]);
    Assert.AreEqual(b3, testBinary.Value[3]);

    int testInt = new Binary(new[] { b0, b1, b2, b3 });

    Assert.AreEqual(testInt, i);
}

      

You can go ahead and implement your own specific logic, but that should demonstrate the principle.

0








