Send an OpenCV Mat image over a Qt WebSocket to an HTML client

I am trying to write an application in C++ using Qt 5.7. It basically needs to be a WebSocket server (using QWebSocket), capable of sending an image produced with OpenCV to an HTML client. What I am trying to do is encode the image in base64, transmit it, and on the client put the encoded string into the src of the image tag. As a test I can send/receive text messages correctly, so the WebSocket architecture works, but I have some problems with the images. These are my code snippets:

Server

    cv::Mat imgIn;
    imgIn = cv::imread("/home/me/color.png",CV_LOAD_IMAGE_COLOR);
    QByteArray Img((char*)(imgIn.data),imgIn.total()*imgIn.elemSize());
    QByteArray Img64 = Img.toBase64();
    pClient->sendBinaryMessage(Img64);

Client

<img id="ItemPreview" src="" style="border:5px solid black" />

....

websocket.binaryType = "arraybuffer";

websocket.onmessage = function (evt) {
    console.log("Message received:", evt.data);
    document.getElementById("ItemPreview").src = "data:image/png;base64," + evt.data;
};

I think most of the problems are on the server side, because the base64 sequence I get from the image is different from the one I get from an online image/base64 converter. On the client I get this error in the console and nothing is displayed:

data:image/png;base64,[object ArrayBuffer]:1 GET data:image/png;base64,[object ArrayBuffer] net::ERR_INVALID_URL

Any hints?

Solution

Thanks to the suggestions, I can now provide working code:

Server

    imgIn = cv::imread("/home/me/color.png", CV_LOAD_IMAGE_UNCHANGED);
    std::vector<uchar> buffer;
    cv::imencode(".png", imgIn, buffer);                 // re-encode the pixels as a real PNG byte stream
    std::string s = base64_encode(buffer.data(), buffer.size());
    pClient->sendTextMessage(QString::fromStdString(s));

Client

Removed this line:

    websocket.binaryType = "arraybuffer";

Server base64 encoding is done with this code:

Encode/Decode base64
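Alternatively, Qt itself can handle the base64 step via QByteArray::toBase64, so no external helper is strictly needed. A minimal sketch of the same server-side flow, assuming pClient is the connected QWebSocket* (the function name sendPngAsBase64 is just illustrative):

    #include <opencv2/opencv.hpp>
    #include <QtWebSockets/QWebSocket>
    #include <QByteArray>
    #include <QString>
    #include <vector>

    // Sketch: re-encode a cv::Mat as PNG and send it as a base64 text message.
    void sendPngAsBase64(QWebSocket *pClient, const cv::Mat &img)
    {
        std::vector<uchar> buffer;
        cv::imencode(".png", img, buffer);               // full PNG byte stream

        QByteArray png(reinterpret_cast<const char*>(buffer.data()),
                       static_cast<int>(buffer.size()));
        pClient->sendTextMessage(QString::fromLatin1(png.toBase64()));
    }

The client side stays exactly as above, since the browser receives the same base64 PNG string either way.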

1 answer


This line on the server:

imgIn = cv::imread("/home/me/color.png",CV_LOAD_IMAGE_COLOR);

decodes a PNG image and puts it in memory as a load of raw pixel data (plus possibly some line padding that you are not accounting for; see below). This is what you are base64-encoding.

This line in the client:

document.getElementById("ItemPreview").src = "data:image/png;base64," + evt.data;

expects a PNG image, but that is not what you are sending; you have just pushed a load of raw pixel data without any dimensions or any information about stride or format at all.

If your client wants PNG, you would have to use something like imencode to write PNG data to a memory buffer and base64-encode that instead.
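To make the difference concrete, here is a small standalone sketch (reusing the question's file path and flags, purely for illustration) that compares what the question's code encodes with what imencode produces; only the latter is something a browser can decode:

    #include <opencv2/opencv.hpp>
    #include <cstdio>
    #include <vector>

    int main()
    {
        cv::Mat img = cv::imread("/home/me/color.png", CV_LOAD_IMAGE_COLOR);

        // What the question's code base64-encodes: bare BGR pixel bytes,
        // with no header, no width/height and no stride information.
        size_t rawBytes = img.total() * img.elemSize();

        // What the <img> tag actually needs: a complete PNG byte stream.
        std::vector<uchar> png;
        cv::imencode(".png", img, png);

        std::printf("raw pixels: %zu bytes, PNG stream: %zu bytes\n",
                    rawBytes, png.size());
    }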

Another important thing to note is that decoded images can have row padding ... a few extra bytes at the end of each row, added for memory-alignment purposes. Consequently, the actual length of each row of the image may be greater than the image width multiplied by the size of each pixel in bytes. This means that this operation:

QByteArray Img((char*)(imgIn.data),imgIn.total()*imgIn.elemSize());

may not, in fact, wrap the entire image buffer in your QByteArray. There are various ways to check the stride/step of an image, but it is best to read the cv::Mat docs, as it is not worth repeating them all here. It only matters if you are doing raw image manipulation at the byte level, as you are doing here. If you use imencode, you don't need to worry about it.
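For completeness, if you do stay at the raw-byte level, a stride-safe way to pack the pixels is to check cv::Mat::isContinuous() and otherwise copy row by row (or simply clone() the Mat first to get a continuous copy). A rough sketch; the name matToPackedBytes is purely illustrative:

    #include <opencv2/opencv.hpp>
    #include <QByteArray>

    // Sketch: flatten a cv::Mat into a QByteArray without assuming the rows
    // are stored contiguously (i.e. without assuming step == cols * elemSize()).
    QByteArray matToPackedBytes(const cv::Mat &m)
    {
        const int rowBytes = static_cast<int>(m.cols * m.elemSize());
        if (m.isContinuous())
            return QByteArray(reinterpret_cast<const char*>(m.data),
                              rowBytes * m.rows);

        QByteArray out;
        out.reserve(rowBytes * m.rows);
        for (int r = 0; r < m.rows; ++r)     // copy each row, skipping the padding bytes
            out.append(reinterpret_cast<const char*>(m.ptr(r)), rowBytes);
        return out;
    }

But again, with the imencode approach none of this is necessary.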
