Save bitmap to video (libavcodec ffmpeg)

I would like to convert an HBITMAP to a video stream using libavcodec. I am getting my HBITMAP using:

HBITMAP hCaptureBitmap = CreateCompatibleBitmap(hDesktopDC, nScreenWidth, nScreenHeight);
SelectObject(hCaptureDC, hCaptureBitmap);
BitBlt(hCaptureDC, 0, 0, nScreenWidth, nScreenHeight, hDesktopDC, 0, 0, SRCCOPY);
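
(hDesktopDC and hCaptureDC are not shown above; presumably they come from standard GDI screen-capture setup along these lines, which is an assumption and not part of the original snippet:)

// Assumed setup, not shown in the question:
HDC hDesktopDC = GetDC(NULL);                        // DC for the whole screen
HDC hCaptureDC = CreateCompatibleDC(hDesktopDC);     // memory DC to receive the copy
int nScreenWidth  = GetSystemMetrics(SM_CXSCREEN);
int nScreenHeight = GetSystemMetrics(SM_CYSCREEN);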


And I would like to convert it to YUV (which is required by the codec I am using). For this I use:

SwsContext *fooContext = sws_getContext(c->width, c->height, PIX_FMT_BGR32,
                                        c->width, c->height, PIX_FMT_YUV420P,
                                        SWS_FAST_BILINEAR, NULL, NULL, NULL);

uint8_t *movie_dib_bits = reinterpret_cast<uint8_t *>(bm.bmBits) + bm.bmWidthBytes * (bm.bmHeight - 1);

int dibrowbytes = -bm.bmWidthBytes;

uint8_t* data_out[1];
int stride_out[1];
data_out[0] = movie_dib_bits;
stride_out[0] = dibrowbytes;

sws_scale(fooContext, data_out, stride_out, 0, c->height, picture->data, picture->linesize);
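
(bm is not declared in the snippet either; presumably it is a BITMAP filled in beforehand, roughly like the sketch below, which is an assumption:)

// Assumption: bm presumably comes from GetObject on the captured bitmap.
// Note that bmBits is only non-NULL for DIB sections (CreateDIBSection);
// for a bitmap created with CreateCompatibleBitmap it is NULL.
BITMAP bm;
GetObject(hCaptureBitmap, sizeof(BITMAP), &bm);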


But that doesn't work at all... Any idea why? Or how can I do it differently?

Thanks!



1 answer


I'm not familiar with the API you are using to get the bitmap, but assuming that part is correct and you have a pointer to 32-bit-per-pixel BGR data, try something like this:

uint8_t* inbuffer;
int in_width, in_height, out_width, out_height;

//here, make sure inbuffer points to the input BGR32 data, 
//and the input and output dimensions are set correctly.

//calculate the bytes needed for the output image
int nbytes = avpicture_get_size(PIX_FMT_YUV420P, out_width, out_height);

//create buffer for the output image
uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

//create ffmpeg frame structures.  These do not allocate space for image data, 
//just the pointers and other information about the image.
AVFrame* inpic = avcodec_alloc_frame();
AVFrame* outpic = avcodec_alloc_frame();

//this will set the pointers in the frame structures to the right points in 
//the input and output buffers.
avpicture_fill((AVPicture*)inpic, inbuffer, PIX_FMT_BGR32, in_width, in_height);
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, out_width, out_height);

//create the conversion context
SwsContext* fooContext = sws_getContext(in_width, in_height, PIX_FMT_BGR32,
                                        out_width, out_height, PIX_FMT_YUV420P,
                                        SWS_FAST_BILINEAR, NULL, NULL, NULL);

//perform the conversion
sws_scale(fooContext, inpic->data, inpic->linesize, 0, in_height, outpic->data, outpic->linesize);

//encode the frame here...

//free memory
av_free(outbuffer);
av_free(inpic);
av_free(outpic);
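
The "encode the frame here" step is left out above; with the same old-style API it might look roughly like the sketch below. The buffer size, the opened codec context c, and the output file f are assumptions, not part of the answer.

//sketch only: c is assumed to be an opened AVCodecContext, f an open FILE*
int outbuf_size = 100000;                            // rough guess, large enough for one frame
uint8_t* outbuf = (uint8_t*)av_malloc(outbuf_size);
int out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
if (out_size > 0)
    fwrite(outbuf, 1, out_size, f);                  // write the encoded frame
av_free(outbuf);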




Of course, if you are going to convert a sequence of frames, just do your allocations once at the beginning and free them once at the end.
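
Put differently, for a frame sequence the overall structure might look like the skeleton below; grab_frame() and encode_frame() are hypothetical stand-ins for your capture code and the encoding step.

//skeleton: allocate once, convert and encode per frame, free once
SwsContext* ctx = sws_getContext(in_width, in_height, PIX_FMT_BGR32,
                                 out_width, out_height, PIX_FMT_YUV420P,
                                 SWS_FAST_BILINEAR, NULL, NULL, NULL);
//... allocate inpic, outpic and outbuffer as above ...

for (int i = 0; i < num_frames; ++i)
{
    grab_frame(inbuffer);                       //hypothetical: BitBlt + copy into inbuffer
    sws_scale(ctx, inpic->data, inpic->linesize, 0, in_height,
              outpic->data, outpic->linesize);
    encode_frame(outpic);                       //hypothetical: avcodec_encode_video + fwrite
}

sws_freeContext(ctx);
//... free the frames and buffers as above ...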







