Display dynamic text on CVPixelBufferRef while recording video

I am recording video and audio using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput, and in the delegate method captureOutput:didOutputSampleBuffer:fromConnection: I want to draw text onto each individual sample buffer I receive from the video connection. The text changes roughly every frame (it is a stopwatch label), and I want it baked into the recorded video data.

Here's what I have found so far:

//1. Grab the pixel buffer from the sample buffer (note the cast:
//   CMSampleBufferGetImageBuffer returns a CVImageBufferRef).
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);

//2. Render the stopwatch text into a UIImage, then wrap it in a CIImage.
UIImage *textImage = [self createTextImage];
CIImage *maskImage = [CIImage imageWithCGImage:textImage.CGImage];

//3. Wrap the pixel buffer in a CIImage.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSDictionary *options = [NSDictionary dictionaryWithObject:(__bridge id)colorSpace forKey:kCIImageColorSpace];
CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:options];

//4. Blend the two with CIBlendWithMask.
CIFilter *filter = [CIFilter filterWithName:@"CIBlendWithMask"];
[filter setValue:inputImage forKey:@"inputImage"];
[filter setValue:maskImage forKey:@"inputMaskImage"];
CIImage *outputImage = [filter outputImage];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

//5. Render the result back into the same pixel buffer.
[self.renderContext render:outputImage toCVPixelBuffer:pixelBuffer bounds:[outputImage extent] colorSpace:colorSpace];
CGColorSpaceRelease(colorSpace);

//6. Append the pixel buffer with its presentation timestamp.
[self.pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:timestamp];

      

  • I grab the pixel buffer from the sample buffer.
  • I use basic graphics to draw the text onto a blank UIImage (that is what createTextImage does). I verified this step works by saving the image with the text drawn on it to my photos.
  • I create a CIImage from the pixel buffer.
  • I create a CIFilter for CIBlendWithMask, setting the input image to the CIImage created from the source pixel buffer and the input mask to the CIImage made from the image with the text drawn on it.
  • Finally, I render the filter's output image back into the pixel buffer. The CIContext was created in advance with [CIContext contextWithOptions:nil].
  • After all this, I append the pixel buffer to my pixelBufferAdaptor with the appropriate timestamp.

The video saved at the end of the recording shows no visible changes, i.e. the mask image was never drawn onto the pixel buffers.

Anyone have an idea where I am going wrong? I've been stuck on this for days, any help would be so much appreciated.

EDIT:

- (UIImage *)createTextImage {
    // Draw both labels' text into a transparent, view-sized image.
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 1.0);
    NSAttributedString *timeStamp = [[NSAttributedString alloc] initWithString:self.timeLabel.text attributes:@{NSForegroundColorAttributeName: self.timeLabel.textColor, NSFontAttributeName: self.timeLabel.font}];
    NSAttributedString *countDownString = [[NSAttributedString alloc] initWithString:self.cDownLabel.text attributes:@{NSForegroundColorAttributeName: self.cDownLabel.textColor, NSFontAttributeName: self.cDownLabel.font}];
    [timeStamp drawAtPoint:self.timeLabel.center];
    [countDownString drawAtPoint:self.view.center];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

      



3 answers


Do you want something like the image below?

Instead of CIBlendWithMask, you should use CISourceOverCompositing. Try this:



//4.
CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"];
[filter setValue:maskImage forKey:kCIInputImageKey];
[filter setValue:inputImage forKey:kCIInputBackgroundImageKey];
CIImage *outputImage = [filter outputImage];
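For completeness, here is a sketch of the whole per-frame path with that filter swapped in (self.renderContext and self.pixelBufferAdaptor are the objects from the question; untested, so treat it as a starting point):

```objc
// Grab the frame; CMSampleBufferGetImageBuffer returns a CVImageBufferRef.
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);

CIImage *background = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIImage *textOverlay = [CIImage imageWithCGImage:[self createTextImage].CGImage];

CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"];
[filter setValue:textOverlay forKey:kCIInputImageKey];          // foreground (text)
[filter setValue:background forKey:kCIInputBackgroundImageKey]; // camera frame

// Render the composite back into the same pixel buffer before appending it.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
[self.renderContext render:[filter outputImage]
           toCVPixelBuffer:pixelBuffer
                    bounds:background.extent
                colorSpace:rgb];
CGColorSpaceRelease(rgb);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

[self.pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:timestamp];
```

Note that with CISourceOverCompositing the text image must have transparent pixels wherever the camera frame should show through, which the transparent image from createTextImage already provides.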

      



You can also use CoreGraphics and CoreText to paint directly on top of an existing CVPixelBufferRef if it is RGBA (or on a copy if it is YUV). I have some sample code in this answer: fooobar.com/questions/2230886 / ...
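A minimal sketch of that direct-drawing approach, assuming the AVCaptureVideoDataOutput has been configured to deliver kCVPixelFormatType_32BGRA buffers (the string and attributes here are placeholders):

```objc
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// Wrap the buffer's memory in a CGBitmapContext (BGRA layout).
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                         CVPixelBufferGetWidth(pixelBuffer),
                                         CVPixelBufferGetHeight(pixelBuffer),
                                         8,
                                         CVPixelBufferGetBytesPerRow(pixelBuffer),
                                         rgb,
                                         kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

// Flip to UIKit's top-left origin so string drawing comes out upright.
CGContextTranslateCTM(ctx, 0, CVPixelBufferGetHeight(pixelBuffer));
CGContextScaleCTM(ctx, 1.0, -1.0);
UIGraphicsPushContext(ctx);
[@"00:01.23" drawAtPoint:CGPointMake(20, 20)
          withAttributes:@{NSFontAttributeName: [UIFont boldSystemFontOfSize:32],
                           NSForegroundColorAttributeName: [UIColor whiteColor]}];
UIGraphicsPopContext();

CGContextRelease(ctx);
CGColorSpaceRelease(rgb);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
```

Because this writes into the buffer in place, there is no CoreImage round trip at all, which tends to be the cheapest option per frame.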





I asked Apple DTS about this same problem, as all the approaches I had were either very slow or doing strange things, and they sent me this:

https://developer.apple.com/documentation/avfoundation/avasynchronousciimagefilteringrequest?language=objc

This led me to a quick solution! You can bypass CVPixelBuffer altogether using CIFilters, which are IMHO much easier to work with. So if you don't need to touch CVPixelBuffer directly, this approach will quickly become your new friend.

A combination of CIFilters that composites the source frame with the text image I generated for each frame did the trick.
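For anyone else going this route, here is a rough sketch of the DTS-suggested approach; `asset` is the recorded movie, and textImageForTime: is a hypothetical helper that returns the overlay for a given time as a CIImage:

```objc
AVMutableVideoComposition *videoComposition =
    [AVMutableVideoComposition videoCompositionWithAsset:asset
        applyingCIFiltersWithHandler:^(AVAsynchronousCIImageFilteringRequest *request) {
            // Build the overlay for this frame's time.
            CIImage *text = [self textImageForTime:request.compositionTime];
            // Source-over compositing via the CIImage convenience method.
            CIImage *composited = [text imageByCompositingOverImage:request.sourceImage];
            [request finishWithImage:composited context:nil];
        }];

// Use it for playback or export, e.g.:
// playerItem.videoComposition = videoComposition;
// exportSession.videoComposition = videoComposition;
```

Note this works on an already-recorded asset (playback or export) rather than on the live capture path, which is exactly why it sidesteps the per-frame pixel-buffer handling.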

I hope this helps someone else!







