iOS Swift: Trying to write an image pixel buffer to create a video; finishWritingWithCompletionHandler is never called and the output video contains zero bytes

I am trying to write just two frames from a static image to build a video, and I have been experimenting with the timing a bit. Everything seems to run to completion, but finishWritingWithCompletionHandler is never called ("finished writing..." never prints), only a zero-byte .mp4 file is generated, and no errors are reported. I can't figure out why. Here is the code I'm using:

func createBackgroundVideo(CompletionHandler: (path: String)->Void) {

    var maybeError: NSError?
    let fileMgr = NSFileManager.defaultManager()
    let docDirectory = NSHomeDirectory().stringByAppendingPathComponent("Documents")
    let videoOutputPath = docDirectory.stringByAppendingPathComponent(BgVideoName)

    if (!fileMgr.removeItemAtPath(videoOutputPath, error: &maybeError)) {
        NSLog("Umable to delete file: %@", maybeError!.localizedDescription)
    }

    println(videoOutputPath)

    let videoWriter = AVAssetWriter(
        URL: NSURL(fileURLWithPath: videoOutputPath),
        fileType: AVFileTypeQuickTimeMovie,
        error: &maybeError
    )

    var videoSettings = [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: NSNumber(float: Float(videoWidth)),
        AVVideoHeightKey: NSNumber(float: Float(videoHeight))
    ]

    var avAssetInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
    avAssetInput.expectsMediaDataInRealTime = true

    var adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: avAssetInput, sourcePixelBufferAttributes: nil)

    videoWriter.addInput(avAssetInput)
    videoWriter.startWriting()
    videoWriter.startSessionAtSourceTime(kCMTimeZero)

    var frameCount: Int64 = 0;
    var buffer: CVPixelBufferRef

    //buffer = PixelBuffer.pixelBufferFromCGImage2(self.bgImage.CGImage, andSize: CGSizeMake(videoWidth, videoHeight)).takeUnretainedValue()

    for i in 1...2 {
        buffer = PixelBuffer.pixelBufferFromCGImage2(self.bgImage.CGImage, andSize: CGSizeMake(videoWidth, videoHeight)).takeUnretainedValue()
        var appendOk = false
        var retries: Int = 0

        while (!appendOk && retries < 30) {
            if (adaptor.assetWriterInput.readyForMoreMediaData) {
                let frameTime = CMTimeMake(frameCount, 1);
                appendOk = adaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime)
                if (!appendOk) {
                    println("some erorr occurred", videoWriter.error)
                } else {
                    println("pixel written")
                }
            } else {
                println("adaptor is not ready....")
                NSThread.sleepForTimeInterval(0.1)
            }
            retries++
        }

        if (!appendOk) {
            println("Error appending image....")
        }

        frameCount++
    }

    avAssetInput.markAsFinished()
    videoWriter.finishWritingWithCompletionHandler({() -> Void in
        println("finished writing...")
        CompletionHandler(path: videoOutputPath)
    })
}


The pixel buffer is created by this method written in Objective-C (I added the header to the bridging header, and that seems to work fine):

+ (CVPixelBufferRef) pixelBufferFromCGImage2: (CGImageRef) image andSize:(CGSize) size {

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          size.width,
                                          size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef) options,
                                          &pxbuffer);
    if (status != kCVReturnSuccess){
        NSLog(@"Failed to create pixel buffer");
    }

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                 size.height, 8, 4*size.width, rgbColorSpace,
                                                 kCGImageAlphaPremultipliedFirst);

    float offsetY = size.height / 2 - CGImageGetHeight(image) / 2;
    float offsetX = size.width / 2 - CGImageGetWidth(image) / 2;

    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(offsetX, offsetY, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}


Thanks for reading.

1 answer


Your video settings dictionary may be incomplete. Try adding the additional information for the clean aperture and pixel aspect ratio atoms:

var videoCleanApertureSettings = [AVVideoCleanApertureWidthKey:Int(self.width),
                                 AVVideoCleanApertureHeightKey:Int(self.height),
                       AVVideoCleanApertureHorizontalOffsetKey:0,
                         AVVideoCleanApertureVerticalOffsetKey:0]

var videoAspectRatioSettings = [AVVideoPixelAspectRatioHorizontalSpacingKey:1,
                                  AVVideoPixelAspectRatioVerticalSpacingKey:1]

var codecSettings = [AVVideoCleanApertureKey:videoCleanApertureSettings,
                  AVVideoPixelAspectRatioKey:videoAspectRatioSettings]

var videoSettings = [AVVideoCodecKey:AVVideoCodecH264,
     AVVideoCompressionPropertiesKey:codecSettings,
                     AVVideoWidthKey:Int(self.width),
                    AVVideoHeightKey:Int(self.height)]


You start the session at a zero timestamp. That is fine:

[self.videoWriter startSessionAtSourceTime:kCMTimeZero];


However, the timestamps of your video frames may not be far enough apart for anything to be visible. If you want the images to be displayed for a few seconds each, you can do something like this:

int64_t newFrameNumber = (int64_t)(presentationTimeInSeconds * 60.);
CMTime frameTime = CMTimeMake(newFrameNumber, 60);


Using 60 as the timescale lets you work in seconds with good resolution.
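
In Swift, the same idea might look like this (a sketch; presentationTimeInSeconds is a hypothetical Double you would track per frame):

// Convert a time in seconds into a CMTime with a timescale of 60.
let newFrameNumber = Int64(presentationTimeInSeconds * 60.0)
let frameTime = CMTimeMake(newFrameNumber, 60)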

To create a live slideshow, you can use NSDate to compute the timestamps:

int64_t newFrameNumber = (int64_t)(fabs([self.videoStartDate timeIntervalSinceNow]) * 60.);


where self.videoStartDate is an [NSDate date] value that you set immediately after starting the video.

CMTime tells the decoder when an image is displayed, not how long it should be displayed. You start with a frameCount of 0, which tells the decoder to render the first image immediately. You might try starting at 1 instead, so that the video shows the first image a little later.
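
For example, a sketch that combines this with the timescale-of-60 idea above (buffer, adaptor and videoWriter as in the question's code); starting frameCount at 1 gives every image a full second on screen:

var frameCount: Int64 = 1   // start at 1 instead of 0
for i in 1...2 {
    // one second per frame, expressed with a timescale of 60
    let frameTime = CMTimeMake(frameCount * 60, 60)
    if !adaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime) {
        println("append failed: \(videoWriter.error)")
    }
    frameCount++
}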



If you used startSessionAtSourceTime, then you must end the video with endSessionAtSourceTime before you call finishWritingWithCompletionHandler, otherwise the completion handler may never be called. Pass the timestamp of the last frame to endSessionAtSourceTime.
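
A minimal sketch using the question's variable names:

avAssetInput.markAsFinished()
// frameCount is 2 after the loop, i.e. the time just past the last frame
videoWriter.endSessionAtSourceTime(CMTimeMake(frameCount, 1))
videoWriter.finishWritingWithCompletionHandler({
    println("finished writing...")
})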

You can also try the deprecated API from Apple to see whether this is a bug. After marking the input as finished, call

videoWriter.finishWriting()

instead of finishWritingWithCompletionHandler, and wait a little until the file has been written to disk (e.g. using a dispatch queue):

int64_t delayInSeconds = 1;
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, delayInSeconds * NSEC_PER_SEC);
dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
    // call your completion handler after the file has been written
});


Here is the Swift version:

let delayInSeconds:Double = 0.5
let popTime = dispatch_time(DISPATCH_TIME_NOW, Int64(delayInSeconds * Double(NSEC_PER_SEC)))
dispatch_after(popTime, dispatch_get_main_queue(), {

   println("finished writing...")
   CompletionHandler(path: videoOutputPath)
})


Perhaps your writer instance no longer exists by the time the block would run. (The block is called asynchronously, but you declared the videoWriter locally inside your function, so ARC may free the object before the completion handler can be called.) Declare the writer outside the function, e.g. as a property, to fix this issue.
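
For example (a sketch; VideoBuilder is a hypothetical class name):

class VideoBuilder {
    // Strong references at class scope, so ARC keeps the writer and
    // input alive until the asynchronous completion handler has run.
    var videoWriter: AVAssetWriter!
    var avAssetInput: AVAssetWriterInput!

    func createBackgroundVideo(CompletionHandler: (path: String) -> Void) {
        // ...create the writer and input as in the question, but assign
        // them to self.videoWriter and self.avAssetInput instead of locals
    }
}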

Hint:

Keep your CGColorSpace in memory (i.e. store it in a class or static variable), because CGColorSpaceCreateDeviceRGB() takes a long time to initialize. Creating the color space just once, before you encode the video, will dramatically speed up your application!
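
For example, a sketch of caching it on the Swift side (PixelBufferFactory is a hypothetical name; inside the Objective-C method, a function-level static variable would achieve the same thing):

struct PixelBufferFactory {
    // Created once on first access, then reused for every frame.
    static let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
}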
