Using VTCompressionSession as shown in WWDC 2014

There is essentially no documentation for this framework, so I could really use some help here.

Goal: I need H.264 encoding (preferably audio and video together, but video alone is fine and I can spend a few days afterwards getting the audio working) so I can mux it into an MPEG transport stream for streaming.

What I have: I have a capture session that records and outputs sample buffers. The inputs are the rear camera and the built-in microphone.

A few questions:

A. Can the camera output CMSampleBuffers that are already in H.264 format? In the WWDC 2014 session the H.264 buffers come out of a VTCompressionSession, but when writing my captureOutput I see that I already receive a CMSampleBuffer...

B. How do I set up a VTCompressionSession, and how is the session actually used? Some general discussion at this level would help people understand what's going on in this barely documented framework.

The code is here (please ask more if you need it, I just put captureOutput because I don't know how relevant the rest of the code is):

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    println(CMSampleBufferGetFormatDescription(sampleBuffer))
    // Grab the raw pixel buffer and its timestamp for the encoder
    if let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let pixelBuffer = imageBuffer as CVPixelBufferRef
        let timeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        // Do some VTCompressionSession stuff
    }
}
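
For reference, a minimal sketch of what the session setup and the encode call might look like, using the current Swift VideoToolbox API rather than the 2014-era C-style one (the 1280x720 dimensions, the real-time property, and all variable names here are placeholder assumptions, not something from the WWDC session):

```swift
import VideoToolbox
import CoreMedia

var compressionSession: VTCompressionSession?

// The output callback receives the encoded H.264 frames as CMSampleBuffers.
let outputCallback: VTCompressionOutputCallback = { _, _, status, _, sampleBuffer in
    guard status == noErr, let sampleBuffer = sampleBuffer else { return }
    // sampleBuffer now holds encoded H.264 data; package it for the
    // MPEG transport stream here.
}

let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 1280,                       // placeholder dimensions
    height: 720,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: nil,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: outputCallback,
    refcon: nil,
    compressionSessionOut: &compressionSession)

if status == noErr, let session = compressionSession {
    // Favor encoding speed over quality for live streaming
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime,
                         value: kCFBooleanTrue)
}

// Then, inside captureOutput, each pixel buffer would be fed to the session:
// VTCompressionSessionEncodeFrame(session,
//                                 imageBuffer: pixelBuffer,
//                                 presentationTimeStamp: timeStamp,
//                                 duration: .invalid,
//                                 frameProperties: nil,
//                                 sourceFrameRefcon: nil,
//                                 infoFlagsOut: nil)
```

This is only a sketch of the shape of the API, not a tested pipeline.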


Thanks everyone!
