AVFoundation captureOutput didOutputSampleBuffer Delay

I am using AVFoundation's captureOutput(_:didOutputSampleBuffer:from:) delegate method to extract an image from each camera frame, which is later used to apply a filter.

  // Deliver sample buffers on a dedicated background queue so the main thread stays free.
  self.bufferFrameQueue = DispatchQueue(label: "bufferFrame queue",
                                        qos: .background,
                                        attributes: [],
                                        autoreleaseFrequency: .inherit)

  self.videoDataOutput = AVCaptureVideoDataOutput()
  if let videoDataOutput = self.videoDataOutput, self.session.canAddOutput(videoDataOutput) {
      self.session.addOutput(videoDataOutput)
      videoDataOutput.alwaysDiscardsLateVideoFrames = true
      videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
      videoDataOutput.setSampleBufferDelegate(self, queue: self.bufferFrameQueue)
  }



  func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {

      connection.videoOrientation = .portrait

      let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
      let ciImage = CIImage(cvPixelBuffer: pixelBuffer)

      // Hand the latest frame to the main queue, where the UI reads it.
      DispatchQueue.main.async {
          self.cameraBufferImage = ciImage
      }
  }

      

With the above, self.cameraBufferImage is updated every time the output delivers a new sample buffer.

Then, when the filter button is tapped, I use self.cameraBufferImage like this:

  func filterButtonPressed() {

    if let inputImage = self.cameraBufferImage {

      if let currentFilter = CIFilter(name: "CISepiaTone") {
        currentFilter.setValue(inputImage, forKey: kCIInputImageKey)
        currentFilter.setValue(1, forKey: kCIInputIntensityKey)
        if let output = currentFilter.outputImage {
          // Render the filtered CIImage to a CGImage and display it in a new layer.
          if let cgimg = self.context.createCGImage(output, from: inputImage.extent) {

            self.filterImageLayer = CALayer()
            self.filterImageLayer!.frame = self.imagePreviewView.bounds
            self.filterImageLayer!.contents = cgimg
            self.filterImageLayer!.contentsGravity = kCAGravityResizeAspectFill
            self.imagePreviewView.layer.addSublayer(self.filterImageLayer!)

          }
        }
      }
    }
  }

      

When the method above is called, it grabs the "current" self.cameraBufferImage and applies the filter to it. This works great at normal exposure durations (below about 1/15th of a second).

Problem

When the exposure duration is long, e.g. 1/3 second, applying the filter takes a noticeable amount of time (about 1/3 second). The delay only happens the first time after startup; pressing the button again produces no delay at all.


Thoughts

I understand that with an exposure duration of 1/3 second, didOutputSampleBuffer only delivers a new frame every 1/3 second. But why the initial delay? Shouldn't filterButtonPressed simply grab whatever self.cameraBufferImage holds at that moment rather than wait? A rough timing sketch to check this follows the list below. Possible causes:

  • The dispatch queue?
  • Is the CMSampleBuffer being retained somewhere? (Although there is no explicit CFRetain in Swift 3.)
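
To see where the time actually goes, something like the following rough timing sketch could help. It is not from the project above; FilterTimer, lastFrameTime, frameArrived, and measureRender are names made up for illustration. It logs the age of the last stored frame at button press versus how long the render itself takes:

  import AVFoundation
  import CoreImage
  import QuartzCore

  // Diagnostic sketch: records when the delegate last stored a frame and how long a
  // single createCGImage call takes, to separate "waiting for a new frame" from
  // "slow first render". All names here are hypothetical.
  final class FilterTimer {
      var lastFrameTime: CFTimeInterval = 0   // set from the capture delegate
      let context = CIContext()               // stand-in for self.context

      // Call this at the top of captureOutput(_:didOutputSampleBuffer:from:).
      func frameArrived() {
          lastFrameTime = CACurrentMediaTime()
      }

      // Call this from filterButtonPressed() with the stored CIImage.
      func measureRender(of image: CIImage) {
          let pressed = CACurrentMediaTime()
          print("Frame age at button press: \(pressed - lastFrameTime) s")

          let renderStart = CACurrentMediaTime()
          _ = context.createCGImage(image, from: image.extent)
          print("createCGImage took: \(CACurrentMediaTime() - renderStart) s")
      }
  }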

Update

From the Apple documentation for captureOutput(_:didOutputSampleBuffer:from:):

Delegates receive this message whenever the output captures and outputs a new video frame, decoding or re-encoding it as specified by its videoSettings property. Delegates can use the provided video frame in conjunction with other APIs for further processing.

This method is called on the dispatch queue specified by the output's sampleBufferCallbackQueue property. It is called periodically, so it must be efficient to prevent capture performance problems, including dropped frames.

If you need to reference the CMSampleBuffer object outside of the scope of this method, you must CFRetain it and then CFRelease it when you are finished with it.

To maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. This is frequently the case for uncompressed device native capture, where memory blocks are copied as little as possible. If multiple sample buffers reference such pools of memory for too long, inputs will no longer be able to copy new samples into memory and those samples will be dropped.

If your application is causing samples to be dropped by retaining the provided CMSampleBuffer objects for too long, but it needs access to the sample data for a long period of time, consider copying the data into a new buffer and then releasing the sample buffer (if it was previously retained) so that the memory it references can be reused.
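
In case the buffer-pool warning above applies here, a hedged sketch of the "copy the data into a new buffer" advice could look like this. copiedCIImage is a made-up helper; it assumes the 32BGRA format configured in videoSettings and keeps error handling minimal:

  import Foundation
  import CoreVideo
  import CoreImage

  // Copy a camera pixel buffer into a freshly allocated buffer, so the original
  // can return to the capture pool immediately, then wrap the copy in a CIImage.
  func copiedCIImage(from source: CVPixelBuffer) -> CIImage? {
      let width  = CVPixelBufferGetWidth(source)
      let height = CVPixelBufferGetHeight(source)
      let format = CVPixelBufferGetPixelFormatType(source)

      var copy: CVPixelBuffer?
      guard CVPixelBufferCreate(kCFAllocatorDefault, width, height, format, nil, &copy) == kCVReturnSuccess,
            let destination = copy else { return nil }

      CVPixelBufferLockBaseAddress(source, .readOnly)
      CVPixelBufferLockBaseAddress(destination, [])
      defer {
          CVPixelBufferUnlockBaseAddress(destination, [])
          CVPixelBufferUnlockBaseAddress(source, .readOnly)
      }

      guard let srcBase = CVPixelBufferGetBaseAddress(source),
            let dstBase = CVPixelBufferGetBaseAddress(destination) else { return nil }

      // Copy row by row in case the two buffers use different row padding.
      let srcBytesPerRow = CVPixelBufferGetBytesPerRow(source)
      let dstBytesPerRow = CVPixelBufferGetBytesPerRow(destination)
      for row in 0..<height {
          memcpy(dstBase + row * dstBytesPerRow,
                 srcBase + row * srcBytesPerRow,
                 min(srcBytesPerRow, dstBytesPerRow))
      }

      return CIImage(cvPixelBuffer: destination)
  }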
