How to add a camera preview to three custom UIViews in iOS (Swift)

I need to create an application with a video-processing function.

My requirement is to create 3 views with a camera preview. The first view should display the original video, the second should display a flipped copy of the original capture, and the third should display the original with inverted colors.

I started developing against this requirement. First, I created the 3 views and the required camera-capture properties:

    @IBOutlet weak var captureView: UIView!
    @IBOutlet weak var flipView: UIView!
    @IBOutlet weak var InvertView: UIView!

    //Camera capture required properties
    var videoDataOutput: AVCaptureVideoDataOutput!
    var videoDataOutputQueue: DispatchQueue!
    var previewLayer:AVCaptureVideoPreviewLayer!
    var captureDevice : AVCaptureDevice!
    let session = AVCaptureSession()
    var replicationLayer: CAReplicatorLayer!



Then I implemented AVCaptureVideoDataOutputSampleBufferDelegate to start the camera session.

extension ViewController:  AVCaptureVideoDataOutputSampleBufferDelegate{
    func setupAVCapture(){
        session.sessionPreset = AVCaptureSessionPreset640x480
        guard let device = AVCaptureDevice
            .defaultDevice(withDeviceType: .builtInWideAngleCamera,
                           mediaType: AVMediaTypeVideo,
                           position: .back) else{
                            return
        }
        captureDevice = device
        beginSession()
    }

    func beginSession(){
        let deviceInput: AVCaptureDeviceInput
        do {
            deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        } catch {
            print("error: \(error.localizedDescription)")
            return
        }
        if self.session.canAddInput(deviceInput) {
            self.session.addInput(deviceInput)
        }

        videoDataOutput = AVCaptureVideoDataOutput()
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue")
        videoDataOutput.setSampleBufferDelegate(self, queue:self.videoDataOutputQueue)
        if session.canAddOutput(self.videoDataOutput){
            session.addOutput(self.videoDataOutput)
        }
        videoDataOutput.connection(withMediaType: AVMediaTypeVideo).isEnabled = true

        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
        self.previewLayer.frame = self.captureView.bounds
        self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspect

        self.replicationLayer = CAReplicatorLayer()
        self.replicationLayer.frame = self.captureView.bounds
        self.replicationLayer.instanceCount = 1
        self.replicationLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.captureView.bounds.size.height / 1, 0.0)

        self.replicationLayer.addSublayer(self.previewLayer)
        self.captureView.layer.addSublayer(self.replicationLayer)
        self.flipView.layer.addSublayer(self.replicationLayer)
        self.InvertView.layer.addSublayer(self.replicationLayer)

        session.startRunning()
    }

    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        // do stuff here
    }

    // clean up AVCapture
    func stopCamera(){
        session.stopRunning()
    }

}


Here I used CAReplicatorLayer to show the video capture in the 3 views, with self.replicationLayer.instanceCount set to 1. The result was as follows:

[screenshot]

If I set self.replicationLayer.instanceCount to 3, I got this result instead:

[screenshot]

So, how can I show the video capture in 3 different views? And please give me some ideas for converting the original video into flipped and color-inverted versions. Thanks in advance.
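For context on why the CAReplicatorLayer attempt behaves this way: a CALayer instance can belong to only one superlayer, so the three addSublayer calls leave replicationLayer attached only to the last view. One way to mirror the raw preview into all three views is to give each view its own AVCaptureVideoPreviewLayer driven by the shared session. A minimal sketch (the helper name is mine; this shows the unfiltered feed only, so the flip and invert effects still need per-frame processing):

```swift
// Sketch: one AVCaptureVideoPreviewLayer per view, all driven by the same
// AVCaptureSession. A single layer cannot be a sublayer of several views.
func addPreviewLayers() {
    let targetViews: [UIView] = [captureView, flipView, InvertView]
    for view in targetViews {
        let preview = AVCaptureVideoPreviewLayer(session: session)
        preview?.frame = view.bounds
        preview?.videoGravity = AVLayerVideoGravityResizeAspect
        view.layer.addSublayer(preview!)
    }
}
```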


1 answer


Finally, I found an answer with the help of JohnnySlagle/Multiple-Camera-Feeds.

I created three views:

@property (weak, nonatomic) IBOutlet UIView *video1;
@property (weak, nonatomic) IBOutlet UIView *video2;
@property (weak, nonatomic) IBOutlet UIView *video3;


Then I changed setupFeedViews slightly:



- (void)setupFeedViews {
    NSUInteger numberOfFeedViews = 3;

    for (NSUInteger i = 0; i < numberOfFeedViews; i++) {
        VideoFeedView *feedView = [self setupFeedViewWithFrame:CGRectMake(0, 0, self.video1.frame.size.width, self.video1.frame.size.height)];
        feedView.tag = i+1;
        switch (i) {
            case 0:
                [self.video1 addSubview:feedView];
                break;
            case 1:
                [self.video2 addSubview:feedView];
                break;
            case 2:
                [self.video3 addSubview:feedView];
                break;
            default:
                break;
        }
        [self.feedViews addObject:feedView];
    }
}


Then the filters are applied in the AVCaptureVideoDataOutputSampleBufferDelegate callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);

    // update the video dimensions information
    _currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];

    CGRect sourceExtent = sourceImage.extent;

    CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;


    for (VideoFeedView *feedView in self.feedViews) {
        CGFloat previewAspect = feedView.viewBounds.size.width  / feedView.viewBounds.size.height;
        // we want to maintain the aspect ratio of the screen size, so we clip the video image
        CGRect drawRect = sourceExtent;
        if (sourceAspect > previewAspect) {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
            drawRect.size.width = drawRect.size.height * previewAspect;
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
            drawRect.size.height = drawRect.size.width / previewAspect;
        }
        [feedView bindDrawable];

        if (_eaglContext != [EAGLContext currentContext]) {
            [EAGLContext setCurrentContext:_eaglContext];
        }

        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // set the blend mode to "source over" so that CI will use that
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // This is necessary for non-power-of-two textures
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        if (feedView.tag == 1) {
            if (sourceImage) {
                [_ciContext drawImage:sourceImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        } else if (feedView.tag == 2) {
            sourceImage = [sourceImage imageByApplyingTransform:CGAffineTransformMakeScale(1, -1)];
            sourceImage = [sourceImage imageByApplyingTransform:CGAffineTransformMakeTranslation(0, sourceExtent.size.height)];
            if (sourceImage) {
                [_ciContext drawImage:sourceImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        } else {
            CIFilter *effectFilter = [CIFilter filterWithName:@"CIColorInvert"];
            [effectFilter setValue:sourceImage forKey:kCIInputImageKey];
            CIImage *invertImage = [effectFilter outputImage];
            if (invertImage) {
                [_ciContext drawImage:invertImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        }
        [feedView display];
    }
}
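The drawRect computation above is plain geometry: it center-crops the camera frame so that its aspect ratio matches each feed view. The same arithmetic as a standalone Swift helper (a sketch; the function name is mine, not from the project):

```swift
import CoreGraphics

// Center-crop `source` so that its aspect ratio (width / height) matches
// `previewAspect`, trimming whichever dimension overflows.
func centerCrop(_ source: CGRect, toAspect previewAspect: CGFloat) -> CGRect {
    let sourceAspect = source.size.width / source.size.height
    var drawRect = source
    if sourceAspect > previewAspect {
        // Frame is wider than the view: keep the full height, crop the width.
        drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0
        drawRect.size.width = drawRect.size.height * previewAspect
    } else {
        // Frame is taller than the view: keep the full width, crop the height.
        drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0
        drawRect.size.height = drawRect.size.width / previewAspect
    }
    return drawRect
}
```

For example, a 640x480 frame cropped to a square view (aspect 1.0) yields the rect (80, 0, 480, 480): full height kept, width center-cropped.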


That's it. This successfully meets my requirements.
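For anyone working in Swift rather than Objective-C, the two per-view effects reduce to a vertical-flip affine transform and the built-in CIColorInvert filter. A sketch of the Core Image steps (Swift 3 API, matching the question's code; the function name is mine):

```swift
import CoreImage

// Produce the flipped and color-inverted variants of one camera frame.
func flippedAndInverted(_ sourceImage: CIImage) -> (flipped: CIImage, inverted: CIImage) {
    // Vertical flip: scale Y by -1, then translate back into positive coordinates.
    var flipped = sourceImage.applying(CGAffineTransform(scaleX: 1, y: -1))
    flipped = flipped.applying(CGAffineTransform(translationX: 0, y: sourceImage.extent.size.height))

    // Color inversion via the built-in CIColorInvert filter.
    let invertFilter = CIFilter(name: "CIColorInvert")!
    invertFilter.setValue(sourceImage, forKey: kCIInputImageKey)
    let inverted = invertFilter.outputImage ?? sourceImage

    return (flipped, inverted)
}
```

Each result can then be drawn into its view's context with the same CIContext draw call used in the Objective-C answer.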
