iOS Screenshot / App Recording Techniques

I have a rather visually complex application with a basic UIViewController and several UIViews (subclassed and extended by me) on it. I periodically present UIAlertViews and UIPopoverControllers.

I'm working on a video recording solution so that when users work through the app, it records what happens for later analysis.

I have a partially working solution, but it is very slow (it can't capture more than 1 frame per second), has some kinks (images currently come out rotated and skewed, but I think I can fix that), and it's not my idea of a perfect solution.

I moved away from that line of thinking and tried a solution that uses UIGraphicsGetImageFromCurrentImageContext(), but it keeps giving me nil images, even when called from drawRect:.

I don't want to have to keep calling drawRect: just to get a screenshot! I don't want to initiate any additional drawing; I just want to capture what's already on the screen.

I'm happy to post the code I'm using, but it doesn't work yet. Does anyone know of a good way to do what I'm looking for?

The only solution I found doesn't completely work for me as it never grabs UIAlertViews and other overlaid views.
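For reference, the solution I'm referring to is essentially the standard layer-rendering snapshot, roughly like this (sketched from memory, purely illustrative):

// Renders a view's layer into an image context (requires <QuartzCore/QuartzCore.h>).
// This captures my own view hierarchy fine, but never includes UIAlertViews,
// popovers, or anything shown in a separate window overlaid on top.
- (UIImage *)snapshotOfView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}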

Any help?

Thanks!


+3




4 answers


I was unable to do full size live video encoding. However, as an alternative, consider this.

Instead of recording video frames, record the user's actions (with their timestamps) as they occur. Then, when you want to replay the session, simply perform the same actions again. You already have the code for them, because you execute it in "real life".

All you are doing is replaying the same actions with the same relative timing.
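A minimal sketch of the idea (my illustration, not code from this answer; the class and the performRecordedAction: selector are placeholders):

// One entry per user action, captured as it happens.
@interface RecordedAction : NSObject
@property (nonatomic, copy)   NSString      *name;       // e.g. @"tappedPlayButton"
@property (nonatomic, retain) NSDictionary  *parameters; // whatever the action needs
@property (nonatomic, assign) CFAbsoluteTime timestamp;  // CFAbsoluteTimeGetCurrent() at capture
@end

@implementation RecordedAction
@end

// Replay: perform each recorded action after the same relative delay.
- (void)replayActions:(NSArray *)actions
{
    if ([actions count] == 0) return;
    RecordedAction *first = [actions objectAtIndex:0];
    for (RecordedAction *action in actions) {
        NSTimeInterval delay = action.timestamp - first.timestamp;
        [self performSelector:@selector(performRecordedAction:)  // placeholder: re-runs the app's own handler
                   withObject:action
                   afterDelay:delay];
    }
}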

EDIT



If you want to try the recording approach, here's what I did (mind you, I eventually abandoned it... it was an experiment in progress, so take it only as an example of how I approached it... nothing polished or production-ready). I was able to record live audio/video at 640x360, but that resolution was too low for me. It looked fine on the iPad, but awful when I moved the video to my Mac and watched it there.

I ran into problems at higher resolutions. I adapted the bulk of this code from the RosyWriter sample project. Here are the basic routines for setting up the asset writer, starting the recording, and adding a UIImage to the video stream.

Good luck.

CGSize const VIDEO_SIZE = { 640, 360 };

- (void) startRecording
{
    dispatch_async(movieWritingQueue, ^{
        NSLog(@"startRecording called in state 0x%04x", state);
        if (state != STATE_IDLE) return;
        state = STATE_STARTING_RECORDING;
        NSLog(@"startRecording changed state to 0x%04x", state);

        NSError *error = nil;
        //assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeQuickTimeMovie error:&error];
        assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeMPEG4 error:&error];
        if (!assetWriter) {
            [self showError:error];
        }
        [self removeFile:movieURL];
        [self resumeCaptureSession];
        [self.delegate recordingWillStart];
    }); 
}


// TODO: this is where we write an image into the movie stream...
- (void) writeImage:(UIImage*)inImage
{
    static CFTimeInterval const minInterval = 1.0 / 10.0;

    static CFAbsoluteTime lastFrameWrittenWallClockTime;
    CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
    CFTimeInterval timeBetweenFrames = thisFrameWallClockTime - lastFrameWrittenWallClockTime;
    if (timeBetweenFrames < minInterval) return;

    // Not really accurate, but we just want to limit the rate we try to write frames...
    lastFrameWrittenWallClockTime = thisFrameWallClockTime;

    dispatch_async(movieWritingQueue, ^{
        if ( !assetWriter ) return;

        if ((state & STATE_STARTING_RECORDING) && !(state & STATE_MASK_VIDEO_READY)) {
            if ([self setupAssetWriterImageInput:inImage]) {
                [self videoIsReady];
            }
        }
        if (state != STATE_RECORDING) return;
        if (assetWriter.status != AVAssetWriterStatusWriting) return;

        CGImageRef cgImage = CGImageCreateCopy([inImage CGImage]);
        if (assetWriterVideoIn.readyForMoreMediaData) {
            CVPixelBufferRef pixelBuffer = NULL;

            // Resize the original image...
            if (!CGSizeEqualToSize(inImage.size, VIDEO_SIZE)) {
                // Build a context that has the same dimensions as the new size
                CGRect newRect = CGRectIntegral(CGRectMake(0, 0, VIDEO_SIZE.width, VIDEO_SIZE.height));
                CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                            newRect.size.width,
                                                            newRect.size.height,
                                                            CGImageGetBitsPerComponent(cgImage),
                                                            0,
                                                            CGImageGetColorSpace(cgImage),
                                                            CGImageGetBitmapInfo(cgImage));

                // Rotate and/or flip the image if required by its orientation
                //CGContextConcatCTM(bitmap, transform);

                // Set the quality level to use when rescaling
                CGContextSetInterpolationQuality(bitmap, kCGInterpolationHigh);

                // Draw into the context; this scales the image
                //CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);
                CGContextDrawImage(bitmap, newRect, cgImage);

                // Get the resized image from the context and a UIImage
                CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
                CGContextRelease(bitmap);
                CGImageRelease(cgImage);
                cgImage = newImageRef;
            }

            CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));

            CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, self.assetWriterPixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
            if (status != kCVReturnSuccess) {
                // Could not get a buffer from the pool; bail out before touching a NULL buffer
                NSLog(@"Error creating pixel buffer:  status=%d", status);
                CFRelease(image);
                CGImageRelease(cgImage);
                return;
            }
            // set image data into pixel buffer
            CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
            uint8_t* destPixels = CVPixelBufferGetBaseAddress(pixelBuffer);

            // Danger, Will Robinson!!!!!  USE_BLOCK_IN_FRAME warning...
            CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels);

            if(status == 0){
                //CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
                CFTimeInterval elapsedTime = thisFrameWallClockTime - firstFrameWallClockTime;
                CMTime presentationTime = CMTimeAdd(firstBufferTimeStamp, CMTimeMake(elapsedTime * TIME_SCALE, TIME_SCALE));
                BOOL success = [self.assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];


                if (!success)
                    NSLog(@"Warning:  Unable to write buffer to video");
            }

            //clean up
            CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
            CVPixelBufferRelease( pixelBuffer );
            CFRelease(image);
            CGImageRelease(cgImage);
        } else {
            NSLog(@"Not ready for video data");
        }
    });
}


-(BOOL) setupAssetWriterImageInput:(UIImage*)image
{
    NSDictionary* videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
                                           [NSNumber numberWithDouble:1024.0*1024.0], AVVideoAverageBitRateKey,
                                           nil ];

    NSDictionary* videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   //[NSNumber numberWithInt:image.size.width], AVVideoWidthKey,
                                   //[NSNumber numberWithInt:image.size.height], AVVideoHeightKey,
                                   [NSNumber numberWithInt:VIDEO_SIZE.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:VIDEO_SIZE.height], AVVideoHeightKey,

                                   videoCompressionProps, AVVideoCompressionPropertiesKey,
                                   nil];
    NSLog(@"videoSettings: %@", videoSettings);

    assetWriterVideoIn = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
    NSParameterAssert(assetWriterVideoIn);
    assetWriterVideoIn.expectsMediaDataInRealTime = YES;
    NSDictionary* bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                      [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];

    self.assetWriterPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoIn sourcePixelBufferAttributes:bufferAttributes];

    //add input
    if ([assetWriter canAddInput:assetWriterVideoIn]) {
        [assetWriter addInput:assetWriterVideoIn];
    }
    else {
        NSLog(@"Couldn't add asset writer video input.");
        return NO;
    }

    return YES;
}

      

+2




You might get better results using UIGetScreenImage() in conjunction with AVAssetWriter to save an H.264 .mp4 file.

Add an AVAssetWriterInput to your asset writer and call:

- (BOOL)appendSampleBuffer:(CMSampleBufferRef)sampleBuffer

      

at a regular interval (perhaps you can schedule a timer on your main thread), creating a sample buffer from the UIImage returned by UIGetScreenImage().
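As a rough illustration of that setup (UIGetScreenImage() has no public header, so it must be declared manually; the selector names here are placeholders):

// Private API -- declare it yourself, there is no header for it.
CGImageRef UIGetScreenImage(void);

- (void)startScreenCapture
{
    // ~10 fps; adjust the interval to taste.
    [NSTimer scheduledTimerWithTimeInterval:0.1
                                     target:self
                                   selector:@selector(captureScreen:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)captureScreen:(NSTimer *)timer
{
    CGImageRef screen = UIGetScreenImage();   // includes alerts, popovers, the keyboard...
    // ...wrap `screen` in a CVPixelBufferRef (see the sketch below) and append it
    // to the writer input via the pixel buffer adaptor.
    CGImageRelease(screen);
}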



An AVAssetWriterInputPixelBufferAdaptor can also be helpful here, with this method:

- (BOOL)appendPixelBuffer:(CVPixelBufferRef)pixelBuffer withPresentationTime:(CMTime)presentationTime

      

You can look at this question for how to convert a CGImageRef to a CVPixelBufferRef.
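For reference, one way to do that conversion (my own sketch, not the code from the linked question) is to draw the CGImage into a bitmap context backed by the pixel buffer's memory:

// Creates a 32ARGB CVPixelBuffer and draws the CGImage into it.
// The caller owns the result and releases it with CVPixelBufferRelease().
static CVPixelBufferRef CreatePixelBufferFromCGImage(CGImageRef image)
{
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_32ARGB, NULL, &pixelBuffer);
    if (status != kCVReturnSuccess) return NULL;

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                 width, height, 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace, kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer;
}

That pixel buffer can then be handed to appendPixelBuffer:withPresentationTime: on the adaptor.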

Note that UIGetScreenImage() is a private API, but Apple has said its use is allowed: https://devforums.apple.com/message/149553#149553

+1




I think you should take a big step back and think about the end effect you are trying to achieve. I don't think continuing down this path will lead to much success. If your ultimate goal is simply analytics about the actions your users are taking, there are quite a few iOS-focused analytics tools at your disposal, and you should be capturing information that is more useful than a raw screen capture. Unfortunately, since your question is focused on the frame-rate issues you found, I can't tell what your ultimate goal is.

I can't recommend using private APIs or trying to write screen data in real time. It's just too much information to process. This comes from a game-development veteran who has pushed iOS devices to their limits. That said, if you do come up with a workable solution (probably something very low-key involving Core Graphics and highly compressed images), I would love to hear about it.

+1




If you're targeting the iPhone 4S / iPad 2 (or newer), then AirPlay Mirroring might do what you need. There is a Mac app called Reflection that receives AirPlay content and can even record a .mov file directly. This may not be feasible for supporting a diverse set of users or locations, but it was helpful to me.

+1

