Resizing UIImage - Performance Issues

My goal is to use AVFoundation to capture and display (using an overlay image) a captured image that is identical to the one in the preview layer.

Working with the 4-inch screen size is fine, as it simply involves resizing the captured image. However, working with the 3.5-inch screen size is more complex, requiring both resizing and cropping.

While I have code that works for both camera positions (front and back), the code that resizes and crops an image captured by the rear camera has performance issues. I have narrowed the problem down to the larger image context used when resizing. The larger context is necessary to preserve Retina image quality; the front camera's output is low enough resolution that it isn't Retina quality anyway, so it is not affected.

Given code:

UIGraphicsBeginImageContext(CGSizeMake(width, height))

// where: width = 640, height = 1138
// image dimensions = 1080 x 1920

image.drawInRect(CGRectMake(0, 0, width, height))
image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext() // balance the Begin call so the context isn't leaked


I have searched around but cannot find a more efficient way to do this. Can anyone help me overcome this performance issue? Thanks.



1 answer

It is not entirely clear from your question what your input and output are, or what your real-time requirements are. I will try to cover the main points with some assumptions about what you are doing.

There is a decision that isn't clear from your question but that you need to make: if you fall behind in processing, will you drop frames, reduce quality, or lag the input? Each of these is a valid approach, but eventually you will have to do one of them. If you don't choose, the choice will be made for you (usually input lag, but AVFoundation may start dropping frames for you, depending on where you are doing your processing).

For the specific question about cropping, you probably want CGImageCreateWithImageInRect. It will almost certainly be much faster than your current cropping solution (assuming you can do the resize in time, which you suggest you can).
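A minimal sketch of that approach, in the same old-Swift style as the question's code. The function name and the crop rect are placeholders; the rect is in pixel coordinates of the backing CGImage:

```swift
import UIKit

// Sketch: crop by looking into the existing image's backing CGImage.
// No pixel data is copied; the new CGImage references the original's storage,
// which is why this is so much faster than redrawing into a new context.
func crop(image: UIImage, toRect rect: CGRect) -> UIImage? {
    guard let croppedCGImage = CGImageCreateWithImageInRect(image.CGImage, rect) else {
        return nil
    }
    return UIImage(CGImage: croppedCGImage,
                   scale: image.scale,
                   orientation: image.imageOrientation)
}
```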


CGImageCreateWithImageInRect is a special case because it just looks into an existing image, and can therefore be very fast. In general, though, you should avoid creating new CGImage or UIImage objects more than you need to. Usually you want to work with an underlying CIImage instead. For example, you can transform a CIImage with imageByApplyingTransform to scale it, and imageByCroppingToRect to crop it. CIImage avoids creating actual images until it has to. As the docs say, it is really a "recipe" for an image. You can chain together filters and operations and then apply them all in one big GPU transfer. Moving data to the GPU and back to the CPU is incredibly expensive; you want to do it exactly once. And if you get behind and need to drop a frame, you can throw away the CIImage without ever rendering it.
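A sketch of building such a "recipe" (old-Swift style, matching the question's code). The scale factor and crop rect are placeholders; nothing renders here, and work happens only when a CIContext later draws the result in one GPU pass:

```swift
import CoreImage

// Build a deferred scale + crop recipe from a camera pixel buffer.
// `pixelBuffer` would come from the capture output callback.
func makeRecipe(pixelBuffer: CVPixelBuffer) -> CIImage {
    let image = CIImage(CVPixelBuffer: pixelBuffer)
    // Chain operations; no pixels are touched yet.
    let scaled = image.imageByApplyingTransform(CGAffineTransformMakeScale(0.5, 0.5))
    return scaled.imageByCroppingToRect(CGRectMake(0, 0, 640, 1138))
}
```

If you need to drop a frame, simply discard the returned CIImage; since nothing has been rendered, no GPU or CPU work is wasted.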

If you have access to the YUV data from the camera (i.e., if you are working with a CVPixelBuffer), then CIImage is even more powerful, because it can avoid the RGBA conversion. CIImage also has the option to turn off color management (which doesn't matter if you are just resizing and cropping, but does matter if you modify color at all). Turning off color management can be a big win. For real-time work, Core Image can also run in an EAGLContext and live on the GPU, rather than being copied back and forth to the CPU. If you can use that, you want it.
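A sketch of setting that up (old-Swift style). Passing NSNull for kCIContextWorkingColorSpace is what disables color management:

```swift
import CoreImage
import OpenGLES

// A GPU-backed CIContext with color management turned off.
let eaglContext = EAGLContext(API: .OpenGLES2)
let ciContext = CIContext(EAGLContext: eaglContext,
                          options: [kCIContextWorkingColorSpace: NSNull()])
// Rendering through this context stays on the GPU. Create it once and
// reuse it for every frame; contexts are expensive to build.
```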

Read the Core Image Programming Guide first to get a sense of what you can do. Then I recommend "Core Image Effects and Techniques" from WWDC 2013 and "Advances in Core Image" from WWDC 2014.


