Resizing UIImage - Performance Issues
My goal is to capture a photo and display it (as an overlay image) so that it is identical to what is shown in the preview layer.
Working with the 4-inch screen size is fine, as it simply involves resizing the captured image. However, working with the 3.5-inch screen size is more complex, requiring both resizing and cropping.
While I have code that works for both camera positions (front and back), the code for resizing and cropping an image captured by the rear camera has performance issues. I have traced the problem to the large image context used when resizing. The larger context is necessary to preserve Retina image quality; the front camera's resolution is low enough that its images do not reach Retina quality anyway.
```swift
UIGraphicsBeginImageContext(CGSizeMake(width, height)) // width = 640, height = 1138; captured image is 1080 x 1920
image.drawInRect(CGRectMake(0, 0, width, height))
image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
```
I have searched around but cannot find a more efficient way to do this. Can anyone help me overcome this performance issue? Thanks in advance.
It is not entirely clear from your question what your input and output are, or what your real-time requirements are. I will try to cover the main points with some assumptions about what you are doing.
There is a decision you will have to make that is not clear from your question. If you fall behind in processing, do you drop frames, drop quality, or lag the input? Each of these is a valid approach, but eventually you will have to do one of them. If you don't choose, the choice will be made for you (usually input lag, but AVFoundation may start dropping frames for you, depending on where you are doing your processing).
For the specific question about cropping, you probably want CGImageCreateWithImageInRect. This will almost certainly be much faster than your current cropping solution (assuming you can keep up with the resizing, as you suggest).
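As a minimal sketch of that approach (the function name and crop rectangle are illustrative; note that the rect is in the underlying CGImage's pixel coordinates):

```swift
import UIKit

// Crop by referencing a sub-rectangle of the existing bitmap.
// CGImageCreateWithImageInRect does not copy or re-render pixels,
// which is why it is much cheaper than drawing into a new context.
func cropImage(image: UIImage, toRect rect: CGRect) -> UIImage? {
    guard let cropped = CGImageCreateWithImageInRect(image.CGImage, rect) else {
        return nil
    }
    // Preserve the original scale and orientation on the result.
    return UIImage(CGImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}
```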
CGImageCreateWithImageInRect is a special case because it just references a rectangle inside an existing image, and can therefore be very fast. In general, however, you should avoid creating new UIImage objects more often than you need to. Usually you want to work with an underlying CIImage. For example, you can apply a transform to a CIImage to scale it, and crop it with imageByCroppingToRect.
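A sketch of chaining a scale and a crop on a CIImage (the scale factor and crop rect are placeholders, and `capturedImage` is assumed to be your captured UIImage):

```swift
import UIKit
import CoreImage

// Build a lazy "recipe": nothing is rendered by these three lines.
let ciImage = CIImage(CGImage: capturedImage.CGImage!)
let scaled = ciImage.imageByApplyingTransform(CGAffineTransformMakeScale(0.5, 0.5))
let cropped = scaled.imageByCroppingToRect(CGRectMake(0, 0, 640, 1138))

// Rendering happens only here, in a single pass.
let context = CIContext(options: nil)
let output = context.createCGImage(cropped, fromRect: cropped.extent)
```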
A CIImage is lazy: it does not actually create the image until it has to. As the docs say, it is really a "recipe" for an image. You can chain different filters and settings together and then apply them all in one big GPU pass. Moving data to the GPU and back to the CPU is insanely expensive; you want to do it exactly once. And if you fall behind and need to drop a frame, you can discard the CIImage without ever rendering it.
If you have access to the YUV data from the camera (that is, if you are working with a CVPixelBuffer), then CIImage is even more powerful, because it can avoid the RGBA conversion.
CIImage also has the option to turn off color management (which doesn't matter if you are only resizing and cropping, but does matter if you modify colors at all). Turning off color management can be a big win. For real-time work, CoreImage can also run in an EAGLContext and live on the GPU, rather than being copied back and forth to the CPU. If you can use that, you want it.
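A sketch of a CIContext set up along those lines: backed by an EAGLContext so rendering stays on the GPU, with color management disabled via the standard `kCIContextWorkingColorSpace` option (passing NSNull() is how CoreImage expresses "no color space"):

```swift
import CoreImage
import OpenGLES

// GPU-backed context; results can be drawn without a round trip to the CPU.
let eaglContext = EAGLContext(API: .OpenGLES2)
let ciContext = CIContext(EAGLContext: eaglContext, options: [
    kCIContextWorkingColorSpace: NSNull()  // disable color management
])
```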