PIL vs python-GD for crop and resize

I am creating custom images which I later convert into an image pyramid for Seadragon AJAX. The images and the image pyramid are created with PIL. It currently takes a couple of hours to create the images and pyramid for about 100 images with a combined width and height of roughly 32,000,000 by 1,000 pixels (yes, the image is very long and narrow). Performance is roughly the same as the other algorithm I've tried (i.e. deepzoom.py). I plan to see whether python-gd performs better, since most of its functionality is coded in C (from the GD library). I would expect a significant performance increase, but I'm curious to hear others' opinions. In particular, cropping and resizing (with Image.ANTIALIAS) is slow in PIL. Would this be noticeably faster with python-GD?
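For reference, the slow step is the usual PIL crop-and-resize pattern, roughly like this (file names and tile sizes below are only illustrative, not my actual values):

from PIL import Image

# Cut one tile out of the long source strip (coordinates are made up here).
src = Image.open("strip.png")
tile = src.crop((0, 0, 256, 256))

# The high-quality downsampling filter is the slow call.
tile = tile.resize((128, 128), Image.ANTIALIAS)
tile.save("tile_0_0.png")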

Thanks in advance for your comments and suggestions.

EDIT: The performance difference between PIL and python-GD seems to be minimal. I am refactoring my code to reduce bottlenecks and to add multiprocessing support. I have tested Python's multiprocessing module, and the results are encouraging.
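Roughly, the idea is to split the strip into independent slices and hand each slice to a worker process. A minimal sketch of that, with a placeholder render_tiles() function standing in for my real tiling code:

from multiprocessing import Pool

def render_tiles(column_range):
    # Placeholder: crop/resize the tiles for this horizontal slice of the
    # big image and write them to disk.
    start, stop = column_range
    return stop - start

if __name__ == "__main__":
    # One task per 100,000-pixel-wide slice of the 32,000,000-pixel strip.
    chunks = [(x, x + 100000) for x in range(0, 32000000, 100000)]
    pool = Pool()
    counts = pool.map(render_tiles, chunks)
    pool.close()
    pool.join()
    print(sum(counts), "columns processed")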

2 answers


PIL itself is mostly written in C.



Antialiasing is what's slow. What happens to the speed when you turn antialiasing off?
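The filter is a one-line change, so it is easy to time both variants (sizes here are arbitrary):

from PIL import Image

img = Image.open("tile.png")

# High-quality but slow downsampling (what the question uses).
slow = img.resize((128, 128), Image.ANTIALIAS)

# Cheaper filter for comparison; quality drops, but the call is much faster.
fast = img.resize((128, 128), Image.NEAREST)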

VIPS includes a fast deepzoom creator. I timed deepzoom.py and on my machine I see:

$ time ./wtc.py 
real    0m29.601s
user    0m29.158s
sys     0m0.408s
peak RES 450mb

where wtc.jpg is a 10,000 x 10,000 pixel RGB JPEG image and wtc.py uses these settings.



VIPS is about three times faster and needs a quarter of the memory:

$ time vips dzsave wtc.jpg wtc --overlap 2 --tile-size 128 --suffix .png[compression=0]
real    0m10.819s
user    0m37.084s
sys     0m15.314s
peak RES 100mb

I'm not sure why the sys time is so much higher.
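If you would rather drive this from Python than from the shell, the pyvips binding exposes the same deepzoom writer; a sketch, assuming pyvips is installed (the options mirror the CLI flags above):

import pyvips

# Sequential access lets vips stream the source image instead of decoding
# it all into memory, which is where the low peak RES comes from.
image = pyvips.Image.new_from_file("wtc.jpg", access="sequential")
image.dzsave("wtc", overlap=2, tile_size=128, suffix=".png[compression=0]")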
