Scaling PIL image intensities

What's the simplest / cleanest way to scale PIL image intensities?

Suppose I have a 16-bit image from a 12-bit camera, so only the values 0-4095 are used. I would like to scale the intensities so that the entire range 0-65535 is used. What is the simplest / cleanest way to do this when the image is represented as a PIL image type?

The best solution I've come up with so far:

pixels = img.getdata()
img.putdata(pixels, 16)  # putdata's optional second argument multiplies each value

This works, but it always leaves the least significant four bits blank. Ideally, I would like to shift each value four bits to the left and then copy the four most significant bits to the four least significant bits. I don't know how to do it quickly.
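
Per pixel, the mapping I have in mind looks like this (expand_12to16 is just a sketch name):

def expand_12to16(value):
    # shift left four bits, then replicate the four most significant
    # bits of the 12-bit value into the four least significant bits
    return (value << 4) | (value >> 8)

print(expand_12to16(0))     # 0
print(expand_12to16(4095))  # 65535 -- the full 16-bit range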



5 answers


Since you know the pixel values are 0-4095, I can't think of a faster way than this:

new_image = image.point(lambda value: value << 4 | value >> 8)

According to the documentation, the lambda function will be called at most 4096 times, regardless of the size of your image.

EDIT: Since a function passed to point must be of the form argument * scale + offset for "I" mode images, the best that can be done with point in this case is:

new_image = image.point(lambda argument: argument * 16)

The maximum output pixel value will be 65520.
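
A quick check of that figure, taking 4095 as the largest 12-bit value:

print(4095 * 16)              # 65520
print(4095 << 4 | 4095 >> 8)  # 65535 -- what the bit-replication version gives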

Second take:

A modified version of your own solution, using itertools to improve efficiency:

import itertools as it  # for brevity
import operator

def scale_12to16(image):
    # Shift each value four bits left, then OR the top four bits back in,
    # so 0-4095 maps onto the full 0-65535 range.
    # (it.imap is Python 2; use the builtin map in Python 3.)
    new_image = image.copy()
    new_image.putdata(
        it.imap(operator.or_,
            it.imap(operator.lshift, image.getdata(), it.repeat(4)),
            it.imap(operator.rshift, image.getdata(), it.repeat(8))
        )
    )
    return new_image

This avoids the scale-and-offset limitation on the argument of point.
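
Hypothetical usage, assuming img is a 32-bit "I" mode image holding 12-bit data:

img16 = scale_12to16(img)
print(max(img16.getdata()))  # 65535 if the input actually reached 4095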



Why do you want to copy the 4 msb back into the 4 lsb? You only have 12 significant bits of information per pixel; nothing you do will add more. If you are OK with only 4K intensity levels, which is fine for most applications, your solution is correct and probably optimal. If you need more shading levels then, as David posted, recompute using a histogram. But that will be significantly slower.



But copying the 4 msb to the 4 lsb is NOT the way to go :)



You need to do a histogram stretch (link to a similar question I answered), not histogram equalization:

[histogram stretch illustration (image source): http://cct.rncan.gc.ca/resource/tutor/fundam/images/linstre.gif]

In your case, you need to multiply all pixel values by 16, which is the ratio between the two dynamic ranges (65536/4096).
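
As an illustration, a general linear stretch between two ranges might look like the sketch below (the name stretch and its defaults are mine, not part of PIL); with in_min=0 and in_max=4095 it is essentially the multiply-by-16 above:

def stretch(image, in_min, in_max, out_min=0, out_max=65535):
    # Linear histogram stretch: map [in_min, in_max] onto [out_min, out_max].
    scale = (out_max - out_min) / float(in_max - in_min)
    out = image.copy()
    out.putdata([int((v - in_min) * scale) + out_min
                 for v in image.getdata()])
    return out

stretched = stretch(img, 0, 4095)  # img is a hypothetical "I" mode image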



What you need to do is adjust the histogram. Here is how to do it with Python and PIL:
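
As a minimal sketch, PIL ships a built-in equalizer for 8-bit "L" images (it does not handle 32-bit "I" images, so a 16-bit image would first need converting or a hand-rolled lookup table); the file name here is hypothetical:

from PIL import Image, ImageOps

img8 = Image.open("frame.png").convert("L")  # hypothetical file
equalized = ImageOps.equalize(img8)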

EDIT: Code to shift each value four bits to the left, then copy the four most significant bits into the four least significant bits ...

def f(n):
    # shift left four bits, then copy the four most significant bits
    # into the four least significant bits
    return (n << 4) | (n >> 8)

print(f(0))      # 0
print(f(2**12))  # 65552 -- Oops, > 2**16 - 1; but 2**12 is not a valid
                 # 12-bit value (the maximum is 2**12 - 1 = 4095,
                 # which maps to exactly 65535)


Perhaps you need to pass 16.0 (a float) instead of 16 (an int). I tried to test it, but for some reason putdata isn't scaling anything for me at all... so I hope this just works for you.
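
That is, roughly:

pixels = img.getdata()
img.putdata(pixels, 16.0)  # 16.0 (float) rather than 16 (int)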
