Determine the difference in stops between images without EXIF data

I have a set of images of the same scene, but shot with different exposures. These images do not have EXIF data, so there is no way to extract useful information like f-stop, shutter speed, etc.

What I am trying to do is determine the difference in stops between images, e.g. Image1 is +1.3 stops relative to Image2.

My current approach is to first calculate the luminance from the RGB values of the image using the equation

L = 0.2126 * R + 0.7152 * G + 0.0722 * B

I've seen different coefficients used in this equation, but in general they shouldn't greatly affect the resulting L.
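
In code (assuming float RGB components in [0, 1]):

// Relative luminance from RGB (Rec. 709 coefficients).
float luminance(float r, float g, float b) {
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}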

After that I compute the log-average luminance of the image:

exp( average of log(luminance) )
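
In code, roughly (the small epsilon guarding log(0) is my addition):

#include <cmath>
#include <vector>

// Log-average (geometric mean) luminance of an image.
double logAverageLuminance(const std::vector<float>& lum) {
    double sum = 0.0;
    for (float l : lum)
        sum += std::log(l + 1e-6); // epsilon avoids log(0) on black pixels
    return std::exp(sum / lum.size());
}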


But somehow the log-average luminance doesn't seem to give much indication of the difference in exposure between images. Any ideas on how to determine the difference in exposure?

Edit: I am working in C/C++.

1 answer


You have two problems to solve:

1. Linearize image data

(In case it isn't obvious what this means: twice the light collected by a pixel should correspond to twice the intensity value in the linearized image.)

Your input images might already be (sufficiently) linear → in that case you can skip to part 2. If your content comes from a camera and is a JPEG, then it certainly isn't.

The real "solution" to this problem is to find the camera response function, which you then invert and apply to the image data to obtain linear intensity values. This is by no means a trivial task. The EMoR model is widely used in all kinds of software (Photoshop, PTGui, Photomatix, etc.) to describe camera response functions. PFScalibrate is open-source software that solves this problem (using a different model, IIRC).

Having said that, you may get away with applying a simple inverse gamma. A rough guesstimate of the correct gamma value can be found as follows:

  • Capture an evenly lit static scene with two exposure values e and e/2.
  • Apply a range of inverse gamma transforms (for example 1.8 to 2.4 in steps of 0.1) to both images.
  • Multiply each short-exposure image by 2.0 and subtract it from the corresponding long-exposure image.
  • Choose the gamma that produces the smallest overall difference (see the sketch below).
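
A minimal sketch of that search, assuming the two captures are available as grayscale float images in [0, 1] (the function name and the search range are illustrative, not fixed by the method):

#include <cmath>
#include <limits>
#include <vector>

// Brute-force search for the inverse gamma that best explains a known
// factor-of-two exposure difference between two captures of the same scene.
double estimateGamma(const std::vector<float>& longExp,   // exposure e
                     const std::vector<float>& shortExp)  // exposure e/2
{
    double bestGamma = 1.8;
    double bestError = std::numeric_limits<double>::max();
    for (double gamma = 1.8; gamma <= 2.4 + 1e-9; gamma += 0.1) {
        double error = 0.0;
        for (size_t i = 0; i < longExp.size(); ++i) {
            double linLong  = std::pow(longExp[i],  gamma); // inverse gamma
            double linShort = std::pow(shortExp[i], gamma);
            double diff = 2.0 * linShort - linLong; // ~0 when gamma is right
            error += diff * diff;
        }
        if (error < bestError) { bestError = error; bestGamma = gamma; }
    }
    return bestGamma;
}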

2. Find the actual difference in irradiation in stops, i.e. log2(scale factor)



Assuming the scene was stationary (no moving objects or camera), this is relatively simple:

// p1, p2: 8-bit pixel buffers of the two images, n: number of pixels
double sum1 = 0.0, sum2 = 0.0;
for (size_t i = 0; i < n; ++i) {
    // Skip pairs where either value is close to 0 or 255 (clipped);
    // the exact thresholds are a judgment call.
    if (p1[i] < 8 || p1[i] > 247 || p2[i] < 8 || p2[i] > 247)
        continue;
    sum1 += p1[i];
    sum2 += p2[i];
}
return std::log2(sum1 / sum2);

On large images this will work just as well, and much faster, if you sub-sample the images.

If the camera was static but the scene was not (moving objects), this starts to work less well. In that case I got acceptable results by simply repeating the above procedure a few times, using the result of the previous run as an estimate of the correct scale factor, and then discarding pixel pairs that were too far from the current estimate. So basically the skip test above becomes this:

if (p1[i] < 8 || p1[i] > 247 || p2[i] < 8 || p2[i] > 247
    || std::fabs(std::log2(double(p1[i]) / double(p2[i])) - estimate) > 0.5)
    continue;

I would stop repeating after a fixed number of iterations, or if two consecutive estimates are close enough to each other.
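
Putting it together, the whole iterative estimate might look like the sketch below. The clipping thresholds, the 0.5-stop rejection window, the convergence tolerance and the iteration cap are all tunable choices, not fixed parts of the method:

#include <cmath>
#include <cstdint>
#include <vector>

// Estimate the exposure difference (in stops) between two registered
// 8-bit images of the same scene (p1.size() == p2.size()), iteratively
// rejecting clipped pixels and pairs that disagree with the estimate.
double estimateStops(const std::vector<uint8_t>& p1,
                     const std::vector<uint8_t>& p2)
{
    double estimate = 0.0;
    for (int iter = 0; iter < 10; ++iter) {    // fixed iteration cap
        double sum1 = 0.0, sum2 = 0.0;
        for (size_t i = 0; i < p1.size(); ++i) {
            if (p1[i] < 8 || p1[i] > 247 || p2[i] < 8 || p2[i] > 247)
                continue;                       // clipped data
            double r = std::log2(double(p1[i]) / double(p2[i]));
            if (iter > 0 && std::fabs(r - estimate) > 0.5)
                continue;                       // likely a moving object
            sum1 += p1[i];
            sum2 += p2[i];
        }
        double next = std::log2(sum1 / sum2);
        if (std::fabs(next - estimate) < 0.01)
            return next;                        // converged
        estimate = next;
    }
    return estimate;
}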

EDIT: a note on converting to luminance

You don't need to do this at all (as Tony D already mentioned), and if you insist, then do it after the linearization step (as Mark Ransom pointed out). In an ideal setting (static scene, no noise, no demosaicing, no quantization), each channel of each pixel would have the same ratio p1/p2 (if neither of them is saturated). Therefore the relative weighting of the different channels is irrelevant: you can sum over all pixels/channels (weighting R, G and B equally), or perhaps use only the green channel.
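
For RGB input, one simple option consistent with this (assuming interleaved 8-bit RGB data; the helper is illustrative) is to run the procedure above on the green channel only:

#include <cstdint>
#include <vector>

// Extract the green channel from interleaved 8-bit RGB pixel data.
std::vector<uint8_t> greenChannel(const std::vector<uint8_t>& rgb) {
    std::vector<uint8_t> g;
    g.reserve(rgb.size() / 3);
    for (size_t i = 1; i < rgb.size(); i += 3)
        g.push_back(rgb[i]);
    return g;
}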
