Higher sampling for image projection

My software needs to analyze regions of a spectrum and, given the locations of the bands, find each peak's position and bandwidth.

[image: the spectrum to be analyzed]

I have learned how to plot the image and find the width of each peak.


But I need a better way to compute the projection.

The method I use reduces an image that is 1600 pixels wide (for example, 1600x40) to a sequence of 1600 points. Ideally, I would like to reduce the same image to a sequence 10,000 points long.

I want a longer sequence because 1600 points gives rather low resolution: a single point makes a big difference (about a 4% difference if a band measures 18 rather than 19 points).

How can I get a longer projection from the same image?

The code I used is:

from PIL import Image
import numpy as np
from scipy.optimize import leastsq   # used later for the peak fitting (not shown here)

# Load the picture with PIL and convert it to a numpy array
pic = np.asarray(Image.open("band2.png"))

# Average over the colour channels, then project onto the horizontal axis
pic_avg    = pic.mean(axis=2)     # collapse the RGB channels
projection = pic_avg.sum(axis=0)  # sum along the vertical axis

# Normalise and shift the minimum to zero for a nicer fit
projection /= projection.mean()
projection -= projection.min()
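
For completeness, here is a minimal sketch (not the question's actual peak-fitting code, which is not shown) of how a peak position and an approximate full width at half maximum could be read off the projection computed above; it assumes a single dominant band and the zero baseline set by the last two lines:

import numpy as np

# Rough estimate of one band's position and FWHM from `projection`
# (assumes the baseline has already been shifted to zero, as above)
peak_idx = int(np.argmax(projection))      # index of the highest point
half_max = projection[peak_idx] / 2.0

# Walk outwards from the peak until the projection falls below half maximum
left = peak_idx
while left > 0 and projection[left] > half_max:
    left -= 1
right = peak_idx
while right < len(projection) - 1 and projection[right] > half_max:
    right += 1

print("peak position (pixels):", peak_idx)
print("approximate FWHM (pixels):", right - left)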

      



2 answers


What you want to do is called interpolation. SciPy has an interpolation module, scipy.interpolate, with a number of different functions for different situations; for images specifically there is also scipy.ndimage.

Here is a recently asked question with some sample code and a graph that shows what's going on.
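
For illustration, a minimal sketch (my own variable names, not code from that question) of how the 1600-point projection from the question's code could be resampled onto a 10,000-point grid with scipy.interpolate:

import numpy as np
from scipy.interpolate import interp1d

# `projection` is the 1600-point projection from the question's code
x_old = np.arange(len(projection))                   # original sample positions
x_new = np.linspace(0, len(projection) - 1, 10000)   # 10,000 positions over the same range

# Cubic interpolation of the projection onto the denser grid
upsampled = interp1d(x_old, projection, kind='cubic')(x_new)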

But it is really important to understand that interpolation will not make your data more accurate, so it will not help you in this situation.



If you want more accurate results, you need more accurate data; there is no other way. You need to start with a higher-resolution image. (If you resample or interpolate, the results are actually less accurate!)

Update (since the question has changed)

@Hooked made a good point. Another way to think about this: instead of averaging right away (which throws away the variance in the data), you could create 40 plots (like the bottom one in your hosted image), one from each horizontal line of your spectral image. All of these plots will be very similar, but with some variation in peak position, height, and width. If you compute the position, height, and width of each peak in each of these 40 plots, you can then combine the results (matching peaks across the 40 plots) and use the variance as an error estimate for peak position, height, and width, appealing to the central limit theorem. This way you get the most out of your data. However, I suppose this assumes some independence between the lines of the spectrogram, which may or may not be the case.
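
A minimal sketch of that idea, assuming a single Gaussian-shaped band in the image and reusing the leastsq routine imported in the question's code (the Gaussian model and the initial guesses are my own assumptions; with several bands the peaks would have to be matched across rows as described above):

from PIL import Image
import numpy as np
from scipy.optimize import leastsq

# Grayscale 40 x 1600 array, as in the question's code
pic     = np.asarray(Image.open("band2.png"))
pic_avg = pic.mean(axis=2)

def gaussian(p, x):
    height, center, width, offset = p
    return height * np.exp(-(x - center)**2 / (2.0 * width**2)) + offset

def residuals(p, x, y):
    return gaussian(p, x) - y

x    = np.arange(pic_avg.shape[1])
fits = []
for row in pic_avg:                              # one fit per horizontal line
    guess     = [row.max() - row.min(), x[np.argmax(row)], 10.0, row.min()]
    params, _ = leastsq(residuals, guess, args=(x, row))
    fits.append(params)
fits = np.array(fits)

# Combine the 40 estimates: the spread gives an error estimate
print("peak position: %.2f +/- %.2f" % (fits[:, 1].mean(), fits[:, 1].std()))
print("peak width   : %.2f +/- %.2f" % (np.abs(fits[:, 2]).mean(), np.abs(fits[:, 2]).std()))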



I would like to offer some more detail on @fraxel's answer (too long to leave as a comment). He is right that you cannot get more information out than you put in, but I think it needs some elaboration...

  • You are projecting your data from 1600x40 -> 1600, which may look like you are throwing data away. While technically correct, the whole point of a projection is to bring higher-dimensional data down to a lower dimension. This only makes sense if...
  • ...your data can be represented in the lower dimension. Correct me if I'm wrong, but it looks like your data really is one-dimensional: the vertical axis is just a measure of the variability of that particular point on the x-axis (the wavelength?).
  • Given that the projection makes sense, what is the best way to summarize the data at each wavelength point? In my previous answer you can see that I took the average at each point. In the absence of other information about the specific properties of the system, this is a reasonable first-order approximation.
  • You can keep more information if you like. Below I have plotted the variance along the y-axis. This tells me that your measurements are more variable where the signal is high and less variable where it is low (which seems useful!): [plot: the projection with variance error bars overlaid on the spectrum image]
  • What you need to decide is what to do with those extra 40 pixels of data before projecting. They mean something physically, and your job as a researcher is to interpret and process that data in a meaningful way!


The code used to create the plot is below; the spectral data was taken from the screenshot in your original post:

from PIL import Image
import numpy as np
import matplotlib.pyplot as plt

# Load the picture with PIL and convert it to a numpy array
pic = np.asarray(Image.open("spec2.png"))

# Average over the colour channels, then project onto the horizontal axis
pic_avg    = pic.mean(axis=2)     # collapse the RGB channels
projection = pic_avg.sum(axis=0)  # sum along the vertical axis

# Variance along the vertical axis at each horizontal position
variance = pic_avg.var(axis=0)

# Overlay the projection (with the variance as error bars) on the image
scale = 1/40.
X_val = range(projection.shape[0])

plt.errorbar(X_val, projection*scale, yerr=variance*scale)
plt.imshow(pic, origin='lower', alpha=.8)
plt.axis('tight')
plt.show()

      







