False Alarm Rate vs. ROC Curve

How do you plot false positives per window versus miss rate (or false alarm probability), and an ROC (receiver operating characteristic) curve, for a video-based object detection application? How do you determine the number of false positives and hits? An example would be very helpful.



1 answer


It's pretty straightforward. Store all your true-match (H0) scores in one array and all your non-match (H1) scores in another.

Sort both lists.

Find the maximum value across both lists and the minimum value across both lists. Divide that range by a suitable number of steps (for example, 1000); this is your step size.

Now step from the minimum to the maximum in increments of the step size.

For each threshold value, find the first index in the h0 array and in the h1 array whose score is greater than that value. Divide each index by the number of values in the corresponding array and multiply by 100 (giving you a percentage).

  • False rejection (fr) = percentage from the h0 index.
  • False acceptance (fa) = 100 - (percentage from the h1 index).

Plot fa against 100 - fr.



To calculate the EER (equal error rate), find the minimum distance between the fr and fa values calculated above.

// Inside the threshold loop; minDiff must start at a large value.
float diff = fabsf( fr - fa );
if ( diff < minDiff )
{
    minDiff = diff;
    minFr   = fr;
    minFa   = fa;
}


Then, at the end, the EER is calculated like this:

float eer = (minFr + minFa) / 2.0f;


Edit: The values you get for H0 and H1 are simply scores indicating how "likely" it is that your match is correct. You must be computing these numbers somewhere already, since you have to decide whether you recognize your object or not based on this metric.

The H0 list holds the scores you get for known true matches. The H1 list holds the scores you get for known non-matches.
