How to understand facial recognition XML
I have trained a face detector using opencv_traincascade.exe. I have a series of XML files for the different stages. Each XML file has internalNodes and leafValues, and one of them is shown below.
<opencv_storage>
  <stage0>
    <maxWeakCount>3</maxWeakCount>
    <stageThreshold>-1.3019366264343262e+000</stageThreshold>
    <weakClassifiers>
      <_>
        <internalNodes>
          0 -1 2711 -2099201 -2623493 -774797061 -2162625 -827343685
          -5535541 -1163949377 -21761</internalNodes>
        <leafValues>
          -9.2679738998413086e-001 6.0445684194564819e-001</leafValues>
      </_>
      <_>
        <internalNodes>
          0 -1 1533 -252379683 -203697739 1410462197 1435881947 -74449473
          -1147414357 1510080511 -1</internalNodes>
        <leafValues>
          -9.1606438159942627e-001 6.2200444936752319e-001</leafValues>
      </_>
      <_>
        <internalNodes>
          0 -1 917 -42468780 -11479728 -745548289 -2371181 -23070497
          -552607093 -74777633 -536871937</internalNodes>
        <leafValues>
          -9.2716777324676514e-001 5.4092508554458618e-001</leafValues>
      </_>
    </weakClassifiers>
  </stage0>
</opencv_storage>
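To see the structure more concretely, the stage above can be parsed with a standard XML library. A minimal sketch (the tag names come from the snippet; treating the numeric fields as whitespace-separated lists is an assumption about the layout):

```python
import xml.etree.ElementTree as ET

# Abbreviated copy of the stage0 snippet above (one weak classifier shown).
STAGE_XML = """<opencv_storage><stage0>
  <maxWeakCount>3</maxWeakCount>
  <stageThreshold>-1.3019366264343262e+000</stageThreshold>
  <weakClassifiers>
    <_>
      <internalNodes>0 -1 2711 -2099201 -2623493 -774797061 -2162625
        -827343685 -5535541 -1163949377 -21761</internalNodes>
      <leafValues>-9.2679738998413086e-001 6.0445684194564819e-001</leafValues>
    </_>
  </weakClassifiers>
</stage0></opencv_storage>"""

root = ET.fromstring(STAGE_XML)
stage = root.find("stage0")
threshold = float(stage.findtext("stageThreshold"))

weak = []
for w in stage.find("weakClassifiers"):
    nodes = [int(x) for x in w.findtext("internalNodes").split()]
    leaves = [float(x) for x in w.findtext("leafValues").split()]
    weak.append((nodes, leaves))

print(threshold)       # -1.3019366264343262
print(weak[0][0][:3])  # [0, -1, 2711]
```

So each weak classifier carries eleven internalNodes integers (the first three plus eight more) and two leafValues.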
My queries: (1) What do stageThreshold, internalNodes and leafValues mean? (2) How are they actually used by the cascade classifier during face detection? I have read several articles on the AdaBoost algorithm, but I still don't quite understand. Thanks.
After digging through the detect_based_tracker.cpp file, I now understand what internalNodes, leafValues and stageThreshold are and how they are used. Looking at lbpcascade_frontalface.xml, we see a list of rectangles. These are the feature rectangles learned during training (i.e., these areas carry features that can be used to tell face images from non-face images). There are 139 rectangles in lbpcascade_frontalface.xml. Each rectangle's x and y coordinates are multiplied by constant factors to generate three additional rectangles, so one stored rectangle effectively represents four rectangles.
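For intuition on how one such rectangle becomes a feature value: in a multi-block LBP cascade, the rectangle defines one cell of a 3x3 grid, each cell's pixel sum is read off the integral image, and the eight neighbour sums are compared against the centre to form an 8-bit code. This is a sketch under that assumption (the bit ordering is illustrative, not necessarily OpenCV's):

```python
def cell_sum(integral, x, y, w, h):
    """Pixel sum over a w-by-h rect at (x, y), using an integral image
    that has an extra leading row and column of zeros."""
    return (integral[y + h][x + w] - integral[y][x + w]
            - integral[y + h][x] + integral[y][x])

def mb_lbp_code(integral, x, y, w, h):
    """8-bit multi-block LBP code for a 3x3 grid of w-by-h cells
    anchored at (x, y): each neighbour cell is compared to the centre."""
    center = cell_sum(integral, x + w, y + h, w, h)
    # Neighbour cells, clockwise from the top-left corner of the grid.
    offsets = [(0, 0), (w, 0), (2 * w, 0), (2 * w, h),
               (2 * w, 2 * h), (w, 2 * h), (0, 2 * h), (0, h)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if cell_sum(integral, x + dx, y + dy, w, h) >= center:
            code |= 1 << (7 - bit)
    return code
```

Because only four sums per cell are needed regardless of rectangle size, the integral image makes this cheap at every scale.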
Next, let me explain what an internalNodes entry is.
<internalNodes> 0 -1 13 -163512766 -769593758 -10027009 -262145 -514457854 -193593353 -524289 -1</internalNodes>
The first two numbers, 0 -1, are the left and right markers; since each tree here is a single-split stump, I think they simply point at the left and right leafValues. The third number is the feature index: if we put those 139 rectangles into an array, this index selects which rectangle this weak classifier uses. As far as I can tell from the code, the last eight numbers are eight 32-bit integers that together form a 256-bit lookup table over the 8-bit LBP code of that feature, which is why they look like such large arbitrary values; the LBP code itself is computed from rectangle sums taken on the integral image.
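By my reading of OpenCV's evaluator, those eight trailing integers act as a 256-bit mask indexed by the LBP code: bit c of the mask decides which leaf LBP code c falls into. A minimal sketch under that assumption (the convention that a set bit selects the left leaf is my guess from the code, not something the XML states):

```python
def leaf_for_code(internal_nodes, leaf_values, lbp_code):
    """Pick a leaf value for an 8-bit LBP code (0..255).

    internal_nodes: the 11 integers from one <internalNodes> entry,
    i.e. [left, right, feature_idx, m0..m7].
    """
    subset = internal_nodes[3:]      # eight 32-bit integers = 256 bits
    word = subset[lbp_code >> 5]     # which of the eight integers
    bit = 1 << (lbp_code & 31)       # which bit inside that integer
    # Assumption: a set bit routes the sample to the left leaf.
    return leaf_values[0] if (word & bit) else leaf_values[1]
```

This also explains why the integers are often negative: they are bit patterns printed as signed 32-bit values, not magnitudes.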
I'm still not sure how the leafValues themselves are calculated, but the leafValue selected by each weak classifier is summed, and that sum is compared with the stageThreshold for the face / non-face decision.
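Putting that together, the per-stage decision can be sketched as follows (a sketch, not OpenCV's actual code; the numbers are taken from the stage0 snippet in the question):

```python
def stage_passes(chosen_leaf_values, stage_threshold):
    """A stage passes the candidate window onward if the sum of the
    leaf values chosen by its weak classifiers reaches the threshold."""
    return sum(chosen_leaf_values) >= stage_threshold

STAGE0_THRESHOLD = -1.3019366264343262

# If every weak classifier in stage0 picked its positive (right) leaf:
positive = [0.60445684194564819, 0.62200444936752319, 0.54092508554458618]
print(stage_passes(positive, STAGE0_THRESHOLD))   # True -> go to next stage

# If every weak classifier picked a strongly negative leaf:
negative = [-0.92679738998413086] * 3
print(stage_passes(negative, STAGE0_THRESHOLD))   # False -> reject window
```

A window is reported as a face only if it survives every stage; failing any single stage rejects it immediately, which is what makes the cascade fast on non-face regions.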
Here is what I figured out from debugging the code. If someone can explain how the leafValues are calculated, that would make this a complete answer to my own request. Thanks.