HoG + SVM → HOGDescriptor::setSVMDetector

I want to use HOGDescriptor (the CPU interface) with my own trained SVM. The HOG descriptor provides the method setSVMDetector(const vector<float>& detector), and my question is: what exactly should go into that vector<float>& detector?

I have a trained SVM that I can save to an XML file. I want to call hog.setSVMDetector(const vector<float>& detector) with a model trained on a custom dataset. How can this be done with our own data? Please suggest any solution.

I am using MS Visual Studio to build and run the code.



1 answer


The detector (a vector of coefficients) must be computed from your trained model (the XML file). That XML file contains all the information about your classifier, most importantly its support vectors, and the detector coefficients are derived from those support vectors. If you are using OpenCV's SVM, the code from the linked answer shows how to compute such a detector, which you can then pass directly to your HOG detector via setSVMDetector.



A couple of notes: in that answer the detector (the coefficient vector) is called support_vector, but they are one and the same thing. Also, train with class labels +1 (positive) and -1 (negative); otherwise you may get incorrect detections.
