Choosing the best features in a feature vector with AdaBoost

I read some documentation on how AdaBoost works, but I have some questions about it.

I also read that, besides weighting the weak classifiers, AdaBoost picks the best features from the data and uses them during the testing phase for efficient classification.

How does AdaBoost pick the best features from the data?

Correct me if my understanding of AdaBoost is wrong!

+3




2 answers


In some cases, the weak classifiers in AdaBoost are (almost) equivalent to individual features. In other words, classifying on a single feature can perform slightly better than random, so that feature can serve as a weak classifier. AdaBoost finds a set of the best weak classifiers given the training data, so if the weak classifiers correspond to features, the classifiers it selects indicate the most useful features.



A classic example of such feature-like weak classifiers is the decision stump (a depth-1 decision tree that splits on a single feature).

+5




OK, AdaBoost picks features through its base learner, the decision tree. For a single tree, there are several ways to measure how much each feature contributes to the tree, often called relative importance. For AdaBoost, an ensemble method containing several such trees, the relative importance of each feature in the final model can be computed by measuring its importance in each tree and then averaging across the trees.
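A short sketch of this averaging, assuming scikit-learn: its `AdaBoostClassifier.feature_importances_` attribute exposes exactly this quantity, averaging each tree's impurity-based importances weighted by the tree's boosting weight. The dataset here is a synthetic assumption for illustration.

```python
# Sketch: per-feature relative importance in an AdaBoost ensemble.
# scikit-learn's feature_importances_ averages the impurity-based
# importance of each feature over all fitted trees, weighted by each
# tree's boosting weight, and normalizes the result to sum to 1.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Synthetic data: 8 features, 3 informative.
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=3, n_redundant=0,
                           random_state=0)

model = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)

importances = model.feature_importances_          # one value per feature
ranking = np.argsort(importances)[::-1]           # most important first
print("features ranked by importance:", ranking)
```

Ranking features by these averaged importances is one common way to read a feature selection out of the fitted ensemble.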



Hope this helps.

+1








