When are precision and recall inversely related?

I am reading about precision and recall in machine learning.

Question 1: When are precision and recall inversely related? That is, when does a situation arise where you can improve your precision only at the cost of a lower recall, and vice versa? The Wikipedia article states:

Often, there is an inverse relationship between precision and recall, where it is possible to increase one at the cost of reducing the other. Brain surgery provides an illustrative example of the tradeoff.

However, I have seen published experimental results where both precision and recall improve at the same time (for example, by using different or more features).

In what scenarios does the inverse relationship hold?

Question 2: I am familiar with the concepts of precision and recall in two domains: information retrieval (for example, "return the 100 most relevant pages out of a 1MM-page corpus") and binary classification (for example, "classify each of these 100 patients as having the disease or not"). Are precision and recall inversely related in both of these domains, or in only one?

+3




2 answers


The inverse relationship only holds when the system has some parameter you can vary to return more or fewer results. Then there is a direct mechanism: you lower the threshold to get more results, and among those extra results some are true positives and some are false positives. This does not always mean that precision and recall move in strict opposition; the actual relationship can be visualized with an ROC curve. As for Q2, precision and recall are not necessarily inversely related in either of these problems.
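A minimal sketch of that threshold mechanism, using made-up scores and labels: lowering the cutoff pulls in more true positives (recall rises) but also more false positives (precision tends to fall).

```python
# Toy example: sweep a decision threshold over scored examples and watch
# precision and recall trade off. Scores and labels are invented for
# illustration only.

def precision_recall(scores, labels, threshold):
    """Predict positive when score >= threshold; return (precision, recall)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0  # no predictions -> vacuous
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

scores = [0.95, 0.85, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

for t in (0.9, 0.5, 0.1):
    p, r = precision_recall(scores, labels, t)
    print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}")
```

At a strict threshold (0.9) precision is perfect but recall is low; at a loose one (0.1) recall is perfect but precision drops, which is exactly the trade-off the answer describes.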



So how do you increase recall or precision without hurting the other at the same time? Usually by improving the algorithm or model. That is, when you merely vary the parameters of a fixed model, the trade-off will usually hold, although you should keep in mind that it will usually be non-linear as well. But if, for example, you add more descriptive features to the model, you can increase both metrics at once.

+3




For the first question, I interpret these concepts in terms of how restrictive your results are. Being more restrictive means being more "demanding" of the results: you want them to be more precise. To achieve this, you can give up some of the correct results, as long as everything you do return is correct. In this way you increase your precision and lower your recall.

Conversely, if you don't mind getting some wrong results as long as you get all the correct ones, you raise your recall and lower your precision.



As for the second question, looking at it from the point of view above, I would say that yes, they are inversely related in both.

To the best of my knowledge, in order to increase both precision and recall, you need either a better model (one more appropriate for your problem) or better data (or both).
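That "better model" escape hatch can be shown with a toy comparison (all scores invented): at the same threshold, a model that separates the classes better scores higher on precision and recall simultaneously.

```python
# Two hypothetical models scoring the same 8 examples. The "strong" model
# ranks positives above negatives more cleanly, so at a fixed threshold
# it beats the "weak" model on both precision and recall.

def prf(scores, labels, threshold=0.5):
    """Return (precision, recall) at the given threshold.

    Denominators are nonzero for this toy data, so no guards are needed."""
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    fn = sum(s < threshold and y == 1 for s, y in zip(scores, labels))
    return tp / (tp + fp), tp / (tp + fn)

labels       = [1,   1,   1,   1,   0,   0,   0,   0]
weak_model   = [0.9, 0.6, 0.4, 0.3, 0.7, 0.6, 0.2, 0.1]  # confuses some cases
strong_model = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]  # separates better

print("weak:  precision=%.2f recall=%.2f" % prf(weak_model, labels))
print("strong: precision=%.2f recall=%.2f" % prf(strong_model, labels))
```

No threshold tuning is involved here; the improvement comes entirely from the better scoring, which is the point of the answer.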

+1








