Understanding the mAP metrics for Object Detection

29.06.2019
https://medium.com/@timothycarlen/understanding-the-map-evaluation-metric-for-object-detection-a07fe6962cf3

To understand AP, it is first necessary to understand the precision and recall of a classifier. For a more comprehensive explanation of these terms, the Wikipedia article is a nice place to start. Briefly, in this context, precision is the ratio of true object detections to the total number of objects the model predicted: a precision close to 1.0 means that almost everything the model reports as a detection is in fact correct (few false positives). Recall is the ratio of true object detections to the total number of objects in the dataset: a recall close to 1.0 means that almost all objects in the dataset are detected by the model (few false negatives). Finally, it is important to note that there is a trade-off between precision and recall, and that both metrics depend on the score threshold you set for the model (as well as, of course, on the quality of the model itself).
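As a concrete illustration of these definitions, here is a minimal sketch (not code from the article) that computes precision and recall for a single image by greedily matching predicted boxes to ground-truth boxes with an IoU test; the function names, box format `(x1, y1, x2, y2)`, and the 0.5 thresholds are assumptions for the example:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(predictions, ground_truths, score_thresh=0.5, iou_thresh=0.5):
    """predictions: list of (box, score); ground_truths: list of boxes."""
    # Keep only detections above the score threshold, highest score first.
    kept = sorted((p for p in predictions if p[1] >= score_thresh),
                  key=lambda p: -p[1])
    matched = set()
    tp = 0
    for box, _ in kept:
        # Greedily match each detection to the best unmatched ground truth.
        best_iou, best_gt = 0.0, None
        for i, gt in enumerate(ground_truths):
            if i in matched:
                continue
            overlap = iou(box, gt)
            if overlap > best_iou:
                best_iou, best_gt = overlap, i
        if best_iou >= iou_thresh:
            tp += 1                # true positive: matched a ground truth
            matched.add(best_gt)
    fp = len(kept) - tp            # false positives: unmatched detections
    fn = len(ground_truths) - tp   # false negatives: missed ground truths
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Raising `score_thresh` in this sketch discards low-confidence detections, which typically removes false positives (raising precision) but also drops some true detections (lowering recall) — the trade-off described above.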
