Object detection is one of the most classic topics in Computer Vision. As the name suggests, it detects objects in an image, identifies each object's corresponding class, and labels it with the smallest possible bounding box. In this article, we are going to examine some of the most important terms related to object detection.
IoU, or Intersection over Union, is one of the fundamental criteria for describing how well our algorithm boxes an object. More precisely, it is the ratio of two sets' intersection to their union, namely

IoU(A, B) = |A ∩ B| / |A ∪ B|
Intuitively, we can see that if two sets share many elements, the IoU will be large. This makes IoU a good indicator for comparing the boxes generated by our object detection algorithm against the boxes labeled by a human. Thus, we can define a number between 0 and 1, the IoU threshold, to decide what counts as a good detection.
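The definition above can be sketched for the common case of axis-aligned boxes. This is a minimal illustration (the function name and `(x1, y1, x2, y2)` corner convention are our own choices, not something fixed by the article):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Intersection area is zero when the boxes do not overlap
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two unit boxes offset by one unit overlap in a 1×1 region out of a union of 7, so their IoU is 1/7 ≈ 0.14, well below a typical threshold of 0.5.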
Before we proceed to the definitions of Precision and Recall, let's refresh some basic statistics.
If we classify an instance into its true class, we call the result a True Positive or a True Negative, depending on its true class label. In a similar fashion, an incorrect classification is a False Positive or a False Negative. Here is a table to help you visualize it:

                      Predicted Positive     Predicted Negative
  Actually Positive   True Positive (TP)     False Negative (FN)
  Actually Negative   False Positive (FP)    True Negative (TN)
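The four cases can be counted directly from paired label lists. A small sketch, assuming binary labels where 1 means positive and 0 means negative:

```python
def confusion_counts(y_true, y_pred):
    """Count TP, FP, FN, TN for binary labels (1 = positive, 0 = negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn
```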
With that in mind, we define Precision as the fraction of True Positives among all positive calls (True Positives plus False Positives), namely

Precision = TP / (TP + FP)
Similarly, we define Recall as the ratio of the number of True Positives to the number of all actually positive cases (True Positives plus False Negatives), i.e.

Recall = TP / (TP + FN)
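These two formulas translate directly into code. A minimal sketch (the zero-division guards are a practical convention, not part of the definitions):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall
```

For instance, a detector with 8 True Positives, 2 False Positives, and 4 False Negatives has Precision 0.8 but Recall only 2/3, a typical asymmetry between the two metrics.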
Here is a direct application of Precision and Recall: by sweeping the confidence threshold of our detector and computing both values at each setting, we can generate a curve, the PR Curve. Since we want both Precision and Recall to be high, we want our operating point to be in the top right corner of the curve.
Average Precision (AP) is the area under the PR Curve. It evaluates the average precision of an algorithm on a certain task.
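One common way to compute this area is the interpolated-precision convention used by modern detection benchmarks: at each recall level, precision is taken as the maximum precision at any equal-or-higher recall. A sketch under the assumption that the input points are already sorted by increasing recall:

```python
def average_precision(recalls, precisions):
    """Area under the PR curve with interpolated precision.

    Assumes `recalls` is sorted in increasing order, with `precisions`
    giving the precision at each corresponding recall.
    """
    # Pad with sentinel points at recall 0 and 1
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    # Make precision monotonically non-increasing (right-to-left envelope)
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum the rectangles between consecutive recall values
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))
```

Note that some benchmarks instead sample the envelope at fixed recall points (e.g. 11 points in the classic PASCAL VOC protocol); the all-point version above is one reasonable variant, not the only one.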
Mean Average Precision (mAP) may be one of the most important criteria for evaluating your model's performance. As the name suggests, it is the mean of a model's Average Precision over all classes (or tasks).
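Given per-class AP values, mAP is just their unweighted mean. A trivial sketch (the class names in the example are illustrative):

```python
def mean_average_precision(ap_per_class):
    """mAP: the unweighted mean of per-class AP values."""
    return sum(ap_per_class.values()) / len(ap_per_class)

# Example: two classes with APs of 0.8 and 0.6 give an mAP of 0.7
map_score = mean_average_precision({"cat": 0.8, "dog": 0.6})
```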
Although the name may sound intimidating, the F-measure is just the weighted harmonic mean of Precision and Recall. We can define

F_beta = (1 + beta²) · Precision · Recall / (beta² · Precision + Recall)

where beta weights Recall relative to Precision; with beta = 1 we get the familiar F1 score, the plain harmonic mean of the two.
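The formula in code, with a guard for the degenerate case where both inputs are zero (a practical convention, not part of the definition):

```python
def f_measure(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall (F-beta score)."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

With equal Precision and Recall the F-measure matches them exactly; when they differ, the harmonic mean is pulled toward the smaller value, which is why F1 punishes lopsided detectors.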
Usually Precision and Recall cannot improve at the same time: if we raise our Precision, our Recall will often drop, and vice versa.
Post by Graviti, September 2021