What is: Balanced L1 Loss?
Source | Libra R-CNN: Towards Balanced Learning for Object Detection |
Year | 2019 |
Data Source | CC BY-SA - https://paperswithcode.com |
Balanced L1 Loss is a loss function used for the object detection task. Since Fast R-CNN, classification and localization problems have been solved simultaneously under the guidance of a multi-task loss, defined as:

$$L_{p,u,t^u,v} = L_{cls}(p, u) + \lambda[u \geq 1]L_{loc}(t^u, v)$$

$L_{cls}$ and $L_{loc}$ are objective functions corresponding to recognition and localization respectively. Predictions and targets in $L_{cls}$ are denoted as $p$ and $u$. $t^u$ is the corresponding regression result for class $u$, and $v$ is the regression target. $\lambda$ is used for tuning the loss weight under multi-task learning. We call samples with a loss greater than or equal to 1.0 outliers; the other samples are called inliers.
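As a concrete illustration, here is a minimal NumPy sketch of this multi-task composition, using smooth L1 as the localization term (as in Fast R-CNN). The function names and scalar interface are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def smooth_l1(x):
    # Smooth L1 (Fast R-CNN): quadratic near zero, linear beyond |x| = 1.
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * ax ** 2, ax - 0.5)

def multi_task_loss(cls_loss, t_u, v, u, lam=1.0):
    # L_{p,u,t^u,v} = L_cls(p, u) + lambda * [u >= 1] * L_loc(t^u, v)
    # The Iverson bracket [u >= 1] switches the localization term off
    # for background proposals (class label u = 0).
    loc = np.sum(smooth_l1(np.asarray(t_u) - np.asarray(v))) if u >= 1 else 0.0
    return cls_loss + lam * loc

# Example: a foreground proposal (u = 1) with four box-regression deltas.
print(multi_task_loss(cls_loss=0.7, t_u=[0.1, -0.2, 0.5, 1.4], v=[0, 0, 0, 0], u=1))
```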
A natural solution for balancing the involved tasks is to tune their loss weights. However, owing to the unbounded regression targets, directly raising the weight of the localization loss will make the model more sensitive to outliers. These outliers, which can be regarded as hard samples, produce excessively large gradients that are harmful to the training process. The inliers, which can be regarded as easy samples, contribute little to the overall gradient compared with the outliers: on average, an inlier contributes only 30% as much gradient per sample as an outlier. Considering these issues, the authors introduced the balanced L1 loss, denoted as $L_b$.
Balanced L1 loss is derived from the conventional smooth L1 loss, in which an inflection point is set to separate inliers from outliers, and the large gradients produced by outliers are clipped with a maximum value of 1.0, as shown by the dashed lines in the figure in the paper. The key idea of balanced L1 loss is promoting the crucial regression gradients, i.e. gradients from inliers (accurate samples), to rebalance the involved samples and tasks, thus achieving a more balanced training among classification, overall localization and accurate localization. The localization loss $L_{loc}$ using balanced L1 loss is defined as:

$$L_{loc} = \sum_{i \in \{x, y, w, h\}} L_b(t_i^u - v_i)$$
The figure in the paper shows that the balanced L1 loss increases the gradients of inliers under the control of a factor denoted as $\alpha$. A small $\alpha$ increases the gradient of inliers more, while the gradients of outliers are not influenced. Besides, an overall promotion magnification controlled by $\gamma$ is also brought in for tuning the upper bound of regression errors, which can help the objective function better balance the involved tasks. The two factors, which control different aspects, are mutually enhanced to reach a more balanced training. $b$ is used to ensure $L_b(x = 1)$ has the same value for both formulations in the equation below. The gradient formulation is designed as:

$$\frac{\partial L_b}{\partial x} = \begin{cases} \alpha \ln(b|x| + 1) & \text{if } |x| < 1 \\ \gamma & \text{otherwise} \end{cases}$$
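A small sketch of this gradient design, assuming $b$ is derived from the continuity constraint $\alpha \ln(b + 1) = \gamma$ given below; the defaults follow the paper's reported values, but the code itself is only illustrative:

```python
import numpy as np

def balanced_l1_grad(x, alpha=0.5, gamma=1.5):
    # b follows from the constraint alpha * ln(b + 1) = gamma (see below),
    # which makes the two gradient branches meet at |x| = 1.
    b = np.exp(gamma / alpha) - 1.0
    ax = np.abs(x)
    g = np.where(ax < 1.0,
                 alpha * np.log(b * ax + 1.0),  # promoted inlier gradient
                 gamma)                          # outlier gradient clipped at gamma
    return np.sign(x) * g

# The inlier gradient approaches gamma = 1.5 as |x| -> 1, so there is no jump.
print(balanced_l1_grad(np.array([0.2, 0.99, 1.0, 3.0])))
```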
By integrating the gradient formulation above, we can get the balanced L1 loss as:

$$L_b(x) = \begin{cases} \frac{\alpha}{b}(b|x| + 1)\ln(b|x| + 1) - \alpha|x| & \text{if } |x| < 1 \\ \gamma|x| + C & \text{otherwise} \end{cases}$$
in which the parameters $\gamma$, $\alpha$, and $b$ are constrained by $\alpha \ln(b + 1) = \gamma$. The default parameters are set as $\alpha = 0.5$ and $\gamma = 1.5$.
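Putting the pieces together, here is a minimal NumPy sketch of the balanced L1 loss and the resulting localization loss over the four box deltas. This is an illustrative re-derivation (with $C = \gamma/b - \alpha$ chosen so the two branches meet at $|x| = 1$), not the authors' reference code:

```python
import numpy as np

def balanced_l1(x, alpha=0.5, gamma=1.5):
    # Integral of the promoted gradient; C = gamma / b - alpha makes the
    # inlier and outlier branches take the same value at |x| = 1.
    b = np.exp(gamma / alpha) - 1.0   # from alpha * ln(b + 1) = gamma
    C = gamma / b - alpha
    ax = np.abs(x)
    inlier = alpha / b * (b * ax + 1.0) * np.log(b * ax + 1.0) - alpha * ax
    return np.where(ax < 1.0, inlier, gamma * ax + C)

# Continuity check at the inflection point, then the localization loss
# L_loc = sum_i L_b(t_i^u - v_i) over the four box deltas (x, y, w, h).
print(balanced_l1(np.array([0.999, 1.0])))   # both values ~1.078
deltas = np.array([0.1, -0.3, 0.8, 2.0])     # t^u - v, illustrative values
print(np.sum(balanced_l1(deltas)))
```

With the defaults $\alpha = 0.5$ and $\gamma = 1.5$, the constraint gives $b = e^{3} - 1 \approx 19.09$, and both branches of the loss and of its gradient join smoothly at the inflection point $|x| = 1$.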