
What is: Seesaw Loss?

Source: Seesaw Loss for Long-Tailed Instance Segmentation
Year: 2020
Data Source: CC BY-SA - https://paperswithcode.com

Seesaw Loss is a loss function for long-tailed instance segmentation. It dynamically re-balances the gradients of positive and negative samples on a tail class with two complementary factors: a mitigation factor and a compensation factor. The mitigation factor reduces the penalty on tail categories according to the ratio of cumulative training instances between categories. Meanwhile, the compensation factor increases the penalty on misclassified instances to avoid false positives of tail categories. The synergy of the two factors enables Seesaw Loss to mitigate the overwhelming punishment of tail classes while compensating for the risk of misclassification caused by the diminished penalties.

$$L_{seesaw}\left(\mathbf{z}\right) = -\sum^{C}_{i=1} y_{i}\log\left(\hat{\sigma}_{i}\right)$$

$$\text{with } \hat{\sigma}_{i} = \frac{e^{z_{i}}}{\sum^{C}_{j\neq i}\mathcal{S}_{ij}e^{z_{j}} + e^{z_{i}}}$$
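The adjusted probability above is a softmax whose denominator weights every other class's exponentiated logit by $\mathcal{S}_{ij}$. A minimal NumPy sketch of that computation (function name and shapes are illustrative, not from the paper's code):

```python
import numpy as np

def seesaw_softmax(z, S):
    """Seesaw-adjusted probabilities sigma_hat for one sample.

    z: (C,) logits; S: (C, C) balancing factors, S[i, j] scales the
    contribution of class j in the denominator for true class i.
    """
    exp_z = np.exp(z - z.max())  # shift logits for numerical stability;
                                 # the ratio below is unchanged by the shift
    C = len(z)
    sigma_hat = np.empty(C)
    for i in range(C):
        # weighted sum over all classes j != i, plus the class-i term itself
        neg = sum(S[i, j] * exp_z[j] for j in range(C) if j != i)
        sigma_hat[i] = exp_z[i] / (neg + exp_z[i])
    return sigma_hat
```

When every $\mathcal{S}_{ij} = 1$, this reduces to the ordinary softmax, which is a useful sanity check.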

Here $\mathcal{S}_{ij}$ works as a tunable balancing factor between different classes. With a careful design of $\mathcal{S}_{ij}$, Seesaw Loss adjusts the penalty that positive samples of class $i$ put on class $j$. Seesaw Loss determines $\mathcal{S}_{ij}$ with a mitigation factor and a compensation factor:

$$\mathcal{S}_{ij} = \mathcal{M}_{ij} \cdot \mathcal{C}_{ij}$$

The mitigation factor $\mathcal{M}_{ij}$ decreases the penalty on tail class $j$ according to the ratio of instance numbers between tail class $j$ and head class $i$. The compensation factor $\mathcal{C}_{ij}$ increases the penalty on class $j$ whenever an instance of class $i$ is misclassified as class $j$.
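The two factors can be sketched as follows. The functional forms and the default exponents $p$ and $q$ here follow my reading of the paper and should be treated as assumptions; `counts` stands for the cumulative instance counts $N_i$ observed during training, and `sigma` for the predicted class probabilities of the current sample:

```python
import numpy as np

def seesaw_factors(counts, sigma, p=0.8, q=2.0):
    """Sketch of the balancing matrix S = M * C (elementwise).

    counts: (C,) cumulative instance counts N_i per class (assumption:
            accumulated online during training).
    sigma:  (C,) predicted probabilities for the current sample.
    p, q:   exponents controlling mitigation/compensation strength
            (defaults are an assumption, not taken from this article).
    """
    counts = counts.astype(float)
    N_i = counts[:, None]  # rows index the true class i
    N_j = counts[None, :]  # columns index the other class j
    # Mitigation: shrink the penalty on class j only when it is rarer
    # than class i; leave head-to-tail direction untouched (factor 1).
    M = np.where(N_j < N_i, (N_j / N_i) ** p, 1.0)
    s_i = sigma[:, None]
    s_j = sigma[None, :]
    # Compensation: boost the penalty on class j when the model assigns
    # it a higher probability than the true class i (a misclassification).
    Comp = np.where(s_j > s_i, (s_j / s_i) ** q, 1.0)
    return M * Comp
```

On the diagonal both conditions are false, so $\mathcal{S}_{ii} = 1$ and the positive term is left unscaled; off-diagonal entries shrink toward rare classes unless the compensation term detects a false positive and pushes the penalty back up.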