What is: Self-Adversarial Negative Sampling?
Source | RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space |
Year | 2019 |
Data Source | CC BY-SA - https://paperswithcode.com |
Self-Adversarial Negative Sampling is a negative sampling technique used for methods such as word embeddings and knowledge graph embeddings. The traditional negative sampling loss from word2vec, used for optimizing distance-based models, can be written as:

$$ L = -\log \sigma\big(\gamma - d_r(\mathbf{h}, \mathbf{t})\big) - \sum_{i=1}^{n} \frac{1}{k} \log \sigma\big(d_r(\mathbf{h}'_i, \mathbf{t}'_i) - \gamma\big) $$

where $\gamma$ is a fixed margin, $\sigma$ is the sigmoid function, and $(h'_i, r, t'_i)$ is the $i$-th negative triplet.
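As an informal illustration (not the authors' code), a minimal PyTorch sketch of this uniform negative sampling loss is shown below; the tensor shapes, the margin `gamma`, and the assumption that the distances `d_r` have already been computed are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def uniform_negative_sampling_loss(pos_dist, neg_dist, gamma):
    """Negative sampling loss with uniformly drawn negatives.

    pos_dist: tensor of shape (batch,) holding d_r(h, t) for the positive triplets.
    neg_dist: tensor of shape (batch, k) holding d_r(h'_i, t'_i) for k negatives each.
    gamma:    fixed margin (float).
    """
    # -log sigma(gamma - d_r(h, t)) for the positive triplet
    pos_term = F.logsigmoid(gamma - pos_dist)
    # uniform 1/k weight over the k negatives: a plain average
    neg_term = F.logsigmoid(neg_dist - gamma).mean(dim=1)
    return (-pos_term - neg_term).mean()
```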
This negative sampling loss draws negative triplets uniformly. Such uniform negative sampling is inefficient: as training proceeds, many sampled triplets are obviously false and provide no meaningful training signal. Therefore, the authors propose self-adversarial negative sampling, which samples negative triplets according to the current embedding model. Specifically, negative triplets are sampled from the following distribution:

$$ p\big(h'_j, r, t'_j \mid \{(h_i, r_i, t_i)\}\big) = \frac{\exp \alpha f_r(\mathbf{h}'_j, \mathbf{t}'_j)}{\sum_i \exp \alpha f_r(\mathbf{h}'_i, \mathbf{t}'_i)} $$
where $\alpha$ is the temperature of sampling. Moreover, since the sampling procedure may be costly, the authors treat the above probability as the weight of the negative sample. The final negative sampling loss with self-adversarial training therefore takes the following form:

$$ L = -\log \sigma\big(\gamma - d_r(\mathbf{h}, \mathbf{t})\big) - \sum_{i=1}^{n} p\big(h'_i, r, t'_i\big) \log \sigma\big(d_r(\mathbf{h}'_i, \mathbf{t}'_i) - \gamma\big) $$
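Continuing the sketch above, the self-adversarial weighting can be realized by converting the probabilities into per-negative weights via a softmax over the current scores. Here it is assumed that the score is the negative distance, $f_r(\mathbf{h}', \mathbf{t}') = -d_r(\mathbf{h}', \mathbf{t}')$, and the weights are detached so they act as fixed weights rather than being back-propagated through, in line with treating the probability as a weight.

```python
def self_adversarial_loss(pos_dist, neg_dist, gamma, alpha):
    """Self-adversarial negative sampling loss (sketch).

    Each negative is weighted by softmax_i(alpha * f_r(h'_i, t'_i)) computed from
    the current model, with the weights treated as constants (no gradient).
    alpha: sampling temperature.
    """
    # Scores of the negatives under the current model: f_r(h', t') = -d_r(h', t').
    neg_scores = -neg_dist
    # Softmax over the k negatives of each positive, detached so it is a fixed weight.
    weights = torch.softmax(alpha * neg_scores, dim=1).detach()

    pos_term = F.logsigmoid(gamma - pos_dist)
    # Weighted sum over negatives replaces the uniform 1/k average.
    neg_term = (weights * F.logsigmoid(neg_dist - gamma)).sum(dim=1)
    return (-pos_term - neg_term).mean()
```

Compared with the uniform version, hard negatives (those the current model scores highly) receive larger weights, so the model focuses its updates on the negatives that are not yet obviously false.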