What is: Primer?
Source | Primer: Searching for Efficient Transformers for Language Modeling |
Year | 2021 |
Data Source | CC BY-SA - https://paperswithcode.com |
Primer is a Transformer-based architecture that modifies the original Transformer with two changes discovered through neural architecture search: squared ReLU activations in the feedforward block, and depthwise convolutions added after each of the multi-head attention projections, yielding a module called Multi-DConv-Head Attention (MDHA).
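A minimal PyTorch sketch of the two modifications, assuming the 3-wide causal depthwise convolution described in the paper; module names and tensor layout are illustrative, not the authors' reference implementation:

```python
import torch
import torch.nn as nn


class SquaredReLU(nn.Module):
    """Squared ReLU activation, max(0, x)^2, used in Primer's feedforward block."""

    def forward(self, x):
        return torch.relu(x) ** 2


class DepthwiseConvHead(nn.Module):
    """Causal depthwise convolution over the sequence dimension, applied
    per head-channel to a Q, K, or V projection (sketch of the MDHA idea)."""

    def __init__(self, head_dim, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        # groups=head_dim makes the convolution depthwise (one filter per channel);
        # left padding of (kernel_size - 1) keeps it causal after cropping.
        self.conv = nn.Conv1d(head_dim, head_dim, kernel_size,
                              padding=kernel_size - 1, groups=head_dim)

    def forward(self, x):
        # x: (batch, seq_len, head_dim) -> convolve along seq_len
        x = x.transpose(1, 2)                               # (batch, head_dim, seq_len)
        x = self.conv(x)[..., : -(self.kernel_size - 1)]    # crop right pad -> causal
        return x.transpose(1, 2)                            # (batch, seq_len, head_dim)


if __name__ == "__main__":
    q = torch.randn(2, 128, 64)          # (batch, seq_len, head_dim) for one head
    print(SquaredReLU()(q).shape)        # torch.Size([2, 128, 64])
    print(DepthwiseConvHead(64)(q).shape)  # torch.Size([2, 128, 64])
```

In a full MDHA block, a `DepthwiseConvHead` would be applied to each head's query, key, and value tensors after the usual linear projections, before computing attention scores.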