What is: SAINT?
Source | SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training |
Year | 2021 |
Data Source | CC BY-SA - https://paperswithcode.com |
SAINT is a hybrid deep learning approach to tabular data problems. SAINT performs attention over both rows and columns, and it includes an enhanced embedding method. The architecture, pre-training, and training pipeline are as follows:
- Each layer contains two attention blocks: a standard self-attention block and a novel intersample attention block that computes attention across samples (rows).
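A minimal NumPy sketch of the intersample attention idea: each sample's feature embeddings are flattened into a single vector, and standard attention is then applied across the batch dimension so that samples attend to each other. This is a single-head illustration with random projection matrices (`Wq`, `Wk`, `Wv` are illustrative parameters, not the paper's actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # Standard scaled dot-product attention over the rows of x: (seq, dim).
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def intersample_attention(batch, Wq, Wk, Wv):
    # batch: (b, n, d) = b samples, n features, d-dim embeddings.
    # Flatten each sample into one vector, then attend across samples,
    # so attention mixes information between rows of the table.
    b, n, d = batch.shape
    flat = batch.reshape(b, n * d)
    out = self_attention(flat, Wq, Wk, Wv)
    return out.reshape(b, n, d)
```

The reshape is the key trick: treating the whole batch as a single "sequence" of samples lets an ordinary attention block compute row-to-row attention with no new machinery.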
- Pre-training minimizes contrastive and denoising losses between a given data point and its views generated by CutMix and mixup. During fine-tuning/regular training, data passes through an embedding layer and then the SAINT model; finally, among the contextual embeddings produced by SAINT, only the embedding corresponding to the CLS token is passed through an MLP to obtain the final prediction.
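The two augmentations used to generate views during pre-training can be sketched as follows: CutMix swaps a random subset of raw feature values between rows, while mixup forms a convex combination of embeddings with a shuffled batch. This is a simplified illustration with assumed hyperparameters (`p`, `alpha`), not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def cutmix(x, p=0.3):
    # Replace a random subset (prob. p per entry) of each row's features
    # with values from another, randomly chosen row. x: (batch, features).
    mask = rng.random(x.shape) < p
    shuffled = x[rng.permutation(len(x))]
    return np.where(mask, shuffled, x)

def mixup(e, alpha=0.8):
    # Convex combination of embeddings with a shuffled batch. e: (batch, dim).
    lam = rng.beta(alpha, alpha)
    shuffled = e[rng.permutation(len(e))]
    return lam * e + (1 - lam) * shuffled
```

In the SAINT pipeline, CutMix is applied to the raw input before embedding and mixup to the embedded representation; the contrastive loss then pulls each sample's original and augmented views together.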