What is: Progressive Neural Architecture Search?
Source | Progressive Neural Architecture Search |
Year | 2017 |
Data Source | CC BY-SA - https://paperswithcode.com |
Progressive Neural Architecture Search, or PNAS, is a method for learning the structure of convolutional neural networks (CNNs). It uses a sequential model-based optimization (SMBO) strategy, where we search the space of cell structures, starting with simple (shallow) models and progressing to complex ones, pruning out unpromising structures as we go.
At iteration b of the algorithm, we have a set of K candidate cells (each of size b blocks), which we train and evaluate on a dataset of interest. Since this process is expensive, PNAS also learns a model, or surrogate function, which can predict the performance of a structure without needing to train it. We then expand each of the K candidates of size b into children of size b+1. The surrogate function is used to rank all of the children, pick the top K, and then train and evaluate them. We continue in this way until b = B, which is the maximum number of blocks we want to use in a cell.
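The loop above can be sketched in a few lines of Python. Everything here is illustrative: the operation set, the fake `evaluate` scores, and the `Surrogate` class (a toy per-operation averager standing in for the learned RNN/MLP predictor from the paper) are all assumptions, not the actual PNAS search space or model.

```python
# Hypothetical operation set for a block; the real PNAS cell search
# space uses separable convolutions, pooling ops, etc.
OPS = ["sep3x3", "sep5x5", "maxpool", "identity"]

# Stand-in for training and evaluating a candidate cell on a dataset;
# these fixed per-op scores are fabricated purely for illustration.
FAKE_SCORE = {"sep3x3": 0.9, "sep5x5": 0.8, "maxpool": 0.6, "identity": 0.5}

def evaluate(cell):
    return sum(FAKE_SCORE[op] for op in cell) / len(cell)

class Surrogate:
    """Toy surrogate: predicts a cell's score from the average score of
    cells containing each of its ops (the paper trains a learned predictor)."""
    def __init__(self):
        self.op_scores = {}

    def fit(self, cells, scores):
        for cell, score in zip(cells, scores):
            for op in cell:
                self.op_scores.setdefault(op, []).append(score)

    def predict(self, cell):
        def op_avg(op):
            seen = self.op_scores.get(op, [0.5])  # prior for unseen ops
            return sum(seen) / len(seen)
        return sum(op_avg(op) for op in cell) / len(cell)

def pnas_search(B=3, K=4):
    # b = 1: enumerate, train, and evaluate all one-block cells.
    candidates = [(op,) for op in OPS]
    surrogate = Surrogate()
    surrogate.fit(candidates, [evaluate(c) for c in candidates])
    candidates = sorted(candidates, key=evaluate, reverse=True)[:K]

    for b in range(2, B + 1):
        # Expand each size-(b-1) candidate into children of size b.
        children = [c + (op,) for c in candidates for op in OPS]
        # Rank all children with the surrogate -- no training needed.
        children.sort(key=surrogate.predict, reverse=True)
        top = children[:K]
        # Train/evaluate only the top K, then update the surrogate.
        surrogate.fit(top, [evaluate(c) for c in top])
        candidates = top

    return max(candidates, key=evaluate)

best = pnas_search()
print(best)  # -> ('sep3x3', 'sep3x3', 'sep3x3')
```

The key saving is visible in the loop: at each step only K cells are actually "trained" (`evaluate`), while the much larger set of expanded children is filtered by the cheap surrogate prediction.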