Published on Sat Dec 28 2019

NAS evaluation is frustratingly hard

Antoine Yang, Pedro M. Esperança, Fabio M. Carlucci

Neural Architecture Search is an exciting new field which promises to be as much of a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue.

Abstract

Neural Architecture Search (NAS) is an exciting new field which promises to be as much of a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all. As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others. Our first contribution is a benchmark of 8 NAS methods on 5 datasets. To overcome the hurdle of comparing methods with different search spaces, we propose using a method's relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols. Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline. We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline. These experiments highlight that: (i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; (ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; (iii) the hand-designed macro-structure (cells) is more important than the searched micro-structure (operations); and (iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8- and 32-cell architectures. To conclude, we suggest best practices which we hope will prove useful for the community and help mitigate current NAS pitfalls. The code used is available at https://github.com/antoyang/NAS-Benchmark.
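To make the relative-improvement baseline concrete, here is a minimal sketch of how such a metric can be computed; the helper name and example accuracies below are illustrative, not taken from the paper's code. The idea is to normalize a searched architecture's accuracy against the mean accuracy of architectures sampled at random from the same search space, so a value near zero means the search did no better than random sampling.

def relative_improvement(method_acc, random_accs):
    # Relative improvement (%) of a searched architecture over the
    # average randomly sampled architecture from the same search space.
    random_mean = sum(random_accs) / len(random_accs)
    return 100.0 * (method_acc - random_mean) / random_mean

# Hypothetical example: searched cell reaches 97.1% top-1 accuracy;
# ten randomly sampled cells from the same space average about 96.7%.
ri = relative_improvement(97.1, [96.5, 96.9, 96.7, 96.8, 96.6,
                                 96.7, 96.8, 96.5, 96.9, 96.6])
print(f"RI = {ri:.2f}%")  # a small RI means the search barely beats random

Because both accuracies come from the same search space and training protocol, advantages baked into the space itself cancel out, which is what allows methods with different search spaces to be compared.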

Tue Jun 23 2020
Neural Networks
NASTransfer: Analyzing Architecture Transferability in Large Scale Neural Architecture Search
Neural Architecture Search (NAS) is an open and challenging problem in machine learning. While NAS offers great promise, the prohibitive computational demand of most of the existing NAS methods makes it difficult to directly search the architectures on large-scale tasks. We propose to analyze the architecture transferability of different NAS methods.
Mon Feb 25 2019
Machine Learning
NAS-Bench-101: Towards Reproducible Neural Architecture Search
NAS-Bench-101 is the first public architecture dataset for NAS research. It allows researchers to evaluate the quality of a diverse range of models in milliseconds.
Sun Feb 21 2021
Machine Learning
Stronger NAS with Weaker Predictors
Neural Architecture Search (NAS) often trains and evaluates a large number of architectures. We propose a paradigm shift from fitting the whole architecture space using one strong predictor, to progressively fitting a search path towards the top-performance sub-space through weaker predictors.
Fri Apr 02 2021
Neural Networks
How Powerful are Performance Predictors in Neural Architecture Search?
Thu Feb 21 2019
Machine Learning
Evaluating the Search Phase of Neural Architecture Search
Neural Architecture Search (NAS) aims to facilitate the design of deep networks for new tasks. Existing techniques rely on two stages: searching over the architecture space and validating the best architecture. NAS algorithms are currently compared solely based on their results on the downstream task.
Thu Aug 26 2021
Machine Learning
Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics
Neural Architecture Search (NAS) has been studied to automate the discovery of top-performer neural networks. NAS suffers from heavy resource consumption and often incurs search bias due to truncated training or approximations. This work targets designing a principled and unified training-free framework.