Published on Wed Jan 13 2021

Reproducing Activation Function for Deep Learning

Senwei Liang, Liyao Lyu, Chunmei Wang, Haizhao Yang

We propose reproducing activation functions (RAFs) to improve deep learning accuracy. RAFs can be used for various applications ranging from computer vision to scientific computing. With RAFs, the errors of audio/video reconstruction, PDEs, and eigenvalue problems are reduced compared with baseline methods.

Abstract

We propose reproducing activation functions (RAFs) to improve deep learning accuracy for various applications ranging from computer vision to scientific computing. The idea is to employ several basic functions and their learnable linear combination to construct a neuron-wise, data-driven activation function for each neuron. Armed with RAFs, neural networks (NNs) can reproduce traditional approximation tools and, therefore, approximate target functions with fewer parameters than traditional NNs. In NN training, RAFs can generate neural tangent kernels (NTKs) with a better condition number than traditional activation functions, lessening the spectral bias of deep learning. As demonstrated by extensive numerical tests, the proposed RAFs can facilitate the convergence of deep learning optimization toward solutions of higher accuracy than existing deep learning solvers for audio/image/video reconstruction, PDEs, and eigenvalue problems. With RAFs, the errors of audio/video reconstruction, PDEs, and eigenvalue problems are decreased by over 14%, 73%, and 99%, respectively, compared with the baselines, while the performance of image reconstruction increases by 58%.
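To make the core idea concrete, below is a minimal sketch of a neuron-wise activation built as a learnable linear combination of basic functions, written in Python with PyTorch. The specific basis functions (ReLU, sine, identity), the class name ReproducingActivation, and the initialization scale are illustrative assumptions for this sketch, not the paper's exact configuration.

import torch
import torch.nn as nn


class ReproducingActivation(nn.Module):
    """Per-neuron learnable linear combination of basic activation functions (a sketch)."""

    def __init__(self, num_neurons: int):
        super().__init__()
        # Illustrative basis; the paper's actual choice of basic functions may differ.
        self.basis = [torch.nn.functional.relu, torch.sin, lambda x: x]
        # One coefficient per (basis function, neuron) pair, so each neuron
        # learns its own data-driven activation from the training data.
        self.coeffs = nn.Parameter(0.1 * torch.randn(len(self.basis), num_neurons))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, num_neurons); combine the basis outputs
        # with the learnable coefficients, neuron by neuron.
        out = torch.zeros_like(x)
        for k, phi in enumerate(self.basis):
            out = out + self.coeffs[k] * phi(x)
        return out


# Usage: drop the RAF in place of a fixed activation in a small MLP.
net = nn.Sequential(nn.Linear(2, 64), ReproducingActivation(64), nn.Linear(64, 1))
y = net(torch.randn(8, 2))

Because the coefficients are ordinary trainable parameters, they are updated jointly with the network weights by the same optimizer, which is how the activation becomes data-driven during training.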