Trentin, E. (1999). Activation functions with learnable amplitude. In Proceedings of IJCNN'99, IEEE-INNS International Joint Conference on Neural Networks (pp. 1794-1798). Springer.
Activation functions with learnable amplitude
Edmondo Trentin
1999-01-01
Abstract
Network training algorithms have concentrated heavily on learning the connection weights. Little effort has been made to learn the amplitude of the activation functions, which defines the range of values that the function can take. This paper introduces novel algorithms to learn the amplitudes of non-linear activations in layered networks, without any assumption on their analytical form. Three instances of the algorithms are developed: (i) a common amplitude is shared among all the nonlinear units; (ii) each layer has its own amplitude; (iii) neuron-specific amplitudes are allowed. Experimental results validate the approach to a large extent, showing a dramatic improvement in performance over networks with fixed amplitudes.
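The abstract does not give the paper's exact formulation, but a minimal sketch of the idea is an activation whose output is a trainable amplitude times a fixed nonlinearity, with the amplitude updated by the same gradient-descent step as the weights. The code below is an illustrative assumption based on the abstract (the sigmoid base function, the module name AdaptiveAmplitude, and the parameter shapes are hypothetical, not from the paper); the shape of the amplitude parameter maps onto the three sharing schemes listed above.

```python
# Sketch only: a learnable-amplitude activation, lambda * f(x), where
# lambda is trained jointly with the connection weights.
import torch
import torch.nn as nn

class AdaptiveAmplitude(nn.Module):
    def __init__(self, num_units=1, shared=True):
        super().__init__()
        # shared=True  -> one amplitude for all units of the layer (schemes i/ii)
        # shared=False -> one amplitude per neuron (scheme iii)
        size = 1 if shared else num_units
        self.amplitude = nn.Parameter(torch.ones(size))

    def forward(self, x):
        # The amplitude rescales the range of the base nonlinearity; a sigmoid
        # is assumed here, though the paper makes no assumption on its form.
        return self.amplitude * torch.sigmoid(x)

# Usage: the amplitude receives gradients like any other parameter.
layer = nn.Sequential(nn.Linear(4, 8), AdaptiveAmplitude(num_units=8, shared=False))
out = layer(torch.randn(2, 4))
```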
https://hdl.handle.net/11365/5149