It is said that activation functions in neural networks help introduce non-linearity.
- What does this mean?
- What does non-linearity mean in this context?
- How does introducing this non-linearity help?
- Are there any other purposes of activation functions?
Answers:
Almost all of the functionality provided by non-linear activation functions has been covered by the other answers. Let me summarize them:
Sigmoid
This is one of the most common activation functions, and it is monotonically increasing everywhere. It is typically used in the final output node, since it squashes values between 0 and 1 (when the output needs to be 0 or 1). A value above 0.5 is therefore treated as 1 and a value below 0.5 as 0, although a different threshold (not 0.5) can be set. Its main advantage is that its derivative is easy to compute and reuses already calculated values, and supposedly the neurons of horseshoe crabs have this activation function in their neurons.
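For reference (a standard identity, added here only to make the "simple differentiation" point explicit), the sigmoid and its derivative are
$$\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma'(x) = \sigma(x)\,\bigl(1 - \sigma(x)\bigr),$$
so the gradient at a node can be computed directly from the activation value already produced in the forward pass.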
Tanh
This has an advantage over the sigmoid activation function in that it tends to centre the output around 0, which has the effect of better learning in the subsequent layers (it acts as a feature normalizer). A good explanation is given here. Negative and positive output values can be treated as 0 and 1 respectively. It is used mainly in RNNs.
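For reference (again a standard identity rather than part of the answer), tanh is just a rescaled and shifted sigmoid, which is why it squashes its output into $(-1, 1)$ and centres it at 0:
$$\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} = 2\,\sigma(2x) - 1.$$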
The ReLU activation function is another very common, simple, non-linear activation function (linear in the positive range and in the negative range independently of each other). Its advantage is that it removes the vanishing gradient problem faced by the two functions above, i.e. the problem that the gradient tends to 0 as x tends to +infinity or -infinity. Here is an answer about the approximation power of ReLU despite its apparent linearity. ReLUs have the drawback of dead neurons, which results in larger NNs.
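As an illustrative sketch (not code from the original answers; the helper names are my own), the vanishing-gradient and dead-neuron behaviour can be seen by comparing the derivatives of the three activations numerically:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)            # reuses the forward-pass value

def d_tanh(x):
    return 1.0 - np.tanh(x) ** 2

def d_relu(x):
    return (x > 0).astype(float)    # exactly 0 for all negative inputs

xs = np.array([-10.0, -1.0, 0.5, 10.0])
print("sigmoid'", d_sigmoid(xs))    # ~0 at both extremes -> vanishing gradient
print("tanh'   ", d_tanh(xs))       # ~0 at both extremes -> vanishing gradient
print("relu'   ", d_relu(xs))       # 1 for x > 0, 0 for x < 0 -> "dead" region
```

The saturating activations squash gradients at both extremes, while ReLU keeps a unit gradient on the positive side at the cost of a zero gradient on the negative side (the dead-neuron problem mentioned above).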
You can also create your own activation functions depending on your specialized problem. For example, you could use a quadratic activation function, which would approximate quadratic functions much better. But then you need to design a cost function that is somewhat convex in nature, so that you can optimize it using first-order derivatives and the NN actually converges to a decent result. This is the main reason why the standard activation functions are used. But I believe that, given the proper mathematical tools, there is huge potential for new and eccentric activation functions.
For example, say you are trying to approximate a single-variable quadratic function, say $a x^2 + c$. This will be best approximated by a quadratic activation $w_1 x^2 + b$, where $w_1$ and $b$ will be the trainable parameters. But designing a loss function which follows the conventional first-order derivative method (gradient descent) can be quite tough for a non-monotonically increasing function.
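A minimal sketch of that idea (my own illustration, not code from the answer), fitting the single quadratic-activation unit $w_1 x^2 + b$ to samples of a hypothetical target $3x^2 + 0.5$ with plain gradient descent on a squared-error loss:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=200)
y = 3.0 * x**2 + 0.5              # hypothetical target quadratic

w1, b = 0.0, 0.0                  # trainable parameters of the activation w1*x^2 + b
lr = 0.01
for _ in range(5000):
    err = (w1 * x**2 + b) - y
    # gradient descent on the mean squared error
    w1 -= lr * np.mean(2 * err * x**2)
    b  -= lr * np.mean(2 * err)

print(w1, b)                      # approaches (3.0, 0.5)
```

Because this model is linear in its parameters, the squared-error loss here happens to be convex, which is exactly the property the answer says a custom activation needs for first-order optimization to converge.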
For Mathematicians: In the sigmoid activation function $\sigma(x) = \frac{1}{1+e^{-x}}$ we see that $e^{-x}$ is always < 1 (for positive $x$). By binomial expansion, or by reverse calculation of the infinite GP series, we get
$$\sigma(x) = 1 - e^{-x} + e^{-2x} - e^{-3x} + \dots,$$
thus each power of $e^{-x}$ can be thought of as a multiplication of several decaying exponentials based on a feature $x_i$; for example, with $x = w_1 x_1 + w_2 x_2$, we have $e^{-x} = e^{-w_1 x_1} \cdot e^{-w_2 x_2}$. Thus each feature has a say in the scaling of the graph of $\sigma(x)$.
Another way of thinking about it would be to expand the exponentials according to the Taylor series:
$$e^{-x} = 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \dots$$
So we get a very complex combination, with all the possible polynomial combinations of the input variables present. I believe that, if a Neural Network is structured correctly, the NN can fine-tune these polynomial combinations just by modifying the connection weights, selecting the most useful polynomial terms and rejecting terms by subtracting the outputs of two properly weighted nodes.
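To make the "all possible polynomial combinations" claim concrete (my own illustration, assuming just two inputs $x_1, x_2$ with weights $w_1, w_2$), expanding the exponential of the pre-activation already produces every mixed term:
$$e^{-(w_1 x_1 + w_2 x_2)} = 1 - (w_1 x_1 + w_2 x_2) + \frac{w_1^2 x_1^2 + 2 w_1 w_2 x_1 x_2 + w_2^2 x_2^2}{2!} - \dots,$$
so cross terms such as $x_1 x_2$ appear with coefficients that are controlled entirely by the connection weights.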
The tanh activation can work in the same way, since the output of $\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$ is also built from exponentials. I am not sure how ReLUs work in this respect, but due to their rigid structure and the problem of dead neurons we require larger networks with ReLUs for good approximation.
But for a formal mathematical proof one has to look at the Universal Approximation Theorem.
For non-mathematicians, these links provide some better insights:
Activation Functions by Andrew Ng - for a more formal and scientific answer
How does neural network classifier classify from just drawing a decision plane?
Differentiable activation function
A visual proof that neural nets can compute any function
If you only had linear layers in a neural network, all the layers would essentially collapse to one linear layer, and, therefore, a "deep" neural network architecture effectively wouldn't be deep anymore but just a linear classifier.
$$y = f(W_n \cdots f(W_2\, f(W_1 x)) \cdots),$$
where $W_i$ corresponds to the matrix that represents the network weights and biases for one layer, and $f$ to the activation function. If $f$ is the identity (i.e. the layers are purely linear), this collapses to $y = W_n \cdots W_2 W_1\, x = W x$, a single linear map.
Now, with the introduction of a non-linear activation unit after every linear transformation, this won't happen anymore.
Each layer can now build upon the results of the preceding non-linear layer, which essentially leads to a complex non-linear function that is able to approximate every possible function with the right weighting and enough depth/width.
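A quick numerical sketch of that collapse (my own illustration with NumPy; the shapes and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))     # first "layer"
W2 = rng.normal(size=(2, 4))     # second "layer"
x = rng.normal(size=(3,))

# Two stacked linear layers...
deep_linear = W2 @ (W1 @ x)
# ...are exactly one linear layer with the pre-multiplied matrix W2 @ W1.
collapsed = (W2 @ W1) @ x
print(np.allclose(deep_linear, collapsed))      # True

# Inserting a non-linearity (ReLU here) between the layers breaks the equivalence.
relu = lambda z: np.maximum(z, 0.0)
deep_nonlinear = W2 @ relu(W1 @ x)
print(np.allclose(deep_nonlinear, collapsed))   # False in general
```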
Let's first talk about linearity. Linearity means the map (a function) $f: V \to W$ that is used is a linear map, that is, it satisfies the following two conditions:
$$f(x + y) = f(x) + f(y), \qquad f(c\,x) = c\, f(x).$$
You should be familiar with this definition if you have studied linear algebra in the past.
However, it's more important to think of linearity in terms of linear separability of data, which means the data can be separated into different classes by drawing a line (or hyperplane, if more than two dimensions), which represents a linear decision boundary, through the data. If we cannot do that, then the data is not linearly separable. Oftentimes, data from a more complex (and thus more relevant) problem setting is not linearly separable, so it is in our interest to be able to model such data.
To model nonlinear decision boundaries of data, we can utilize a neural network that introduces non-linearity. Neural networks classify data that is not linearly separable by transforming data using some nonlinear function (or our activation function), so the resulting transformed points become linearly separable.
Different activation functions are used for different problem setting contexts. You can read more about that in the book Deep Learning (Adaptive Computation and Machine Learning series).
For an example of non linearly separable data, see the XOR data set.
Can you draw a single line to separate the two classes?
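As a small sketch (not part of the original answer), a single linear unit cannot fit XOR, while two ReLU hidden units followed by a linear output can represent it exactly; the weights below are hand-picked for illustration rather than trained:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)       # XOR labels

relu = lambda z: np.maximum(z, 0.0)

# Hidden layer: h1 = relu(x1 + x2), h2 = relu(x1 + x2 - 1)
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
# Output: y = h1 - 2*h2 reproduces 0, 1, 1, 0 on the four XOR points
w2 = np.array([1.0, -2.0])

h = relu(X @ W1.T + b1)
print(np.allclose(h @ w2, y))                 # True
```

The non-linear transformation of the hidden layer maps the four points into a space where a single line separates the two classes.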
Consider a very simple neural network, with just 2 layers, where the first has 2 neurons and the last 1 neuron, and the input size is 2. The inputs are $x_1$ and $x_2$.
The weights of the first layer are $w_{11}, w_{12}$ and $w_{21}, w_{22}$. We do not have activations, so the outputs of the neurons in the first layer are
$$h_1 = w_{11} x_1 + w_{12} x_2, \qquad h_2 = w_{21} x_1 + w_{22} x_2.$$
Let's calculate the output of the last layer with weights $v_1$ and $v_2$:
$$y = v_1 h_1 + v_2 h_2.$$
Just substitute $h_1$ and $h_2$ and you will get:
$$y = v_1 (w_{11} x_1 + w_{12} x_2) + v_2 (w_{21} x_1 + w_{22} x_2),$$
or
$$y = (v_1 w_{11} + v_2 w_{21})\, x_1 + (v_1 w_{12} + v_2 w_{22})\, x_2.$$
And look at this! If we create a NN with just one layer, with weights $u_1 = v_1 w_{11} + v_2 w_{21}$ and $u_2 = v_1 w_{12} + v_2 w_{22}$, it will be equivalent to our 2-layer NN.
The conclusion: without non-linearity, the computational power of a multilayer NN is equal to that of a 1-layer NN.
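As a quick numerical check of the algebra above (my own illustration; the weight values are arbitrary):

```python
# Hypothetical weights for the 2-2-1 network above.
w11, w12 = 0.3, -1.2
w21, w22 = 2.0, 0.7
v1, v2 = 1.5, -0.4
x1, x2 = 0.9, -2.1

# Two-layer network without activations.
h1 = w11 * x1 + w12 * x2
h2 = w21 * x1 + w22 * x2
y_two_layer = v1 * h1 + v2 * h2

# Equivalent single layer with the collapsed weights u1, u2.
u1 = v1 * w11 + v2 * w21
u2 = v1 * w12 + v2 * w22
y_one_layer = u1 * x1 + u2 * x2

print(abs(y_two_layer - y_one_layer) < 1e-12)   # True
```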
Also, you can think of the sigmoid function as a differentiable IF statement that gives a probability. And adding new layers can create new, more complex combinations of IF statements. For example, the first layer combines features and gives the probabilities that there are eyes, a tail, and ears in the picture; the second combines new, more complex features from the previous layer and gives the probability that there is a cat.
For more information: Hacker's guide to Neural Networks.
First Degree Linear Polynomials
Non-linearity is not the correct mathematical term. Those that use it probably intend to refer to a first-degree polynomial relationship between input and output, the kind of relationship that would be graphed as a straight line, a flat plane, or a higher-dimensional surface with no curvature.
To model relations more complex than $y = a_1 x_1 + a_2 x_2 + \dots + b$, more than just the first two terms of a Taylor series approximation are needed.
Tune-able Functions with Non-zero Curvature
Artificial networks such as the multi-layer perceptron and its variants are matrices of functions with non-zero curvature that, when taken collectively as a circuit, can be tuned with attenuation grids to approximate more complex functions of non-zero curvature. These more complex functions generally have multiple inputs (independent variables).
The attenuation grids are simply matrix-vector products, the matrix being the parameters that are tuned to create a circuit that approximates the more complex curved, multivariate function with simpler curved functions.
Oriented with the multi-dimensional signal entering at the left and the result appearing on the right (left-to-right causality), as in the electrical engineering convention, the vertical columns are called layers of activations, mostly for historical reasons. They are actually arrays of simple curved functions. The most commonly used activations today are these.
The identity function is sometimes used to pass through signals untouched for various structural convenience reasons.
These are less used but were in vogue at one point or another. They are still used but have lost popularity because they place additional overhead on back propagation computations and tend to lose in contests for speed and accuracy.
The more complex of these can be parametrized and all of them can be perturbed with pseudo-random noise to improve reliability.
Why Bother With All of That?
Artificial networks are not necessary for tuning well developed classes of relationships between input and desired output. For instance, these are easily optimized using well developed optimization techniques.
For these, approaches developed long before the advent of artificial networks can often arrive at an optimal solution with less computational overhead and more precision and reliability.
Where artificial networks excel is in the acquisition of functions about which the practitioner is largely ignorant or the tuning of the parameters of known functions for which specific convergence methods have not yet been devised.
Multi-layer perceptrons (ANNs) tune the parameters (the attenuation matrix) during training. Tuning is directed by gradient descent or one of its variants to produce a digital approximation of an analog circuit that models the unknown functions. Gradient descent is driven by some criterion toward which circuit behavior is pushed by comparing outputs with that criterion. The criterion can be any of these.
In Summary
In summary, activation functions provide the building blocks that can be used repeatedly in the two dimensions of the network structure so that, combined with an attenuation matrix to vary the weighting of signals from layer to layer, the network is known to be able to approximate arbitrary, complex functions.
Deeper Network Excitement
The post-millennial excitement about deeper networks is because the patterns in two distinct classes of complex inputs have been successfully identified and put into use within larger business, consumer, and scientific markets.
There is no purpose for an activation function in an artificial network, just as there is no purpose for the 3 among the factors of the number 21. Multi-layer perceptrons and recurrent neural networks were defined as a matrix of cells, each of which contains one. Remove the activation functions, and all that is left is a series of useless matrix multiplications. Remove the 3 from 21, and the result is not a less effective 21 but a completely different number, 7.
Activation functions do not help introduce non-linearity; they are the only components in the network's forward propagation that do not fit a first-degree polynomial form. If a thousand layers had an activation function $f(x) = a x$, where $a$ is a constant, the parameters and activations of the thousand layers could be reduced to a single dot product, and no function could be modeled by the deep network other than those that reduce to $a x$.