Replacing the network of perceptrons with sigmoid neurons

This page gives a bit of mathematical development before introducing sigmoid neurons (neurons with a sigmoid activation function), namely perceptrons: http://neuralnetworksanddeeplearning.com/chap1.html

It starts with perceptrons and moves on to sigmoid neurons. Fine, but I can't seem to prove the second exercise, "Sigmoid neurons simulating perceptrons, part II", later in that chapter. I find it hard to believe that you can replace a network of perceptrons with a network of sigmoid neurons while keeping the weights and biases unchanged. You can easily build a counterexample: take one final neuron in the fourth layer with weights w = {17, -6} and bias b = -3, firing when w·x + b >= 0. For the input {1, 0, 0} (including the offset x_0 = 1, which carries the bias -3), we get w·x + b = -3 < 0, so the perceptron net gives 0, while the sigmoid net may give 1, because the upstream sigmoid neurons output values close to, but not exactly, 0.
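To make the counterexample concrete, here is a minimal sketch. The upstream sigmoid outputs 0.3 and 0.01 are hypothetical values, chosen only to show how the sign of the weighted sum can flip:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([17.0, -6.0])
b = -3.0

# Perceptron net: both upstream perceptrons output exactly 0,
# so w.x + b = -3 < 0 and the final perceptron outputs 0.
x_perceptron = np.array([0.0, 0.0])
print(int(np.dot(w, x_perceptron) + b >= 0))   # 0

# Sigmoid net: upstream sigmoid neurons output values near 0 but
# never exactly 0 (hypothetical values); the weighted sum flips
# sign and the final sigmoid output exceeds 1/2, i.e. reads as 1.
x_sigmoid = np.array([0.3, 0.01])
print(sigmoid(np.dot(w, x_sigmoid) + b))       # ~0.885
```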

Can anyone help me and tell me what I am missing or where I am going wrong? Thanks.

2 answers


No, you cannot, at least not with the weights unchanged. But sigmoids are continuous approximations of binary threshold units, so the two networks should behave similarly. The page says:

Now replace all the perceptrons in the network with sigmoid neurons and multiply the weights and biases by a positive constant c > 0. Show that in the limit as c → ∞, the behavior of this network of sigmoid neurons is exactly the same as that of the network of perceptrons.



It's true. As all the weights and biases are multiplied by a large constant, the small difference between the sigmoid units and the threshold units gets smaller and smaller: for any fixed weighted input w·x + b ≠ 0, scaling by c pushes the sigmoid's input far from zero, where its output is arbitrarily close to 0 or 1.
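A quick numerical check of this saturation. The weighted input z = 0.1 is an arbitrary stand-in for any fixed w·x + b ≠ 0:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = 0.1  # any fixed weighted input w.x + b != 0
for c in [1, 10, 100, 1000]:
    print(c, sigmoid(c * z))
# 1    0.52498...
# 10   0.73106...
# 100  0.99995...
# 1000 1.0 (up to floating point)
```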


A perceptron's output can only be 1 or 0, whereas a sigmoid neuron outputs exactly 1/2 when w·x + b = 0, no matter how large the scaling constant: σ(c · 0) = 1/2 for every c. So the limit argument fails if w·x + b = 0 for one of the perceptrons.
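A one-line check of this edge case, using the usual σ(z) = 1/(1 + e^(-z)):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# If w.x + b == 0, scaling by c changes nothing: the sigmoid stays
# at 1/2, while the perceptron convention (fire when w.x + b >= 0)
# outputs 1.
for c in [1, 100, 10**6]:
    print(sigmoid(c * 0.0))   # 0.5 every time
```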



