The Rosenblatt Perceptron

The perceptron is an artificial neuron, that is, a model of a biological neuron.

[Figure: a biological neuron, with dendrites receiving signals and an axon transmitting the output]

The neuron receives stimuli on its dendrites and, if the overall stimulus is sufficient, the neuron fires (also described as being activated or excited) and outputs a stimulus on its axon, which is transmitted to other neurons that have synaptic connections to the excited neuron. Synaptic signals can be excitatory or inhibitory; that is, some signals can prevent a neuron from firing instead of causing it to fire.

A perceptron is a type of artificial neuron. It computes a weighted sum of its inputs to produce an intermediate value z, which is fed to an activation function. The perceptron uses the sign function as its activation function, but other artificial neurons use other functions.
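Spelled out, the computation is:

z = w0*x0 + w1*x1 + … + wn*xn    (x0 is fixed to 1, so w0 acts as a bias term)
y = sign(z) = −1 if z < 0, otherwise +1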

The perceptron consists of a computational unit, a number of inputs, each with an associated input weight, and a single output:

[Figure: perceptron with inputs x0 … xn, weights w0 … wn, a summation unit, and a sign-function activation producing the output y]

Code snippet

# First element in vector x must be 1.
# Length of w and x must be n+1 for neuron with n inputs.
def compute_output(w, x):
    z = 0.0
    for i in range(len(w)):
        z += x[i] * w[i]  # Compute sum of weighted inputs
    if z < 0:  # Apply sign function
        return -1
    else:
        return 1

Example of a Two-Input Perceptron

[Figure: two-input perceptron with bias weight w0 = 0.9 and input weights w1 = −0.6, w2 = −0.5]

Perceptron and the NAND Gate

Behavior of a Perceptron with Two Inputs:

| X0 | X1         | X2         | W0*X0 | W1*X1 | W2*X2 | Z    | Y          |
|----|------------|------------|-------|-------|-------|------|------------|
| 1  | −1 (False) | −1 (False) | 0.9   | 0.6   | 0.5   | 2.0  | 1 (True)   |
| 1  | 1 (True)   | −1 (False) | 0.9   | −0.6  | 0.5   | 0.8  | 1 (True)   |
| 1  | −1 (False) | 1 (True)   | 0.9   | 0.6   | −0.5  | 1.0  | 1 (True)   |
| 1  | 1 (True)   | 1 (True)   | 0.9   | −0.6  | −0.5  | −0.2 | −1 (False) |

The table shows the inputs and the output, the intermediate values after applying the weights (w0 = 0.9, w1 = −0.6, w2 = −0.5), and the sum z before applying the activation function. Note what happens if we interpret the inputs and outputs as Boolean values, where −1 represents False and +1 represents True: the perceptron with these specific weights implements a NAND gate! Paraphrasing Nielsen, this is comforting because we know that by combining multiple NAND gates we can build any logical function, but it is also somewhat disappointing because we thought neural networks were going to be something much more exciting than Boolean logic (Nielsen, 2015).
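To see this concretely, we can feed all four input combinations through compute_output with the weights from the table (a small sketch; the printout format is illustrative):

Code snippet

# Verify that the weights from the table implement NAND.
# Reuses compute_output() defined above.
w = [0.9, -0.6, -0.5]
for x1 in (-1.0, 1.0):
    for x2 in (-1.0, 1.0):
        x = [1.0, x1, x2]  # First element is the bias input.
        print(x1, x2, '->', compute_output(w, x))
# Prints 1 (True) for every combination except x1 = x2 = 1 (True), i.e., NAND.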

Perceptron Learning Algorithm

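A minimal sketch of the perceptron learning rule, assuming the sign convention above: start from random weights, repeatedly present the training examples, and whenever an example is misclassified, adjust each weight by adding the learning rate times the desired output times the corresponding input. (The learning rate of 0.1 and the choice of the NAND truth table as training data are illustrative.)

Code snippet

import random

LEARNING_RATE = 0.1  # Illustrative value.

# NAND truth table as training data; first element of each x is the bias input 1.
x_train = [[1.0, -1.0, -1.0], [1.0, -1.0, 1.0],
           [1.0, 1.0, -1.0], [1.0, 1.0, 1.0]]
y_train = [1.0, 1.0, 1.0, -1.0]

def train(w, x_list, y_list):
    # Loop until every example is classified correctly.
    # (Guaranteed to terminate when the data is linearly separable.)
    all_correct = False
    while not all_correct:
        all_correct = True
        for x, y in zip(x_list, y_list):
            if compute_output(w, x) != y:  # Misclassified example.
                for i in range(len(w)):
                    w[i] += y * LEARNING_RATE * x[i]  # Adjust weights.
                all_correct = False

w = [random.uniform(-1.0, 1.0) for _ in range(3)]  # Random initial weights.
train(w, x_train, y_train)  # Reuses compute_output() defined above.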

Limitation of Perceptron

A perceptron can learn only functions whose classes are separable by a straight line (linearly separable functions such as NAND); it cannot learn functions that require a curved or piecewise boundary, such as XOR. One suggested solution is the multilevel perceptron, which combines multiple perceptrons in layers; its hidden layers make it a close precursor of the deep neural network, as the sketch below illustrates.
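To make the point concrete, the NAND perceptron above can be hand-wired into a two-level network that computes XOR, which no single perceptron can learn (a sketch reusing compute_output and the table's weights; the wiring is the standard four-gate NAND construction of XOR):

Code snippet

# XOR from a two-level arrangement of NAND perceptrons.
# Reuses compute_output() and the NAND weights from the table above.
def nand(a, b):
    return compute_output([0.9, -0.6, -0.5], [1.0, a, b])

def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))  # Classic four-NAND XOR wiring.

for a in (-1.0, 1.0):
    for b in (-1.0, 1.0):
        print(a, b, '->', xor(a, b))  # 1 (True) only when exactly one input is 1.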