The perceptron rule, introduced by Rosenblatt in 1958 [Ros62], applies to a single layer of neurons. Learning is supervised and the answers are binary. For each pair (sample, expected answer), the rule updates each component of the weight vector as follows:

$w_i \leftarrow w_i + \eta\,(D - y)\,x_i$

where $D$ is the expected answer, $y$ the output of the neuron for input $x$, and $\eta$ the learning coefficient.
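As a minimal sketch of the rule above (the function names, the learning coefficient `eta = 0.1`, the epoch count, and the AND-function example are illustrative assumptions, not from the original):

```python
def step(s):
    """Threshold activation: 1 if the weighted sum is non-negative, else 0."""
    return 1 if s >= 0 else 0

def train_perceptron(samples, answers, eta=0.1, epochs=20):
    """Apply w_i <- w_i + eta * (D - y) * x_i to each weight component.

    D is the expected binary answer, y the current output, eta the
    learning coefficient. The bias b is trained like an extra weight.
    """
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, D in zip(samples, answers):
            y = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
            # No correction is performed when the output is already right
            if y != D:
                w = [wi + eta * (D - y) * xi for wi, xi in zip(w, x)]
                b += eta * (D - y)
    return w, b

# Example: learning the AND function (linearly separable, binary answers)
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 0, 0, 1]
w, b = train_perceptron(X, T)
print([step(sum(wi * xi for wi, xi in zip(w, x)) + b) for x in X])  # [0, 0, 0, 1]
```

Since AND is linearly separable, the perceptron convergence theorem guarantees that this loop reaches a separating set of weights in a finite number of updates.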
The solution reached by this correction is not optimal. This is because the threshold activation function can produce the same output value for different input values: it is not bijective. Whenever the output is already the expected answer, no correction is performed, so the resulting solution is only approximate.