
The Learning Step


During the learning stage, the inputs of the net are examples of the extreme cases, in which each premise is either totally realized or totally unrealized. The net builds its inner representation and hence generalizes, resolving the intermediate situations. At this level the expressions attached to each elementary premise are totally ignored: each premise is assumed to be "totally necessary".



Figure 36: A neural network for representation of ((A: very necessary AND B: totally necessary) OR (C: somehow forbidden AND D: totally forbidden))


  

Table 1: I/O for learning the rule ``if favourable context then (A AND B)'', O(k) = f(I(A), I(B))

I(A)   I(B)   O(k)
  1      1      1
  1      0      a
  1     -1      b
  0      0      0
  0      1      a
  0     -1      c
 -1      1      b
 -1      0      c
 -1     -1     -1
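The I/O pairs of Table 1 could be encoded as training data along these lines. The intermediate degrees a, b and c are left symbolic in the text; the numeric values below are hypothetical placeholders chosen only so the example runs.

```python
# Training pairs from Table 1 for the rule "if favourable context then (A AND B)".
# a, b, c are hypothetical placeholder degrees (not given numerically in the text).
a, b, c = 0.5, -0.25, -0.5

table1 = [
    # (I(A), I(B)) -> O(k)
    ((1, 1), 1),
    ((1, 0), a),
    ((1, -1), b),
    ((0, 0), 0),
    ((0, 1), a),
    ((0, -1), c),
    ((-1, 1), b),
    ((-1, 0), c),
    ((-1, -1), -1),
]
```

Note that the table is symmetric in its two inputs: swapping I(A) and I(B) leaves O(k) unchanged, as expected for a conjunction.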


The nets are hierarchical networks of nodes containing processing units, each of which performs a memoryless nonlinear transformation on the weighted sum of its inputs.

A node produces a continuous-valued output between -1.0 and 1.0.

Weights are real values, positive or negative, adjusted during training. The output O(k) gives the degree of realization of the rule, lying in the interval [-1, +1].
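Such a processing unit can be sketched as follows. The text does not name the nonlinearity; tanh is assumed here, as a standard memoryless transformation whose output lies in (-1, 1).

```python
import math

def node(inputs, weights, bias=0.0):
    """A processing unit: a memoryless nonlinear transformation (here tanh,
    an assumed but typical choice for outputs in [-1, 1]) applied to the
    weighted sum of the inputs."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(s)
```

For instance, `node([1, 1], [1.0, 1.0])` returns tanh(2), a value close to but strictly below 1, so the output always stays inside the required interval.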

The net uses the backpropagation learning algorithm [RHW92].
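A minimal backpropagation sketch on the extreme cases of such a rule might look like the following. The architecture (2-2-1), the tanh activations, the learning rate, and the placeholder target b for the mixed corners are all assumptions for illustration, not the exact setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny 2-2-1 tanh network (layer sizes and hyperparameters assumed).
W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)

# Extreme cases of "if favourable context then (A AND B)":
# corners (1,1) -> 1 and (-1,-1) -> -1 come from Table 1; the mixed corners
# use b, a hypothetical intermediate degree.
b = -0.25
X = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
T = np.array([[1.0], [b], [b], [-1.0]])

lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    y = np.tanh(h @ W2 + b2)          # network output in (-1, 1)
    dy = (y - T) * (1 - y**2)         # error gradient through output tanh
    dh = (dy @ W2.T) * (1 - h**2)     # backpropagate to the hidden layer
    W2 -= lr * h.T @ dy;  b2 -= lr * dy.sum(0)
    W1 -= lr * X.T @ dh;  b1 -= lr * dh.sum(0)
```

After training, the output for (1, 1) approaches 1, the output for (-1, -1) approaches -1, and the mixed corners settle in between, which is the generalization behaviour the text describes.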

For instance, we train the net such that:

Geometrically, the system simulates the behaviour of the function shown in Figure 37.

  
Figure 37: Behaviour of the system for the recognition of a rule (A and B are any given propositions).



For a multidimensional system, the net solves the system:


where:

When learning "IF favourable context THEN (A AND B)", the inputs for the premises C and D are deactivated; when learning "IF favourable context THEN (C AND D)", the inputs for A and B are deactivated.
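This deactivation could be sketched as follows. The mechanism is an assumption: deactivation is modelled here as clamping the unused inputs to 0, the neutral value of the [-1, +1] scale.

```python
def masked_inputs(values, active):
    """Keep only the inputs of the premises involved in the current rule.
    Hypothetical sketch: a deactivated input is clamped to 0, the neutral
    value of the [-1, +1] realization scale (an assumed convention)."""
    return [v if name in active else 0.0 for name, v in values.items()]

# Learning "IF favourable context THEN (A AND B)": C and D are deactivated.
inputs = {"A": 1.0, "B": -1.0, "C": 0.7, "D": -0.2}
masked = masked_inputs(inputs, active={"A", "B"})
```

Here `masked` is `[1.0, -1.0, 0.0, 0.0]`: the values of C and D no longer influence the weighted sums during this learning phase.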



  
 IRIT-UPS