
MONO-BAND CLASSIFICATION

  
Classification consists in assigning each pixel to a class, or in deciding that it belongs to none of the proposed classes and storing it in a rejection class. The decision to assign a pixel to one class rather than another relies on the pixel's degrees of membership: the selected class is the one with the highest degree of membership.
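This decision rule can be sketched as follows (a minimal illustration, not the code used in the study; the rejection threshold and its value are assumptions introduced here for the sketch):

import numpy as np

# Minimal sketch: assign a pixel to the class with the highest membership
# degree, or to a rejection class when even the best degree is too low.
REJECT = -1        # hypothetical label for the rejection class
THRESHOLD = 0.2    # hypothetical rejection threshold

def decide(memberships, threshold=THRESHOLD):
    """memberships: k degrees of membership of one pixel, one per class."""
    memberships = np.asarray(memberships, dtype=float)
    best = int(np.argmax(memberships))
    return best if memberships[best] >= threshold else REJECT

print(decide([0.1, 0.7, 0.3]))    # -> 1, the class with the highest degree
print(decide([0.05, 0.1, 0.08]))  # -> -1, rejected: no class is plausible enough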

For each pixel x of each image I_i, a degree d_i^c(x) of membership to each class c is computed by means of a probability or possibility measure. For a given pixel x, fusion must aggregate the information about its membership to the different classes.

For this pixel x, the information is represented by p vectors (one per information source) of dimension k (the number of classes). Information fusion aggregates the measures contained in these p vectors into a single final vector (figure 15). The pixel is then assigned to the class with the highest fused degree of membership.
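A minimal sketch of this aggregation step, assuming the "minimum" operator of possibility theory mentioned further on (other fusion operators could be used in its place):

import numpy as np

# Sketch of the fusion of figure 15: p sources each give a k-dimensional
# vector of membership degrees for the same pixel x; the p vectors are
# aggregated into a single vector, here with an element-wise minimum.
def fuse(degrees, operator=np.min):
    """degrees: array of shape (p, k); returns the fused vector of length k."""
    return operator(np.asarray(degrees, dtype=float), axis=0)

# Example with p = 3 sources and k = 4 classes for one pixel.
per_source = [[0.9, 0.2, 0.4, 0.1],
              [0.7, 0.3, 0.5, 0.2],
              [0.8, 0.1, 0.6, 0.3]]
fused = fuse(per_source)
print(fused, int(np.argmax(fused)))  # the pixel goes to the class of highest fused degree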


  

Figure 15: Information fusion with a pixel x.



To show that a correct classification cannot be achieved using a single spectral band, a classification is first carried out independently for each spectral band; that is, the classification of a spectral band I_b takes into account only the information contained in that band.

The reasons for this failure are the lack of discrimination between the classes, the impurity of the information carried by each pixel, and the sensor noise corrupting the available information.



 Classification rates

Tables 5 to 8 give the classification rates obtained with a Bayesian classification for each of the four spectral bands, together with the corresponding resulting images.
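The kind of mono-band Bayesian classification referred to here can be sketched as follows (a simplified illustration assuming Gaussian class-conditional grey levels and uniform class priors; these assumptions belong to the sketch, not to the tables):

import numpy as np

# Each class c is described, for the band considered, by the mean and the
# variance of its training pixels; a pixel is then assigned to the class
# maximising the (log-)likelihood, which with uniform priors is the Bayes rule.
def fit_band(gray_values, labels):
    """gray_values, labels: 1-D arrays of training pixels of one band."""
    params = {}
    for c in np.unique(labels):
        v = gray_values[labels == c]
        params[c] = (v.mean(), v.var() + 1e-6)   # small term avoids zero variance
    return params

def classify_band(gray_values, params):
    classes = sorted(params)
    logp = np.stack([-0.5 * np.log(2 * np.pi * params[c][1])
                     - (gray_values - params[c][0]) ** 2 / (2 * params[c][1])
                     for c in classes], axis=-1)
    return np.array(classes)[np.argmax(logp, axis=-1)]

# Toy usage: two classes with close means, illustrating the confusion
# between similar spectra discussed below.
rng = np.random.default_rng(0)
train = np.concatenate([rng.normal(100, 5, 500), rng.normal(110, 5, 500)])
labels = np.concatenate([np.zeros(500, int), np.ones(500, int)])
model = fit_band(train, labels)
print(classify_band(np.array([98.0, 112.0, 105.0]), model))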

These rates are lower when the fusion is performed with the "minimum" operator of possibility theory.

It is clear that these isolated classifications lead to poor results:

There are different reasons for this bad classification, for instance confusion between classes and noise introduced by the sensors.



 Confusion between classes

These bad rates are due to the confusion between classes: since their spectra are very similar, a small change in grey level is enough to move a pixel from one class to another.

The confusion matrices of spectral bands MSS4 to MSS7 show that class 2 is often involved in such confusions.

Classes whose histogram has a narrow spread and a well-marked peak are recognized best (classes 1 and 2, for example), whereas classes with a more spread-out histogram are merged with the preceding ones and are recognized less well.



 Noise

Another cause of these bad results is the noise added to the images by the sensors.

Finally, classes 7 and 8 are generally the best recognized (each one is well recognized by three spectral bands), followed by classes 1, 2, 5, 6 and 9.

The least well recognized classes are classes 8 and 9, and, to a lesser extent, classes 3, 4 and 6.

The conversion of the histograms into possibility distributions does not really improve the discernibility between sources. It is thus necessary to resort to other techniques to improve the classification. Data fusion will make it possible to resolve, or at least reduce, the ambiguities related to classification.
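For reference, a simple way to turn a class histogram into a possibility distribution is to rescale it by its maximum, so that the most frequent grey level receives possibility 1; this normalisation is an assumption of the sketch below, not necessarily the exact transformation used here:

import numpy as np

# Sketch: a class histogram of grey levels is rescaled by its maximum to give
# a possibility distribution over grey levels for that class.
def histogram_to_possibility(gray_values, n_levels=256):
    hist, _ = np.histogram(gray_values, bins=n_levels, range=(0, n_levels))
    hist = hist.astype(float)
    return hist / hist.max() if hist.max() > 0 else hist

# The degree of membership of a pixel to the class is then read directly
# from the distribution at the pixel's grey level.
rng = np.random.default_rng(1)
pi = histogram_to_possibility(rng.normal(120, 10, 10_000).clip(0, 255))
print(pi[120], pi[180])   # high possibility near the class mode, ~0 far from it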




 


