Conjunctive Fusion of the Four Spectral Bands MSS4 to MSS7 and Out-Image Data
To further improve classification, it is worthwhile to bring more information into the fusion process. In addition to the spectral information provided by the spectral bands, information on the context of each class is added.
Out-image data have the same representation as spectral data: the samples of each class are used to build possibility distributions. The information therefore takes a numerical form, just as in the case of spectral information.
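As an illustration, one simple way to build a possibility distribution for a class is to normalise the histogram of its sample pixel values so that the maximum degree equals 1. This is only a minimal sketch of the idea under that assumption, not the exact construction used here; the function name, bin count and toy samples below are hypothetical:

```python
import numpy as np

def possibility_distribution(samples, n_levels=256):
    """Possibility distribution of one class from its sample pixel values:
    the sample histogram is rescaled so that its maximum equals 1
    (a common, simple probability-to-possibility normalisation)."""
    hist, _ = np.histogram(samples, bins=n_levels, range=(0, n_levels))
    return hist / hist.max()

# Toy samples for one class in one source (spectral band or out-image data).
rng = np.random.default_rng(0)
samples = rng.normal(loc=120, scale=10, size=500).clip(0, 255).astype(int)

pi = possibility_distribution(samples)
print(pi.max())   # 1.0 at the modal grey level: fully possible there
print(pi[0])      # 0.0 far from the samples: impossible for this class
```

The same construction applies unchanged whether the source is a spectral band or an out-image layer, which is why both kinds of data can enter the fusion in the same numerical form.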
Out-image data ``distance to roads'' and ``plateaux''
The out-image data ``distance to roads'' and ``plateaux'' yield the best improvement in classification rates. A mean classification rate of 63.94% is reached, against 53.35% with the spectral bands alone.
The image obtained (table 20) presents more homogeneous areas for each class, but large areas of unclassified pixels appear. These two phenomena can be explained as follows:
By adding the information ``distance to roads'' and ``plateaux'' to the spectral bands, classification is improved by 10.59 points compared to classification with the spectral bands alone (see table 20).
Classes 1, 3, 5 and 9 are better recognized:
The decrease in the recognition of classes 2, 4, 7 and 8 is significant:
The final classification rate of 63.94% therefore does not correspond to a global improvement of the classification, but to better recognition of certain classes to the detriment of the others.
Rates of classification obtained

Class | Number of pixels correctly classified (A) | Number of pixels in samples (B) | Rate of pixels correctly classified (A/B)
---|---|---|---
1 | 394 | 459 | 85.84%
2 | 131 | 459 | 28.54%
3 | 198 | 306 | 64.71%
4 | 164 | 391 | 41.94%
5 | 432 | 459 | 94.12%
6 | 313 | 459 | 68.19%
7 | 251 | 459 | 54.68%
8 | 223 | 459 | 48.58%
9 | 394 | 459 | 85.84%
TOTAL | 2500 | 3910 | 63.94%
Confusion between expected and observed classes (rates in %)

Expected class | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Not classified
---|---|---|---|---|---|---|---|---|---|---
1 | 85.84 | 1.09 | - | 0.22 | - | - | 0.22 | - | - | 12.64
2 | 2.18 | 28.54 | 11.98 | 23.75 | 14.81 | - | - | - | - | 18.74
3 | - | 5.56 | 64.71 | 5.56 | - | - | 0.33 | - | - | 23.86
4 | 0.26 | 4.60 | 17.14 | 41.94 | 5.12 | - | - | - | - | 30.95
5 | - | 0.87 | 0.44 | 2.40 | 94.12 | - | - | - | - | 2.18
6 | 7.84 | - | - | - | - | 68.19 | 2.40 | 0.22 | 0.44 | 20.92
7 | - | - | - | - | - | 1.31 | 54.68 | 4.14 | 7.41 | 32.46
8 | - | - | - | - | - | - | 1.74 | 48.58 | 43.79 | 5.88
9 | - | - | - | - | - | 1.74 | 9.15 | 1.96 | 85.84 | 1.31
Out-image data ``distance to roads'', ``plateaux'' and ``valleys''
The addition of a third out-image data source, ``valleys'', does not improve the classification: the rate falls to 62.69%, against 63.94% with two out-image data sources. The more sources of information there are to fuse, the more difficult it is to obtain an area of agreement among them.
The rate of unclassified pixels therefore keeps increasing (from 33.42% of unclassified pixels with six images to 49.98% with seven!). This is the result of the severe behaviour of conjunctive fusion, which is unable to correctly manage the conflicts between a large number of sources. The performance limit of conjunctive fusion is reached; beyond it, the results become more and more degraded.
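The mechanism can be sketched as follows: conjunctive fusion keeps, for each class, the minimum possibility degree over all sources, so a single conflicting source is enough to pull the fused degree below the decision threshold and leave the pixel unclassified. The threshold value and possibility arrays below are illustrative assumptions, not figures from this study:

```python
import numpy as np

def conjunctive_fusion(distributions):
    """Conjunctive (min) fusion across sources, per class:
    pi_fused(c) = min_i pi_i(c)."""
    return np.min(distributions, axis=0)

# Possibility degrees of 3 classes for one pixel, one row per source.
# Sources 1 and 2 agree on class 0; the third source conflicts (favours class 2).
two_sources   = np.array([[0.9, 0.3, 0.1],
                          [0.8, 0.2, 0.0]])
three_sources = np.vstack([two_sources, [[0.1, 0.2, 0.9]]])

threshold = 0.5  # below this fused possibility, the pixel stays unclassified

for sources in (two_sources, three_sources):
    fused = conjunctive_fusion(sources)
    label = int(np.argmax(fused)) if fused.max() >= threshold else None
    print(fused, "->", f"class {label}" if label is not None else "unclassified")
```

With two agreeing sources the pixel is assigned to class 0; adding the conflicting third source drives every fused degree below the threshold and the pixel becomes unclassified, exactly the behaviour observed as sources are added.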
Out-image data ``distance to roads'', ``plateaux'', ``valleys'' and ``slope orientation''
The addition of a fourth out-image source, ``slope orientation'', causes a decrease in the average classification rate (from 62.69% with seven images to 59.51%). The conflict between the sources increases, leading the fusion process to fail in many cases and leaving a large number of pixels unclassified (60.51%).
The degradation of the mean classification rate continues as more sources are added to the fusion. Fusing all thirteen available sources leads to a very weak mean classification rate of 22.89% (table 22), because almost all the pixels of the image (93.03%) are unclassified. The performance of conjunctive fusion is crippled by the problem of unclassified pixels; exactly the same problem arises with Bayes' rule. These fusion methods are unsuited to fusing a large number of sources because they are unable to manage the conflict.
Note, however, that confusion between the classes decreases considerably as the number of sources increases. The drop in class recognition is due only to the unclassified pixels: all the pixels that do get classified lie around the samples. Increasing the number of information sources therefore genuinely improves the discrimination of the classes. It remains to solve the problem of conflict between the sources for the classification to become acceptable.
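A toy simulation, under the (strong) assumption of independent random possibility degrees, reproduces this trend: the fused minimum can only decrease as sources are added, so the fraction of pixels whose best fused degree falls below the decision threshold grows with the number of sources. All values below are illustrative, not data from this study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_classes = 10_000, 9
threshold = 0.5  # minimum fused possibility required to classify a pixel

def unclassified_rate(n_sources):
    # One random possibility degree per (source, pixel, class): a crude
    # stand-in for sources that only partially agree with one another.
    pis = rng.random((n_sources, n_pixels, n_classes))
    fused = pis.min(axis=0)                        # conjunctive (min) fusion
    return (fused.max(axis=1) < threshold).mean()  # share left unclassified

rates = [unclassified_rate(n) for n in (2, 4, 7, 13)]
print(rates)  # the unclassified share grows with the number of sources
```

Even this crude model shows the unclassified share climbing steeply between seven and thirteen sources, mirroring the jump from 49.98% to 93.03% reported above.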
Rates of classification obtained

Class | Number of pixels correctly classified (A) | Number of pixels in samples (B) | Rate of pixels correctly classified (A/B)
---|---|---|---
1 | 64 | 459 | 13.94%
2 | 88 | 459 | 19.17%
3 | 79 | 306 | 25.82%
4 | 44 | 391 | 11.25%
5 | 99 | 459 | 21.57%
6 | 185 | 459 | 40.31%
7 | 62 | 459 | 13.51%
8 | 37 | 459 | 8.06%
9 | 237 | 459 | 51.63%
TOTAL | 895 | 3910 | 22.89%
Confusion between expected and observed classes (rates in %)

Expected class | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Unclassified
---|---|---|---|---|---|---|---|---|---|---
1 | 13.94 | - | - | - | - | - | - | - | - | 86.06 |
2 | - | 19.17 | - | - | - | - | - | - | - | 80.83 |
3 | - | - | 25.82 | 0.33 | - | - | - | - | - | 73.86 |
4 | - | - | - | 11.25 | - | - | - | - | - | 88.75 |
5 | - | - | - | 1.09 | 21.57 | - | - | - | - | 77.34 |
6 | 3.49 | - | - | - | - | 40.31 | - | - | - | 56.21 |
7 | - | - | - | - | - | - | 13.51 | - | - | 86.49 |
8 | - | - | - | - | - | - | - | 8.06 | - | 91.94 |
9 | - | - | - | - | - | - | - | - | 51.63 | 48.37 |