One option is to retain the calculation of the degree of consensus over the possibility distributions built, from the samples, for each class by each source.
A global degree of consensus is then calculated for the whole image and applied to every pixel. The advantage of such a method is that the degree of consensus is computed over the entire possibility distributions: all the information describing each class is thus used.
However, the distributions handled here come either from spectral information (spectral bands) or from geographical information (out-image data). The definition domains of the sources are not the same and are often disjoint. The grey-level histogram characteristic of a class c may lie at low values (dark grey levels) in one spectral band, and at high values (light grey levels) in another.
The same holds for the out-image data: class c may lie at low altitudes (small numerical values) but far from the roads (large numerical values).
How can an intersection be found when the definition domains of the sources differ? Without a homogeneous, common representation of the information, the degree of consensus cannot be computed on such distributions.
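To make the problem concrete, a minimal sketch is given below. It takes the degree of consensus between two sources to be the height of the intersection of their possibility distributions, h = sup_x min(pi1(x), pi2(x)), a standard measure in possibility theory. The distributions and grey-level values are hypothetical, invented for illustration: when two sources describe the same class over disjoint regions of a common axis, the intersection is empty and h collapses to zero.

```python
def consensus(pi1, pi2):
    """Degree of consensus between two possibility distributions sampled
    on the same axis: the height of their pointwise intersection,
    h = max over x of min(pi1(x), pi2(x))."""
    return max(min(a, b) for a, b in zip(pi1, pi2))

# Same definition domain (grey levels 0..4): the distributions overlap.
band_a = [0.0, 0.5, 1.0, 0.5, 0.0]
band_b = [0.0, 0.0, 0.4, 1.0, 0.6]
print(consensus(band_a, band_b))   # partial consensus: 0.5

# Disjoint supports on the common axis: the class is "dark" for one
# source and "light" for the other, so the intersection is empty.
dark_band  = [1.0, 0.6, 0.0, 0.0, 0.0]
clear_band = [0.0, 0.0, 0.0, 0.7, 1.0]
print(consensus(dark_band, clear_band))  # no consensus: 0.0
```

The second call returns 0 even though both distributions describe the same class, which is exactly the pathology described above: without a common representation, a low consensus reflects incompatible axes rather than genuine conflict between the sources.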
Moreover, Dubois and Prade [Dubois and Prade, 1994b] [Dubois and Prade, 1994a] defined adaptive fusion over a definition domain common to all the sources. The possibility distributions of each source must therefore be translated into a joint definition domain before the degree of consensus can be calculated.
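As a reminder of why a common domain matters, the sketch below implements one common form of the Dubois-Prade adaptive combination rule, which behaves conjunctively where the sources agree and falls back on a disjunctive term weighted by the conflict 1 - h. The sample distributions are hypothetical; the rule itself is only applicable once both distributions live on the same axis.

```python
def adaptive_fusion(pi1, pi2):
    """Adaptive combination of two possibility distributions defined on
    the same axis, after Dubois and Prade:
        pi(x) = max( min(pi1, pi2) / h,  min(1 - h, max(pi1, pi2)) )
    where h is the degree of consensus (height of the intersection)."""
    h = max(min(a, b) for a, b in zip(pi1, pi2))
    if h == 0:
        # Total conflict: the normalised conjunctive part is undefined,
        # so only the disjunctive combination remains.
        return [max(a, b) for a, b in zip(pi1, pi2)]
    return [max(min(a, b) / h, min(1 - h, max(a, b)))
            for a, b in zip(pi1, pi2)]

band_a = [0.0, 0.5, 1.0, 0.5, 0.0]
band_b = [0.0, 0.0, 0.4, 1.0, 0.6]
print(adaptive_fusion(band_a, band_b))
```

Note that h appears explicitly in the rule: computing it over distributions with disjoint definition domains would force the total-conflict branch for every class, regardless of whether the sources actually disagree.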
In addition, a global degree of consensus means that this single value is used for all the pixels of the image. The behaviour of the adaptive fusion is thus fixed once and for all, for the whole image: no adaptive behaviour remains.
For these reasons, the solution of calculating a global degree of consensus for the whole image is discarded.